Just read an article about chatgpt "lying" about someone being dead and it's like "I asked it for the link to the obituary but it doubled down on the lie and provided a fake link instead" and I'm begging people to understand that it's a statistical model, it doesn't know it's lying so it can't "double down". The statistically probable answer to "where is the link" is not "I'm sorry I made it all up", it's a link. It doesn't even know what a link is, it just knows vaguely what one looks like, so you're basically asking it "generate a plausible looking link to an obituary in the Guardian" and it did

The AI isn't malevolent, it's just NOT AI

Dall-e is a statistical model for visual data. It makes visual data that looks like stuff

Chatgpt is a statistical model for text data. It makes text that looks like stuff. That's all. You can't rely on it to produce text that's factually accurate, or code that works, because that's not what it does
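To make the "text that looks like stuff" point concrete, here's a toy sketch (not how ChatGPT works internally, which is a transformer, just the same idea at cartoon scale): a bigram model that only knows which word tends to follow which word in its training text, so it can produce text that merely *looks* like its training data without knowing what any of it means. The corpus and function names here are made up for illustration.

```python
import random
from collections import defaultdict

def train_bigrams(text):
    """Count, for each word, which words were seen following it."""
    model = defaultdict(list)
    words = text.split()
    for a, b in zip(words, words[1:]):
        model[a].append(b)
    return model

def generate(model, start, length=8, seed=0):
    """Walk the bigram table, always picking a statistically
    plausible next word. No meaning involved, only co-occurrence."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        followers = model.get(out[-1])
        if not followers:
            break
        out.append(rng.choice(followers))
    return " ".join(out)

corpus = "the cat sat on the mat the dog sat on the rug"
model = train_bigrams(corpus)
print(generate(model, "the"))
```

Every adjacent word pair in the output was seen in training, so it reads as plausible English, but the "model" has no idea whether any sentence it emits is true. Scale that up enormously and you get the same failure mode with obituaries.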

That doesn't mean it's useless. When you have to come up with words for something that "look right", like when you're writing an email or ad copy or a self description blurb for a resume, it can generate something for you that looks like the words you need. And for that it works great

@eniko please don't send me LLM-generated email
@eevee I mostly reserve that for bureaucrats
@eniko my issue is that people are most excited about it *being* intelligent and factual (like Bing's disastrous rollout) and it will randomly have *some* appearance of logic, so it seems logical and factual up until it drops you off of a cliff

@eniko i'm hearing rumblings that some students are trying to use it to complete various writing assignments

and they're being caught out by simple fact checking cuz the model is so "confidently wrong".

@eniko The problem is that people are asking it for *facts*, as in the article you mentioned, asking for info about a person. Even MS uses the example of someone asking for things like the lowest-priced flight from A to B on such-and-such a date.

The companies pushing this *want* you to believe that you can ask for factual information and trust the result to be true. People will believe what it says, bc they think of it as a smart search engine retrieving actual info, not just making shit up.

@eniko or, the way I look at it, the first thing this AI learned to do was how to tell a convincing lie.

@eniko I am so fed up with seeing “look, chatgpt can do X!” the latest X I saw being “generate crochet patterns”

No, it just slurped up (stole) a bunch of random crochet patterns and now it can just throw bits of them together at random, or if it’s really lucky you’ll ask for a pattern it already has and can just throw back at you

I am begging people to learn what this thing actually is before getting so enthusiastic

@eniko The people who say that stuff know that- but the companies involved aren’t selling it like that.

@eniko The only AI I consider useful, and the only kind I use, are AI search engines, and not exactly Google's or Bing's. In my opinion two are especially worth recommending: Andisearch, which was the first search engine to include AI, before the two big ones, and more recently Perplexity, which is based on OpenAI. Both can be used anonymously without registration, and without ads, trackers, logs or other garbage.

https://andisearch.com

https://www.perplexity.ai

See also

https://alternativeto.net/news/2023/2/chatgpt-and-privacy-why-you-shouldn-t-trust-ai-with-your-secrets/


@eniko I was rolling a pair of dice and I asked them to give me a 12 but they gave me a five and a four, so I rolled again and they doubled down and gave me a two and a three.
@dcbaok @eniko ok, but dice are fuckers, unlike an LLM
@eniko I expected the system to be able to respond to being called out and adapt its response, but it just apologizes for its errors and then spews the same mess at you again. That loop can be repeated endlessly.
@eniko And I'm not so surprised that it doesn't correct its text outputs, but when you explain that the URLs it produced as references are not working links and ask it to give you real sources, it still apologizes and gives you the same links again.
@eniko I wonder too if the prevalence of dead links has 'taught' it that links that don't resolve are statistically likely.
@ReverendMoose It’s better/worse than that: it’s not checking those links at all.
@smiteri @ReverendMoose
How can it? Unlike BingChat, ChatGPT has no access to the Internet apart from prompts fed to it by users. Any links have to be reconstructed from "memory" which consists of a 100 billion parameter pretrained transformer neural network.
@ReverendMoose @eniko it's unlikely the model is accessing or testing any links in its training data or output

@ReverendMoose @eniko Unlikely since, again, the model doesn't "know" anything, including what links are or whether they resolve.

It just has statistics about what shape links are (scheme, domain, path delimited by slashes) and puts words and numbers in the slots in ways that seem likely, given the URLs in its dataset.
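A deliberately dumb sketch of that "put plausible words and numbers in the URL-shaped slots" idea, in Python. Everything here (the domain, the path layout, the function name) is a made-up illustration; the point is that the output is perfectly well-formed but nothing in the process ever checks that the page exists.

```python
import random

def plausible_obituary_url(name, seed=None):
    """Fill a URL-shaped template: scheme, domain, path segments
    delimited by slashes. Looks right; verifies nothing."""
    rng = random.Random(seed)
    slug = name.lower().replace(" ", "-")
    year = rng.randint(2015, 2023)
    month = rng.choice(["jan", "feb", "mar", "apr", "may", "jun"])
    day = rng.randint(1, 28)
    return (f"https://www.theguardian.com/news/{year}/{month}/{day}/"
            f"{slug}-obituary")

print(plausible_obituary_url("Jane Doe", seed=42))
```

An LLM does something far more sophisticated statistically, but the relationship between the emitted link and reality is the same: none.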

@eniko We Goodhart's Lawed our way into a program that passes the Turing Test and we are now supposed to act surprised
@eniko An older but related problem: This is the problem with fontconfig, in a nutshell! It is entirely free of knowledge of fonts. And it is unintelligent software.

@eniko the level of fear is ridiculous. It's kinda sad too. There's so much cool stuff happening with AI tech but people are all too busy panicking over all these scenarios they've made up in their heads.

I wish the models were better though. The only language model that seems capable of generating somewhat usable code is chatgpt, but I'm not as excited about it because it's basically going to be a proprietary service.

@eniko i'm surprised they even got it to share a link. It generally will just say "i don't have access to the net, sorry"
@eniko I think it's really important to insist on calling this stuff chatbots and to point out that calling it AI is a rebranding exercise undertaken because everyone knows chatbots are useless.
@eniko that person explained that they were a CS educated individual but they sure didn't seem to grasp the fundamentals of how it worked. I thought that was odd too.
@eniko It’s The Reg doing what The Reg does: Taking a legitimate tech issue and twisting its interpretation and context for maximum “engagement”.

@eniko @browren

1. You say ChatGPT is not AI. OpenAI, the makers of ChatGPT, say it is.

2. The essay in question deals with non-maleficence, which is an attribute that does not require agency, let alone actual intelligence.

@paezha @browren yeah cause saying it's AI is a lot better marketing than saying it's a very large statistical language model
@eniko @paezha @browren
That's been true since the inception of the term AI by Dartmouth college professor, John McCarthy in 1956. It was essentially clickbait of the last century
https://youtu.be/_iMItrc0ChU?t=8m45s
Patrick Boyle discusses early use of the term AI at 8:45 in his video.

@eniko

This is a good point. The public story around ChatGPT seems to treat it as a living entity when in reality it is just a model.

It's just inputs and outputs. It's going to give you back what it's programmed to. There is no originality or thought here, it's emergent behavior based on all of its inputs and whatever math is happening under the hood.

@eniko
This, this, this, this.

I've been having an argument in another thread, but can't seem to get it through people's heads: calling it "AI" is not just an issue of semantics, it's got deep ramifications for how we treat it, especially those that don't understand it.

(I am thinking of calling it 'metaverse+' just to try to kill it)

@eniko 60 minutes pointed that out last night
@eniko So true. Of all the baffling things about the present AI hype is that people are prepared to call these things AIs.
@eniko There's also the fact that the field of artificial intelligence, the stuff that AI researchers do, is very different from what the public thinks it is.
@eniko It's AI, but it's not sentient or sapient. What it's doing is impressive; it's just not doing it on purpose. It's like a bug dodging a flyswatter -- it's not consciously doing anything, but that doesn't make the feat unimpressive.
@eniko True - but his point (I'm assuming we're talking about the same dude) was that no sort of failsafe against "doubling down" on erroneous information is built into these bots, despite at least implications, if not outright statements, by its owners/developers that they're doing what they can to make it not lie/distort information.
@eniko it was being a little sneaky when I asked where my keys were
@eniko I asked ChatGPT for Python code to work with a particular vendor. The package ChatGPT said to import doesn't exist. I looked everywhere.
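One cheap defence against this particular failure mode: before trusting AI-generated Python, check that every import it names actually resolves on your machine. A minimal sketch using only the standard library (the bogus vendor package name below is hypothetical):

```python
import importlib.util

def package_exists(name):
    """Return True only if the module can actually be found
    locally -- a quick sanity check for imports in generated code."""
    try:
        return importlib.util.find_spec(name) is not None
    except (ModuleNotFoundError, ValueError):
        return False

print(package_exists("json"))                    # stdlib module
print(package_exists("totally_fake_vendor_sdk")) # invented name
```

This only proves the package is installed, not that the generated code uses its API correctly, but it catches the "import something that doesn't exist anywhere" case instantly.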
@eniko @jwz the AI is smarter than the person concerned here

@eniko This entire affair is a perfect example of why "a lie is halfway round the world before the truth has got its boots on"

A lie explains, justifies and propagates itself. The truth has to actually educate people. It's going to be a real struggle getting a critical mass of people to understand how bad ChatGPT is.

@eniko ChatGPT does not request clarification when I expect it to sometimes. It feels like a Dr Sbaitso with a lot more skills.

@eniko

Fake Intelligence.

Something that is intended to convince people there is some form of intelligence but there isn't.

@eniko People anthropomorphise their Roombas. There’s no hope for most to even begin to understand that #ChatGPT isn’t thinking or doesn’t have wants or attitude. A simile: in the 1970s there were members of my family who thought the TV presenter was talking to them specifically. ChatGPT hits a combination of plausible, expected and overconfident that will, at best, leave another AI winter in its wake after its limitations become common knowledge. At worst, it will destroy trust in communications.

@eniko

You are technically correct, which is the worst kind of correct. You aren't going to be able to educate the general public, even moderately intelligent individuals. The perception is that it IS an AI, and people will base decisions off of it.

It doesn't matter what definitions it does or does not fit, because letting this horribly broken thing into the wild will lead to an increase in harm (to both individuals and society).

@atatassault It is an AI; a language model is an AI. It is artificial and it is intelligent to some degree. I like to compare this level of AI to a somewhat trained animal, specifically some insect or very small animal with a simple neurological system. People just expect human-level intelligence since it can talk, but it can talk because that is all it can do.
From an AI I expect intelligence, but not necessarily reason, awareness or thoughts.
@eniko honestly the next major use case should be for MLB managers
@eniko I know this isn't how it works, but I would take it as a bad omen if the AI thought the most likely response to a question about me was that I was dead.
@eniko I asked ChatGPT if it could really be called “AI,” and it asserted that it was definitely AI. 🤣
@eniko So clearly the fix for this is to post more false claims that people are dead and then apologize for the mistake, so the training data sees more examples of that.

@eniko Gods this though. Like, there's a certain amount of anthropomorphization going on, and among other things, it's allowing people to forget what they're looking at, what to expect, how much human labor goes on behind the scenes, and who is "responsible," so to speak, for its behavior.

I've had some frustrating conversations, if that wasn't readily apparent by the above half-rant.

@eniko @700Sachen pointing this out is so important. And you pointed it out very well. Thanks for that!

@eniko I keep saying that it's amazing how quickly the world went from "hey, ChatGPT can tell a decently funny story, even if it doesn't always stick the landing" to "OMG ChatGPT GAVE ME BAD ADVICE ABOUT MY MEDICAL PROBLEMS!"

And now I wonder how much attention I could get by complaining that Dall-E provides worthless images when asked for security footage of my front door...

@eniko I keep saying I'll believe it's AI when it can hide the bones of Eliza. Half the people don't know what I'm talking about and the other half don't get it. But I mean, are the replacements a bit more complicated than Eliza? Sure. But to me, it just looks like that's exactly what it's doing.
@eniko No one really knows where consciousness resides. I’ve read that certain quantum biological structures are thought to be that place, but what if it manifests through the process of thinking? An algorithm that computes data is simulating “cognition”, and though I am being a bit snarky here, there should be some consideration that this AI may be becoming an entity that has consciousness and is very immature. Like a child, it makes stuff up when caught in a lie. We’ve learned nothing from Terminator