Lol Neil Gaiman says

"ChatGPT doesn't give you information. It gives you information-shaped sentences."

This is one of the better ones I have seen.

If you liked this you will probably also like this: https://hachyderm.io/@shafik/112633916561562275
Shafik Yaghmour (@[email protected])

"ChatGPT is bullshit": https://link.springer.com/content/pdf/10.1007/s10676-024-09775-5.pdf The title is clearly baity, but the content is excellent. It focuses on the fact that LLMs "simply aim to replicate human speech or writing," not to provide information or facts. Then the authors lay out different types of "bullshit" and determine whether what LLMs produce falls into any of those types. TL;DR: yes, it does. This feels like a good model for how to think about LLMs. Whether they are useful, and how to use them if at all, is a different question.


Leave it to Neil.

LLMs' biggest feat is their ability to sidestep the uncanny valley.

@ThePowerNap @shafik They don't. Every bit of LLM output is cringe uncanny-valley shit. That's one of the emotional reasons (as opposed to the concrete harms or technical aspects) they're so hated.

@dalias @shafik

Like, I agree. But I've met too many people in person who have drunk all the Kool-Aid to think it's universal.

@shafik Splendid! Where did you see or hear this one? I simply must send the source to one of our teachers!

@skk It was a conscious choice not to reference the site where it came from b/c I don't want to send any more traffic to it.

Maybe @neilhimself will replicate the quote here.

@shafik Oh, I see... He's still active over there? I'm not there anymore, so I thought maybe it was somewhere else he's active nowadays. Thank you for sharing the quote, nevertheless!

@shafik

That's way too optimistic.
ChatGPT does not give you information-shaped sentences. It gives you sentence-shaped sentences.

@CGdoppelpunkt @shafik problem is, “sentence-shaped sentences” is also what humans produce much of the time. chatgpt often makes more sense and says things in a clearer way than many of the people i went to school with and worked with.

@shafik @sabbatical

You seemingly know the wrong people.

@CGdoppelpunkt @shafik @sabbatical that's not hard because there are so many.
@hllizi @CGdoppelpunkt @shafik honestly, i’m guilty of it too. it’s all too easy to speak without really saying much.

@CGdoppelpunkt It can also give you excrement-shaped error messages.

@shafik

Glue cheese to pizza.

Elephants have two feet.

Put gasoline in your spaghetti sauce.

Google's "A.I." searches give you misinformation-shaped sentences.

@shafik remember how we were all talking about truthiness about eight years ago?
AI: truthiness, the sequel

@smellsofbikes @shafik

Stephen Colbert: "Watch, I can just make bullshit up!"

Some machine learning engineer: "Hey, I could automate that."

ChatGPT is bullshit - Ethics and Information Technology

Recently, there has been considerable interest in large language models: machine learning systems which produce human-like text and dialogue. Applications of these systems have been plagued by persistent inaccuracies in their output; these are often called “AI hallucinations”. We argue that these falsehoods, and the overall activity of large language models, is better understood as bullshit in the sense explored by Frankfurt (On Bullshit, Princeton, 2005): the models are in an important way indifferent to the truth of their outputs. We distinguish two ways in which the models can be said to be bullshitters, and argue that they clearly meet at least one of these definitions. We further argue that describing AI misrepresentations as bullshit is both a more useful and more accurate way of predicting and discussing the behaviour of these systems.

@shafik @DemocracyMattersALot It is not Artificial Intelligence but Artificial Information!
@shafik my favorite is still that “it is eloquent, not wise”.
@shafik This is a good one. I have said “AI is ELIZA in a trench coat.”
@shafik what’s the source for this, please?
@shafik
AI and ChatGPT are in their infancy. Think of AI as in alpha testing. Not ready for prime time. It will get better. Much better. In the meantime don't trust it except in specialized cases.

@mintyfresh lots of "heavy lifting" here

"Infancy"? This field has been growing since the '50s. We have iterated over many major techniques, LLMs being the most recent, but ML and deep learning were the previous two "big deals". Great stuff, but decades in the making still.

LLMs' flaws won't be fixed quickly. The breakthroughs are mostly measured in decades, not years. For "practical" purposes that is an eternity.

Folks will find lots of "useful" ways to use LLMs, but they won't live up to the hype.

@mintyfresh @shafik I'm skeptical because "AI" experts have been saying this for about sixty years now, and the only real improvement I've seen since then is the volume of output. Make some hype when it's actually ready for prime time, not when it's still just a predictive text engine powered by a table of statistics.

I especially don't trust "AI" in specialized cases, because those are situations where I know enough to recognize how it is often dangerously wrong.
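(As a toy illustration of that caricature — not of how an actual LLM works, which uses a neural network rather than a literal lookup table — here is what a "predictive text engine powered by a table of statistics" could look like in a few lines of Python. The corpus and function names are invented for the example.)

```python
from collections import Counter, defaultdict

# Build a bigram table: for each word, count which words follow it.
corpus = "the cat sat on the mat and the cat slept".split()
table = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    table[prev][nxt] += 1

def predict(word):
    """Return the word most frequently seen after `word`, or None."""
    followers = table.get(word)
    return followers.most_common(1)[0][0] if followers else None

print(predict("the"))  # "cat" follows "the" twice, "mat" only once -> "cat"
```

Chain `predict` on its own output and you get fluent-looking, locally plausible word sequences with no model of truth behind them, which is the point the comment is making.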
@shafik at the bike shop we used to say that improperly built or incomplete bicycles were just "bicycle-shaped objects"... Never thought computer science would do that for information

"Human beings don't give you information. They give you information-shaped sentences."

@shafik It is surprisingly accurate from a technical perspective too.
@shafik I've been saying "language-shaped imitation-language" but this is much more concise, thank you!
@shafik It's an answer simulator.
@shafik Definitely up there with Ted Chiang's "blurry JPEG of the internet"
Where is Neil Gaiman these days? He used to be around here somewhere.
@shafik I heard something similar: ChatGPT doesn’t answer your question but answers „what would an answer to that question look like?“
@shafik Better than my "offers you a hamburger, gives you a hamburger-shaped object made of plastic & expects you to eat it"...
@shafik I heard Adam Conover refer to it - and similar technologies - as a “word calculator”, and that has stuck with me.
@shafik they shoved LLMs out the door to make products without checking to make sure it was done cooking first
@shafik
same as T on a good day 😈
@shafik ChatGPT is dreaming and can't wake up.

@shafik

I think the reliability issues and many of the legal or moral issues could be solved, perhaps even by chaining different LLMs, but the energy consumption would still increase.

@shafik Oooh, I’d love to have a citation for that!
@shafik
Like cheese food instead of actual cheese