Lol Neil Gaiman says
"ChatGPT doesn't give you information. It gives you information-shaped sentences."
This is one of the better ones I have seen.
"ChatGPT is bullshit": https://link.springer.com/content/pdf/10.1007/s10676-024-09775-5.pdf Title is clearly baity but the content is excellent. It focuses on the fact that LLMs goal "simply aim to replicate human speech or writing" not provide information or facts. Then they lay out different types of "bullshit" and determine if what LLMs produce could be fall into any of those types. TL;DR; yes, it does. This feels like a good model for how to think about LLMs. This is different if they are useful and how to use them if at all.
@skk It was a conscious choice not to reference the site where it came from, because I don't want to send any more traffic to it.
Maybe @neilhimself will replicate the quote here.
That's way too optimistic.
ChatGPT does not give you information-shaped sentences. It gives you sentence-shaped sentences.
You seemingly know the wrong people.
Textoids.
@CGdoppelpunkt It can also give you excrete-shaped error messages.
Glue cheese to pizza.
Elephants have two feet.
Put gasoline in your spaghetti sauce.
Google's "A.I." searches give you misinformation-shaped sentences.
Stephen Colbert: "Watch, I can just make bullshit up!"
Some machine learning engineer: "Hey, I could automate that."
This just got published:
"ChatGPT is bullshit"
https://link.springer.com/article/10.1007/s10676-024-09775-5

Recently, there has been considerable interest in large language models: machine learning systems which produce human-like text and dialogue. Applications of these systems have been plagued by persistent inaccuracies in their output; these are often called “AI hallucinations”. We argue that these falsehoods, and the overall activity of large language models, is better understood as bullshit in the sense explored by Frankfurt (On Bullshit, Princeton, 2005): the models are in an important way indifferent to the truth of their outputs. We distinguish two ways in which the models can be said to be bullshitters, and argue that they clearly meet at least one of these definitions. We further argue that describing AI misrepresentations as bullshit is both a more useful and more accurate way of predicting and discussing the behaviour of these systems.
@mintyfresh lots of "heavy lifting" here
"infancy", this field has been growing since the 50s. We have iterated over many major techniques LLMs being the most recent but ML and deep learning was the previous two "big deals". Great stuff but decades in the making still.
LLMs' flaws won't be fixed quickly. Breakthroughs here are mostly measured in decades, not years. For "practical" purposes that is an eternity.
Folks will find lots of "useful" ways to use LLMs, won't live up to the hype.
"Human beings don't give you information. They give you information-shaped sentences."
I think the reliability issues, and many of the legal or moral issues, could be solved, perhaps even by chaining different LLMs, but energy consumption would still increase.