I haven't used ChatGPT, not even once. When I first heard about this thing being an "LLM", I understood what that meant: basically wholesale, bot-driven strip mining of the WWW for information.

With apparently no regard for sanitizing the input and no method of fact-checking, you've essentially condensed a lot of garbage into a smaller space.

Why should I trust any of the answers it outputs?

#ChatGPTIsBullshit

@bobdobberson Your level of delusion about what LLMs are, what ideas are, and what ownership is would be laughable were it not for the harm you do yourself.

CW: uncomfortable truth
https://link.springer.com/article/10.1007/s10676-024-09775-5
#chatgptisbullshit

ChatGPT is bullshit - Ethics and Information Technology

Recently, there has been considerable interest in large language models: machine learning systems which produce human-like text and dialogue. Applications of these systems have been plagued by persistent inaccuracies in their output; these are often called “AI hallucinations”. We argue that these falsehoods, and the overall activity of large language models, is better understood as bullshit in the sense explored by Frankfurt (On Bullshit, Princeton, 2005): the models are in an important way indifferent to the truth of their outputs. We distinguish two ways in which the models can be said to be bullshitters, and argue that they clearly meet at least one of these definitions. We further argue that describing AI misrepresentations as bullshit is both a more useful and more accurate way of predicting and discussing the behaviour of these systems.

SpringerLink

ChatGPT is bullshit

Original Paper
Open access
Published: 08 June 2024

"Because these programs cannot themselves be concerned with truth, and because they are designed to produce text that looks truth-apt without any actual concern for truth, it seems appropriate to call their outputs bullshit"

https://link.springer.com/article/10.1007/s10676-024-09775-5

#IA #BullShit #artificialintelligence #chatgpt #Hallucinations #ChatGPTisBullShit


It's exhausting to read tiresome techbros trying to argue that "AI" chatbots are useful.
"Waaah, you called them bullshit, but _I_ think they're useful, therefore they're not bullshit."
No. They're still bullshit. But, to give you some credit, I believe you're being sincere when you describe the positive affect you're getting from AI brainworms.
"Terms such as 'hallucination' and 'bullshit' are pejorative and sensationalist."
Call the phenomenon whatever you like; you're placing your faith in a chatbot that is not delivering the truth.

#ChatGPTIsBullshit #ChatGPT #AI #ChatBots #TechBros #TechbroBrainWorms

@afelia
A diploma and politics are not incompatible with bullshit…

https://link.springer.com/article/10.1007/s10676-024-09775-5

#ChatGPTIsBullshit
