| Media, Entertainment, & Arts Alliance, Australia | https://www.meaa.org |
| IG | https://instagram.com/tahinikill |
That "#ChatGPT is bullshit" paper I boosted earlier does a nice job of laying out why the "hallucination" terminology is harmful "what occurs in the case of an #LLM delivering false utterances is not an unusual or deviant form of the process it usually goes through… The very same process occurs when its outputs happen to be true"
https://link.springer.com/article/10.1007/s10676-024-09775-5

Recently, there has been considerable interest in large language models: machine learning systems which produce human-like text and dialogue. Applications of these systems have been plagued by persistent inaccuracies in their output; these are often called "AI hallucinations". We argue that these falsehoods, and the overall activity of large language models, are better understood as bullshit in the sense explored by Frankfurt (On Bullshit, Princeton, 2005): the models are in an important way indifferent to the truth of their outputs. We distinguish two ways in which the models can be said to be bullshitters, and argue that they clearly meet at least one of these definitions. We further argue that describing AI misrepresentations as bullshit is both a more useful and more accurate way of predicting and discussing the behaviour of these systems.
Musk claims Neuralink patient doing OK with implant, can move mouse with brain
Medical ethicists alarmed by Musk being "sole source of information" on patient.
Fuck you, I won't do what you tell me.
1956, Cecil Williams