Anyone who still works for Google, can you tell me what the *fuck* is going on? I am not in the market for automated high-throughput bluffing
@pjie2 Some lovely AI hallucinations.

@alterelefant @pjie2

Not "hallucination", just bullshit!

"The machines are not trying to communicate something they believe or perceive. Their inaccuracy is not due to misperception or hallucination. As we have pointed out, they are not trying to convey information at all. They are bullshitting."
https://link.springer.com/article/10.1007/s10676-024-09775-5

ChatGPT is bullshit - Ethics and Information Technology

Recently, there has been considerable interest in large language models: machine learning systems which produce human-like text and dialogue. Applications of these systems have been plagued by persistent inaccuracies in their output; these are often called “AI hallucinations”. We argue that these falsehoods, and the overall activity of large language models, is better understood as bullshit in the sense explored by Frankfurt (On Bullshit, Princeton, 2005): the models are in an important way indifferent to the truth of their outputs. We distinguish two ways in which the models can be said to be bullshitters, and argue that they clearly meet at least one of these definitions. We further argue that describing AI misrepresentations as bullshit is both a more useful and more accurate way of predicting and discussing the behaviour of these systems.


@Gergovie @alterelefant @pjie2

I never thought "bullshit" would one day be a valid scientific technical term...

@Lily_and_frog @Gergovie @pjie2 I would like to split it up into "bulls hit". The AI neural network is convinced it found a "hit", but it turns out to be a "bulls hit".