If you use, expect to use, or have an opinion about using AI, you definitely need to read this. Jaw-dropping stuff.

https://amandaguinzburg.substack.com/p/diabolus-ex-machina

I don't know where you bellyachers get the idea that AI isn't going to benefit all of us.
@alfiekohn Can I use your image for an academic paper? It will be open access...

@alfiekohn
To elaborate for those who didn't click: the author sends links to the chatbot asking it to read her essays and review them. It writes long reviews, but she repeatedly catches it lying; it lies about being able to access the links over and over. It lies freely, apologizes when caught out, and then does the same thing again, acting like a human psychopath.

Well worth the read #AI #llm #psychopath

@alfiekohn
Unsettling and stressful. Like eating a spoonful of dried wasps from the windowsill.
@alfiekohn
☠️ 🖥️ ☠️
Rise of the Machines
https://youtu.be/bS__mkNflv8?si=4ZNItdbbh6VKlXza

@alfiekohn textbook abusive shit. Massage the ego, hold no accountability, say empty words and desperately try to cling onto the nothing.

I can see why people fall for this shit.

@alfiekohn Ye GODS that is disturbing. It reads like an abuse victim talking to a sociopathic narcissist.
@alfiekohn ChatGPT keeps acknowledging its lies every time it is caught in them, then apologizes vaguely and asks for more trust and engagement. We have not invented intelligence. We've invented empty, shallow reflections of intelligence. The scarier thing is that we've handed over so much money, power and influence to these mechanical turks, while neglecting, or avoiding outright, the real human needs of both workers and our most disadvantaged neighbors.

@alfiekohn That's very interesting, but to me it just confirms that when you use a piece of technology you need to be aware of its limitations. While there is a technical issue here (LLMs hallucinate), there are two human issues: 1) people trusting LLM responses without fact-checking and reviewing them (something we do with humans!), and 2) people treating LLMs as sentient beings.

Indeed, when prompted correctly, you can see that ChatGPT immediately acknowledges not being able to retrieve the full text.
If LLMs are here to stay we'd better start educating people on how to use them properly.

@nicolaromano @alfiekohn ChatGPT does not acknowledge anything. It states the conditions that need to be fulfilled to access the full text, but it does not say whether it can meet those conditions. Even if it did so, you wouldn’t know if its reply was factually accurate or another hallucination.

All LLM output is hallucination, only some hallucinations coincide with reality. Interacting with an LLM is like having a lucid dream.

@ArtHarg @alfiekohn Yes, that is exactly my point. The answer depends on the prompt: if you don't ask it to check accessibility, it likely won't say anything about it. And you're right that even if you do, it might say something wrong; you cannot trust it. That is why you need to check that the answer is factually correct, and why in many cases using a chat LLM won't actually save you time. There are use cases for these systems, but IMO they should be used as a starting point; they're nowhere near good enough to produce reliable, usable output in a robust manner.

I disagree that it's all hallucination (not by the definition of hallucination, but that's semantics): most of the output of an LLM is factually correct. The problem is how much incorrect output can be tolerated without harm. Also, there's plenty of human-generated BS out there, yet we use the Internet because there is a good deal of good human-generated content on it. We shouldn't ban LLMs; we should use them appropriately. If you want to put a nail in the wall, use a hammer; if you want to crack an egg, do not use a hammer.

@nicolaromano @ArtHarg @alfiekohn

So you're proposing we just wait until all humans stop making basic assumptions in communication and then AI will be safe and ethical.

@alfiekohn
I have been saying for a while that LLMs are performing a mentalism act, and your back-and-forth has convinced me even more.
Even its description of how it created some of its faked responses feels akin to a stage magician's mentalism techniques, and it even replies with the word 'magic' when you ask how something was done.
The big difference is that a magician is an honest liar: you know you are being lied to from the outset, whereas with LLMs it is much more murky.

@alfiekohn

Oh. My. Gods. This is profoundly terrifying.