Yessss people are seeing the light about #LLM like #ChatGPT

For the past couple of months, I’ve been working on an idea that I think explains the mechanism of this intelligence illusion.

I now believe that there is even less intelligence and reasoning in these LLMs than I thought before.

Many of the proposed use cases now look like borderline fraudulent pseudoscience to me.

https://softwarecrisis.dev/letters/llmentalist/

The LLMentalist Effect: how chat-based Large Language Models rep…

The new era of tech seems to be built on superstitious behaviour

Out of the Software Crisis
@amydentata Is this an issue unique to LLMs specifically?

@amydentata Yeah, the thing I've been telling people for a while is that the hype is honestly overblown in all directions.

LLMs aren't some sort of revolutionary, revelatory technology - and shouldn't be used for anything life-critical or important...

But at the same time? They're essentially a text-based magic 8-ball: a slight upgrade on their predecessors, Markov chain generators, which were used for many of the same purposes (entertainment on the good end; spam, filler text, and business copy on the bad). So it's not like they'll end the world either - unless someone believes in them enough to ask them "should I drop the bomb"
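For anyone who hasn't seen one: here's a minimal sketch of the kind of Markov chain text generator being compared to. All names and the toy corpus are illustrative. It just records which word followed which in some training text, then samples a random observed successor at each step - no understanding, only observed word-to-word transitions:

```python
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words that followed it in the corpus."""
    chain = defaultdict(list)
    words = text.split()
    for current, nxt in zip(words, words[1:]):
        chain[current].append(nxt)
    return chain

def generate(chain, start, length=10, seed=None):
    """Walk the chain, sampling a random observed successor at each step."""
    rng = random.Random(seed)
    word = start
    out = [word]
    for _ in range(length - 1):
        followers = chain.get(word)
        if not followers:
            break  # dead end: this word never had a successor in the corpus
        word = rng.choice(followers)
        out.append(word)
    return " ".join(out)

# Toy corpus, purely for demonstration
corpus = "the cat sat on the mat and the cat ran from the dog"
chain = build_chain(corpus)
print(generate(chain, "the", length=6, seed=0))
```

An LLM's next-token sampling is vastly more sophisticated than this bigram lookup, but the basic shape - emit the next token based on statistics of what tended to follow in the training data - is the same family of trick.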

@amydentata If only people would think deeper about AI. I believe even people in the tech world massively overestimate what AI can do rn. Like they really think we're a year away from AGI just because ChatGPT can webscrape, make pictures based on old styles and tell a frog from a cat. Like we don't even understand wtf consciousness is. We don't even have an agreed-upon definition. I also think it's because some techy people literally don't look into fields other than their own.

@amydentata I think you'd like this video: https://www.youtube.com/watch?v=9dNVmPepATM&t=3s

I don't agree with everything he says in every video but he's been my favorite YT channel for a minute.

Chat GPT and the Paradoxes of Our Times


@amydentata I was initially excited by the title of the article but then disappointed by the contents.

I see real-world examples every day of LLMs doing tasks that we previously thought required some intelligence (such as coding, customer support, etc.).

This forces us as humans to take an uncomfortable look at ourselves.

This article seems to react to that discomfort by minimizing what we're seeing instead of reckoning with the reality we're about to face.

@amydentata This whole time, I've been bemused by people surprised that a model that is trained to reproduce informal conversation comes up with empty statements.

It feels like a lot of AI folks underestimate the prevalence of phatic expressions in conversation.

@amydentata I hate the fact that whenever I tell someone that I work in ML, they either think I'm making the singularity or that it's some sort of crypto fad. It's just pattern matching guys, anybody who actually works in the field knows its limitations.

The real threat isn't AGI, it's prediction models trained on unprocessed, biased and stolen data. Companies like Google and Microsoft know this, which is why they try to steer the debate towards science fiction, which is harder to regulate.

@amydentata Yes, LLMs are overhyped right now, and way more limited than perceived by the general public, but I still think generative AI is a revolutionary technology with many real-world use cases.

@amydentata this is a fantastic article, thanks!

I can see small but concrete use cases for an LLM. It's been helpful for simple coding assistance, and it's useful for writing boilerplate text or as an assistant when editing text.

But yes, for literally any other task, the analogy of a psychic hotline is a brilliant one. It should never be used by anyone who doesn’t have domain experience to identify hallucinations and errors.