What makes LLMs work isn't deep neural networks or attention mechanisms or vector databases or anything like that.

What makes LLMs work is our tendency to see faces on toast.

@jasongorman jokes aside, this is a very good metaphor
@tymwol I'm not even sure it's a metaphor 🙂

@jasongorman @tymwol

It's called the ELIZA effect, and we've known about it since 1966: https://en.wikipedia.org/wiki/ELIZA_effect

@Infrapink @jasongorman @tymwol

I think it's also related to the #PeterPrinciple... the idea that people rise to their level of incompetence.

The chat part of #ChatGPT is really important because it lets us correct the LLM's output and give it another chance to fool us completely. When we're satisfied, the output has risen above our present ability to detect that it is BS.

By "present ability" I mean we might not feel bothered to check the output, or we genuinely might think it's correct.

@pete @Infrapink @jasongorman @tymwol It should also be noted that the people most in love with the bullshit-generating machine often make a living generating bullshit themselves.