What makes LLMs work isn't deep neural networks or attention mechanisms or vector databases or anything like that.
What makes LLMs work is our tendency to see faces on toast.
It's called the ELIZA effect, and we've known about it since 1966: https://en.wikipedia.org/wiki/ELIZA_effect
@Infrapink @jasongorman @tymwol
I think it's also related to the #PeterPrinciple... the idea that people get promoted to their level of incompetence.
The chat part of #ChatGPT is really important because it lets us correct the LLM's output and give it another chance to completely fool us. By the time we're satisfied, the output is beyond our present ability to detect that it is BS.
By "present ability" I mean we might not be bothered to check the output, or we might genuinely believe it's correct.