I suspect LLMs reinforce the Gell-Mann amnesia effect. Experts who query LLMs about their fields of expertise will *quickly* realize how wrong the output can be, how quickly the models confabulate, and how eager they are to confirm the user's biases. Sometimes, replying "No, that's wrong, try again" can cause an LLM to generate a completely different, often opposite, answer to the same query, which would make no sense if the LLM had *actually* worked out a coherent answer on its own.
Asking an LLM to comment on a subject you know nothing about—or worse, know a little bit about—is a psychologically dangerous activity. Not only will it confirm your biases, it will do so in a way that *appears* objective and independent, using fallacies that lie just beyond your ability to detect. At best, you will be misled. At worst, you will begin spiraling down a path of conspiracy thinking.
Be extremely suspicious of answers that are especially satisfying; you might have just gaslit yourself.
#ai