I suspect LLMs reinforce the Gell-Mann amnesia effect. Experts who query LLMs about their fields of expertise will *quickly* realize how wrong their output can be, how quick they are to confabulate, and how eager they are to confirm one's biases. Sometimes, replying "No, that's wrong, try again" can cause an LLM to generate a completely different (and often opposite) answer to the same query, which makes no sense if the LLM had *actually* worked out an independently coherent answer.
Asking an LLM to comment on a subject you know nothing about (or worse, know a little bit about) is a psychologically dangerous activity. Not only will it confirm your biases, it will do so in a way that *appears* objective and independent, using fallacies that lie just beyond your ability to discern them. At best, you will be misled. At worst, you will begin spiraling down a path of conspiracy thinking.
Be extremely suspicious of answers that are especially satisfying; you might have just gaslit yourself.
#ai