"To measure the prevalence and effect of this kind of cognitive surrender to AI, the researchers performed a number of studies based on Cognitive Reflection Tests. These tests are designed to elicit incorrect answers from participants that default to “intuitive” (System 1) thought processes, but to be relatively simple to answer for those who use more “deliberative” (System 2) thought processes.
For their experiments, the researchers provided participants with optional access to an LLM chatbot that had been modified to randomly provide inaccurate answers to the CRT questions about half the time (and accurate answers the other half). The researchers hypothesized that users who frequently consulted the chatbot would let those incorrect answers “override intuitive and deliberative processes,” hurting their overall performance and highlighting the dangers of cognitive surrender.
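
There’s nothing exotic about the manipulation itself. A minimal sketch of how such a “faulty half the time” chatbot wrapper might work (all names here are hypothetical illustrations, not from the study):

import random

# Hypothetical sketch only; the study's actual chatbot setup is not shown here.
# Each CRT item maps to its correct answer and a plausible "intuitive" wrong one.
CRT_ITEMS = {
    "bat_and_ball": ("$0.05", "$0.10"),
}

def faulty_chatbot(item: str, p_faulty: float = 0.5) -> str:
    """Answer a CRT item, giving the wrong answer with probability p_faulty."""
    correct, intuitive_wrong = CRT_ITEMS[item]
    return intuitive_wrong if random.random() < p_faulty else correct

From the participant’s side, consulting a wrapper like this looks no different from consulting an ordinary chatbot, which is what lets the randomized right/wrong answers isolate how much users lean on the AI rather than on their own reasoning.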
In one study, an experimental group with access to this modified AI consulted it for help with about 50 percent of the presented CRT problems. When the AI was accurate, those AI users accepted its reasoning about 93 percent of the time. When the AI was randomly “faulty,” though, users still accepted its reasoning 80 percent of the time, a lower but still strikingly high rate, showing that the mere presence of the AI frequently “displaced internal reasoning,” according to the researchers.
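
Taken together, those figures suggest cognitive surrender affected a sizable share of all problems, not just a few edge cases. A back-of-envelope calculation (assuming, as a simplification, that the roughly 50 percent consultation rate and the 50/50 fault rate were independent and uniform across items):

p_consult = 0.50        # participants consulted the AI on ~50% of CRT items
p_faulty = 0.50         # consulted answers were wrong ~half the time
p_accept_faulty = 0.80  # faulty AI reasoning was accepted 80% of the time

# Share of ALL presented items ending in an accepted wrong AI answer:
p_accepted_wrong = p_consult * p_faulty * p_accept_faulty
print(f"{p_accepted_wrong:.0%}")  # -> 20%

In other words, under those simplifying assumptions, roughly one in five answers across the whole test would be a faulty AI answer that the participant simply waved through.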
Unsurprisingly, the AI-using experimental group did much better than the “brain-only” control group when the AI provided accurate answers, and much worse when it was inaccurate.


