@emilygorcenski The fallacy here is thinking that the gullibility of humans gives us information about the capabilities of AI systems.
One difference between a cold-reading psychic con and a language model is that some current language models score higher on law school entrance exams than the average human applicant.
People who don’t like the implications of this tend to reflexively claim that law school entrance exams must therefore be a bad test of logical reasoning ability. That seems to me like an extraordinary claim presented without evidence.

