@emilygorcenski The fallacy here is thinking that the gullibility of humans gives us information about the capabilities of AI systems.

One difference between a cold-reading psychic con and language model AI is that some LM AIs currently score higher on law school entrance exams than the average human applicant.

People who don’t like the implications of this tend to reflexively claim that the law school entrance exams must therefore be a bad test of logical reasoning ability. Which seems to me like an extraordinary claim presented without evidence.

@marshray @emilygorcenski ... no, if something incapable of logical reasoning passes a test of logical reasoning, that *does* prove it's a bad test. It is, itself, evidence.

@ArdentSlacker @emilygorcenski If the question is “Can AI perform logical reasoning?”,
then I think you’re begging it.

But I’m not dogmatic about this. It’s worth discussing and I’d be happy to see your evidence.

@marshray No. You're stating that this chatbot can perform logical reasoning. I'm saying that it wasn't built to perform logical reasoning, which is a fact.

The test has other failures, of course, as Trump's lawyers demonstrate...

There is nothing in the development of ChatGPT that is capable of producing anything that performs logical reasoning, and *you* need to demonstrate proof of such a ludicrous claim. And cannot.

Honestly, asking someone to prove a negative? Take that bad faith elsewhere.

@ArdentSlacker There is wide agreement among those to whom it really matters that the LSAT is a test of logical reasoning.

If you want to argue otherwise, bring more than handwaving and quips about Trump’s lawyers.

@marshray @ArdentSlacker

Legal 'logic' is based on a body of precedent, no? A nuanced examination of the process has to factor in the Law, Precedent, Judge, Lawyers and the Defendant. 'Trump' is the new 'Hitler' in any application of Godwin's Law.

Do we have an agreed upon definition of logical reasoning?

I have a 'factoid' about human reasoning which posits that 95% of thoughts and actions are the 'same' as yesterday and the yesterdays that came before it. That leaves 5% of 'new' output. (1)

@marshray @ArdentSlacker

Do GPTs do logic?

I'm going to say yes.

I have not read the article yet but my 'chats' usually deal with language use and allowing me to access the speed and accuracy (for some value of accuracy) of the model to organize responses to my thoughts and questions. Step by step and point by point. (2)

@marshray @ArdentSlacker

Here's me being gullible and well into the Subjective Validation Loop:

@fport @ArdentSlacker Yes, I’ve had many thought-provoking chat transcripts.

But nothing, not even law school entrance exams, will convince a dogmatist who’s determined to reach their pre-determined conclusion.

@marshray @ArdentSlacker

I actually agree with many points in the article. It's the POV that counts. I use little 'ai' because it isn't a full-blown reasoning engine. Two things have shaped the 21st century for me: one was the Amiga - if you don't know it, there is a 12-part series on Ars Technica - and the other is my lifetime of science fiction reading. Possibilities are always a happy circumstance for me.

I want a super giggles. What it should be by now.