@emilygorcenski The fallacy here is thinking that the gullibility of humans gives us information about the capabilities of AI systems.

One difference between a cold-reading psychic con and language model AI is that some LM AIs currently score higher on law school entrance exams than the average human applicant.

People who don’t like the implications of this tend to reflexively claim that the law school entrance exams must then be a bad test of logical reasoning ability. Which seems to me like an extraordinary claim presented without evidence.

@marshray @emilygorcenski ... no, if something incapable of logical reasoning passes a test of logical reasoning, that *does* prove it's a bad test. It is, itself, evidence.

@ArdentSlacker @emilygorcenski If the question is “Can AI perform logical reasoning?”,
then I think you’re begging it.

But I’m not dogmatic about this. It’s worth discussing and I’d be happy to see your evidence.

@marshray No. You're stating that this chatbot can perform logical reasoning. I'm saying that it wasn't built to perform logical reasoning, which is a fact.

The test has other failures, of course, as Trump's lawyers demonstrate...

There is nothing in the development of ChatGPT capable of producing logical reasoning, and *you* need to demonstrate proof of such a ludicrous claim. And cannot.

Honestly, asking someone to prove a negative? Take that bad faith elsewhere.

@ArdentSlacker There is wide agreement among those to whom it really matters that the LSAT is a test of logical reasoning.

If you want to argue otherwise, bring more than handwaving and quips about Trump’s lawyers.

@marshray @ArdentSlacker

Legal 'logic' is based on a body of precedent, no? A nuanced examination of the process has to factor in the Law, Precedent, Judge, Lawyers and the Defendant. 'Trump' is the new 'Hitler' in any application of Godwin's Law.

Do we have an agreed upon definition of logical reasoning?

I have a 'factoid' about human reasoning which posits that 95% of thoughts and actions are the 'same' as yesterday and the yesterdays that came before it. That leaves 5% of 'new' output. (1)

@marshray @ArdentSlacker

Do GPTs do logic?

I'm going to say yes.

I have not read the article yet, but my 'chats' usually deal with language use, allowing me to use the speed and accuracy (for some value of accuracy) of the model to organize responses to my thoughts and questions. Step by step and point by point. (2)

@marshray @ArdentSlacker

Here's me being gullible and well into the Subjective Validation Loop:

@fport @ArdentSlacker Yes, I’ve had many thought-provoking chat transcripts.

But nothing, not even law school entrance exams, will convince a dogmatist who’s determined to reach their pre-determined conclusion.

@marshray @fport I'm sure you've convinced yourself that it's thinking, despite the information about what the chatbot is and how it works being fully available. You've fallen for the stochastic parrot, and now even learning how they work is impossible, because you'd need to be capable of admitting when you're wrong.

It's not that the AI is intelligent. It's that it's "smarter" than *you*.

@ArdentSlacker @fport OK.

So is it smarter than you?

Let's find out.

What question should we ask it to test your claims?

@ArdentSlacker @fport Yeah, I didn’t think so.

Don’t feel too bad. I’ve asked this to dozens of smug and condescending people such as yourself.

Not one of you has been able to answer.