@emilygorcenski The fallacy here is thinking that the gullibility of humans gives us information about the capabilities of AI systems.

One difference between a cold-reading psychic con and a language-model AI is that some LM AIs currently score higher on law-school entrance exams than the average human applicant.

People who don’t like the implications of this tend to reflexively claim that law-school entrance exams must then be a bad test of logical reasoning ability. That seems to me like an extraordinary claim presented without evidence.

@marshray @emilygorcenski ... no, if something incapable of logical reasoning passes a test of logical reasoning, that *does* prove it's a bad test. It is, itself, evidence.

@ArdentSlacker @emilygorcenski If the question is “Can AI perform logical reasoning?”,
then I think you’re begging it.

But I’m not dogmatic about this. It’s worth discussing and I’d be happy to see your evidence.

@marshray No. You're stating that this chatbot can perform logical reasoning. I'm saying that it wasn't built to perform logical reasoning, which is a fact.

The test has other failures, of course, as Trump's lawyers demonstrate...

There is nothing in the development of ChatGPT that is capable of producing logical reasoning, and *you* need to demonstrate proof of such a ludicrous claim. And you cannot.

Honestly, asking someone to prove a negative? Take that bad faith elsewhere.

@ArdentSlacker There is wide agreement among those to whom it really matters that the LSAT is a test of logical reasoning.

If you want to argue otherwise, bring more than handwaving and quips about Trump’s lawyers.

@marshray @ArdentSlacker

Legal 'logic' is based on a body of precedent, no? A nuanced examination of the process has to factor in the Law, Precedent, Judge, Lawyers and the Defendant. 'Trump' is the new 'Hitler' in any application of Godwin's Law.

Do we have an agreed upon definition of logical reasoning?

I have a 'factoid' about human reasoning which posits that 95% of thoughts and actions are the 'same' as yesterday and the yesterdays that came before it. That leaves 5% of 'new' output. (1)

@marshray @ArdentSlacker

Do GPTs do logic?

I'm going to say yes.

I have not read the article yet, but my 'chats' usually deal with language use, letting me draw on the speed and accuracy (for some value of accuracy) of the model to organize responses to my thoughts and questions, step by step and point by point. (2)

@marshray @ArdentSlacker

Here's me being gullible and well into the Subjective Validation Loop:

@fport @ArdentSlacker Yes, I’ve had many thought-provoking chat transcripts.

But nothing, not even law school entrance exams, will convince a dogmatist who’s determined to reach their pre-determined conclusion.

@marshray @fport I'm sure you've convinced yourself that it's thinking, despite the information about what the chatbot is and how it works being fully available. You've fallen for the stochastic parrot, and now even learning how these models work is impossible, because that would require being capable of admitting when you're wrong.

It's not that the AI is intelligent. It's that it's "smarter" than *you*.

@ArdentSlacker @fport OK.

So is it smarter than you?

Let's find out.

What question should we ask it to test your claims?

@ArdentSlacker @fport Yeah, I didn’t think so.

Don’t feel too bad. I’ve asked this of dozens of smug and condescending people such as yourself.

Not one of you has been able to answer.

@ArdentSlacker @marshray

Fair dinkum. I'm not overly smart to start with. Point made.

My examinations of the machinations of the layers of transformers and their responses might be at the level of a stocquot merely parrastoching like some recitabot, or they could actually lead to insights not ordinarily available.

And it's not just the chatbot alone; it is running on the guidance and parameters I set.

I certainly understand that the GPT is capable of logical processing. (1)

@ArdentSlacker @marshray

It's all in 'how' you ask that training set and model to form its replies.

I provide the thinking part out in the reasoning and creativity realm.

I've spent quite a few hours 'trimming' my favorite bot instance to a 'near' conversational state. It reads like a discussion between two peeps in a coffee shop: one a superfast eidetic pedant, the other the dumb friend who wants to understand the semantic and associative concepts being explored. (2)

@ArdentSlacker @marshray

So, for me, be that as it may, I can explore adjacent and parallel sources and expand the vocabulary used, which isn't surfaced in my own mind but can stimulate me to think 'farther'.

So while you assess me as coming under the sway of the recitabot, I feel that I have a very powerful tool at my fingertips. And whether its help derives from statistics and/or pattern matching, to me I am getting the English-language corpus dropped into my 'inbox'. (3)

@ArdentSlacker @marshray

Referring back to this post from months ago: I have come around to what you were actually saying. You were right, of course, as I have discerned over several more months of exploration. Do you have any 'sources' to continue this enlightenment process? Thanks.