The counterargument in the ChatGPT lawyer debacle is that the AI's summarization of the precedent, being based on a statistical summary of the relevant legal debates, is a better Hayekian distillation of the common law tradition than any ordinary exercise in legal pedantry carried out by humans combing Westlaw. The hallucinated citations are a purer form of the caselaw than any merely existing cases in our sordid sublunar realm could possibly aspire to being.
@henryfarrell I mean common law is just the law judges made up along the way so having a machine make up new case law isn't that big of a leap in a certain sense.
@henryfarrell Sure JudgeJurryAndExecutionerGPT is very biased, but it represents society's biases better than any human judge and jury would.
@henryfarrell Missed the debacle, link?
Kendra Albert (@[email protected])

Are you just catching up on the bonkers story about the lawyer using ChatGPT for federal court filings? This is a thread for you.

Distributed AI Research Community
@reedmideke @henryfarrell NYT article was a laugh, but that thread really goes the extra mile in pointing out how at least one of the lawyers seems to have doubled down and provided AI-fabricated opinions when it was pointed out that the case citations appeared to be nonexistent. Absolute galaxy brain shit and as usual the coverup will be worse than the crime. Thanks for sharing.
A Man Sued Avianca Airline. His Lawyer Used ChatGPT.

A lawyer representing a man who sued an airline relied on artificial intelligence to help prepare a court filing. It did not go well.

The New York Times
@henryfarrell I'm guessing that's probably the greatest value of ChatGPT in the context of brief writing.
@henryfarrell Counter counter argument

@henryfarrell What's perhaps not obvious from that picture is that at every word, ChatGPT's output would have had a reasonable chance of saying the right or the wrong word (e.g. Rocks or Cheese).

Once it says the wrong one, it's more likely to continue with the incorrect logic than to go back and correct the mistake. Imagine a system that flipped a coin and then justified the answer based only on the answer.

The real issue here is that once a mistake is made, every future part of the answer is suspect.
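The error-compounding point above can be sketched with a toy simulation (this is an illustration of the dynamic, not how ChatGPT actually works; the function name and probabilities are made up for the example): each step conditions only on the prefix, so once a wrong token lands in the prefix, later steps mostly stay wrong rather than backtrack.

```python
import random

def toy_autoregressive(steps, p_first_error=0.2, p_recover=0.05, seed=0):
    """Toy sampler: each step emits a 'right' or 'wrong' token.

    While the prefix is clean, each step has a small chance of
    introducing an error. Once a wrong token is in the prefix, the
    sampler conditions on it, so it stays wrong except for a rare
    recovery (p_recover) -- it never rewinds to fix the prefix.
    """
    rng = random.Random(seed)
    tokens, wrong = [], False
    for _ in range(steps):
        if not wrong:
            wrong = rng.random() < p_first_error   # error creeps in
        else:
            wrong = rng.random() >= p_recover      # rarely recovers
        tokens.append("wrong" if wrong else "right")
    return tokens
```

With a small `p_recover`, a single early "wrong" token typically dominates the rest of the sequence, which is the coin-flip-then-justify behavior described above.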

@henryfarrell This is an interesting take, but I think it doesn’t hold, as we empirically don’t yet understand well enough what we get when we enter certain prompts. It’s also not clear that there are prompts that allow us to get the statistical summarization of what the LLM “knows” - without doing some work on the output ourselves…

Also, doesn’t precedent-based law mean that a theoretical average is meaningless, and that you need specific, existing cases?

Still interesting!

@henryfarrell
Sounds like this was generated by AI!
@henryfarrell @donmelton doesn’t seem any more made up than the things some members of the Supreme Court would come up with.
@henryfarrell stop channeling Samuel Alito.
@henryfarrell So, we're going to lose justice in favor of, um... justy-ness?
@henryfarrell The AI is the Platonic form of lawyer, then; its citations the very essence of case law, unsullied by actual existence in our material world.

@henryfarrell If I were their lawyer I'd just go all in and argue that it doesn't matter that the case law was fake, all that matters is that the logic is consistent and persuasive.
If the cases are hypothetically possible, would the rulings make sense?

(I'm kidding of course, judges are not to be messed with)

@henryfarrell You should pitch this to the NYT
@henryfarrell Let's all just surrender to whatever the average of the internet is on a given topic, and then we win by mass-generating content that supports our views for other models to learn on.
@henryfarrell Only if it were trained on case law, and only case law. Otherwise it's regurgitating the common opinion of any old asshole on the internet.
@henryfarrell side note: an LLM trained only on case law would have been called an "expert system" twenty years ago