@On_Preterition @henryfarrell Good summary of the debacle https://dair-community.social/@kendraserra/110441210244994168
Are you just catching up on the bonkers story about the lawyer using ChatGPT for federal court filings? This is a thread for you.
@henryfarrell What's perhaps not obvious from that picture is that every word in ChatGPT's output had a reasonable chance of coming out right or wrong (e.g. "Rocks" or "Cheese").
Once it picks a wrong word, it's more likely to continue with the incorrect logic than to go back and correct the mistake. Imagine a system that flipped a coin and then justified the answer based only on that answer.
The real issue here is that once a mistake is made, every future part of the answer is suspect.
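The compounding effect described above can be sketched numerically (my own toy illustration, not anything from the thread): if each generated token independently has some small chance of going wrong, and the model never backtracks, the probability that a whole answer stays on track decays geometrically with its length.

```python
# Toy model of compounding error in autoregressive generation.
# Assumption (for illustration only): each token independently has
# probability p_token_wrong of being wrong, and a single wrong token
# derails the rest of the answer because the model conditions on it.
def p_answer_ok(p_token_wrong: float, n_tokens: int) -> float:
    """Probability that an n-token answer contains no wrong token."""
    return (1 - p_token_wrong) ** n_tokens

# Even a 1% per-token error rate compounds quickly:
print(round(p_answer_ok(0.01, 100), 3))  # ~0.366 for a 100-token answer
print(p_answer_ok(0.01, 500) < 0.01)     # a 500-token answer is almost surely tainted
```

Real models aren't this simple (errors aren't independent, and sampling settings matter), but it shows why one early wrong "coin flip" poisons everything downstream.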
@henryfarrell This is an interesting take, but I think it doesn't hold, as we empirically don't yet understand well enough what we get when we enter certain prompts. It's also not clear that there are prompts that allow us to get the statistical summarization of what the LLM "knows" - without doing some work on the output ourselves…
Also, doesn't case law based on precedent mean that a theoretical average means nothing, and that you need specific, existing cases?
Still interesting!
Did ChatGPT generate this counterargument?
@henryfarrell If I were their lawyer I'd just go all in and argue that it doesn't matter that the case law was fake, all that matters is that the logic is consistent and persuasive.
If the cases are hypothetically possible, would the rulings make sense?
(I'm kidding of course, judges are not to be messed with)