A woman sues her insurance company for terminating her disability benefits. They reach a settlement and agree that the suit will be dismissed with prejudice.

She decides she doesn't like the settlement and asks her lawyers to reopen the case. They say they can't: it was dismissed, and in the settlement she agreed not to reopen the case.

She asks ChatGPT if her attorneys are lying to her. It says they are. She fires them and continues pro se, advised by ChatGPT.

ChatGPT generates legal arguments for reopening the case, which she files, along with 21 more motions, a subpoena, and eight other notices and statements.

The court denies her motion to reopen the case.

Advised by ChatGPT, she files a new suit against the insurance company and submits 44 more motions, memoranda, etc., which include citations to nonexistent cases.

Now the insurance company has sued OpenAI for tortious interference with their settlement contract.

🍿

https://storage.courtlistener.com/recap/gov.uscourts.ilnd.496515/gov.uscourts.ilnd.496515.1.0_1.pdf

@mjd TBH I do not think OpenAI should be responsible. They're just providing a fancy random text generator to the public. And it's outright impossible to teach a random text generator to _not_ output a specific kind of text, as whatever you do, there is a way around it.

The woman should pay all costs, as per the usual "vexatious filings" or "frivolous lawsuits" standards.

Plus, the law in her state against practicing law without a license starts with "No person shall...". ChatGPT isn't a person.

@divVerent @mjd ChatGPT is not a person, which is why ChatGPT is not being sued. OpenAI sells a tool that gave her legal advice, and they certainly didn't say anywhere that it's actually just a "fancy random text generator".

@jonoleth @mjd Pretty sure it's common knowledge that LLMs are nothing but random text generators.

OpenAI is a company, not a person. From what I understand, the law banning unlicensed legal advice bans _persons_ and gives them a penalty for doing so anyway.

But OpenAI, being a company, cannot commit crimes (after all, how do you put a company in prison?). Only the employees can. So the question is which concrete employee committed a crime there. (Yes, some say companies _can_ commit crimes, but then they solve the problem by making an employee / owner / ... actually criminally liable - but then those are the ones who have committed the crime.)

The question is rather: have any employees of OpenAI committed a crime there? If any employee at OpenAI _knew_ that it gives legal advice, and did not implement any countermeasures, then that employee has committed a crime. That's the case no matter how the "random text generator" works.

If someone tries to get legal advice out of a magic 8-ball, AND the company producing the 8-ball does not implement any countermeasures (such as writing in the manual that the responses it gives cannot be used as legal advice), then they can potentially be held liable. Except that in the case of a mechanical device that works strikingly like a die, it may not be necessary to put such a disclaimer ;)

@divVerent @jonoleth If you're aware of any specific Illinois caselaw that's on point here, I'd be interested to hear about it. But if you're just a nonlawyer making stuff up about what you imagine the law to be, please leave me out of the discussion.

@mjd @jonoleth I am not even American. If in your country machines and companies are "persons" and have human rights that have priority over the human rights of _humans_, then your whole country is wrong. What's next, voting rights in federal elections for corporations? Second Amendment for AIs?

But yeah, that might indeed be the case.

In my country it is "societas delinquere non potest". A company _cannot_ be the defendant in a criminal case - only the people actually performing the actions can.

But yeah, done here. Let's see what broken new case law will come from Trumpistan.

@divVerent @mjd @jonoleth

"Pretty sure it's common knowledge that LLMs are nothing but random text generators."

Among us? Yes. Among the rest of folks? No, it is not well known at all; most laypeople I talk to took the hype at face value.

@TeflonTrout
@divVerent @mjd @jonoleth And that's not how the product is marketed.

Either hold OpenAI liable as though the product is what they claim it is, or hold them liable for fraudulently advertising it as such.

@divVerent

"Pretty sure it's common knowledge that LLMs are nothing but random text generators."

Absolutely not. Maybe in tech circles, but the rest of the world has no clue whatsoever how LLMs work. And OpenAI is more than happy to keep it that way.

"OpenAI is a company, not a person."

For legal purposes, most countries treat companies as distinct legal entities, not just the US. Still, this is pretty off-topic.

@jonoleth @divVerent @mjd

Wait, what?
They *sell* this shit? And charge money for it?

Where the holy cat turds do they find clients? On the Internet?

(No, I've never tried to use an AI.)

@WellsiteGeo Corporate clients, for automated systems like customer service, data analysis, internal support, surveillance, code generation, etc. Most of them don't work very well, but they look like they do, so people keep paying.

There are also private individuals with their own subscriptions who use LLMs for any number of recreational or professional things, but I doubt they're where the real money is.