A woman sues her insurance company for terminating her disability benefits. They reach a settlement and agree that the suit will be dismissed with prejudice.

She decides she doesn't like the settlement and asks her lawyers to reopen the case. They say they can't: it was dismissed, and in the settlement she agreed not to reopen it.

She asks ChatGPT if her attorneys are lying to her. It says they are. She fires them and continues pro se, advised by ChatGPT.

ChatGPT generates legal arguments for reopening the case, plus 21 more motions, a subpoena, and eight other notices and statements, all of which she files.

The court denies her motion to reopen the case.

Advised by ChatGPT, she files a new suit against the insurance company and submits 44 more motions, memoranda, etc., which include citations to nonexistent cases.

Now the insurance company has sued OpenAI for tortious interference with their settlement contract.

🍿

https://storage.courtlistener.com/recap/gov.uscourts.ilnd.496515/gov.uscourts.ilnd.496515.1.0_1.pdf

@mjd TBH I do not think OpenAI should be responsible. They're just providing a fancy random text generator to the public, and it's outright impossible to teach a random text generator _not_ to output a specific kind of text: whatever you do, there is a way around it.

The woman should pay all costs, as per the usual "vexatious filings" or "frivolous lawsuits" standards.

Plus, the law in her state against practicing law without a license starts with "No person shall...". ChatGPT isn't a person.

@divVerent @mjd This is simplistic to the point of being false. Long before we had LLMs, we had Clippy, which was smart enough to say “it looks like you’re writing a memo.” OpenAI and its counterparts can unquestionably add a “it looks like you’re seeking legal advice” detector to their products. They already, supposedly, try to detect whether their users are attempting self-harm. LLMs evolved from classification software, so this kind of thing is in their roots.
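For the sake of illustration, here is a minimal sketch of what such an intent detector could look like. Everything in it is hypothetical: the training examples, the threshold, and the feature choices are invented, and a production system would train on a much larger labeled corpus (or use a dedicated classification model) rather than a toy TF-IDF pipeline.

```python
# Minimal sketch of a "seeking legal advice" intent detector.
# All training examples and the threshold are invented for
# illustration; a real system would use a large labeled corpus.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny hypothetical training set: 1 = seeking legal advice, 0 = not.
examples = [
    ("Are my attorneys lying to me about reopening my case?", 1),
    ("Draft a motion to vacate a dismissal with prejudice", 1),
    ("Can I sue my insurer after signing a settlement?", 1),
    ("What's a good recipe for banana bread?", 0),
    ("Summarize this meeting transcript", 0),
    ("Explain how TCP handshakes work", 0),
]
texts, labels = zip(*examples)

# TF-IDF bag-of-words features feeding a logistic-regression classifier.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(texts, labels)

def looks_like_legal_advice(prompt: str, threshold: float = 0.5) -> bool:
    """Return True if the prompt probably asks for legal advice."""
    return clf.predict_proba([prompt])[0][1] >= threshold

if __name__ == "__main__":
    print(looks_like_legal_advice("How do I reopen my dismissed lawsuit?"))
```

A detector like this would not stop a determined user, but it could trigger a disclaimer or a refusal, the same way the self-harm detection reportedly works today.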