It seems like every day brings another story about a lawyer caught submitting a court filing that cites nonexistent cases fabricated by AI. Riana Pfefferkorn’s new analysis answers a question courts keep asking: who’s filing AI-tainted briefs? It turns out, it’s mostly solos and small firms, not Big Law, and half the time the hallucinated citations came from ChatGPT. These lawyers aren’t lazy; they’re drowning. AI tools promise relief from the grind but deliver new ways to screw up, and ethics rules and CLE requirements haven’t caught up. Until the tools get better, and until lawyers really understand what “hallucination” means in practice, we’ll keep seeing made-up cases in real filings.
TL;DR
⚖️ 114 U.S. cases analyzed
💼 90% from small or solo firms
🤖 ChatGPT led the hallucination pack
📚 Courts + bars need real AI literacy, not lip service
https://cyberlaw.stanford.edu/blog/2025/10/whos-submitting-ai-tainted-filings-in-court/
#LegalTech #AIethics #LawPractice #GenerativeAI