Looking forward to reading this paper that won the Best Student Paper award at #Jurix2025: "Do LLMs Truly Understand When a Precedent Is Overruled?" by Li Zhang, Jaromir Savelka, and Kevin Ashley. https://doi.org/10.48550/arXiv.2510.20941

#Precedent #LegalNLP #LNLP #NLLP #LLM #LegalAI #LawAndAI


Do LLMs Truly Understand When a Precedent Is Overruled?

Large language models (LLMs) with extended context windows show promise for complex legal reasoning tasks, yet their ability to understand long legal documents remains insufficiently evaluated. Developing long-context benchmarks that capture realistic, high-stakes tasks remains a significant challenge in the field, as most existing evaluations rely on simplified synthetic tasks that fail to represent the complexity of real-world document understanding. Overruling relationships are foundational to common-law doctrine and commonly found in judicial opinions. They provide a focused and important testbed for long-document legal understanding that closely resembles what legal professionals actually do. We present an assessment of state-of-the-art LLMs on identifying overruling relationships from U.S. Supreme Court cases using a dataset of 236 case pairs. Our evaluation reveals three critical limitations: (1) era sensitivity -- the models show degraded performance on historical cases compared to modern ones, revealing fundamental temporal bias in their training; (2) shallow reasoning -- models rely on shallow logical heuristics rather than deep legal comprehension; and (3) context-dependent reasoning failures -- models produce temporally impossible relationships in complex open-ended tasks despite maintaining basic temporal awareness in simple contexts. Our work contributes a benchmark that addresses the critical gap in realistic long-context evaluation, providing an environment that mirrors the complexity and stakes of actual legal reasoning tasks.


🎉 We’re welcoming Sabine Wehnert to TrustHLT (RC Trust, UA Ruhr)!

She develops practical Legal NLP: extracting textbook knowledge, linking it to statutes & case law, and testing retrieval for performance, bias robustness & explainability—so legal AI is reliable and justifiable in practice.

👉 What capability should trustworthy legal AI deliver next?

#TrustworthyAI #LegalNLP #InformationRetrieval #ExplainableAI #KnowledgeGraphs #LegalTech #NLP #Research #UARuhr

Photo: Foto Fuchs Magdeburg

I am at the NLLP workshop today for an excellent series of papers on #legalNLP #legaltech #NLLP. You can follow online via the stream: https://www.youtube.com/watch?v=cdHE7u9vfSk

I will be sharing my thoughts on the future of the field in the concluding panel (SPOILER: I am going to share stories about working with lawyers) #NLProc #EMNLP2022

NLLP Workshop @ EMNLP 2022


#Introduction

Hi all 👋, I am a first year PhD student in #NLProc at the University of Mannheim working on #ScholarlyNLP / #ScholarlyDocumentProcessing and #ComputationalSocialScience.

🔍 My work involves identifying survey variable mentions in scientific texts #svident

💡 I am also interested in #InformationRetrieval, #EntityLinking, #LegalNLP, #Explainability, #Fairness, and #Robustness.