Regarding LLMs and summarisation: many of the more important summarisation benchmarks used to guide model training, as well as the most common use cases, are built around papers and news articles, both formats that already contain their own summaries (the abstract and the first paragraph, respectively).

This helps explain why, when these systems are used to summarise other forms of writing like legislation, regulation, or books, they often fail miserably, producing extremely incorrect statements.

Why so many people use a lossy, volatile, and random system to summarise texts that their authors have already explicitly summarised is beyond me, but that's a topic for another day.
@baldur This was the most puzzling thing about that thankfully failed attempt by the Wikimedia Foundation to implement AI summaries for Wikipedia articles!