Regarding LLMs and summarisation: many of the most important summarisation benchmarks used to guide model training, as well as the most common use cases, are built around papers and news articles, both formats that already contain their own summaries (the abstract and the first paragraph, respectively).

This helps explain why, when these systems are used to summarise other forms of writing such as legislation, regulation, or books, they often fail miserably, producing extremely incorrect statements.

Why so many people use a lossy, volatile, and random system to summarise texts that the authors have already explicitly summarised is beyond me, but that's a topic for another day.
@baldur This was the most puzzling thing behind that thankfully failed attempt by the Wikimedia Foundation to implement AI summaries for Wikipedia articles!
@baldur how the hell did we learn nothing from this https://abcnews.go.com...
Xerox Machines Change Documents After Scanning

Some of the company's Workcentre machines are altering numbers in documents.

ABC News

@baldur

Because people who like stuffing their brains with all sorts of top-level familiarity are convinced that they're getting deeper understanding from summaries, after conflating them with the experience of summarizing primary sources in their own minds. The cognitive process of summarizing something yourself is not the same as reading someone else's summary to gain understanding, but they think it totally is, after doing so much self-summarizing and knowing it's a slog.

Keen example: motherfucking dorks at UW in the brain science department creating podcast summaries of papers with AI so they can get a passive understanding payload on their way to work. My Dad was touting this as the apex of the technology's current utility, and I looked him right in the eye and asked if nerds love tools that reinforce their nerdiness.