https://pluralistic.net/2026/03/02/nonconsensual-slopping/

“a fatal flaw in the idea that we will increase our productivity by asking chatbots to summarize things we don't understand: by definition, if we don't understand a subject, then we won't be qualified to evaluate the summary, either.”

👌 @pluralistic

Pluralistic: No one wants to read your AI slop (02 Mar 2026)

@johnpaulflintoff I don't quite understand the criticism. In fact, it could apply to summaries in general, and yet I find summaries very useful. The point isn't to fully understand the subject, but that a summary contains enough information for me to decide whether the topic interests me and whether to read the full text. That's all I ask of a summary. @pluralistic
@the_heruman @johnpaulflintoff Because summaries made by experts are not wrong in the way that AI is wrong. And errors in summaries made FOR experts are detectable in ways that summaries made for laypeople are not.
@pluralistic Well, I thought you were talking about the recent controversy over generating summaries of scientific papers; I guess you mean summaries of corporate reports and the like. In any case, translating and summarizing are two of the things LLMs do best, and I imagine they'll keep improving over time. That said, one does have to know how to use them (contextual information is essential). Moreover, I think summaries should carry a clear label indicating how they were generated, so the recipient can decide how much trust to place in them... @johnpaulflintoff