https://pluralistic.net/2026/03/02/nonconsensual-slopping/

“a fatal flaw in the idea that we will increase our productivity by asking chatbots to summarize things we don't understand: by definition, if we don't understand a subject, then we won't be qualified to evaluate the summary, either.”

👌 @pluralistic

Pluralistic: No one wants to read your AI slop (02 Mar 2026)

@johnpaulflintoff I don't quite understand the criticism. In fact, it could apply to summaries in general. And yet, I find summaries very useful. It's not about fully understanding them, but rather that they contain a minimum amount of information so I can decide if I'm interested in the topic and whether to read the full text or not. That's all I ask of a summary. @pluralistic

@the_heruman
Agreed, I’m not hostile to the notion of summaries per se.
Not even against the idea of using AI to generate a summary.

I recommend the piece by @pluralistic that I linked to because its fuller argument makes a great deal of sense.

@the_heruman @johnpaulflintoff Because summaries made by experts are not wrong in the way that AI is wrong. And errors in summaries made FOR experts are detectable in ways that summaries made for laypeople are not.
@pluralistic Well, I thought you were talking about a recent controversy regarding the generation of summaries for scientific papers. I guess you're referring to summaries of corporate reports and things like that. In any case, translating and summarizing are two of the things LLMs do best, and I imagine they'll keep improving over time. That said, one does have to know how to use them (contextual information is essential). Moreover, I think they should include a clear label indicating how the summary was generated, so the recipient can decide how much trust to place in it...
@johnpaulflintoff

@the_heruman

I haven't read the article as I have limited time this morning, but I would say the problem comes in when you are using summaries to steer a complex process. That's higher stakes than, "Am I interested in this?"

@johnpaulflintoff @pluralistic

@johnpaulflintoff @pluralistic You only use an AI summary if you don't really care about the answer. Which raises the question: why get an LLM to summarise anything?

If you need to make an informed decision about something, you need to read it and understand it yourself.

You run the risk of making decisions based on hallucinations, with potentially catastrophic consequences.

@gavin57 @pluralistic
I guess it may be possible to use it if you don’t massively care about the quality of the output - but are *slightly* interested
@johnpaulflintoff @pluralistic For those with a slight interest in something, there are perfectly good online encyclopaedias for that. 😉

@gavin57 @johnpaulflintoff @pluralistic

This use case of AI is for people who took the “thinking for non-thinkers” courses in school.

@gavin57 @johnpaulflintoff @pluralistic i don't think people who use these things cared about being wrong in the first place.

@arclight On its face, the quoted argument seems to suggest that I can't improve my understanding of a subject by reading a human-written summary either. But sometimes reading a human-written summary does improve my understanding.

Does the linked article address this objection? If so I'll be interested to read it, but the quoted sentence doesn't seem promising.

@johnpaulflintoff @pluralistic Learning to read critically and question your assumptions and the claims of others are things you learn in those liberal terrorist training camps they call "college".

In the US you will get very little of that in the occupational training you receive as a child.

To see that you need to understand enough to know how to evaluate a claim, you have to first realize you should evaluate the claim before just believing it.

@johnpaulflintoff @pluralistic

This right here is my chief complaint with the whole "AI" "assistance" proposition. If I have to check its work thoroughly, how is that an improvement over just doing the work from the start? Even when dealing with other humans, unless the other person is really on their game, I nearly always find it easier/faster to do it over from scratch myself. I can't see any angle from which "AI" looks like an improvement.