The widespread publishing of AI slop (and relatedly, even predating LLMs, the enshittification of Google search results) is a much more interesting discussion than most of LLM Discourse.
https://mastodon.green/@Tarnport/115679627597776698
Tarnport (@[email protected])

All the shrieking "If you don't like AI, don't use it, but quit trying to control others who do," is actually an ancient debate. It goes back to Hammurabi's Code and the Commandments of Ma'at: DO NOT POLLUTE THE COMMON WELL. It's the most ancient law we have. You can't pollute the river upstream and call it individual prerogative. Watch how fast you go down.

(It’s important to highlight the Google search results problem, and the related bots-on-Twitter and Fox News on TV problems. LLMs are just one more contributor to the slow but steady poisoning of what was briefly the high point of our civilization’s access to knowledge—so if you only stop LLMs, you have at best slightly slowed the knowledge commons problem.)

And to be clear, this is a bubble. And there are scams. But there were scams and bubbles around railroads and the web too. This isn’t tulips, and if you insist on telling people it’s a tulip they’re going to tune you out.

https://social.coop/@luis_in_brief/115680503358736664

Luis Villa (@[email protected])

@[email protected] the externalities are real, the cons are real, and the bubble is real. But if your conversation starts from “welllll actualllly it isn’t useful” then people aren’t going to listen to you on any of the problems.

Related: I don’t always agree with @pluralistic but this is leagues better than any other AI bubble criticism you’ll read today—long but absolutely worth a read and worth grappling with. His focus on labor power and industry intermediaries, rather than individual workers, is really important.
https://fedi.simonwillison.net/@simon/115680616717668184
Simon Willison (@[email protected])

I thoroughly recommend reading all of Cory Doctorow's recent speech on AI skepticism, it's crammed with new arguments and interesting new ways of thinking about these problems https://pluralistic.net/2025/12/05/pop-that-bubble/#u-washington


And I need to sit with the “reverse centaur” analogy, a lot. He’s 100% correct that Silicon Valley’s plan is to sell the vision of centaurs while actually creating a reserve army of labour, cowed and ready to serve when needed as reverse centaurs.

And as someone who is a pretty happy (and real) centaur right now… I need to wrestle with that.

Some other readings this morning: Robin Sloan is whimsical here, as is his wont. He’s “ambivalent, in the sense of having many thoughts and feelings at once”. So read lightly.

But the key thought to hold alongside Cory’s “reverse centaur”: if everything “melts into code”—what does this tell us about who/what will become centaurized?

[Robin hints at another branch to explore: “seeing like a state” has been always essentially about seeing like tabular data. What now?]

https://www.robinsloan.com/lab/all-that-is-solid/

All that is solid melts into code

More computer, rather than more human.


And from @ethanz: have we literally instantiated Gramscian hegemony by encoding most knowledge into a single Thing? I think the answer is importantly “no”, because hegemony to me has always had an important component that lives in people’s heads; LLMs can’t encode that component and will only somewhat influence it. But it’s still an important argument.

(If none of this made sense… read Ethan’s piece, he explains what it is and why it matters.)

https://ethanzuckerman.com/2025/12/05/gramscis-nightmare-ai-platform-power-and-the-automation-of-cultural-hegemony/

Gramsci's Nightmare: AI, Platform Power and the Automation of Cultural Hegemony - Ethan Zuckerman

Large language models lock values into place, making it hard to challenge the cultural hegemony of a particular form of western culture

@luis_in_brief @pluralistic totally second this, been sharing it round - feeling grateful for the articulation
@luis_in_brief @pluralistic Thank you for sharing ideas to pop the AI bubble, the sooner the better. #aibubble
@luis_in_brief This sounds like a sensible "moderate" position but thus far my model is "it feels like it helps, but it doesn't actually help", and I have yet to see any empirical data that would contradict that. The reason people tune out critics when we say "it isn't useful" is that it *feels* useful. (More detail, obviously, at https://blog.glyph.im/2025/08/futzing-fraction.html )
The Futzing Fraction

At least some of your time with genAI will be spent just kind of… futzing with it.

@luis_in_brief There may even be a dumbo's-feather effect where a chatbot allows for breaking the static friction of a stuck neurodivergent mind, and I am — it would not be an exaggeration to use the adjective "desperately" here — envious of that experience.
@luis_in_brief But while I want to be careful not to contradict your lived experience of having successfully had the sensation of a coding tutor and an EA with your chatbots, I can also say that ChatGPT, Claude, and Gemini have resoundingly failed every single experiment I have put them to, almost uniformly wasted my time and just generally been attractive nuisances around the work I want to do.
@luis_in_brief I also yearn for the glorious 5 minutes of my career when I had an EA but a chatbot could not *remotely* perform the functions that I'd want from someone in such a role. So on that front, I am just curious. What work are you delegating to it and how?
@glyph the very TLDR is it’s a GTD coach. Slightly longer form: I use my existing todo manager as an input mechanism and data store, and wrap that in a thick layer of prompts that walk the LLM through daily and weekly GTD reviews. It has been hugely helpful in reducing task-related anxiety, and prioritizing and reducing my open backlog of tasks.
@glyph the regular data export from the existing todo tool, scripts to manage the prompts, and first drafts of the prompts themselves were pretty much all written by Claude Code. None of it is particularly publicly-shareable code, for a variety of reasons, but I use it multiple times a day.
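(To make the shape of this setup concrete: a minimal sketch of the kind of wrapper described above. Every name here—the export format, the template, the field names—is invented for illustration; the actual tool, scripts, and prompts are not public, and the call out to the LLM itself is omitted.)

```python
import json

# Hypothetical sketch of a GTD-coach wrapper: load a todo-manager export
# and fold it into a daily-review prompt for an LLM. All names and the
# export schema are invented for illustration.

DAILY_REVIEW_TEMPLATE = """You are a GTD coach. Here are my open tasks:

{tasks}

Walk me through a daily review: flag anything overdue, suggest today's
top three next actions, and ask clarifying questions about vague items."""

def format_tasks(export: list[dict]) -> str:
    """Render each exported task as a one-line bullet."""
    lines = []
    for task in export:
        due = f" (due {task['due']})" if task.get("due") else ""
        lines.append(f"- [{task.get('project', 'inbox')}] {task['title']}{due}")
    return "\n".join(lines)

def daily_review_prompt(export: list[dict]) -> str:
    """Build the full review prompt; sending it to an LLM is left out."""
    return DAILY_REVIEW_TEMPLATE.format(tasks=format_tasks(export))

if __name__ == "__main__":
    sample = [
        {"title": "Renew passport", "project": "admin", "due": "2025-12-20"},
        {"title": "Draft trip itinerary"},
    ]
    print(daily_review_prompt(sample))
```

The design point is that the todo tool stays the source of truth and input mechanism; the prompt layer only reads the export, so the LLM never writes back to the task store.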
@luis_in_brief Interesting. That is almost perfectly inside-out for me; when I had access to an EA, I already had a pretty good, prioritized list of tasks, and I had (the moral equivalent at the time, of) tags in my task list, and I would slice off any stuff with 'admin' tags and ask them to just go off and act as my agent. "I need to be in Manhattan from Oct 5-9, please book me travel on the internal system, no redeyes." "here are 6 crumpled receipts, please file an expense report"
@glyph it is absolutely not human-facing (other than me), so very much not an EA in that sense. But it has been very helpful (and fun to work on).
@luis_in_brief The fact that the systems are so otherwise unethical makes it very difficult for me to find the fun in it. Once or twice I thought I'd found some really useful, substantive coding assistance, but then I double checked and it was just plagiarism; and I don't have time to do hbomberguy-style googling exact quoted phrases of all its output to try to figure out what open source library I ought to be importing instead. It just feels kind of persistently gross.
@luis_in_brief I remain tepidly enthusiastic about the potential for local models, but the training story there is still distressingly murky so I'm just kind of waiting around for ollama to be able to offer me something I'm not going to feel squicked out by.
@luis_in_brief In any case, thanks for the data point. This is definitely part of an emerging pattern where there does seem to be some kind of use as a digital mirror or transcendently responsive rubber duck, which (c.f. above, static friction of the neurodivergent mind) is not without value.

@glyph yeah the value of this use case is very much tied, though somewhat inadvertently, to my anxiety issues.

Ironically, even if I wanted to use it “agentically” (and have it update tasks and such for me) I can’t because the very-human todo list tool I use has an API that is terrible and borderline unusable.

@glyph this is probably a longer conversation than I have energy for tonight, but: I find LLM plagiarism a difference in degree, not kind, when compared to the faceless strip mining that has been central to the open ecosystem for a decade (or 3).

I’m not even sure the difference in degree is *negative*. There’s a real possibility that on net, LLMs may put us in a place where more people constructively create, modify, and reuse—and training comes to be seen as a very small price to pay for that.

@luis_in_brief this is definitely a place where I disagree, but mounting a robust defense of the previous status quo is beyond my capacity. If OpenAI didn’t themselves believe that they’re gonna make a trillion dollars off of selling our own creativity back to us, I would probably agree. maybe after the bubble bursts I will.

@glyph ~every $2T+ company that has ever existed (except Aramco and maaaaaybe NVIDIA and Broadcom?) would not have reached that valuation without massively profiting from the extensive, mostly unpaid, almost completely uncredited use of the work of open source developers.

One can certainly draw analytical distinctions attempting to show that OpenAI’s statistical sampling of the same work is somehow really different and much worse but… meh?

@glyph like, I can look at the picture of all those CEOs sitting at Trump’s inauguration and tell you which have flagrantly violated the GPL, which have scrupulously complied, and which have statistically sampled, but that’s about 37,000th on the list of ethical problems in that picture

@luis_in_brief @glyph It's not 'just' open-source code, though, right?

The models "work" because content has been plundered from anything and everything in between.

On the software side, I can see discussions trying to dissect the meaning of "open" and whether LLMs are undermining or propagating the very concept... I think there may be good points on either side of that argument (maybe)...

But without all the other stuff that has clearly been plundered from authors, writers, reporters, screenwriters, and so on... these LLM bots would be unable to simulate communication with us in any coherent way.

I've seen arguments stating that putting something on the open web means that it is there for the taking... for example, by virtue of blogging, I am consenting to the scrapers and would-be models of tomorrow pillaging my words as they please...

I find this terribly misaligned. It's like saying that by virtue of going outside, I give permission for anyone to take pictures of me and profit off of them in any way that they please.

To me, non-consensual scraping is a blatant and vulgar disregard for me as a person.

@luis_in_brief agreed, for me all generative slop is just the endgame of a decade-long effort to poison search results for either commercial or political gain.
Without that groundwork I don't think gen AI would be as relatively popular.
@peteriskrisjanis I don’t think that’s right. It can be both true that LLMs are very useful for many people in many use cases, and that publishing LLM slop is very bad for the commons. And we have to be able to hold both of those thoughts in our head at the same time to have a useful discussion about them.
@luis_in_brief "useful" is very stretched as a term here. People like using LLMs; whether that usefulness is worth the cost is a very different discussion.

@peteriskrisjanis (shrug) I’m writing code that is very useful to me and my family, after not writing code for two decades; using an LLM as a secretary at work (a privilege I have not had in a decade+) and being much more productive as a result; and using LLMs as a travel planner to make family trips more fun. And I’m happily paying hundreds of dollars a month for that capability.

If you can’t see or acknowledge the utility, persuading people on anything else is not going to go well.

@peteriskrisjanis the externalities are real, the cons are real, and the bubble is real. But if your conversation starts from “welllll actualllly it isn’t useful” then people aren’t going to listen to you on any of the problems.
@luis_in_brief honest question, because you seem like an ethical guy (and I try not to bait or argue with folks on the internet): how do you square the benefits you and your family get for the hundreds per month you contribute to the societal cost of the real externalities, cons, and bubble? does your usage feel inconsequential to the larger issues you acknowledge?

@trs This is a fair question and a difficult one. I have to confront it every time I put a piece of plastic into the trash, watch a streaming movie, eat factory-farmed meat, or drive my car.

My entire job depends on the giant data centers that people are suddenly so worried about, and it is not clear to me why these data centers are different from, or less harmful than, the ones that build the AI models. How does an AI query compare with, say, a Google search?

It's similarly not clear to me that data centers are worse in any way than any other industrial activity.

I am not trying to minimize the issues you raise. Just the opposite. But the reality is that my life in the 21st century depends on a giant economic machine that has enormous externalities, many of which are invisible to me. I can only try to deal with these as best I can and in a practical order.

Top of my list is to stop eating meat and to stop flying in airplanes. I am working on these.

At the other end of the scale of practicality, cement manufacture is a major contributor to carbon emissions, but I don't see any way to stop consuming cement or even any value in trying to do so.

I find it very implausible that the AI products are anywhere near the top of this list.

@trs first, thanks for assuming good faith! That’s rare and appreciated.

Second… it’s a very hard question that I am still struggling with. And deserves a longer answer than I can give today.

@luis_in_brief totally fair, I get that. I also have no claim to deserving an answer from you at all! but I'd be curious to hear/read your thinking on that at some point if you do get it down.

@luis_in_brief also, if it's not evident, the whole reason I'm asking is because I struggle with the same question myself.

I can't currently square them for myself and so have taken largely an abstinence-based (e.g. no fun) approach. it helps that it doesn't really feel like I'm missing out on anything, but I know that of course I am.