The widespread publishing of AI slop (and relatedly, even predating LLMs, the enshittification of Google search results) is a much more interesting discussion than most of LLM Discourse.
https://mastodon.green/@Tarnport/115679627597776698
Tarnport (@[email protected])

All the shrieking "If you don't like AI, don't use it, but quit trying to control others who do," is actually an ancient debate. It goes back to Hammurabi's Code and the Commandments of Ma'at: DO NOT POLLUTE THE COMMON WELL. It's the most ancient law we have. You can't pollute the river upstream and call it individual prerogative. Watch how fast you go down.

(It’s important to highlight the Google search results problem, and the related bots-on-Twitter and Fox-News-on-TV problems. LLMs are just one more contributor to the slow but steady poisoning of what was briefly the high point of our civilization’s access to knowledge—so if you only stop LLMs, you have at best slightly slowed the knowledge-commons problem.)

And to be clear, this is a bubble. And there are scams. But there were scams and bubbles around railroads and the web too. This isn’t tulips, and if you insist on telling people it’s a tulip they’re going to tune you out.

https://social.coop/@luis_in_brief/115680503358736664

Luis Villa (@[email protected])

@[email protected] the externalities are real, the cons are real, and the bubble is real. But if your conversation starts from “welllll actualllly it isn’t useful” then people aren’t going to listen to you on any of the problems.

@luis_in_brief This sounds like a sensible "moderate" position but thus far my model is "it feels like it helps, but it doesn't actually help", and I have yet to see any empirical data that would contradict that. The reason people tune out critics when we say "it isn't useful" is that it *feels* useful. (More detail, obviously, at https://blog.glyph.im/2025/08/futzing-fraction.html )

@luis_in_brief There may even be a Dumbo's-feather effect where a chatbot allows for breaking the static friction of a stuck neurodivergent mind, and I am — it would not be an exaggeration to use the adjective "desperately" here — envious of that experience.
@luis_in_brief But while I want to be careful not to contradict your lived experience of chatbots successfully serving as a coding tutor and an EA, I can also say that ChatGPT, Claude, and Gemini have resoundingly failed every single experiment I have put them to, have almost uniformly wasted my time, and have just generally been attractive nuisances around the work I want to do.
@luis_in_brief I also yearn for the glorious 5 minutes of my career when I had an EA, but a chatbot could not *remotely* perform the functions that I'd want from someone in such a role. So on that front, I am just curious: what work are you delegating to it, and how?
@glyph the very TLDR is it’s a GTD coach. Slightly longer form: I use my existing todo manager as an input mechanism and data store, and wrap that in a thick layer of prompts that walk the LLM through daily and weekly GTD reviews. It has been hugely helpful in reducing task-related anxiety, and prioritizing and reducing my open backlog of tasks.
@glyph the regular data export from the existing todo tool, scripts to manage the prompts, and first drafts of the prompts themselves were pretty much all written by Claude Code. None of it is particularly publicly-shareable code, for a variety of reasons, but I use it multiple times a day.
@glyph it is absolutely not human-facing (other than me), so very much not an EA in that sense. But it has been very helpful (and fun to work on).
@luis_in_brief The fact that the systems are so otherwise unethical makes it very difficult for me to find the fun in it. Once or twice I thought I'd found some really useful, substantive coding assistance, but then I double-checked and it was just plagiarism; and I don't have time to do hbomberguy-style googling of exact quoted phrases from all its output to try to figure out what open source library I ought to be importing instead. It just feels kind of persistently gross.

@glyph this is probably a longer conversation than I have energy for tonight, but: I find LLM plagiarism a difference in degree, not kind, when compared to the faceless strip mining that has been central to the open ecosystem for a decade (or 3).

I’m not even sure the difference in degree is *negative*. There’s a real possibility that on net, LLMs may put us in a place where more people constructively create, modify, and reuse—and training comes to be seen as a very small price to pay for that.

@luis_in_brief this is definitely a place where I disagree, but mounting a robust defense of the previous status quo is beyond my capacity. If OpenAI didn’t themselves believe that they’re gonna make a trillion dollars off of selling our own creativity back to us, I would probably agree. Maybe after the bubble bursts I will.

@glyph ~every $2T+ company that has ever existed (except Aramco and maaaaaybe NVIDIA and Broadcom?) would not have reached that valuation without massively profiting from the extensive, mostly unpaid, almost completely uncredited use of the work of open source developers.

One can certainly draw analytical distinctions attempting to show that OpenAI’s statistical sampling of the same work is somehow really different and much worse but… meh?

@glyph like, I can look at the picture of all those CEOs sitting at Trump’s inauguration and tell you which have flagrantly violated the GPL, which have scrupulously complied, and which have statistically sampled, but that’s about 37,000th on the list of ethical problems in that picture.