The widespread publishing of AI slop (and relatedly, even predating LLMs, the enshittification of Google search results) is a much more interesting discussion than most of LLM Discourse.
https://mastodon.green/@Tarnport/115679627597776698
Tarnport (@[email protected])

All the shrieking "If you don't like AI, don't use it, but quit trying to control others who do," is actually an ancient debate. It goes back to Hammurabi's Code and the Commandments of Ma'at: DO NOT POLLUTE THE COMMON WELL. It's the most ancient law we have. You can't pollute the river upstream and call it individual prerogative. Watch how fast you go down.

(It’s important to highlight the Google search results problem, and the related bots-on-Twitter and Fox News on TV problems. LLMs are just one more contributor to the slow but steady poisoning of what was briefly the high point of our civilization’s access to knowledge—so if you only stop LLMs, you have at best slightly slowed the erosion of the knowledge commons.)

And to be clear, this is a bubble. And there are scams. But there were scams and bubbles around railroads and the web too. This isn’t tulips, and if you insist on telling people it’s a tulip they’re going to tune you out.

https://social.coop/@luis_in_brief/115680503358736664

Luis Villa (@[email protected])

@[email protected] the externalities are real, the cons are real, and the bubble is real. But if your conversation starts from “welllll actualllly it isn’t useful” then people aren’t going to listen to you on any of the problems.

@luis_in_brief This sounds like a sensible "moderate" position but thus far my model is "it feels like it helps, but it doesn't actually help", and I have yet to see any empirical data that would contradict that. The reason people tune out critics when we say "it isn't useful" is that it *feels* useful. (More detail, obviously, at https://blog.glyph.im/2025/08/futzing-fraction.html )
@luis_in_brief There may even be a Dumbo's-feather effect where a chatbot allows for breaking the static friction of a stuck neurodivergent mind, and I am — it would not be an exaggeration to use the adjective "desperately" here — envious of that experience.
@luis_in_brief But while I want to be careful not to contradict your lived experience of having successfully had the sensation of a coding tutor and an EA with your chatbots, I can also say that ChatGPT, Claude, and Gemini have resoundingly failed every single experiment I have put them to, have almost uniformly wasted my time, and have just generally been attractive nuisances around the work I want to do.
@luis_in_brief I also yearn for the glorious 5 minutes of my career when I had an EA but a chatbot could not *remotely* perform the functions that I'd want from someone in such a role. So on that front, I am just curious. What work are you delegating to it and how?
@glyph the very TLDR is it’s a GTD coach. Slightly longer form: I use my existing todo manager as an input mechanism and data store, and wrap that in a thick layer of prompts that walk the LLM through daily and weekly GTD reviews. It has been hugely helpful in reducing task-related anxiety, and in prioritizing and shrinking my open backlog of tasks.
@glyph the regular data export from the existing todo tool, scripts to manage the prompts, and first drafts of the prompts themselves were pretty much all written by Claude Code. None of it is particularly publicly-shareable code, for a variety of reasons, but I use it multiple times a day.
@glyph it is absolutely not human-facing (other than me), so very much not an EA in that sense. But it has been very helpful (and fun to work on).
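(The actual scripts and prompts in the thread above are not public, so here is a minimal illustrative sketch of that kind of pipeline: take a regular export from a todo tool and render it into a GTD daily-review prompt for an LLM. The export schema, file contents, and template wording are all hypothetical stand-ins, not the setup Luis describes.)

```python
import json
from datetime import date

# Hypothetical shape of a todo-tool export; the real tool and its
# schema are not public, so this is an illustrative stand-in.
SAMPLE_EXPORT = json.dumps([
    {"title": "File expense report", "project": "Admin", "due": "2025-12-01"},
    {"title": "Draft Q1 roadmap", "project": "Planning", "due": None},
])

# Hypothetical prompt template for the "thick layer of prompts" idea:
# the LLM is framed as a GTD coach and walked through the open tasks.
DAILY_REVIEW_TEMPLATE = """You are a GTD coach. Today is {today}.
Walk me through a daily review of these open tasks:
{tasks}
Ask me, one task at a time, whether it is still relevant,
what the next concrete action is, and how urgent it feels."""

def build_daily_review_prompt(export_json: str, today: date) -> str:
    """Render an exported task list into a daily-review prompt string."""
    tasks = json.loads(export_json)
    lines = [
        f"- {t['title']} (project: {t['project']}, due: {t['due'] or 'none'})"
        for t in tasks
    ]
    return DAILY_REVIEW_TEMPLATE.format(
        today=today.isoformat(), tasks="\n".join(lines)
    )

prompt = build_daily_review_prompt(SAMPLE_EXPORT, date(2025, 12, 2))
print(prompt)
```

The resulting string would then be sent to whatever chat model is in use; keeping the todo tool as the data store and the prompt layer as plain scripts is what makes the setup non-agentic — the LLM only ever sees a rendered snapshot.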
@luis_in_brief The fact that the systems are so otherwise unethical makes it very difficult for me to find the fun in it. Once or twice I thought I'd found some really useful, substantive coding assistance, but then I double checked and it was just plagiarism; and I don't have time to do hbomberguy-style googling of exact quoted phrases from all its output to try to figure out what open source library I ought to be importing instead. It just feels kind of persistently gross.
@luis_in_brief I remain tepidly enthusiastic about the potential for local models, but the training story there is still distressingly murky so I'm just kind of waiting around for ollama to be able to offer me something I'm not going to feel squicked out by.
@luis_in_brief In any case, thanks for the data point. This is definitely part of an emerging pattern where there does seem to be some kind of use as a digital mirror or transcendently responsive rubber duck, which (cf. above, static friction of the neurodivergent mind) is not without value.

@glyph yeah the value of this use case is very much tied, though somewhat inadvertently, to my anxiety issues.

Ironically, even if I wanted to use it “agentically” (and have it update tasks and such for me) I can’t because the very-human todo list tool I use has an API that is terrible and borderline unusable.