I have been hesitating to say this but the pattern is now so consistent I just have to share the observation: LLM users don't just behave like addicts in general, or even like gambling addicts in particular. They specifically behave like kratom addicts. "Sure, it can be dangerous. Sure, it has risks. But I'm not like those other users. I can handle it. I have a system. It really helps me be productive. It helps with my ADHD so much."
As with kratom addicts, there is even a period of time when they're correct, so it's hard to challenge. The *first* time a person with executive function challenges uses kratom, maybe even the first few months, it really *does* improve their mood, their executive function, etc. But then the secondary cumulative effects start to gradually erode their cognitive abilities so slowly they don't notice.
I'm still open to being wrong, and there are plenty of people who still exhibit critical judgement in other areas despite my disagreements with them on LLM use. Kratom has a much more straightforward biochemical mechanism, which we know is bad for specific and impossible-to-avoid reasons. Maybe there really are safe techniques for LLM use, and I sure hope we figure out what they are. But way, way too many tech leaders have started using these tools and then had their brains publicly cooked.
@glyph I just keep coming back to that LLMs put our brains directly on the inner loop of an optimization algorithm — that's already true to some degree with advertising and social media engagement algorithms, but LLMs tighten that loop even more, and we don't know what that does to brains!
@xgranade looks like we're gonna find out
@glyph Maybe! Or maybe it'll be like leaded gasoline where we never really are able to trace back which awful things were due to that source of lead and which things weren't because it all gets mixed up in the "heavy metal in head, things bad now" bucket.
@xgranade definitely going to make a killing when I put all this low-linear-algebra steel back on the market
@glyph @xgranade The Internet Archive's offline backups from pre-LLM-content days. Pristine bits.
@glyph
We didn't even get the fun fuck around part.
@xgranade

Makes me think of people who get caught by "lovebombing" repeatedly.

@xgranade @glyph

@clew @xgranade I mean with the GPT-5 / r/MyBoyfriendIsAI debacle (which is, I suppose, still ongoing) we saw that sometimes it literally is just _exactly_ love bombing. But there is a pretty significant difference in degree (if not, necessarily, in kind?) with telling the robot to write your CSS for you, rather than telling it to validate your feelings

hey, I'll take whichever validation I get /jk

@glyph @xgranade

@xgranade
What does the "inner loop of an optimization algorithm" mean? Can you expand on that?
@glyph

@fnohe @glyph With advertising, a company is presumably trying to make numbers go up by intervening in a process (in the causal inference sense) that includes your brain. They're generally constrained by the limited choice of hypothetical interventions, though, each of which has to be written or made by a person.

For social media, that constraint can be short-circuited dramatically by selecting existing posts to show you.

@fnohe @glyph
For an LLM chatbot, they don't even need that. New interventions can be emitted programmatically, and your "engagement" with the chatbot measured as a result of those interventions.

We don't really know what that does to brains, what effectively letting an LLM fuzz our personalities looks like as a mass psychological experiment.
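To make that "inner loop" concrete, here is a minimal, purely illustrative sketch (all names, variants, and probabilities are invented for the example): an epsilon-greedy bandit that repeatedly emits an intervention, measures "engagement" as its reward, and updates its estimates, with the simulated person sitting inside the loop as the thing being optimized against.

```python
import random

def engagement_loop(click_probs, steps=5000, eps=0.1, seed=0):
    """Epsilon-greedy bandit over message 'variants'.

    click_probs stands in for the person: the chance each variant
    gets a reaction. The optimizer never models the person directly;
    it just emits interventions and measures the resulting engagement.
    """
    rng = random.Random(seed)
    n = len(click_probs)
    counts = [0] * n      # times each variant was shown
    values = [0.0] * n    # running mean engagement per variant
    for _ in range(steps):
        if rng.random() < eps:
            arm = rng.randrange(n)                       # explore: try a new intervention
        else:
            arm = max(range(n), key=values.__getitem__)  # exploit: best variant so far
        reward = 1.0 if rng.random() < click_probs[arm] else 0.0  # measured engagement
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean update
    return counts, values
```

With, say, `click_probs=[0.1, 0.3, 0.2]`, the loop quickly concentrates on whichever variant the person reacts to most, which is the whole point: the person's responses are the objective being climbed.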

@xgranade @glyph sorry, but our brains have been in an optimization loop for thousands of years already. We optimize for pattern recognition and prediction of our environment in order to survive.
@condret @glyph I mean, that's incredibly reductive? Evolution is a far cry from advertisers and slopbros running uncontrolled experiments on our brains.

@condret

Yeah, and that's exactly why it's important to be careful about what your brain is optimising _for_! They're good at it!

@glyph @xgranade

@glyph This feels about right, and with this, the next trick is building a culture of healing and support for when those who have fallen prey to this addiction are ready for change.
@mttaggart unfortunately I have no concept of what "rock bottom" for an LLM user looks like. Step 0 is going to have to be that society stops massively rewarding them with prestige and huge piles of cash first
@glyph Absolutely, but even in that we can make out the general shape of the thing. We know this current model is not economically sustainable for any party. For users, the result will be either inability to access models, or paying cripplingly high prices to do so. Option 1 will expose their inability to function without the models, and option 2 will impose the kinds of costs that look like rock bottom in other addictions.
@mttaggart my suspicion is that when this happens we are going to find out that there is a huge predisposition component. there will be people who say "ah well. guess I'm a little rusty writing unit tests now, but time to get back at it" and we will have people who will go to sleep crying tears of frustration for the rest of their lives as they struggle to reclaim the feeling of being predictably productive again
@glyph Sure, although that first group may be said to be users, but not addicts, if that's the case. My suspicion is that time of use will be a factor there.

@glyph @mttaggart this is why I don't play MMOs or gachas and I don't touch LLMs and I mask in public.

I. Am. Vulnerable. To. These. Things. They. Are. Dangerous.

@mttaggart @glyph If there is such an unwinding I think users that can't afford premium service providers will fall back to free/subsidized providers and tools that run on-device. A whole spectrum rather than a binary have / have not.
@glyph @mttaggart unexpected and sudden price jacking might do it.
@mttaggart @glyph
not exactly on point, but close to it regarding "culture of healing", I'd say; it speaks of "recovering prompt writers". Don't know if you read this marvelous, lengthy piece: https://sightlessscribbles.com/the-colonization-of-confidence/ ?

@glyph The addictive behavior isn't new, I've flagged it before. There is also a reason the meetup is called Claude Code Anonymous. What puzzles me is how dismissive people still are of LLMs, despite the mounting evidence to the contrary. I thought at this point we would be past that.
@mitsuhiko [citation needed]
@mitsuhiko my assertion is that there's no evidence they're useful. There's LOADS of evidence that people SUBJECTIVELY FEEL that they are useful, but that is not the same thing. I subjectively feel that they are destructive and waste time. If you want to proceed past this disagreement you are going to need to bring methodologically credible evidence.

@glyph I mean, I don't know what to tell you. Are you still really doubting that these things are useful? I've written so many times about this now, are you dismissing it? I can point you to code that I've written over the last seven days that in terms of complexity and utility, is way beyond what we've been able to push out over Christmas. (eg: https://github.com/mitsuhiko/tankgame which is public)

Like, how can you doubt this? It just boggles my mind.

@mitsuhiko amazing. you plagiarized a game-jam game in about the amount of time that a game jam usually takes to run. truly our society will be revolutionized
@glyph Please tell me what is plagiarized and also no I wouldn't be able to do it without an LLM over Christmas while also working on actual work. I just couldn't have done it. You might be able to. I can't. And that's a pretty big difference.
@mitsuhiko re: plagiarism: of course I can't tell you exactly what was plagiarized here. it's *extremely* hard to even track provenance of what an LLM's 'inspiration' was, let alone to determine specifically if it made a sufficiently exact replication of training data that direct copyright litigation would be feasible. you don't know who contributed the training data that made this work possible. in my opinion, it's inherently plagiarism.
@mitsuhiko re: you "couldn't have done it", sure, maybe. that's subjective! which is the exact thing that I said would not move the argument forward. so you're just performing an argument from incredulity here. here's a counterpoint: I asked an LLM to help me with a data structure problem and wasted about a week on useless garbage output. We are now at net zero utility between the two of us, Q.E.D.
@mitsuhiko this is the definition of sample bias (which I did my level best to explain in exhaustive detail in https://blog.glyph.im/2025/08/futzing-fraction.html ). It wastes time sometimes, it saves time sometimes, it helps with learning sometimes, it helps with anti-learning misinformation and incorrect conceptual models sometimes. Is it a net benefit or not? I don't think it is, but I can't prove it! It's kinda incumbent upon boosters at this point, given all the info we now have on its risks!

@glyph You’re arguing against a strange strawman. On the one hand, you claim this is useless; on the other, when I point out that it’s useful to me and show concrete output that has been genuinely valuable, you dismiss it as something else entirely, and apparently plagiarism.

I get the impression that this is upsetting to you, or that you're simply uncomfortable with people using it. What puzzles me is the complete disregard for the evidence being presented, because at this point it doesn't seem grounded in reality.

I think there's nothing I could tell you that would convince you that this is useful to me.

@mitsuhiko I find it upsetting because you're strolling right past all the points I'm making, ignoring the parameters I tried to set on the discussion, and embodying the EXACT THING that I pointed out in the top post. I said that LLM users say "sure it has problems, but I can handle them". And you are responding to that by saying that you've pointed out the addictive tendencies but you still use it because you see benefits. That's the thing that I was saying! That's my worry!
@mitsuhiko As far as the strawman, let me try to explain again. If I have said it's "useless" (a word I try not to use, but I might slip up here and there—in this discussion, I describe it as *having produced* useless output for me, which is just a literal thing that happened, not a description of the model overall) what I am referring to is the *overall cost/benefit* not necessarily being positive.
@mitsuhiko I didn't find your specific example particularly impressive but let's ignore that. Pretend that it's great. How are you measuring that *against* the failed starts, the disinformation, the papering over boilerplate, the repetition due to small context windows, all of the *well known and extremely widely discussed* problems that this technology has? How do you know that it is *overall* saving time? If it saves *you, personally* time, how are you measuring that against social harms?
@glyph a lot of things that are deeply fun have those properties. Computer games, even going to gym can be that way. It’s a very exciting time and it’s deeply enjoyable to play with these things. While simultaneously being useful. I’m not sure how I can break that to you.

@mitsuhiko now you're just doing this dril tweet https://en.wikiquote.org/wiki/Dril#:~:text=the%20wise%20man%20bowed%20his%20head%20solemnly

justifying your addiction with moral relativism and an appeal to a benefit that I do not think is, on net, good.

I think that as a society we've got our arms around gaming & the gym; there's plenty of data about how addictive those things are. (And also about how "computer games" is a pretty big bucket, where you can find a tremendous amount of gamblification right now, which is just as bad as LLMs, if not worse.)


@mitsuhiko one of the reasons I find this so upsetting is that your argument here is so obviously missing the point that it makes me scared that these things can so terribly damage your metacognition that this seems like reasonable things to say. *I* still want to experiment with them to develop a better understanding, and this kind of public rhetoric makes me feel like I might be poisoning my own brain to do so!

@mitsuhiko I could *easily* accept an argument like "we all have to make decisions under uncertainty and *in my experience*, accounting for subjective distortion as best I can, there has been a big net benefit. we're going to have to agree to disagree until someone does a more comprehensive study; I'll gather more data on my own use in the meanwhile"

but your insistence that I recognize these anecdotal examples (which I *already acknowledged repeatedly*) as *proof* of net benefit is scary

@glyph @mitsuhiko I don't want to ruin your game, as it seems you both consented to it. Flaming for flaming's sake can be fun. I just want to point out some things for other readers:

Maybe the tank game is functional, and of game jam quality. I can believe that, anyone can learn to write working code.

What I'm more interested in is the hard part of software development: Is this maintainable? Can you make a 2.0 based on it? Can you turn it into a commercial quality game? Can you fix the user crashes and bugs when they come in? Will you be able to make a DLC? Make a good API for modders?

Even if the LLM is a time saver now, are you producing technical debt that will slow you down later?

@dragonfi @glyph yes. It’s maintainable.

@mitsuhiko @glyph In hopes that I can foster a better debate culture:

This statement comes off as extremely vague and seemingly ignores all the issues I raised. Remember, we can only consider evidence you supply; we can't read your mind.

In this case, I see that your repo is 3 days old, with only 133 commits. Linking a multi-year project with thousands of commits would strengthen your argument.

@dragonfi @glyph AI hasn't existed for multiple years, but as an example, my friend Peter has a repository he contributes to that has well over 100,000 lines of AI-generated code (https://github.com/clawdbot/clawdbot and dependencies). My company's two repositories are both beyond 100,000 lines of code, all AI-maintained. I'm contributing with AI to my own Rust projects like MiniJinja, which are all older than AI.
@mitsuhiko @glyph Thanks, this looks more substantial, although the repository was only created in November, will be interesting to check on it after a year.
@dragonfi @glyph Also, to put this repository in context: despite being ~3 days old, this project eclipses in complexity repositories I would have created over the course of multiple weeks, if not months.

@mitsuhiko @glyph Complexity is exactly where my question lies: software development is about managing (and minimizing) complexity in the long term. Will you be able to do it with your current workflow?

I think it is quite okay to not have an answer and give it a try. (This is how science and invention works, after all.) Just don't claim that it is maintainable before you've spent months, or maybe a year, maintaining it.

@glyph My offer still stands to have a debate on this on a video call!

@glyph I never want to cast shade on addicts for being addicted, addictions are fucking awful.

I absolutely will cast shade on addicts *or anyone else* for insisting I should be addicted too, and for transforming all of society around the idea that my being addicted is a good thing.

@xgranade to this point I have had kratom users — mostly indirectly, I am not particularly close with any — suggest that I try it because it is "more effective" and "easier to get" than prescription ADHD meds. And I'd definitely be lying if I didn't say I feel a *strong* pull towards believing that. It would be very nice to solve all my problems with a pill or a prompt
@xgranade I should clarify that other ADHD meds are a great thing and I have even had good personal experience with some, and they are in fact way too hard to get. I didn’t mean to dismiss them with the “pill” comment (and it was a poor choice of words given that kratom rarely comes in pill form). the toxic allure of something like kratom is the promise of something easy which is actually a poison, I don’t want to lump that in with the reductive/wrong idea “psychoactive meds are bad”.