Over a year ago, I posited that AI coding stuff isn't about coding or productivity. It's about some % of people who feel a stimulus-reward thing from using it, similar to how some people feel when gambling. It feels so overwhelmingly good to some % of people they don't even bother to measure if their AI stuff is actually doing anything useful, because of course it must be, because the feeling is so strong.

It seems more and more people are arriving at this idea lately, too.

But I've also realized that it seems to apply to any of the prompt-style AI things, not just coding. There is some kind of slot machine playing mania (sorta, not exactly) thing it triggers in some % of people. I'm certain of it now.

If anything, it makes me feel a bit less angry and more sad towards the people with this AI prompt-query compulsion. It feels closer to when you see someone with a gambling addiction stuck at a gambling machine.

@cancel this reflects my experience pretty strongly.

I've been pretty staunchly opposed to this wave of gen-AI since ChatGPT launched in 2022, and never intentionally touched it until three months ago, when I finally felt like I needed to spend at least a little bit of time with it to understand/prove what I was opposed to. it almost *immediately* triggered an addiction response (of the gambling category, as you pointed out), to the point where within a week I could barely sleep, and all I could think about was prompting. it explicitly felt like I needed to be using it 24/7, trying to figure out the right way to extract quality output from it, under this sudden manufactured feeling of urgency.

luckily, I got burnt out on it pretty "quickly" (roughly a month), which forced me to step back, and I had lived long enough to be able to identify what this cycle was. It was also tremendously helpful to have both a long-built critical perspective on the tech, which I had now tested, and a really high bar of personal work quality that I could use to categorize the output of these tools as "complete shit".

it's wild to me that as someone who was pretty publicly and vocally against the principle of the tech, this addiction loop still hit me at full force, on the very first prompt I ever fed it. for people without the life experience, critical lens, and body of high quality personal work to measure against, I can't imagine how many could possibly escape from the slot machine cycle. "if I can just figure out exactly how to word this prompt, it'll solve all my problems...". I wonder how many of those who do escape don't talk about it publicly out of shame (me, until this post).

the silver lining for me personally is that it did end up having some kind of positive effect on how I approach my work. reading through so much slop for a month re-lit a fire within me to be even more intentional and human in my work, whether through writing or code.

@jakintosh @cancel

Holy shit. Okay, that's terrifying.

Me, I have next to no susceptibility to gambling-- for one thing, I understand too much about math, the odds of winning are so low the whole thing strikes me as contemptibly absurd-- and for another, I'm a digital artist so gut-wrenched by the uncanny valley effect of putting other people's work through a meat grinder and regurgitating it into a shambling frankenitation of art that it makes me feel physically ill, so I haven't even touched it. I've just been sitting here mystified like WHY EVEN...

If that's the type of effect it's having on people who aren't wired like me... good gawd, we're in deep shit.

@[email protected] @[email protected] It's classic Skinnerian operant conditioning with intermittent (variable rate) rewards. You want whatever it's outputting to be good (code, text, image, etc). Sometimes it isn't but sometimes it is, and you can't usually understand why. When it is good, you experience the reward. The fact that the reward is intermittent and inscrutable makes the desire to repeat the behavior extremely strong. Skinner observed this with his pigeons and it's formed the basis of behavioral modification, including gambling machine design, ever since.
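The variable-rate schedule described above can be sketched as a toy simulation (an illustrative model only — the names and numbers here are assumptions, not anything from the thread). Each "prompt" independently pays off with some fixed probability, so the gaps between rewards are unpredictable, which is the schedule Skinner found most resistant to extinction:

```python
import random

def variable_ratio_rewards(num_prompts, hit_rate, seed=0):
    """Toy model of an intermittent (variable-rate) reward schedule.

    Each 'prompt' independently produces a good output with probability
    hit_rate; we record the gap (number of attempts) between rewards.
    The gaps are unpredictable, which is what makes the next attempt
    always feel worth making.
    """
    rng = random.Random(seed)
    gaps, since_last = [], 0
    for _ in range(num_prompts):
        since_last += 1
        if rng.random() < hit_rate:  # an unpredictable "good output"
            gaps.append(since_last)
            since_last = 0
    return gaps

gaps = variable_ratio_rewards(10_000, 0.2)
# The average gap hovers near 1/hit_rate, but individual gaps vary
# widely -- some rewards come immediately, some after long droughts.
```

The point of the sketch is only that the schedule is inscrutable from the inside: with a 20% hit rate the long-run average is knowable, but no single attempt tells you anything about the next one.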

@abucci

Agreed, and I think this makes good evolutionary sense. Normally an intermittent reward is a sign of a skill that one can master. It reminds me of when my then-infant nephew was learning to work a bottle. He was incredibly persistent in his trial and error. It makes sense that, as a species for which tool use is so fundamental, we'd be especially prone to this.

But we really aren't prepared for when the thing can't be mastered, where it's fundamentally unreliable. Especially when that's cloaked in distracting complexity.

@jakintosh @cancel

@jakintosh @cancel @williampietri wow. Thank you so much for having shared that. It's very important to document it - the addiction loop, the shitty outputs, and the fact that nobody is immune
@fanf42 @jakintosh @cancel @williampietri I don't know. Gambling doesn't do anything for me either (except annoy me) and I can very well stay away from GenAI. Probably don't have the addiction-prone personality or something.
@jakintosh @cancel
It's the ultimate Skinner Box - you never know if you'll get good or bad output from it, you have tons of controls to play with, but little to no way to measure the effect of those controls.

@jakintosh @cancel nothing *quite* this extreme has happened to me but I've become more afraid of it as I've noticed new thoughts, like, "I've tried to use AI to accelerate a project, wasted a lot of time with useless outputs, but then when I see friends claiming to use the same AI to do work, I think, gosh maybe I should give it another shot, maybe this time it'll work"

It's getting to the point where people I know and trust say "I used an LLM to do XYZ" and I just flat out don't believe them

@jakintosh @cancel like yes I believe they did that thing and yes I believe they were prompting an LLM as they did it but I no longer really believe the LLM performed any useful part of the task. it's stolen valor as a service
@jakintosh @cancel that is fascinating. The exact same thing happened to me too, just before Christmas. And it took me some time to figure out that I was fighting addiction. 😬
I'm pretty much the only one at my company who's clearly against generative A.I. My colleagues have been using it for months/years and my boss is always looking for ways to incorporate it into daily work to make us more efficient developers. 😩
His latest big idea: a Hackathon before Christmas to see how far a small team of skilled developers can get with genAI on a clearly defined side-project, loosely related to our regular products. I had never written a prompt before, but I agreed to at least try it out for two weeks. I always planned to put it aside after that, because I cannot use genAI in good conscience, knowing enough about all the problems it causes.
Nevertheless, just as you say, I wrote this one prompt on a Monday afternoon... and it spit out 90% of the code I ultimately handed in for the Hackathon. And the code was surprisingly well-written and well-tested, something that I can imagine myself actively maintaining for the foreseeable future. The documentation was much too verbose, but you can always throw that away. 😅
And for a few days after that, I wanted to throw everything at it, just to see what genAI would do with it. I had set clear boundaries for myself, so I only used it for the Hackathon and never for "real" work. But I had this strange, unexplainable urge to replace any internet search, any look into the docs, StackOverflow, etc. with a prompt.
But I want to believe that even without clear boundaries, I'd have stopped myself again when I started using the LLM for anything other than Python code. Using it for Ansible was a very sobering experience, showing off all its worst traits: hallucinating dependencies, broken code, slightly off code that breaks things in subtle ways that take forever to debug,...
I'm so happy to see other people talk about getting addicted to genAI. I did notice that effect, and my gut was trying to tell me what it was, and it's just very affirming to hear it from other people too. 🙂
@jakintosh @cancel this is a very interesting first-party account! I've seen comparisons of this particular feedback loop to gambling, but as someone who's pretty resistant to nearly every reward-hacking mechanic this society has to offer, including every drug i've tried (the only thing that does it for me--debatably--is open source labor) that's kind of staggering to consider. it adds an entire new dimension of ethical implications to AI boosterism, too...
@jakintosh @cancel
I saw that coming so I went in very cautiously, immediately saw worrying signs, stopped. It's too slot-machine-like, I can't risk letting it at my addiction brain.
@cancel this would connect a lot of dots in my opinion. I haven't used genAI, but given that it seems pretty clear it doesn't work as well as all the hype, I had to wonder what makes people use it anyway, and what makes some people use it a LOT.
@cancel your observations are confirmed in The AI Con. Good book.
@cancel @Xenograg Just one more prompt, and I’m sure to get it right!

@cancel

I am in Higher Education and I can see the exact same effects that you are describing. It's frightening, and, yes, it's sad. Your post is a very perceptive observation of what is happening.

#noAI

@cancel gambling that the next sentence pleading will result in code that actually works... makes it sound fun!
@cancel That's so interesting, I've never considered it that way. I don't personally find it an addicting activity in the slightest, but I suppose 'AI' does provide 24/7 access to an entity willing to entertain any of your ideas or thoughts in a respectful manner, which I can definitely see meeting some people's unmet needs in a toxic way.

@cancel Using copilot is like replacing an actual pilot with a blowup doll

#Copilot

@cancel Well, the problem is that the gamblers are the ones you have to work with.

I just went through a migration process on some internal software yesterday. The lack of clarity in people's mind is fascinating. It feels like one works with hallucinating mushrooms. They can't come up with clear, simple, structured answers to challenges.

Honestly, we are f***ed.

Or in other words: Abandon current #IT. Leave it to the mushrooms. And the authorities and tech bros.

@cancel Phrasing this as a gambling addiction makes so much sense to me and gives an actual explanation for the behaviour I have observed in some friends.

I really hope this bubble bursts soon, because really I want my friends back in actual reality.

@cancel I definitely spin to win with prompting, however it doesn't feel as addictive in a classical sense because if I didn't need it for work or something for a long time I know I would have no withdrawals. I think the romantic companions may be more risky
@darryl @cancel Why do you need it for work?
@cancel I hadn't thought about it this way before, but I can definitely see it, especially with generative AI "art." It's different every time, even when using the same prompt, so you never know what might come out, and there's an excitement you get when it generates something you like. I can definitely see people becoming addicted to that.

@cancel I think this is why these tools are designed to give you an answer, no matter what.

They'll tell you something is possible when it isn't. They'll give you *some* solution to your problem, no matter how half-baked or incomplete.

What they won't do is say 'I do not know', or 'no, this is not possible', or 'I can't do that without more information'.

They're designed to trigger an addictive response, because their makers want people to depend on them.

@rainynight65 @cancel The whole point of the largeness of the models is to be able to extrude some plausible sounding bullshit when there's nothing actually well correlated with the query.
@dalias @rainynight65 @cancel much like anything in search, optimization had the goal of driving engagement. From that, patterns emerged like optimizing for recall instead of precision, or doing away with in-session consistency in personalized timelines.

@mainec @dalias @rainynight65 @cancel
I think it's simpler than that. The models are trained on web pages, books, papers and so on, almost all of which provide answers and very few that say they don't know. We don't usually publish stuff when we fail to figure something out.

And the models follow that pattern when they complete the query. There's no intent after all. They are no more malicious than a regular expression. All they do is fill in the text that best matches the patterns they've been given.

@jannem @mainec @rainynight65 @cancel Without the largeness the bullshit would be a lot more obviously nonsensical and they could not have achieved this kind of resonance with people's delusion/addiction susceptibilities.
@jannem @mainec @dalias @rainynight65 @cancel "best match" is some impressive rhetorical sleight of hand. a regular expression engine does not require an entire data center to train and returns matches according to the user's goals, not the goals of google or microsoft or whoever else
@dalias @rainynight65 @cancel i would disagree. the point of the largeness is to make reproducibility and the scientific method impossible, while also justifying the DDOS on all civilian infrastructure, while also requiring massive financial assistance to create (allowing the financial sector to complete their conquest of computing)
@hipsterelectron @rainynight65 @cancel Ok I should have said one point not whole point.
@dalias @rainynight65 @cancel i think your point is (as noted in reply) exactly the rationale for the optimization process that maximizes next-word frequency

@rainynight65 @cancel it's called casino mechanics, they're built with the same principles slot machines are built on.

Right or wrong, you always pay a quarter and get an answer.

If the answer is right you win!!! (money from slot machines, time from LLMs).

But it's a gamble every time, a quarter whether you win or lose, every time, multiplied by millions of slot machines

...all encouraging gambling addiction. A bona fide disease. Android and iOS and Meta and X and tiktok all do it too.

@cancel it's no coincidence that the last decade has also seen a massive expansion of the gambling industry, accelerated by pandemic isolation and tech's current era of morally empty rudderless greed, preying on peoples' biochemistry and diverting our circuitry of hope towards addiction and upward distribution as the floorboards of capitalism rot away.
@cancel and they regularly argue that this is simply "entertainment", and point to the philosophical ambiguous zone between "sometimes good things happen randomly and you feel better" and skinner boxes deliberately engineered to siphon away human health and wealth and time as reasons that what they're doing is an immutable feature of society and not a trillion dollar engine of misery.
@cancel You know? I did a gig for a couple of months fine-tuning LLMs for this Facebook subsidiary, and could not help but notice this addictive attitude while tasking. In retrospect I hated it, and it stressed me out quite a bit during some projects I took part in, but I see your point that this tech might generate some addiction... Worrying... 🫤
@cancel Now that would be an interesting FMRI study
Being Addicted To Generative AI

Believe it or not, people can become addicted to generative AI. It is a serious topic and an emerging one. Here's the scoop.

Forbes
@jd @cancel that article feels like slop. Which would be peak irony.
@cancel intermittent reinforcement is best reinforcement

@cancel

This reminded me of a couple of videos I watched recently.

First, Adam Conover did a pretty good video on "ai" and the effects on mental health:

https://www.youtube.com/watch?v=fPW3B6v60nc

Another really good video on the topic, though very funny (and sad) was by
Eddy Burback:

https://www.youtube.com/watch?v=VRjgNgJms3Q

OpenAI Wants You To Go Insane

YouTube
@cancel It almost works for some tasks, but beyond summarization and similar pattern-matching tasks, it really doesn’t. The chance that it might work—especially for things you’re not good at or don’t have time for—is addictive. So you keep trying, even though you know you’ll quickly hit a wall, even with the simplest tasks.

@cancel I think it's also an issue of people feeling like they have to use generative AI in coding in order to keep up with the current landscape.

I'm currently relearning coding (I used to be an IT major in uni before dropping out) and there is a legit temptation to use ChatGPT or Copilot to speed up more of the tedious aspects (not just in coding but in project management overall). And that's kinda what makes them so hard to ignore.

@cancel I really liked this write up I stumbled on a while back on gambling and AI https://pivot-to-ai.com/2025/06/05/generative-ai-runs-on-gambling-addiction-just-one-more-prompt-bro/
Generative AI runs on gambling addiction — just one more prompt, bro!

You’ll have noticed how previously normal people start acting like addicts to their favourite generative AI and shout at you like you’re trying to take their cocaine away. Matthias Döpm…

Pivot to AI
@cancel I think it is as simple as this: it's a matter of dopamine, and checking your work doesn't give you dopamine. Nothing so nefarious or lazy as you describe.
@cancel In my head this is connected to how we're wired for shortcuts. We crave the calorie-dense fat and can wield cheating as a strategy.
@cancel That is a compelling insight.
@cancel except they're also running around shouting at everyone they have to use LLMs to code
@cancel I currently have a completely different problem: yesterday I tried to find an F# solution for serializing/deserializing JSON with Giraffe by googling - there are lots of answers, answers galore... none of them answered *anything*. I *had to* ask ChatGPT for help. This is disgusting. The internet is completely flooded with shit.
@cancel If the AI services can keep selling you ALMOST the idea you want, then they can keep you hanging for longer :~)