why I don't use generative AI: https://ionathan.ch/2026/03/18/LLMs.html
@ionchy The argument that usage analytics support continued investment is a pretty convincing argument against even mere use

@wilbowma @ionchy two points I didn't see addressed explicitly in either blog post (yours or ionchy's), though I think both are related to ionchy's analytics argument:

  • scale. individual actions may not contribute much to real problems on their own, but if tons of people are using LLMs, that can substantively contribute to real problems. for example: individual resource consumption may be negligible, but at scale it is not; individually produced slop is probably not a big deal, but replacing the majority of human output with slop is; etc.

  • use is training. the early models were not as good at doing simple coding tasks, but new models are (seemingly) pretty good at those tasks. I have to imagine they're using interaction logs as feedback to improve the models, which is another avenue for giving the "AI" industry more power through mere use. (this is also exacerbated by scale.)

    I think this second point is also part of why LLM adoption at big companies is being forced on employees: the more the workers are made to use the homunculus, the better it can do their jobs, leading to more employee displacement and an improved profit margin.

    @pdarragh @wilbowma 2 is a good point, I hadn't considered that what you give to the LLM in return might become part of its training data (is this part of the usual ToS?)
    @ionchy @pdarragh @wilbowma it’s usually a setting that is on by default (although who knows whether they pay any mind to you turning it off)

    @ionchy

    sometimes when I copy medium-short snippets of code from StackOverflow, I’ll retype the whole thing out by hand instead, editing along the way.

    omg finally someone else who also does this

    @ionchy i like your section about intent. i have been thinking a lot lately about how and why it's such a minority position to care about *why* choices were made in code or proof or art, not just that the end result exists
    @chrisamaphone maybe you intended the comparison all along but it was Shardul at PLATEAU that drove home for me that this can be viewed as a “convincing v. explaining” thing in the sense of your truth-future-proof talk. @ionchy

    @simrob @chrisamaphone @ionchy hello I have been summoned

    I am so happy to see the beginnings of the public refusal dialogue, the development of shared vocabulary and metaphors

    since it is apparently the season for writing and posting genAI takes, here is mine that I wrote over winter: https://etaoin-shrdlu.xyz/writing/going-vegetarian.php

    Going vegetarian

    Out of ethical and sustainability concerns, I am making the personal decision to go “genAI-vegetarian”, i.e. to avoid consuming the products of generative AI. This essay goes into more detail about my choices and reasoning.

    shardulc
    @chrisamaphone @ionchy yea, llms break the chain of intentions. my sketch-summary is that LLMs are indistinguishable from malicious genies. https://felix.dognebula.com/art/malicious-genie.html

    If your AI agent doesn't need human-like rights, then it doesn't have human-like intentions. It might be 'creative', but it isn't *conscientiously* creative. And that makes it indistinguishable from a malicious genie.

    @ionchy There's a reason implementing the algorithm in the paper yourself is a tradition even when you're practically transliterating it!
    @ionchy ("reïnforces" - nice diaeresis!)
    @ionchy yay this is so relatable thank u for writing it up!
    @ionchy scrollbar-color: #FFE100 #202020; 🤯🤯🤯