The thing I actually wanted to say about AI today, before the whole world jumped the shark yet again.

Anyway, @zkat warned us. Talking about whether or not AI "works" is a trap, and always was. The ethical component is all that matters, and from that analysis alone, the onus is on all of us to reject and oppose AI.

Getting mired in whether or not it "works" is bad praxis in several ways: it de-emphasizes the ethics, it opens the door to goalpost-shifting about what it means for AI to "work," and it makes it easier for the boosters to Gish gallop or overwhelm with jargon.

Sure enough, that's where we are now. I'm as guilty of that as anyone, to be sure. But like... all weekend, there have been so many new claims about AI "working," and every one takes a lot of effort to read critically and debunk. None of them change the ethical calculus.

@xgranade reminds me of this post I read recently about how arguing against a claim involves implicitly accepting its framing:

https://anthonymoser.github.io/writing/rhetoric/framing/kirby/2026/01/28/the-kirby-frame.html

@technomancy @xgranade Given how net-unhelpful the "AI safety" people have been in this struggle, I believe "our AI works great and is so powerful it might just kill us all" is a framing we must not accept even a tiny bit. That requires pushing back on what are essentially technical claims (albeit wild and ungrounded ones) about "how well it works." And again, because of the stakes and the harms already done, those claims become an inseparable part of the moral argument.