Stubsack: weekly thread for sneers not worth an entire post, week ending 12th April 2026

https://awful.systems/post/7852082


Want to wade into the sandy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid. Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret. Any awful.systems sub may be subsneered in this subthread, techtakes or no. If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

> The post Xitter web has spawned so many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be)
>
> Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

New Yorker article on Sam Altman dropped. Aaron Swartz apparently called him a sociopath. The article itself also had what looked like an animated AI-generated image of Altman, so here is the archive.is link (if you can get the latter to load, I was having trouble).

“New interviews and closely guarded documents shed light on the persistent doubts about the head of OpenAI.”

Man, this one is a weird read. On one hand, I think they’re entirely too credulous of the “AI Future” narrative at the heart of all of this. Especially in the opening, they don’t highlight how the industry is increasingly facing criticism and questions about the bubble, and they only pay lip service to how ridiculous all the existential-risk AI safety talk sounds (should be is). Nor do they spend any ink discussing the actual problems with this technology that those concerns and that narrative help sweep under the rug. For all that they criticize and question Saltman himself, this is still, imo, standard industry critihype, and I’m deeply frustrated to see it still get the platform it does.

But at the same time, I do think that it’s easy to lose sight of the rich variety of greedy assholes and sheltered narcissists that thrive at this level of wealth and power. Like, I wholly believe that Altman is less of a freak than some of his contemporaries while still being an absolute goddamn snake, and I hope that this is part of a sea change in how these people get talked about on a broader level, though I kinda doubt it.

I see what you’re saying, but I think that’s a bit much to expect from a relatively mainstream and (I hate to say it, but it applies) bourgeois publication like the New Yorker. Their editorial line allows them to raise controversy in one dimension (in this case, the particulars of Sam Altman’s character) but not multiple dimensions simultaneously (hey, this guy sucks AND his tech sucks AND you’re gonna lose money). And there’s a lag-time factor, too; it seems like Farrow and Marantz were working on this story for at least the latter half of last year. By the time some of the dubious economics, such as the bad data-center deals and rampant circular financing, were clear, this piece was probably deep into fact-checking and unlikely to change much in substance.

We here are on the leading edge of this stuff, not that that’s any great advantage! I wouldn’t expect an outlet like the New Yorker to publish anything like “the dashed expectations of AI” until maybe this time next year. And even then, it might still have a personalist bent.