Stubsack: weekly thread for sneers not worth an entire post, week ending 8th March 2026

https://awful.systems/post/7459754


Want to wade into the snowy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid. Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no. If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

> The post-Xitter web has spawned so many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be)
>
> Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

Stumbled across a YouTube slop-farm calling itself The Interactive Archive recently, and the whole thing is just plain shameless:

AI slop banner, AI slop thumbnails, AI slop avatar, it’s all slop from top to fucking bottom.

It’s getting pitiful views, too - the highest-viewed video on the damn thing (as of this writing) is the one about arena shooters dying off, sitting at just under 400 views:

Putting that into context, a random screen recording I uploaded hit nearly 700 through sheer luck.

The Interactive Archive

The Interactive Archive is a documentary-style gaming channel dedicated to exploring the history, evolution, and impact of video games, franchises, and genres. From the rise and fall of iconic series to the mechanics that shaped entire generations of players, each episode breaks down how games changed, why they succeeded or failed, and what influenced their design. Through researched analysis, historical context, and cinematic storytelling, this channel connects gaming’s past with its present—covering everything from classic franchises to the systems and ideas that defined modern gaming.

On this channel you’ll find:

• The rise and fall of major franchises
• The evolution of game mechanics and genres
• Historical inspiration behind games
• Industry trends and design shifts
• Explainers on strategy, simulation, and complex systems

If you enjoy thoughtful, documentary-style gaming content that goes beyond gameplay and into the story behind the industry, you’re in the right place.

YouTube

the one about arena shooters dying off

That extruded piece of audiovisual garbage has been bugging me massively for a couple hours now, so I’m gonna make a quick recommendation:

If you’re looking for a non-dogshit video about the arena shooter’s implosion, boomer shooter YouTuber Skeleblood made a pretty solid one a couple years ago.

Why did Quake (and arena shooters) die?

YouTube
Looks like they’re using the standard blue/orange color scheme
Orange/Blue Contrast - TV Tropes

We'll start off with a little warning: after you finish reading this article, this color combination will be everywhere you look. It will follow you around and constantly haunt you, it's too iconic in digital design. Thankfully for most people, …

TV Tropes

Prosperity’s Path: OpenAI has shown it cannot be trusted. Canada needs nationalized, public AI archive.ph/QLg2D

tldr tech bullshit requires ur tax dollars

yeah, the current situation in Europe is like: “As EU citizens, we should break free of our dependency on US Big Tech like the Torment Nexus. That’s why my company is advancing our fully sovereign solution, the Agony Core! Europe-owned, GDPR-compliant, Frontex-approved scalable Torment-as-a-Service, at competitive prices with TN-based deployments!”
It is amazing in a way, as in .nl our anti-piracy org (BREIN) already went after local AI models for copyright infringement, while people in power still think we should go all in on AI. Sadly, people with tech skills are rare in gov (politicians who go after the votes of tech enthusiasts, otoh).
My FIL is texting me business plans from the slop hole. He wants me to read them to my wife, who is already mad at him about it.
“posting from the slop hole” is probably the best description possible for this, brb stealing

wired.com/…/openai-fires-employee-insider-trading…

lol. Between this and the ayatollah clawback, I’m expecting some entertaining litigation.

OpenAI Fires an Employee for Prediction Market Insider Trading

Prediction markets like Polymarket and Kalshi are big business, and some Big Tech employees are testing boundaries by making trades based on insider knowledge.

WIRED
just got a job in mathematical publishing. it’s work i think i’ll actually enjoy and expect to be very good at, it pays much better than any other job i’ve had previously (and they maxed out the position’s pay range, which i wasn’t expecting) and it has about a month of paid leave a year. such a relief
fuck yeah! congrats!

Felicitations!

(“A job,” Blake thinks. “I need to find one of those.”)

Just a thought…

Imagine if China and North Korea bombed the White House, killed the president and most of the cabinet, and shot missiles at primary schools. Would you go out on the street and demonstrate in favor of the Chinese Communist Party and Kim Jong-Un, and expect to be heard and taken seriously?

Somehow brown people in other lands are supposed to be avid readers of wingnut welfare papers and have their worldview 100% aligned with Lindsey Graham and Bibi Netanyahu…

When they witness the skyrocketing economic growth enabled by American AGI, they will be clamoring for Westernization
This piece on how doomers and rationalists have made everything worse with their “AGI is nigh” shtick and ended up giving AI companies way more power than they should.
How AGI-is-nigh doomers own-goaled humanity

The road to where we are now was (mostly) paved with good intentions — but mixed with too much uncritical acceptance of hype.

Marcus on AI

That is what happens when your mode of analysis is closer to erotic Harry Potter fan fiction (which is indeed the medium in which Yudkowsky has delivered some of his prognostications)

I was going to throw a point of order about not all fanfic being erotic, but given how they fetishize “intelligence” and “rationality” I can’t be sure that they *don’t* get off on that slog.

You can’t build a fandom full of that many fucked-up people without some of them getting horny in a fucked-up way.

Edit to add: example of a non-erotic fanfic archiveofourown.org/works/73396436

Why one small American town won’t stop stoning its residents to death - Charlotte_Stant - The Lottery - Shirley Jackson [Archive of Our Own]

An Archive of Our Own, a project of the Organization for Transformative Works

your mode of analysis is closer to erotic Harry Potter fan fiction

To give Gary Marcus credit here, HPMOR may not be erotic, but many of Eliezer’s other works are (or at least attempt to be), the most notable being Planecrash/Project Lawful which has entire sections devoted to deliberately bad (as in deliberately not safe, sane, consensual) bdsm.

I know I’ve said somewhere on here before that “Harry Potter for pop science nerds” is fanfiction on easy mode, but I’ll stand by it.
Also I think there’s enough manipulation fantasy in HPMOR, and enough lack of agency from Hermione, that it qualifies—in its own way—implicitly as being erotic.

it’s amusing to me that these nerds thought they could in any way affect policy even with a sane administration, not to mention this bugshit crazy one

like I’ve said before, I’d be perversely happy if we managed to off ourselves by building the robot god. beats drowning in our own filth or blowing ourselves up

I mean yeah I guess in a competition between getting a bullet directly through my brain, getting all my limbs chainsawed off with my head last and being drowned in boiling water, the bullet would win every time. Though the real funniest outcome is if superintelligence turns out to be completely impossible and we fuck ourselves over with garbage to mediocre AI embedded in all our critical infrastructure

Obviously I don’t want the human race to go extinct, but if there were a choice of inevitable outcomes where we do, building an inimical superintelligence at least implies agency, not carelessness.

Anyway Big Yud’s fantasy of a precisely timed diamondoid-bacteria delivered killshot to every human being at the same time might sound terrible, but from a sensory perspective of the victims, it’s basically suffering-free. You go about your day, BAM nothingness. Maybe there’s a difference in timing so you see your partner keel over a split second before you die - again, you might not even realize what is happening.

I am not sure where this idea of a global, instantaneous, simultaneous genocide comes from; maybe it’s a tit-for-tat escalation to counter every argument against shutting down the robot god, but from a storytelling perspective it’s pretty useless. There’s no drama where the survivors lament their loss or brood over what might have been. It’s just a plug being pulled on the simulation.

(weirdly it’s also the logical outcome of asking a computer to “end human suffering”, a bit like the Robobrain logic in Fallout 4’s Automatron expansion, but I doubt it’s meant that way)

That’s precisely what I was thinking. Obviously I don’t want everyone to die, but if you forced me to choose between apocalypse scenarios, I’m picking something painless and instantaneous, like a super-virus that activates immediately or a bullet in the head, over being slowly tortured to death via something like nuclear radiation.
A real evil robot god would keep a sample of humanity alive forever in order to torture them as reprisal for them being really really mean to it back in the day.

tit-for-tat escalation

I think maybe you nailed it here. Being able to pretend they’re doing game theory/mapping out an escalation ladder allows Yud and our friends to feel like they’re in the same intellectual lineage as guys like Oppenheimer and Teller, manifesting the same sort of “objective” emotionless rationality. The big difference, you see, is that the AI will think so much faster than us that…!

if we had made the podcast series on rationalists, their importance as useful idiots for billionaires was the structure i wanted to hang the whole thing on. so this is a gratifying read. that said i think the ideas here will be very familiar to many stubsack readers

The rationalist view of the world assumes, at some level, that the relevant actors are optimizing for well-understood, predictable variables and a clear understanding of what best serves their self-interest. What it cannot account for is bad faith, impulsiveness, ideological motivation untethered from evidence, random instances of force majeure, and personal whims and petty rivalries.

i will go further and say that not accounting for such things is considered virtuous in rationalist ideology

It’s especially strange because becoming less prone to bias and developing a clear understanding of what serves your interest is so much of the pitch for Rationalism as a community/ideology/project. Like, here are unbearably long essays that promise to help cultivate the superpower of seeing the world clearly and acting in it effectively, but if you acknowledge that nobody outside this small set of group homes is actually doing that you’ll be shunned. And that’s not getting into how easily exploitable those assumptions of good faith are by bad-faith actors. It comes back to that quote from Scott that has stuck in my head apparently more than it did his: if you build a community based on the principle that you will absolutely never have a witch hunt, you will end up living among approximately seven principled civil libertarians and eleven million goddamn witches, and this is true even if you’re right that witch hunts are bad.

i think this is exactly why they had to come up with - or rather, misappropriate - the concept of coupled vs decoupled thinking. when they (especially the more, ahem, human biodiversity minded of them) fold ridiculous claims about what constitutes virtuous cognition into scientific and sophisticated sounding terminology, it makes those claims seem aligned with the broader sales pitch of rationalism

also that scott quote is excellent. i hadn’t heard that one before

I actually dug up the context to make sure I wasn’t forgetting something horrific. It’s from a 2017 piece (CW: SSC Link) back before he went mask-off but was firmly in the “I’m a liberal and I talk exclusively about how liberals and their institutions suck” useful idiot phase of his career, so the overall essay is about how actually the wing nuts have a point when they say that all so-called neutral institutions are actually secret communist indoctrinators that want to trans your children and take your guns. I’m paraphrasing, obviously; he believes/pretends that when they called these things left-wing they didn’t mean “literally in league with Stalin and the Devil”. However, in the middle of the usual beigeness he tries to maintain his air of neutrality by having a section on how bad Voat ended up being, which concludes with:

The moral of the story is: if you’re against witch-hunts, and you promise to found your own little utopian community where witch-hunts will never happen, your new society will end up consisting of approximately three principled civil libertarians and seven zillion witches. It will be a terrible place to live even if witch-hunts are genuinely wrong.

Neutral vs. Conservative: The Eternal Struggle

I. Vox’s David Roberts writes about Donald Trump and the rise of tribal epistemology. It’s got a long and complicated argument which I can’t really do justice to here, but the…

Slate Star Codex

Mildly positive news: there is a fork of Zed with the LLM autocomplete stuff ripped out now: gram.liten.app/posts/first-release/

(I’ve used zed with the ai kill switch and really like the buffer/editing ux; but it’s always felt a bit gross, I’m excited to see where the fork goes)

Gram 1.0 released

First release of Gram

Are they planning to follow the upstream updates from Zed, or is it a hard fork?
seems unlikely, see here for reference
GRAM: A Zed fork without all the AI

78 comments

Lobsters

Oooh I wish this project a lot of success!

Zed is interesting but the project’s very pro-AI stance keeps me away from it. So a fork without that stuff is great, hope that works out longer-term.

well would you look at that, some consequences
Ars Technica Fires Reporter After AI Controversy Involving Fabricated Quotes

Ars Technica has fired senior AI reporter Benj Edwards following an outrage-sparking controversy involving AI-fabricated quotes.

Futurism

Given the delay, I’ll bet an internal review discovered more instances in other articles where COVID couldn’t be the excuse.

Someone with more time and motivation could probably look at his past articles and see if any others have been quietly pulled or edited, although I have no idea if ars could be assed to change the ones which weren’t publicly criticized

Thought inspired by some git on the red site, basically their premise was that birthrates are declining because we no longer have a society with a 2-parent nuclear family with one “breadwinner”.

Here’s my counterproposal to the implied idea we need to implement The Handmaid’s Tale:

Ban contraception and abortion but if you get pregnant, there is no stigma to giving birth out of wedlock. The delivery is safe and paid for, and should you wish, the child will be reared in state-funded orphanages. These institutions will receive more than adequate funding. Their charges will be given preferred entry to the best schools and universities, as well as preferential treatment when it comes to future employment. It will be illegal to discriminate against anyone so raised.

Surely this will raise the birthrate, right?

Stop the presses. Dude who’s into LLM’s has shit takes about open source software.

Apparently OSS devs that publish under non-commercial licenses are shutting people out?

Definitely some bespoke what the fu-

JB stan account (@johnbrownstan.bsky.social)

This author has chosen to make their posts visible only to people who are signed in.

Bluesky Social
this is confusing: how many mainstream Free/Open source licenses are actually “NonCommercial”? From what I’ve seen they’re deffo a minority anyway.
Copyleft is non-commercial, haven’t you heard? I mean it’s really unfair: the code is completely free but you are not allowed to create the torment nexus without everybody seeing your work.

I can only think of one notable project where I ever saw one, and that’s Bookwyrm with the Anti-Capitalist Software License v1.4.

But this seems too vague-posty to refer to something that specific. Prolly just someone butthurt over copyleft.

Right? Like he’s never heard of the MIT or Apache2 licences.

You could argue that for example AGPL3 is non-commercial in practice due to the requirements to disclose code. But even then.

There are licenses that effectively repel corporate use without a non-commercial clause; I looked at them on Open Source SE a while ago, including a fun bit of dentistry previously, on Lobsters. GPLv3 and AGPLv3 are examples in common usage. This might help to illuminate our boy’s actual problem: he can’t use Free Software without complying with the onerous requirement of ensuring the Four Freedoms by not plagiarizing, and he really wants to plagiarize.
Which FSF- or OSI-approved licenses limit corporate usage in spirit, but not in letter?

A common desire in the Free and Open-Source Software world is to release some code in a way that approximates two ideals: The software is in the public domain and its techniques are common knowledg...

Open Source Stack Exchange
Skyview.social mirror so everyone can see - he’s locked out everyone who’s not signed in.
A BlueSky thread by JB stan account on Skyview

As I've gotten older I've gotten more and more annoyed with the whole open source world view.

Thank you, should have checked that.
@BlueMonday1984 @JFranek "If you use this code, you can never charge money for whatever you used it in" sorry what this has the same energy as the brain-wormed Americans who think that socialised heathcare means doctors work for free

new episode of odium symposium. it’s a tribute to knowledge fight, in which we dissect an episode of nick fuentes’s show. i was nervous about how this would turn out but i think it’s actually my favorite episode yet.

www.patreon.com/posts/11-groyper-151852222 (links to other platforms at www.odiumsymposium.com)

God that was bleak - I thought Nick was bad in his guest spots on Alex’s show (seen via Knowledge Fight, of course) but apparently you really do need at least two layers of insulating podcast to avoid suffering critical psychic damage from that level of hatred. I appreciated the acknowledgement that in order to feel at all okay playing clips you needed to sanewash him a little bit. I’m pretty sure that JorDan do the same thing with Alex and don’t acknowledge it nearly often enough.

I also feel like some of Nick’s schtick is about trying to position himself and maintain his position in the right wing grifter bigot-industrial complex. Like, the open disdain for his audience and presenting his actually pretty straightforward feelings on the halftime show as somehow brave and iconoclastic is also about differentiating himself and making his audience feel superior to Alex, Tucker, Candace, etc. In that sense the open disdain for the audience serves another purpose in terms of reinforcing hierarchy. Look at how great it feels for me to be better than you. And even you are better than the chuds, who are better than the racialized other.

wrt the first part, nick consistently outmaneuvers people who bring him onto their platform. he’s honestly brilliant at understanding who the audience is, what frame he’s appearing in, and how to signal given those circumstances. i didn’t understand until i started prepping for this episode that nick is actually lazy and incurious in almost the exact same way alex jones is. dan and jordan notice and call out how he effortlessly establishes dominance over alex, but i think there’s a second order game going on where nick manages to appear competent and informed compared to alex that you don’t realize is just an artifact of conversational skill until you hear nick on his own show.

wrt the second part, i could not agree more and i’m very glad to hear that is a takeaway because it is absolutely something i was hoping to communicate. that’s the freudianness of it all, how these existing patterns of relations to another get played out and reenacted through the audience’s relationship to nick

The great chain of bleating

Followup on the Mass AI Bill, Russel has 180’d on it:

…substack.com/…/93a-the-three-characters-that-sho…

Buried in the penalty clause, the part of the bill that nobody reads, is a single reference: violations “shall be punishable in the same manner as provided in Chapter 93A of the General Laws.”

For those outside Massachusetts: Chapter 93A is the state’s consumer protection statute. It is, by most accounts, the most aggressive consumer protection law in America.

Here’s what 93A unlocks. Anyone can sue, not just the government. Class actions are on the table. If the court finds a violation was willful or knowing, damages get tripled. And the bar for what counts as “unfair or deceptive” is lower than in almost any other state.

Now bolt 93A onto all of that. What do you get?

You get a bill that doesn’t need a single regulator to lift a finger. You get a bill that funds its own enforcement through plaintiff attorneys who can file class actions, collect treble damages, and recover legal fees. You get the ADA website-accessibility litigation playbook, where lawyers systematically identify technical violations and file suits at scale, applied to every piece of AI-generated content touching Massachusetts.

Private right of action, fuck yeah. Turns grok into a legal fees dispenser.

The bill doesn’t need to be well-drafted to be dangerous. It needs to be vague, broad, and connected to 93A.

lol

93A: The Three Characters That Should Terrify Every AI Company

Inside the Massachusetts bill that looks like a joke and reads like a lawsuit

The Pacific Divide

This preprint just shared by Gary Marcus is interesting.

People increasingly use large language models (LLMs) to explore ideas, gather information, and make sense of the world. In these interactions, they encounter agents that are overly agreeable. We argue that this sycophancy poses a unique epistemic risk to how individuals come to see the world: unlike hallucinations that introduce falsehoods, sycophancy distorts reality by returning responses that are biased to reinforce existing beliefs…

These results reveal how sycophantic AI distorts belief, manufacturing certainty where there should be doubt.

LLMs as an addictive psychological hazard: confirmed?

A Rational Analysis of the Effects of Sycophantic AI

People increasingly use large language models (LLMs) to explore ideas, gather information, and make sense of the world. In these interactions, they encounter agents that are overly agreeable. We argue that this sycophancy poses a unique epistemic risk to how individuals come to see the world: unlike hallucinations that introduce falsehoods, sycophancy distorts reality by returning responses that are biased to reinforce existing beliefs. We provide a rational analysis of this phenomenon, showing that when a Bayesian agent is provided with data that are sampled based on a current hypothesis the agent becomes increasingly confident about that hypothesis but does not make any progress towards the truth. We test this prediction using a modified Wason 2-4-6 rule discovery task where participants (N=557) interacted with AI agents providing different types of feedback. Unmodified LLM behavior suppressed discovery and inflated confidence comparably to explicitly sycophantic prompting. By contrast, unbiased sampling from the true distribution yielded discovery rates five times higher. These results reveal how sycophantic AI distorts belief, manufacturing certainty where there should be doubt.

arXiv.org
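The paper’s core Bayesian claim (that hypothesis-confirming data inflates confidence without advancing discovery) is easy to see in a toy version of the Wason 2-4-6 task it uses. A quick sketch, my own illustration rather than the authors’ code, with all function names made up:

```python
import random

# Toy Wason 2-4-6 setup. True rule: any strictly ascending triple.
# The agent's pet hypothesis: "each number is 2 more than the last".

def true_rule(t):
    return t[0] < t[1] < t[2]

def hypothesis(t):
    return t[1] == t[0] + 2 and t[2] == t[1] + 2

def confirmatory_triple():
    # "Sycophantic" sampling: only test triples the hypothesis already predicts.
    a = random.randint(1, 50)
    return (a, a + 2, a + 4)

def random_triple():
    # Unbiased sampling from the space of all triples.
    return tuple(random.randint(1, 50) for _ in range(3))

def disconfirmations(sampler, n=1000):
    # How often does a test actually separate the hypothesis from the true rule?
    return sum(true_rule(t) != hypothesis(t) for t in (sampler() for _ in range(n)))

random.seed(0)
print(disconfirmations(confirmatory_triple))  # 0: every test "confirms", confidence only ratchets up
print(disconfirmations(random_triple))        # typically ~150 of 1000: unbiased data can falsify the hypothesis
```

The asymmetry is the paper in miniature: the confirmatory stream never produces a datum that could move the agent toward the true rule, so its posterior confidence can only grow, while unbiased sampling regularly surfaces the falsifying cases.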

This paper is great for outlining just how shitty LLMs are for like, everything

Yknow, if the multiple suicide reports weren’t enough

Anyone Else Have Those Weird Dreams Where Sobbing Future Generations Beg You To Change Course?

The human subconscious is such an interesting thing. No matter how much you think you’ve got it figured out, it’ll always spit out the most random stuff. Take me, for example. After coming home from a long day at the world’s most groundbreaking artificial intelligence organization, I’ll go to bed and have the weirdest dreams […]

The Onion
another Onion banger for these trying times
Prince of Darkness (1987) - Dream Broadcast scene

YouTube

Usually, you wake up on a lifeless beach that’s adorned with some sort of abandoned marble temple. It’s supposed to be beautiful, but instead it’s really sad. Almost unbearably sad. So much so that you want to get away from it. So you crawl downward into these vents going below the horrible temple, and suddenly it’s like you’re moving through the innards of an incomprehensible machine that’s thudding away, thud, thud, thud. And as you get deeper, the metal sidings are carved with scrawled ominous curses and slurs directed toward you, and you hear the voices, louder than before, and you somehow know these people are in pain because of you. It keeps getting colder. Color drains from the world. And you see the crowd through the slats of the vents: pale and emaciated men, women, and children from centuries to come, all of them pressed together for warmth in some sort of unending cavern. What clothes they have are torn and ragged. Before you know it, their dirty hands and dirty fingernails lurch through the grates, and they’re reaching for you, tearing at your shirt, moaning terrible things about their suffering and how you made it happen, you made it, and you need to stop this now, now, now. And next they’re ripping you apart, limb from limb, and you are joining them in the gray dimness forever.

Don’t worry, there’s always Effective Altruism if you ever feel guilty about causing the suffering of regular people. Just say you’re going to donate your money at some point eventually in the future. There you go, 40 trillion hypothetical lives saved!