but @zzt you don’t understand. the system whose purpose is to destroy open source is itself open source, if by “open source” you mean it’s a binary blob that can only be produced or modified in any significant way by a multi-billion-dollar corporation or a state, via utterly unethical means, almost always derived from a previous binary blob created via utterly unethical means
get the fuck out of my mentions, me
ethical AI? of course! I deleted my OpenAI account because they’re a multi-billion dollar fashtech corporation run by cynical capitalists willing to use LLMs for war and replaced it with Anthropic, a multi-billion dollar fashtech corporation run by fucking full-fat TESCREAL cultists willing to use LLMs for war as long as they retain sufficient control over what they see as an incipient machine god
why aren’t you clapping
ethical AI? of course! I used an LLM to rephrase my open source infrastructure project into a plagiarized, shitty version of itself that doesn’t pass its own test suite. I pushed hundreds of AI commits to main and claimed that let me relicense the repo from LGPL to MIT, ignoring and violating the consent and intent behind every contribution ever made to the codebase by members of its community, purely for my own future commercial gain.
this is very ethical because nobody’s been sued for it yet
ethical AI? of course! it’s always ethical to bamboozle an elderly CS legend into releasing a paper about LLM coding that doesn’t stand up to even basic scrutiny, instead of spending his limited remaining time on this planet finally finishing his book series on algorithms.
it’s ethical because in the future CS will be fucking dead and nobody will be able to afford a computer that isn’t a rented thin client barely capable of accessing cloud resources you pay for by the minute
ethical AI? of course! my famous wife says it is while I rock our newborn baby to sleep and if you disagree with me that’s sexism and also you’re harassing my child.
go the fuck home phil
ethical AI? of course! there’s nothing more ethical than forcing slop code and horrific development practices on the user and contributor community of vim, an editor valued by people who like using tools that are lightweight and straightforward in exactly the ways that LLMs aren’t
https://hachyderm.io/@AndrewRadev/116176001750596207
I’m so sorry this keeps happening
This is in a PR where Shougo, another long-time contributor, communicates entirely in walls of unparseable AI slop text: https://github.com/vim/vim/pull/19413

What a pathetic state after decades of active, thoughtful work. "I asked the chatbot how to write this code", "Well, I asked my chatbot, and 'he' doesn't like it". What a fucking embarrassment.
fuck. harfbuzz has fallen. https://typo.social/@behdad/116172838540880597
Want to wade into the snowy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid.

Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret. Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

> The post Xitter web has spawned so many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be)
>
> Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)
@zzt
Also posted to Adafruit, which got whiplash so hard from the critique of their use of slop machines that they're faulting right.
https://blog.adafruit.com/2026/02/16/no-tools-for-you-a-century-of-men-policing-womens-tools/
@zzt
Yeah, I've learned about the “promo” from checking out that harassment.
@zzt Well, Anthropic must be ethical since it was rejected by the US government as 'hard-left lunatics'.
Anyone have a working sarcasm meter?
this was supposed to be a shitpost, what the fuck: https://social.coop/@cstanhope/116177449448368652 the chardet guy actually put “do not plagiarize from LGPL/GPL code” into the fucking prompt
how dare I assert that slopfans are all cookie cutter grifters whose brains got broken by a basic psychological trick
@cwebber I'm not sure that's slop, but I won't discount the possibility... 🤔 But this part is funny in the dark humor sort of way: "...explicitly instructed Claude not to base anything on LGPL/GPL-licensed code." So, you see, no problem... 🙄
@cap_ybarra @zzt I mean, I think you’re right on some level
But don’t underestimate the degree to which people get taken in by how language models are trained to produce anthropomorphized responses. That’s more than enough to hack the brains of a lot of people, especially if they self-identify as “smart”.
@cap_ybarra @zzt Incidentally I’ll always recommend Cialdini’s “Influence” to anyone who thinks humans are (except in some very specific cases) rational.
I’m starting to work through “Thinking Fast And Slow” too, which is looking to be another key work in the area of system 1 vs system 2 thinking.
LLMs are designed to hijack system 1 thinking. It’s freaking horrifying.
Decades of grimy-future, post-apocalyptic-wreckage, forgotten-ancient-pre-event-technology movies, and this is how they all come true.
Ethical "AI" is an oxymoron.