An excellent resource of "Free/Open Source Software tainted by #LLM developers / developed by #genAI boosters, along with alternatives." I'll be returning to this often to weed out some of the #slop / #slopware that has been gradually creeping onto my computer. https://codeberg.org/small-hack/open-slopware
open-slopware

Free/Open Source Software tainted by LLM developers/developed by genAI boosters, along with alternatives. Fork of the repo by @gen-ai-transparency after its deletion.

Codeberg.org

There's a YT channel I love that makes bags and tote bags out of old clothes. Shows all the techniques, how to pattern each kind, etc. Relaxing, instructive.

It's now using GenAI for video of "AI models" showing off the bags. I've now unsubscribed from the channel.

FUCK GenAI AND ALL ITS PEDDLERS.

#GenAI #SamAltman #OpenAI

Shared: AI as a Fascist Artifact https://tante.cc/2026/04/21/ai-as-a-fascist-artifact/.

#Genai #Fascism

Look, I'm tired of talking about "AI" too. Yet this essay was very good, framing the use of this technology (and others) through the lens of fascism. Very long though.

AI as a Fascist Artifact

(This is a bit of a merger of two talks I recently gave about fascism and AI. One was in German at the Cables Of Resistance conference, one in English at the Milton Wolf Seminar on Media and Diplomacy. I added some shots of the slides I used as a structure for the text which […]

Smashing Frames
Tesla has raised its planned capital expenditure to $25 billion in 2026, triple its previous annual spend, as it races to transition into an AI and robotics company. CEO Elon Musk said the investment reflects substantial growth in compute infrastructure and data centres. https://techcrunch.com/2026/04/22/tesla-just-increased-its-capex-to-25b-heres-where-the-money-is-going/ #AIInfrastructure #AI #GenAI
Tesla just increased its capex to $25B. Here's where the money is going. | TechCrunch

Tesla's planned capex for 2026 is three times higher than what the company has historically spent. As a result, its CFO said, Tesla will have negative free cash flow for the rest of the year.

TechCrunch
MIT researchers have developed RLCR, a training method that teaches AI models to estimate their own confidence. The technique reduced calibration error by up to 90 percent while maintaining accuracy. This addresses a key cause of hallucinations in reasoning models: their tendency to answer every question with equal certainty, regardless of whether they actually know the answer. https://news.mit.edu/2026/teaching-ai-models-to-say-im-not-sure-0422 #AIagent #AI #GenAI #AIResearch
Teaching AI models to say "I'm not sure"

MIT CSAIL's "Reinforcement Learning with Calibration Rewards" technique improves AI confidence estimates without sacrificing performance, addressing a root cause of hallucination in reasoning models.

MIT News | Massachusetts Institute of Technology
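To make the idea concrete: the core of a calibration-rewarding scheme like the one described can be sketched as a reward that combines correctness with a Brier-score penalty on the model's stated confidence. This is a minimal illustrative sketch, not the actual RLCR implementation; the function name and exact formula here are my own assumptions.

```python
# Hypothetical sketch of a calibration-aware reward in the spirit of
# "Reinforcement Learning with Calibration Rewards": the model is
# rewarded for answering correctly AND for reporting a confidence
# that matches reality (penalised via a Brier-score term).

def calibrated_reward(is_correct: bool, stated_confidence: float) -> float:
    """Correctness reward minus a Brier-score calibration penalty."""
    correctness = 1.0 if is_correct else 0.0
    brier_penalty = (stated_confidence - correctness) ** 2
    return correctness - brier_penalty

# A correct, fairly confident answer scores high:
print(calibrated_reward(True, 0.75))   # 0.9375
# A confidently wrong answer is punished harder than a hedged one:
print(calibrated_reward(False, 0.75))  # -0.5625
print(calibrated_reward(False, 0.25))  # -0.0625
```

The key property is visible in the last two lines: a model that admits low confidence when it is wrong loses far less reward than one that bluffs, which is exactly the incentive against answering every question with equal certainty.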

Howdy Fedizens: if your #union or #communityorganising entity has written an #AI #GenAI #LLM policy with a generally negative tone (particularly if it is basically "this is bad, don't do it, we won't do work with it, and we won't accept work produced by it"), I'd love to see it, if it is publicly available or shareable.

As IT Director of my union, I have been asked to draft such a policy. LLM "advice" is already causing significant issues both internally and externally.