ALWAYS A SCAM. NEVER ANYTHING GOOD.

LLMs are a plague, an infection that does nothing but erode democracy and spread disinfo.

https://www.nature.com/articles/d41586-026-01100-y

#LLMs #ai #techbros

Scientists invented a fake disease. AI told people it was real

Bixonimania doesn’t exist except in a clutch of obviously bogus academic papers. So why did AI chatbots warn people about this fictional illness?

Two AI lab moves worth tracking today.

🧠 Meta launches Muse Spark, its first MSL model, bringing multimodal reasoning and parallel subagents into Meta AI.

🧠 Z.AI ships GLM-5.1 for long-horizon agentic engineering, with direct relevance for coding and agent stacks.

#AI #LLMs #AgenticAI #MachineLearning
solomonneas.dev/intel

More and more people are opting to replace the discomfort of real human interaction with frictionless conversations with AI.

That bodes ill for the prospect of progressive social change

https://jacobin.com/2026/04/ai-critical-thinking-chatbots-subjectivity

#AI #SocialMovements #SocialChange #CriticalThinking #GenAI #ChatGPT #ChatBot #LLMs #TheAIMirror #FuckAI

Movements Need the Critical Thinking That AI Destroys

Struggles against oppression start with people critically reflecting on their experiences. What happens to such struggles when we outsource our thinking to AI and replace human interlocutors with sycophantic chatbots?

I have never worked in big tech, I don’t own capital, I don’t drive a car, I don’t eat red meat, I am heavily invested in local organizing, I support artists every way I can, I own thousands of books, I am European but emigrated to the US, I love nature, I’ve learned to repair most devices I own, and I left my job over ai slop.

Also: I work in tech, I own many computers, I am a heavy proponent of llms for coding in order to improve quality of and access to computation.

All of these are decisions I make after careful research and deep, ongoing reflection on their ethical implications.

#llms #llm #vibecoding #genai #ai

One reason it’s hard to take anti-ai discourse seriously here is the underlying assumption that everybody using ai is all-in and uncritical about the technology. Yet the llm-for-coding discourse on X looks like this… everybody was dunking on Claude Code too, for example, except the takes were usually informed and funny.

I can actually have productive discussions there: about labor, about ethics, about technical quality, about novel ideas, about concrete uses. I can even have fun and good laughs (do I like that the main timeline is a hellhole? Of course not). There’s a reason most people don’t switch over to Bluesky or Mastodon, when even mentioning that you use llms gets you abuse in the comments, or the assumption that you aren’t thinking about the consequences of your actions.

#llms #genai #llm #vibecoding

MegaTrain: Full Precision Training of 100B+ Parameter LLMs on a Single GPU

https://arxiv.org/abs/2604.05091

#arxiv #llm #llms

MegaTrain: Full Precision Training of 100B+ Parameter Large Language Models on a Single GPU

We present MegaTrain, a memory-centric system that efficiently trains 100B+ parameter large language models at full precision on a single GPU. Unlike traditional GPU-centric systems, MegaTrain stores parameters and optimizer states in host memory (CPU memory) and treats GPUs as transient compute engines. For each layer, we stream parameters in and compute gradients out, minimizing persistent device state. To battle the CPU-GPU bandwidth bottleneck, we adopt two key optimizations. 1) We introduce a pipelined double-buffered execution engine that overlaps parameter prefetching, computation, and gradient offloading across multiple CUDA streams, enabling continuous GPU execution. 2) We replace persistent autograd graphs with stateless layer templates, binding weights dynamically as they stream in, eliminating persistent graph metadata while providing flexibility in scheduling. On a single H200 GPU with 1.5TB host memory, MegaTrain reliably trains models up to 120B parameters. It also achieves 1.84× the training throughput of DeepSpeed ZeRO-3 with CPU offloading when training 14B models. MegaTrain also enables 7B model training with 512k token context on a single GH200.
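The double-buffering idea from the abstract can be sketched in a few lines. This is a toy illustration, not MegaTrain's actual code: the real system overlaps async host-to-device copies with GPU compute across CUDA streams and also streams gradients back out, whereas this NumPy version (function name `stream_forward` is my own) only shows the ping-pong buffer structure with synchronous copies.

```python
import numpy as np

def stream_forward(layer_weights, x):
    """Toy double-buffered forward pass over host-resident layer weights.

    While one buffer's layer is being "computed", the other buffer is
    filled with the next layer's weights, so only two layers' worth of
    parameters ever live in (simulated) device memory at once.
    """
    buffers = [None, None]
    buffers[0] = layer_weights[0].copy()        # prefetch the first layer
    for i in range(len(layer_weights)):
        w = buffers[i % 2]                      # buffer holding layer i
        if i + 1 < len(layer_weights):
            # In the real system this copy would run asynchronously on a
            # separate CUDA stream, overlapped with the compute below.
            buffers[(i + 1) % 2] = layer_weights[i + 1].copy()
        x = np.maximum(w @ x, 0.0)              # compute: one ReLU MLP layer
    return x
```

The point of the structure is that peak "device" memory is bounded by two layers rather than the whole model; the result is identical to running the layers sequentially with all weights resident.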

When Claude Mythos is leaked and turns out to just be deterministic pattern matching 🙃 #AI #noAI #LLM #LLMs #vibeInfosec

RE: https://mastodon.online/@parismarx/116372697459719963

One of the worst things about this is that Big Tech corporations like Google, Meta, Anthropic and OpenAI have become so disproportionately wealthy and powerful that they can unleash this shit show upon the world without being held accountable… 😖

#tech #technology #BigTech #IT #AI #ArtificialIntelligence #LLM #LLMs #ML #MachineLearning #GenAI #generativeAI #AIAgent #AISlop #FuckAI #Fuck_AI #enshittification #google #gemini

There's this common thing where people claim that not all #AI is bad. But we keep seeing cases where non-generative AI is also bad. One reason is that #LLMs have created an environment that pushes quality control onto the victim and removes accountability from the actors involved.

Here is a Dutch article on the 10% error rate in automatic parking fines. And the victims are mostly people from a lower socioeconomic class.

- the card that allows people with a special entitlement to park closer to their destination is not detected, so holders are frequently fined
- people around the car are not detected, so the context of quickly loading or unloading is missed
- the appeals procedure takes a long time, requires real effort, and is poorly documented

https://www.nu.nl/binnenland/6391925/ruim-10-procent-van-parkeerboetes-door-scanautos-is-onterecht.html

Over 10 percent of parking fines issued by scan cars are unjustified

The use of scan cars for parking enforcement leads to roughly 500,000 unjustified fines per year, according to research by the Autoriteit Persoonsgegevens (AP), the Dutch data protection authority. Vulnerable groups such as people with disabilities receive such unjustified fines relatively often.


Between MAGA and AI: The future of everything is lies…

#LLMs (or so-called #AI systems) lie constantly. They lie about technical details, reasoning, and even basic logic, because they lack consciousness, metacognition, or grounding in reality. While powerful, they are fundamentally unreliable and should be approached with skepticism.

https://aphyr.com/posts/411-the-future-of-everything-is-lies-i-guess

The term “AI” is overly broad and carries connotations that misrepresent the nature of current systems. “AI” implies a level of intelligence or consciousness that these technologies lack, as they are fundamentally statistical models trained on data rather than systems with true understanding or intent.

#MAGA, of course, has the same problem with the concepts of “intelligence” and “facts.” Like the AI industry, they normalized wrongdoing, making it so constant and loud that pushback felt tedious. The AI industry normalized theft (models trained on pirated books, scraped web content, unlicensed creative work) and is conditioning people to accept plausibility over facts. Both correctly assumed that scale creates its own dynamic and provides immunity. The consequences of both are tragic.

The Future of Everything is Lies, I Guess