This "careful" "AI Safety" company that just accidentally leaked its entire source code to the world is the same one African governments are entering into agreements with, to embed in infrastructure from health care to god knows what.

These are the products people have to use to make sure that they don't get dinged in their performance reviews for "not using AI."

These are the products teachers have to use in schools so that "students aren't left behind."

https://www.theguardian.com/technology/2026/apr/01/anthropic-claudes-code-leaks-ai

Claude’s code: Anthropic leaks source code for AI software engineering tool

Nearly 2,000 internal files were briefly leaked after ‘human error’, raising fresh security questions at the AI company

The Guardian

I appreciated this article by @mttaggart@infosec.exchange.

I get the temptation, especially in this world we're all living in where you have to produce something super fast all the time.

But my question is, what are people's arguments for how functioning software can be created with these tools?

What about new architectures, new ways of thinking, new programming languages, etc? Who will create those?

https://taggart-tech.com/reckoning/

I used AI. It worked. I hated it.

I used Claude Code to build a tool I needed. It worked great, but I was miserable. I need to reckon with what it means.

@timnitGebru I am troubled by the conclusion:

> I don't particularly like this tool, and I truly believe in its societal danger. I still think it's addictive; I still think the ways it goes wrong are far more likely than the ways it goes right.

but later

> I don't think condemning people for using them is that helpful to anyone, or to whatever cause you're fighting for so fervently that condemning someone seems worthwhile.

So the tech is bad and hurts us, but this should not impact my judgment..

@timnitGebru ..of a person using it? I understand why the author was tempted to use them. Lots of people are under pressure. I hate that. But why should we not point out the bad stuff? Using bad tech is normalising it and signalling that we are ok with the externalities.

The pro-LLM side is very vocal. I am reading articles claiming that not using LLMs makes me a hobbyist who will be out of a job in a few years.

Also, not using a computer built from rare earths from problematic sources is still quite a

@timnitGebru different problem than not using LLMs for programming (Yes, some of us are already forced to use them)