This "careful" "AI Safety" company that just accidentally leaked its entire source code to the world is the same one African governments are entering into agreements with, to embed in infrastructure from health care to god knows what.

These are the products people have to use to make sure that they don't get dinged in their performance reviews for "not using AI."

These are the products teachers have to use in schools so that "students aren't left behind."

https://www.theguardian.com/technology/2026/apr/01/anthropic-claudes-code-leaks-ai

Claude’s code: Anthropic leaks source code for AI software engineering tool

Nearly 2,000 internal files were briefly leaked after ‘human error’, raising fresh security questions at the AI company

The Guardian

I appreciated this article by @mttaggart (infosec.exchange).

I get the temptation, especially in this world we're all living in where you have to produce something super fast all the time.

But my question is, what are people's arguments for how functioning software can be created with these tools?

What about new architectures, new ways of thinking, new programming languages, etc? Who will create those?

https://taggart-tech.com/reckoning/

I used AI. It worked. I hated it.

I used Claude Code to build a tool I needed. It worked great, but I was miserable. I need to reckon with what it means.

I'm not even talking about the data stealing, exploitation, environmental pillaging, pollution, environmental racism, etc.

I'm talking about the way people use the tools. Like what do advocates of using these tools say will happen to software engineering in the future? That it just won't need to exist because everyone will be able to create software using these tools?

That it will just take a different form, which is fine?

@timnitGebru I think this is relevant to these questions, though it handles them on a different level:
https://freakonometrics.hypotheses.org/89367

> Someone still has to reread, compare, test, contextualize, and sometimes rewrite. And if no one seriously takes on that work, the cost does not disappear. It reappears later in the form of errors, urgent fixes, loss of trust, and eventually litigation. What is presented as a productivity gain is often just an accounting displacement.

If No One Pays for Proof, Everyone Will Pay for the Loss

This post was initially written in French: Si personne ne paie pour la preuve, tout le monde paiera pour le sinistre. Let's start with a truism. In ordinary life, just as in economic life, we have to make decisions without ever knowing everything. Every decision involves some uncertainty, and therefore some risk. Some risks are …

Freakonometrics
@rysiek Great article.

@timnitGebru it really is.

And boy does the leaked Claude Code codebase support that assessment. Have you seen @jonny 's thread on this? If not:
https://neuromatch.social/@jonny/116324676116121930

@timnitGebru the whole thing is great, but somewhere down the thread there are truly astonishing gems like:

> So the reason that Claude Code is capable of outputting valid JSON is that, if the prompt text suggests the output should be JSON, it enters a special loop in the main query engine that just validates the output against a JSON schema, then feeds the data, along with the error message, back into itself until it is valid JSON or a retry limit is reached.

Thousand monkeys, thousand typewriters…
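For what it's worth, the pattern described in that quote is a well-known (and token-hungry) workaround. A minimal sketch of it, with a hypothetical mock model standing in for the actual LLM call (none of the names here are from Anthropic's code):

```python
import json

def mock_model(prompt: str) -> str:
    # Hypothetical stand-in for an LLM call: it returns broken JSON
    # first, and only produces valid JSON once a previous parse error
    # has been fed back into the prompt.
    if "error:" in prompt:
        return '{"status": "ok"}'
    return '{"status": "ok",}'  # trailing comma: invalid JSON

def query_until_valid_json(prompt: str, max_retries: int = 3) -> dict:
    """Validate-and-retry loop: parse the model output as JSON; on
    failure, append the error message to the prompt and query again,
    until the output parses or the retry limit is reached."""
    for _ in range(max_retries):
        output = mock_model(prompt)
        try:
            return json.loads(output)
        except json.JSONDecodeError as e:
            # Feed the parse error back into the model and retry
            prompt += f"\nerror: {e}"
    raise ValueError("retry limit reached without valid JSON")

print(query_until_valid_json("Return JSON"))  # {'status': 'ok'}
```

Each failed attempt burns a full round trip of input and output tokens, which is exactly the waste being pointed at downthread.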

@timnitGebru of course it makes total sense for Claude Code to waste developer tokens like that, since Anthropic charges per token… 🙄
@rysiek Literally the question of "what if computer science was no longer about figuring out the most efficient way to do X, but the brute-force way to do X?"

@timnitGebru "efficient" in what way, measured by whom, right?

Wasting developer tokens — tokens these developers or their companies pay for — is very efficient from the perspective of extracting value…