This "careful" "AI Safety" company that just accidentally leaked its entire source code to the world is the same one African governments are signing agreements with to embed in infrastructure from health care to god knows what.

These are the products people have to use to make sure that they don't get dinged in their performance reviews for "not using AI."

These are the products teachers have to use in schools so that "students aren't left behind."

https://www.theguardian.com/technology/2026/apr/01/anthropic-claudes-code-leaks-ai

Claude’s code: Anthropic leaks source code for AI software engineering tool

Nearly 2,000 internal files were briefly leaked after ‘human error’, raising fresh security questions at the AI company

The Guardian

I appreciated this article by @mttaggart (infosec.exchange).

I get the temptation, especially in this world we're all living in where you have to produce something super fast all the time.

But my question is, what are people's arguments for how functioning software can be created with these tools?

What about new architectures, new ways of thinking, new programming languages, etc? Who will create those?

https://taggart-tech.com/reckoning/

I used AI. It worked. I hated it.

I used Claude Code to build a tool I needed. It worked great, but I was miserable. I need to reckon with what it means.

I'm not even talking about the data stealing, exploitation, environmental pillaging, pollution, environmental racism, etc.

I'm talking about the way people use the tools. Like what do advocates of using these tools say will happen to software engineering in the future? That it just won't need to exist because everyone will be able to create software using these tools?

@timnitGebru EMC++S: Embracing Modern C++ Safely. My appetite for actually using GenAI is wearing thin after the severe information security risk Claude Code and other frontends are known to pose, after the leak <48 hours ago. LLMs have suggested regular expressions to me, but their role has been pretty limited to that of an error-prone natural-language search processor for me. This suggests a far lower economic point of inflexion for GenAI-driven advantage than that promoted for it.
@timnitGebru Also, a lot of the FreeBSD related work I've been doing lately hasn't been writing software itself in anger, but hardware qualification: physically plugging hardware together, usually network adapters, switches, and routers, and evaluating compatibility. Using agents for any of this, whilst possible, would be like putting a hat on a hat, to borrow an expression from Seth MacFarlane in Family Guy. The human factor reigns supreme because of ISO OSI Layer 1.