"We see a future where intelligence is a utility, like electricity or water, and people buy it from us on a meter..." -- Sam Altman
https://x.com/TheChiefNerd/status/2032012809433723158
There you go, there it is. Yup.
| Web | https://joegaffey.com |
| Twitter | https://twitter.com/joegaffey |
| Github | https://github.com/joegaffey |
1. YES THEY ARE.
They are vibe-coding mission-critical AWS modules. They are generating tech debt at scale. They don't THINK that that's what they're doing. Do you think most programmers conceive of their daily (non-LLM) activities as "putting in lots of bugs"? No, that is never what we say we're doing. Yet, we turn around, and there all the bugs are.
With LLMs, we can look at the mission-critical AWS modules and ask, after the fact, whether they were vibe-coded. AWS says yes: https://arstechnica.com/civis/threads/after-outages-amazon-to-make-senior-engineers-sign-off-on-ai-assisted-changes.1511983/
"You can't write a compelling narrative about the thing you didn't build. Nobody gets promoted for the complexity they avoided.
Complexity looks smart. Not because it is, but because our systems are set up to reward it.
Anyone can add complexity. It takes experience and confidence to leave it out."
"Nobody Gets Promoted for Simplicity" from Terrible Software/Matheus Lima
https://terriblesoftware.org/2026/03/03/nobody-gets-promoted-for-simplicity/
via @adactio
New, by me: How AI Assistants are Moving the Security Goalposts
AI-based assistants or "agents" -- autonomous programs that have access to the user's computer, files, and online services and can automate virtually any task -- are growing in popularity with developers and IT workers. But as so many eyebrow-raising headlines over the past few weeks have shown, these powerful and assertive new tools are rapidly shifting the security priorities for organizations, while blurring the lines between data and code, trusted co-worker and insider threat, ninja hacker and novice code jockey.
Read more (and boost please!):
https://krebsonsecurity.com/2026/03/how-ai-assistants-are-moving-the-security-goalposts/
Seems painfully obvious that, whatever you think about #genai code, anyone using it is heading for a code-review logjam. Assuming that the org requires code review; if yours doesn't, nothing I can say will help you. Anyhow, Rishi Baldawa writes smart stuff about the problem and possible ways forward, in "The Reviewer Isn't the Bottleneck": https://rishi.baldawa.com/posts/review-isnt-the-bottleneck/
[My prediction: A lot of orgs will *not* do smart things about this and will suffer disastrous consequences in the near future.]

AI tools are flooding PR queues, and the instinct everywhere is to call review the bottleneck. I think that's the wrong question. The reviewer is the last sync point before production changes. The goal shouldn't be removing the gate, but making it cheaper to operate.