Joe Gaffey

@joegaffey
162 Followers
453 Following
2.1K Posts

1. YES THEY ARE.

They are vibe-coding mission-critical AWS modules. They are generating tech debt at scale. They just don't THINK that's what they're doing. Do you think most programmers conceive of their daily (non-LLM) activities as "putting in lots of bugs"? No, that is never what we say we're doing. Yet we turn around, and there all the bugs are.

With LLMs, we can look at the mission-critical AWS modules and ask, after the fact, whether they were vibe-coded. AWS says yes: https://arstechnica.com/civis/threads/after-outages-amazon-to-make-senior-engineers-sign-off-on-ai-assisted-changes.1511983/

After outages, Amazon to make senior engineers sign off on AI-assisted changes

AWS has suffered at least two incidents linked to the use of AI coding assistants. See full article...

Ars OpenForum

"We see a future where intelligence is a utility, like electricity or water, and people buy it from us on a meter..." -- Sam Altman

https://x.com/TheChiefNerd/status/2032012809433723158

There you go, there it is. Yup.

Chief Nerd (@TheChiefNerd) on X


X (formerly Twitter)

“You can’t write a compelling narrative about the thing you didn’t build. Nobody gets promoted for the complexity they avoided.

Complexity looks smart. Not because it is, but because our systems are set up to reward it.

Anyone can add complexity. It takes experience and confidence to leave it out.”

‘Nobody Gets Promoted for Simplicity’ from Terrible Software/Matheus Lima

https://terriblesoftware.org/2026/03/03/nobody-gets-promoted-for-simplicity/

via @adactio

Nobody Gets Promoted for Simplicity

We reward complexity and ignore simplicity. In interviews, design reviews, and promotions. Here’s how to fix it.

Terrible Software
(1/2) 37 years ago today I submitted my proposal for the World Wide Web 🎂. Today, Rosemary & I spoke with students in New Orleans at Walter Isaacson's Digital History Class at Tulane University. I was asked, as I often am, if I ever could have foreseen where we’d be today. I could not.

(2/2) What I did know was that it was to be guided by the overarching values of fuelling creativity, driving collaboration and igniting compassion. These values are even more important in the age of AI, a technology that has the same potential to liberate and cause harm #Web37
Anyone aware of something more official / standardized for adding details about individual field errors to a Problem Details HTTP response, beyond what's described as an extension example in the RFC itself? https://datatracker.ietf.org/doc/html/rfc9457#section-3-8
RFC 9457: Problem Details for HTTP APIs

This document defines a "problem detail" to carry machine-readable details of errors in HTTP response content to avoid the need to define new error response formats for HTTP APIs. This document obsoletes RFC 7807.

IETF Datatracker
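For context, the extension-member shape the RFC itself sketches looks like the following. This is only a sketch of that non-normative example (the `type` URI and field names are illustrative, not standardized), built as a plain object in JavaScript:

```javascript
// Sketch of RFC 9457's non-normative "errors" extension member:
// one entry per invalid field, with a JSON Pointer (RFC 6901)
// locating the offending field inside the request body.
const problem = {
  type: "https://example.net/validation-error", // illustrative URI
  title: "Your request is not valid.",
  status: 422,
  errors: [
    { detail: "must be a positive integer", pointer: "#/age" },
    { detail: "must be 'green', 'red' or 'blue'", pointer: "#/profile/color" },
  ],
};

// Served with Content-Type: application/problem+json
console.log(JSON.stringify(problem, null, 2));
```

The `pointer` members use JSON Pointer syntax, so clients can map each error back to the field that caused it.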
i have a feeling that everyone who is arguing about LLMs in the sense of stuff like productivity, quality of the output, performance or functionality of generated code, largely even licensing or whatnot is kinda missing the point

even if LLMs generated the best code in the world, i would not be using them

even if LLMs gave me the biggest ever productivity boost i would not be using them

even if the output was clean copyright-wise and fully original, i would still not be using them

i can't in good conscience support a worldwide slop machine that helps and finances the rise of fascism, that helps further oppression, that helps a bunch of billionaires control public opinion, that drives society-wide psychosis with far-reaching consequences, that attempts to strip all joy from activities people like while using that to make the rich even richer, and that's not even getting to environmental stuff or whatever

i'm getting left behind you say

well fuck your industry and fuck you
JavaScript Iterator.zip landed in Firefox 148, making it simple to loop over multiple things at the same time. Here's how it works:
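The native call is Iterator.zip([a, b]) (it takes an iterable of iterables, per the TC39 joint-iteration proposal). In runtimes that haven't shipped it yet, the core behavior can be sketched with a small generator; this is a simplified stand-in, not the real API:

```javascript
// Simplified stand-in for Iterator.zip: pair up values from several
// iterables, stopping as soon as the shortest one is exhausted
// (the proposal's default "shortest" mode).
function* zip(...iterables) {
  const iters = iterables.map((it) => it[Symbol.iterator]());
  while (true) {
    const results = iters.map((it) => it.next());
    if (results.some((r) => r.done)) return; // shortest input ends the zip
    yield results.map((r) => r.value);
  }
}

const names = ["ant", "bee", "cat"];
const legs = [6, 6, 4];
console.log([...zip(names, legs)]); // [["ant", 6], ["bee", 6], ["cat", 4]]
```

Note the stand-in takes rest arguments for readability, whereas the native method takes a single array of iterables.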

New, by me: How AI Assistants are Moving the Security Goalposts

AI-based assistants or “agents” — autonomous programs that have access to the user’s computer, files, and online services, and can automate virtually any task — are growing in popularity with developers and IT workers. But as so many eyebrow-raising headlines over the past few weeks have shown, these powerful and assertive new tools are rapidly shifting the security priorities for organizations, while blurring the lines between data and code, trusted co-worker and insider threat, ninja hacker and novice code jockey.

Read more (and boost please!):

https://krebsonsecurity.com/2026/03/how-ai-assistants-are-moving-the-security-goalposts/

#openclaw #AI #agentic #aiagents #lethaltrifecta

Seems painfully obvious that, whatever you think about #genai code, anyone using it is heading for a code-review logjam. Assuming the org requires code review; if yours doesn’t, nothing I can say will help you. Anyhow, Rishi Baldawa writes smart stuff about the problem and possible ways forward in “The Reviewer Isn't the Bottleneck”: https://rishi.baldawa.com/posts/review-isnt-the-bottleneck/

[My prediction: A lot of orgs will *not* do smart things about this and will suffer disastrous consequences in the near future.]

The Reviewer Isn't the Bottleneck

AI tools are flooding PR queues and the instinct everywhere is to call review the bottleneck. I think that’s the wrong question. The reviewer is the last sync point before production changes. The goal shouldn’t be how to remove the gate, but how to make it cheaper to operate.

Rishi Baldawa