This is an example of someone in deep denial, and my New Year's resolution is not to debate productivity gains from using AI.

What's interesting is that the comments are almost uniformly from senior engineers with 10+ years of experience explaining how Claude Code debunks this thinking.

The hype is now reality when it comes to software engineering. The interesting discussion now is the ramifications.

@carnage4life Both are wrong though? It's not AGI, but it's also not useless; it can code, yet it also kinda can't, and it needs someone who understands the code to oversee it. It's… kind of just the next abstraction layer up from a good IDE. But it does work; it just isn't AGI, nor does it have any possibility of becoming AGI (high confidence). Nuance is hard?
@trisweb voilà. I think it was Anil Dash who put it best: the problem is the incredible hype machine around it. Yes, it's useful, but do you have to burn down your crown jewels (e.g. M365) for it? @carnage4life

@flq @carnage4life yep. I’m not even contending that it’s useful, or even net positive; just that it’s not useless.

That actually helps an argument against it being, e.g., good for society, or worth investing huge sums of our GDP into. "It doesn't work and it's useless!" is an easily shot-down lie; "it does work, it's just not worth the squeeze" is a rational argument people might actually go "hmmm" about.

@carnage4life What I think really works is that Claude knows "of" a lot of things. It can point you in the right direction when you need to find your way in new terrain: using a new API, upgrading to a new version. And it can write code a lot faster, so if you have a relatively standard problem it'll type like mad and create a standard solution for you. So: a senior sparring partner who knows every framework, plus a mid-level coder who can churn out lines per second.
@carnage4life It's hard to deny the productivity gains in my day-to-day, but it also makes stuff up, in a way that makes me feel my ability to engineer and see the big picture is still important. Or maybe that's cope and I'll soon be out of a job. Right now, however, it has no hope of understanding the addled brains of my business analysts, and I have a feeling my job will just be 100% interpreting them, which in a way it kind of already is. Oh, and I guess compliance paperwork for my apps.
@carnage4life wishful thinking is obvious to see in others, but it is a cautionary tale for ourselves as well. To quote Feynman, “The first principle is that you must not fool yourself and you are the easiest person to fool.”
@carnage4life Prior to this past fall, I saw a number of studies, e.g. https://metr.org/blog/2025-07-10-early-2025-ai-experienced-os-dev-study/, that convincingly supported Verona's assertion. Since then I have seen growing numbers of coders say anecdotally that Claude Code refutes it, but I haven't seen any actual studies that speak to it; does anyone know of any?