I wrote up a thing about the AI coding agents https://www.neilhenning.dev/posts/five-stages-grief/
Five Stages of Grief With the Rise of AI

I tried Copilot in VS Code sometime in 2025 and was thoroughly disappointed with its ability. It felt like a really bad auto-complete at best, making plenty of mistakes and generally being utterly irritating to actually use, with its need to constantly pop in as I was typing with long-form coding suggestions that were often wrong. I was already predisposed to being negative about AI due to the usual mix of reasons, and wrote off all the AI coding tools as useless.

Neil Henning

@neilhenning I can relate very much. Thanks for writing this! Collective therapy is the only way to make sense of this.

Also, try Codex instead of Claude some day. And try Pi with Codex. There's a lot to see out there.

@sschoener I've yet to hear a good thing about Codex - only Claude! What's good about it?!

What are you doing with Pi? That was next on my list!

@neilhenning In my experience Codex is faster than Claude and follows directions more directly. Claude is more the creative type. "Codex is Claude but for programming", as a friend used to say.

Pi -- it's nice that it is such a malleable environment. You can just ask Pi to write an extension for itself to implement whatever behavior you'd like your coding harness to have, and BAM, it exists.

@neilhenning Good read! We should try to have a chat over video at some point?!
@xoofx definitely man, it's been too long!
@neilhenning “Huh.” My words exactly
@neilhenning Good summary of the emotional experience. We’re grieving the shift in how our careers looked for decades. Fastest technology transition I’ve ever seen, by far. Since November, we’ve gone from a few Claude users to near 100% of the team. Now the question is how much AI can we all reasonably drive at the same time. And how do we keep building guardrails. And how do we run it all more efficiently.
@neilhenning Running local models (gpt-oss, qwen3) _on the CPU_ in my closet at 100 watts and getting better model performance than what flagship models provided 18 months ago is also wild.
@chadaustin @neilhenning this is the one thing that gives me a bit of hope. Being doomed forever to depend on some corporation to rent programming tools from would be a nightmare. But if the open models keep getting better we can reclaim (a bit of) that independence

@neilhenning nice article, I was just drafting a blog post of my own along similar lines :)

It can one-shot some stuff impressively, but recently, working on high-performance and multi-threaded code, it needs a lot of guidance.

I find a lot of unexpected things I need to guide it to do that I would never think to spell out when working with a colleague. Such as: use the math library, and don't just write dot and mag verbatim every time. Implicit things now need to be explicit, and I don't know which ones until I try.
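To illustrate the "dot and mag verbatim" point: a minimal sketch of the two styles, with NumPy standing in for whatever math library the codebase actually uses (the function names here are hypothetical, just for contrast):

```python
import math

import numpy as np


def hand_rolled(ax, ay, az, bx, by, bz):
    # The kind of code an agent tends to emit unprompted: the dot product
    # and magnitude written out component-by-component at every call site.
    dot = ax * bx + ay * by + az * bz
    mag = math.sqrt(ax * ax + ay * ay + az * az)
    return dot, mag


def with_library(a, b):
    # What explicit guidance steers it toward: the project's math library
    # (NumPy here), so every call site shares one tested implementation.
    return float(np.dot(a, b)), float(np.linalg.norm(a))


a = np.array([1.0, 2.0, 2.0])
b = np.array([2.0, 0.0, 1.0])
assert with_library(a, b) == hand_rolled(*a, *b)  # same result, either way
```

Both versions compute the same numbers; the guidance is purely about where the implementation lives, which is exactly the sort of implicit team convention that now has to be stated out loud.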