As a research project, I built a tool I needed with Claude Code. I thought it would be a disaster, but it wasn't. I have some complicated feelings about it.

https://taggart-tech.com/reckoning/

I used AI. It worked. I hated it.

I used Claude Code to build a tool I needed. It worked great, but I was miserable. I need to reckon with what it means.

I really appreciate all the replies and support on this one. It was hard to write. I do want to call out two points that aren't being discussed, and that I felt pretty strongly about:

  • Open source is in trouble, and maintainers need help. Generative code is the help that showed up. What is the expectation here?
  • "The tool requires expertise to validate, but its use diminishes expertise and stunts its growth." What does "responsible use" look like that prevents this obvious and pervasive harm?

@mttaggart A great read, thanks. I'm someone who instinctively knows I'll hate shepherding an LLM, and who is aghast at the thievery and environmental vandalism required to create them. I also viscerally fear becoming dependent on big tech and eroding my own expertise to do the one thing I'm any good at. But I'm learning not to judge as harshly those who feel they need to use them, even if I think it's a road to ruin intellectually and financially (as they become more expensive).