As a research project, I built a needed tool with Claude Code. I thought it would be a disaster, but it wasn't. I have some complicated feelings about it.
I really appreciate all the replies and support on this one. It was hard to write. I do want to call out two points that aren't being discussed and that I feel pretty strongly about:
@mttaggart
Actually, from my personal experience (CC too, which is probably still one of the better AI coding agents even if it has many warts), yes, it "works". But before you start announcing your great successes with it, don't forget some ugly details that people like to overlook.
The human aspect. On the one hand, you need an experienced overseer who makes sure CC stays on track; I've seen CC go on many fascinating off-topic excursions.
@mttaggart
And the other human aspect is the human competition.
It has been known for over half a century that the difference between efficient and inefficient developers is over an order of magnitude (the 1968 article actually gives a 10–28× range, depending on how you measure the data), efficiency being defined as the time to deliver a working program, starting from the spec.
Later research lowered that a bit for the means of the top and bottom groups, but the extreme outliers are still in similar ranges.
@mttaggart
But you probably wonder where the punchline is in relation to AI. My dear colleague and CEO was very excited that he managed to rewrite our MVP prototype with a much better architecture in 7–8 weeks with AI help, after estimating that a classical human team would have needed 18 person-months or so.
Now, acting as his official spoilsport (it's somewhere in my CTO contract that this is one of my duties), I had to point out that he is one of those highly efficient coding junkies, that the topic of the MVP sits squarely in his core competencies (he can literally write scientific papers about it), and that if you divide 18 months by ten (not forgetting that working alone also means no team overhead), the huge speedup he attributed to Mr. Sonnet can be explained by trivial software engineering research known for half a century.
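A quick back-of-the-envelope check of that argument, using only the figures from this thread:

```python
# Does the classic "order of magnitude" productivity factor alone explain
# the observed speedup? Figures taken from the posts above.
team_estimate_months = 18     # estimated person-months for a classical team
productivity_factor = 10      # conservative end of the 10-28x range
solo_expert_months = team_estimate_months / productivity_factor
solo_expert_weeks = solo_expert_months * 52 / 12   # convert months to weeks
print(f"{solo_expert_weeks:.1f} weeks")   # 7.8 weeks, inside the observed 7-8 week window
```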
@mttaggart
But yes "AI" does change the profession.
IMHO, such coding agents, especially once you add all the guard rails needed to make them safe, almost certainly work slower than a human expert operating in the core of his/her expertise.
But when I move into areas where I have to start looking up libraries (e.g. JavaScript, TS), LLMs suddenly start to show their capabilities in speed reading.
@mttaggart
And the other aspect is that LLMs are simply a milestone in NLP, especially multilingual NLP (despite the risk of errors; nearly all real-world algorithms have failure modes, so live with that).
So yes, you need to design your processes with the possibility of errors in mind.
Good developers learn error handling in kindergarten.
At least in our algorithms and data-structures classes, the scoring unit tests literally check error handling as well. Dealing with the correct results is the easy part.
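As a minimal sketch of what such a scoring test can look like (the function and helper names here are hypothetical illustrations, not from any actual course or codebase):

```python
# Sketch: a grader that awards points for error handling as well as for
# correct results, so a happy-path-only solution cannot score full marks.
def safe_div(a, b):
    """Submission under test: must reject division by zero explicitly."""
    if b == 0:
        raise ValueError("division by zero is not allowed")
    return a / b

def score(func):
    """One point for the correct result, one for proper error handling."""
    points = 0
    if func(10, 2) == 5:          # the easy part: correct results
        points += 1
    try:
        func(1, 0)                # must raise, not silently return garbage
    except ValueError:
        points += 1               # the failure path earns its own point
    return points

print(score(safe_div))            # 2: both the happy path and the error path pass
```

The design choice is the point of the thread: the failure mode is graded explicitly, so handling errors is part of "working", not an afterthought.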