It actually depends, IMO.
Writing production code first, having AI write unit tests afterwards? Terrible idea.
Using AI to generate missing unit tests in a years-old legacy system? Better idea.
Doing AI coding in TDD style? Great idea. Turns out that writing the test first really improves the AI's ability to come up with good production code.
TDD fans won't be surprised.
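To make the workflow concrete: you write the failing test yourself as the spec, then let the AI write just enough production code to make it pass. A minimal sketch (the function name and behavior here are made up for illustration):

```python
import unittest

# Step 1: the test is written first and acts as the spec
# the AI has to satisfy.
class TestSlugify(unittest.TestCase):
    def test_lowercases_and_hyphenates(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

# Step 2: the AI then writes just enough production code
# to make the test pass.
def slugify(title):
    return "-".join(title.lower().split())

if __name__ == "__main__":
    unittest.main()
```

The point is the order: with the test in place first, the model has an unambiguous, executable definition of "done" instead of guessing at intent from prose.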
Very good and important questions.
We're talking about a technology that's growing exponentially: whatever the answer is today, it will be different next year.
All we can do today is experiment with the new tools as they're released, in order to understand how they work and learn what they can do for us.
@fxnn @alanpaxton @akahn there's no reason to assume that reasoning lies on a continuum with whatever it is LLMs do today, such that quantitative improvement would eventually let them reason
my recent attempts to drive an LLM using TDD have gone extremely badly, including it fabricating its own tests and showing no ability to comprehend the problem or my instructions
Yes, the steps we take shouldn't be too coarse, and AI-augmented coding is still somewhat adventurous. One of the important things it needs is a good "system prompt": some rules which tell it, e.g., never to fix a test by deleting it.
I can recommend @kentbeck's recent (paywalled) blog posts on the matter, e.g. https://open.substack.com/pub/tidyfirst/p/persistent-prompting
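For illustration, such a rules file might look something like this (the file name, format, and wording are my own assumptions, not Kent Beck's actual prompt):

```
# rules.md — persistent instructions for the coding assistant
- Never fix a failing test by deleting or weakening it.
- Make one small change at a time; run the full test suite after each change.
- Ask before adding new dependencies or touching files outside the task.
```

The common pattern is keeping these rules in a file the tool re-reads on every session, so the constraints persist instead of being forgotten mid-conversation.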
I can relate. While I get that people want to earn money with their blogs, I believe there must be a better way than to feed Substack's or Medium's pockets.
Anyway, the one good free blog on the general LLM topic that I read regularly is @simon's, https://simonwillison.net.
Also, someone recommended https://harper.blog/posts/ to me, which seems to feature some interesting AI coding posts lately.
Nothing more specific at the moment, unfortunately.