I mean, AI models do a great job of generating code, but how often do you verify the logic, even the logic of the tests? I am just reading through all of them and they are all low quality:
- testing something that never changed (the test was never red)
- making up tests that are just not relevant
- extra hard to read test code
- bad explainers in test code
- testing too many things at once, so you won't know why it fails (if it ever does)
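To make that last point concrete, here is a minimal, entirely hypothetical pytest-style sketch (the `parse_row` function and all names are made up for illustration, not taken from the thread): when one test asserts many behaviors, a red test doesn't tell you which behavior broke.

```python
# Hypothetical function under test, invented for this example.
def parse_row(row: str) -> dict:
    name, amount = row.split(",")
    return {"name": name.strip(), "amount": int(amount)}

# Anti-pattern: several unrelated concerns in one test.
# If it goes red, the test name tells you nothing about what broke.
def test_parse_row_everything_at_once():
    result = parse_row(" Alice , 42")
    assert result["name"] == "Alice"
    assert result["amount"] == 42

# Better: one focused test per behavior, so a failure names itself.
def test_parse_row_strips_whitespace_from_name():
    assert parse_row(" Alice , 42")["name"] == "Alice"

def test_parse_row_converts_amount_to_int():
    assert parse_row(" Alice , 42")["amount"] == 42
```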

How do all these people that give me that #AI #fomo do it?

@wolframkriesing if it's low quality, then it's not a great job, is it?
@wolframkriesing I had the same experience reading AI-generated tests last week

@wolframkriesing

I don't have answers, besides the obvious.

I think it all comes back to how one reacts to being offered "an army of junior programmers who never sleep."

For some, that sounds like heaven. I am beginning to suspect that their quickest path to productivity may be jumping on the AI wagon and riding it until they decide to get off.

Of course, their *cheapest* path is probably to go grab a drink with me, and ask me what I really think of all this. But I can only afford so many extra calories, so some are going to have to just take the ride.

@wolframkriesing curious what model you use, what agent software, and how you prompt it; all three are quite essential to a good outcome.
@dsp Opus.
I think my prompting is quite expressive, but since I am working with legacy software where the data (in my case, read from CSV files) really matters, there are quite a number of places one needs to look at just to understand what is important to have in a test. I am learning the business cases by reading the software and trying to figure out which behaviors are unwanted side effects and which are real, intended business logic, so I guess that is not easy by default.
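For legacy code like this, one common technique is a characterization test: run the existing code on a known input, record what it *actually* does, and pin that down as the expectation before refactoring or trusting generated tests. A minimal sketch, assuming Python and pytest; the `total_amount` function and the CSV fixture are invented for illustration, not from the original software:

```python
import csv
import io

# Hypothetical stand-in for a legacy routine: reads CSV text and sums
# an "amount" column. All names here are illustrative assumptions.
def total_amount(csv_text: str) -> int:
    reader = csv.DictReader(io.StringIO(csv_text))
    return sum(int(row["amount"]) for row in reader)

# Characterization test: the expected value is whatever the current
# code produced when first run, so later changes can't alter the
# observed behavior silently (whether or not that behavior is "right").
def test_total_amount_characterizes_current_behavior():
    fixture = "name,amount\nAlice,40\nBob,2\n"
    assert total_amount(fixture) == 42
```

This doesn't answer whether a side effect is wanted business logic, but it gives you a red/green safety net while you read the code and decide.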