One of the lessons I learned from going back to school for CS was to be suspicious of code that worked as intended the first time.

Writing unit tests before, or concurrently with, the code was critical to discovering ways it might fail — and, in the process, to understanding how the program was actually operating.
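A minimal sketch of what this looks like in practice — the function and tests below are hypothetical, not from any specific project, but show how a test written alongside the code surfaces a failure mode the "working" version hides:

```python
# Hypothetical example: a tiny function that "works the first time" on the
# obvious inputs, plus tests written alongside it to probe how it might fail.
def average(values):
    return sum(values) / len(values)

def test_typical_input():
    assert average([2, 4, 6]) == 4

def test_empty_input():
    # This test is what surfaces the failure mode: the "working"
    # code raises ZeroDivisionError on an empty list.
    try:
        average([])
        assert False, "expected a ZeroDivisionError"
    except ZeroDivisionError:
        pass
```

Run with `pytest` (or call the test functions directly); the second test is the one that forces a decision about what an empty input should even mean.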

The meta goal became to automatically distrust things that worked without anyone knowing why.

Why?

Because if you don’t know why it worked before, you have no idea whether it will keep working.

All of the above was “everyone knows” status.

And then LLMs came along and everyone seemed to say, “Actually, forget all that and throw your integrity away.”

The transformation was invasive and pernicious.

@CptSuperlative I really appreciate your post. I used your approach of white-box testing with the AI itself... so I am testing a multi-thousand-line script by asking Claude Code to perform white-box testing on it. I will use this concept henceforth for all coding tasks. Many thanks for your post; it taught me a lot. The first bug it found: if someone types ë in a machine name, the regex fails. This sort of minute bug lying in wait is lovely to spot this way.
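The actual script and regex aren’t shown, but a typical ASCII-only hostname pattern would produce exactly this bug — a sketch of the failure and one possible fix:

```python
import re

# Hypothetical pattern: an ASCII-only character class like this is a common
# way to validate machine names, and it silently rejects letters like ë.
ascii_pattern = re.compile(r"^[A-Za-z0-9-]+$")

print(bool(ascii_pattern.match("server-01")))   # ASCII name matches
print(bool(ascii_pattern.match("sërver-01")))   # ë falls outside [A-Za-z]

# One possible fix: match Unicode word characters instead.
# In Python 3, \w is Unicode-aware by default, so ë is accepted.
unicode_pattern = re.compile(r"^[\w-]+$")
print(bool(unicode_pattern.match("sërver-01")))
```

Whether accepting non-ASCII machine names is the right fix depends on the system consuming them; the point is that a test (or an AI doing white-box testing) forces that decision into the open instead of leaving it as a latent rejection.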