One of the lessons I learned from going back to school for CS was to be suspicious of code that worked as intended the first time.

Writing unit tests before, or concurrently with, the code was critical to discovering ways it might fail — and, in the process, to understanding how the program was actually operating.
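A minimal sketch of the idea (the `mean` function here is a hypothetical example, not from any real codebase): the happy-path test passes immediately, and it's the deliberately adversarial test that surfaces a failure mode.

```python
def mean(values):
    """Average a list of numbers (a hypothetical example function)."""
    return sum(values) / len(values)

# Happy-path check: this passes on the first try -- which is exactly
# when suspicion is warranted.
assert mean([2, 4, 6]) == 4

# Probing for failure: an empty list divides by zero. Writing the
# test is what surfaced the gap, before any caller hit it.
try:
    mean([])
    raise AssertionError("expected mean([]) to fail on empty input")
except ZeroDivisionError:
    pass  # failure mode found; now decide how the code should handle it
```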

The meta goal became to automatically distrust things that worked without anyone knowing why.

Why?

Because if you don’t know why it worked before, you have no idea whether it will keep working.

All of the above was “everyone knows” status.

And then LLMs came along and everyone seemed to say, “Actually, forget all that and throw your integrity away.”

The transformation was invasive and pernicious.

@CptSuperlative Reminds me of this article on 'Residuality Theory' and complexity science for software engineering. https://ericnormand.substack.com/p/residuality-theory
"The industry has come up with a set of best practices that can mitigate many stressors. … Architecture is not “use as many best practices as possible” just like software design is not “use as many design patterns as possible.” …
[W]e should mentally simulate the stressors ahead of time to understand how [the system can survive] in the complex real world."
@CptSuperlative The trend is "maximising the number of design patterns" rather than "simulating disasters in advance".