The risk of removing parts of your delivery process because you think "AI can do it" - such as teasing out the mental model of how the system works from the heads of the people who own it - is that you discover, often far too late, what purposes that practice actually served and which benefits you didn't realize you were getting.

Assumptions you don't know you're making. Unknown unknowns.

1/2

An additional problem is the delay between the moment risks start piling up in the codebase and the team's process, and the moment those risks visibly materialize.

Some are easy to observe, like bugs and outages, but others are less tangible and harder to detect: a decreased ability to reason about the system and anticipate problems before they occur, a shift in the ratio of proactive problem detection to reactive mitigation, and so on.

@d_stepanovic Nobody likes hearing a warning. We like to learn for ourselves 🤭

@d_stepanovic

Rising bug counts take a while to catch anyone's attention. They have to get *really bad* first.

And, unfortunately, the most typical response is to try to "fix" the "QA testing" process so that fewer bugs "slip through."