https://www.youtube.com/watch?v=BoKQFxxanGU

| website | https://draganstepanovic.com |

An additional problem is the delay between the moment you start piling up risks in the codebase and in the team's process, and the time it takes for those risks to visibly materialize.
Some are easier to observe, like bugs and outages, but others are less tangible and harder to detect: a decreased ability to reason about the system and anticipate problems before they occur, a shift in the ratio of proactive problem detection to reactive mitigation, etc.
The risk of removing parts of your delivery process that you think you no longer need because "AI can do it" - such as teasing out the mental model of how the system works from the heads of the people who own it - is that you discover, often far too late, what purposes that practice actually served and which benefits you didn't realize you were getting.
Assumptions you don't know you're making. Unknown unknowns.
1/2
Preparing a talk, "Agentic coding - Systems Perspective", and along the way I realized that cognitive/comprehension debt didn't arrive with the advent of agentic coding. Most teams doing work in isolation (individually) already experienced it heavily.
The difference is that, on this spectrum of fragmentation of the mental model of how the system works, we went from erosion of a shared mental model to its complete dissolution.
I also have a new understanding of why teams doing pair/mob programming worked so well.
Let's not forget that LLMs were fed the ever-decreasing quality of work our industry has produced over the last 15 years, a result of cheap money and ever-decreasing central bank interest rates.
And both of these reinforcing loops are accelerating as LLMs dogfood their own output.
RE: https://mastodon.social/@glynmoody/116367986499686502
OpenClaw _is_ the vulnerability