CircleCI's analysis of 28 million CI workflows confirms the same picture the DORA data shows. While feature branch activity is up significantly, the median impact on *main* (i.e. release) branch activity is a net negative 7%.

Only the top 5% of teams saw significant gains. The top 10% flatlined at 1%.

For the average team, AI slows them down overall.

Told ya!

https://www.linkedin.com/pulse/what-28-million-workflows-reveal-ai-codings-biggest-risk-circleci-j9syc/

What 28 million workflows reveal about AI coding’s biggest risk

In our last issue, we shared a preview of data from our upcoming 2026 State of Software Delivery showing that the promised AI productivity boom isn’t all hype. Throughput across the CircleCI platform increased 59% year-over-year, by far the largest productivity jump we've ever recorded…

"But Jason, this is only 28 million data points comprising actual observations from real projects..."
Now let's watch engineering leaders nod sagely in agreement and then proceed to do nothing about it. Like they always did.
@jasongorman "But we are the top 5% team" ..
@mosmann @jasongorman I would not take the mentioned "Top 5% Teams" as the actual best teams in terms of real-world impact. It just means that the teams that already pushed a lot to the main branch now do it even more. But that could just be teams with no code review dumping code into main and testing there. Including hotfixes because stuff did not work.
@mormund @mosmann Exactly. It means "top 5% in the data"
@jasongorman @mormund I think we lost the "build reliable, maintainable software" KPI years ago :)