CircleCI's analysis of 28 million CI workflows confirms the same picture the DORA data shows. While feature branch activity is up significantly, the median impact on *main* (i.e. release) branch activity is a net-negative 7%.

Only the top 5% of teams saw significant gains. The top 10% flatlined at 1%.

For the average team, AI slows them down overall.

Told ya!

https://www.linkedin.com/pulse/what-28-million-workflows-reveal-ai-codings-biggest-risk-circleci-j9syc/

What 28 million workflows reveal about AI coding’s biggest risk

In our last issue, we shared a preview of data from our upcoming 2026 State of Software Delivery showing that the promised AI productivity boom isn't all hype. Throughput across the CircleCI platform increased 59% year-over-year, by far the largest productivity jump we've ever recorded […]

"But Jason, this is only 28 million data points comprising actual observations from real projects..."
Now let's watch engineering leaders nod sagely in agreement and then proceed to do nothing about it. Like they always did.
@jasongorman "But we are the top 5% team"…

@mosmann @jasongorman 👆This! 100% this!

Teams & developers generally just don’t realise how bad they are. The apparent arrogance and lack of humility is staggering, and IMO is getting worse.

@thirstybear @mosmann I've known for years that the teams who need my help most believe they need it the least

@jasongorman @thirstybear @mosmann

You are Nanny McPhee 😁

“When you need me, but do not want me, then I must stay. When you want me, but no longer need me, then I have to go.”

@chrisoldwood @jasongorman @mosmann Funnily enough that’s how I describe my coaching approach too 🙂
@mosmann @jasongorman I would not take the "Top 5% Teams" mentioned as the actual best teams in terms of real-world impact. It just means that the teams that pushed a lot to the main branch do it even more now. But those could just be teams with no code review dumping code into main and testing there. Including hotfixes because stuff did not work.
@mormund @mosmann Exactly. It means "top 5% in the data"
@jasongorman @mormund I think we lost the "build reliable, maintainable software" KPI years ago:)
@jasongorman @mormund @mosmann given that their average CI time is 6s, I don't think we need to ponder about this for too long...
@mosmann @jasongorman the first thing I thought. I know that "top 5%" is used in the article in a strictly statistical sense (as a percentile), but I fear that too many people won't realize that it's unlikely that you can know how to behave to land there - it's just a statistical outlier. So the real take-away is the median negative rate, IMHO.

@jasongorman This is a very interesting report, though it's no surprise. Thanks for sharing.

Two very important aspects these data do not tell us about: changes in code quality, and in product quality. I suspect enshittification isn't making things better here, either.

@meduz There's data on failed merges and MTTR which hints at degradation.

@jasongorman Very likely, though it doesn't say much about what ends up merged. It's probably nuanced depending on how much a team would need guidance (even from AI 😨).

It would also be interesting to cross-reference such data with development team burnout in the coming years.

@jasongorman the average AI-adopting engineering leader will think that they must be in that top 5%