Heard someone make the argument that LLMs for coding is like doping for athletes: the side effects are unpleasant, but the performance enhancement is real.
@gvwilson The best available evidence strongly suggests that - at the level of team outcomes (as opposed to code output) - performance gets worse for the majority of teams.
@jasongorman @gvwilson can I ask for links there? Thanks!

@carlton @gvwilson

Study of 28 million CI workflows reveals that the median outcome for AI coding tool use is positive on feature branch activity but negative on main branch activity

https://www.linkedin.com/pulse/what-28-million-workflows-reveal-ai-codings-biggest-risk-circleci-j9syc/

DORA 2025 data indicates that AI coding assistants are an "amplifier" of dev team strengths and dysfunctions, noting how initial gains are lost to "downstream chaos" for teams who weren't already high-performing. That's *most* teams.

https://research.google/pubs/dora-2025-state-of-ai-assisted-software-development-report/

@jasongorman @gvwilson Any team that describes itself as out of the ordinary is already a worry. Thanks! 👀