"The future is already here, it's just not evenly distributed" has never been more appropriate.

AI productivity gains have gone from a myth to a fact I see multiple examples of at work each week.

The question now is whether such productivity gains will remain localized to the software industry (or even just big tech) or will apply more broadly to other industries.

https://www.theregister.com/2026/03/17/ai_businesses_faking_it_reckoning_coming_codestrap/

AI still doesn't work very well, businesses are faking it, and a reckoning is coming

interview: Codestrap founders say we need to dial down the hype and sort through the mess

The Register

@carnage4life can you share your actual productivity gains? The article you linked says the opposite, including the fact that insurance companies are actively starting to exclude genAI from coverage.

The only gains I've seen are from translating between languages like "convert this Python to Power BI Power Query" or "convert this SQL to Python", things that likely have hundreds of thousands of examples in the training data. In those cases, the user still needs to understand both languages to fix errors created by using a vector math token predictor to create code in languages with strict syntax requirements. It's just "predicting"; it has no way to know if it is functional.

Plenty of people are using it for "summarize this page" or "clean up my notes" type work, but there is no way to systematically test that for accuracy or efficiency.

It's still too soon in our journey to know how bad the vibe coding will be once nobody can understand how anything works. It'll be "load-bearing legacy code" all the way down.

@jrdepriest I save hours a week on writing tasks at work. Engineering teams I work with regularly report tasks taking 2x to 3x less time than with pre-AI tools. I just had one team in my org deliver 4x as much processed data output as they originally planned for the half.
@carnage4life @jrdepriest Any other metrics besides time?