Majority of CEOs report zero payoff from AI splurge: PwC survey finds more than half of 4,500+ biz leaders see no revenue growth nor cost savings

The Register
The Futzing Fraction

At least some of your time with genAI will be spent just kind of… futzing with it.

I hate that this is the piece of my writing that I have linked to the most in my life at this point, but someone really needs to tell CEOs to start *listening* to me so I don't have to keep repeating myself.
@glyph I'm working my way up to getting our CTO and CEO to read it.
@fancysandwiches Thanks. I really hope it makes an impact.

@glyph "Despite the CEOs' repsonses [sic], PwC concludes more investment is required. It claims that "isolated, tactical AI projects" often don't deliver measurable value, and that tangible returns instead come from enterprise-wide deployments consistent with business strategy" - Right, doesn't provide measurable benefits in small, targeted applications, but definitely will if you YOLO the whole business into it 🥴

https://www.theregister.com/2026/01/20/pwc_ai_ceo_survey/

@reedmideke Yes, most of these critical studies have problems like this; PwC wants to sell you strategic AI consulting (source: <https://www.pwc.com/gx/en/services/ai.html>) and many other "AI is bad" surveys are conducted by companies that will show you that THEY hold the secret knowledge that lets you do it the right way.

@reedmideke But the fact remains that these articles always have 2 parts:

part 1, the part where they actually *measure* what the AI is doing, and it's bad

and part 2, the part where they *imagine* what some hypothetical future AI might be like, and it's good

@glyph @reedmideke like assorted educational reforms -- co-requisite instead of remedial courses and gosh, they're not working yet but we just need to figure some things out!
Collateral damage is *not* their problem.
@glyph @reedmideke And for some reason, in the abstract, summary, and conclusion they often get away with describing what they *did* in part one but reporting the *imagined* outcome from part two as if that were what part one actually found, omitting both the real part-one results and the fact that part two was entirely an exercise in confabulation.