
One of the promises of AI is that it can reduce workloads so employees can focus on higher-value and more engaging tasks. But according to new research, AI tools don't reduce work; they consistently intensify it. In the study, employees worked at a faster pace, took on a broader scope of tasks, and extended work into more hours of the day, often without being asked to do so. That may sound like a win, but it's not quite so simple. These changes can be unsustainable, leading to workload creep, cognitive fatigue, burnout, and weakened decision-making. The productivity surge enjoyed at the beginning can give way to lower-quality work, turnover, and other problems. To correct for this, companies need to adopt an "AI practice": a set of norms and standards around AI use that can include intentional pauses, sequencing work, and adding more human grounding.
@thzinc @simon and that the quantity of work went up, but reading between the lines it sounds like the quality went down. I say this because people using LLMs to do things that professionals in a particular field would do almost invariably get results of lesser quality.
Maybe that doesn't matter. But I strongly suspect it does.
@simon A way I've been thinking about this is that it intensifies work by intensifying technical debt.
I know there's been a huge backlash against that term, but when everyone is signing off on and shipping 1.5K lines of code at a go several times a week, common sense dictates that you're robbing Peter to pay Paul later.
@faassen (Leaving Simon out of this I'm sure he gets his ear bent plenty :)
Well that's the thing, right?
Is code review happening? That depends: either your org does it or it doesn't.
Is code review happening at the level it used to before generative AI started pushing gigantor change sets into the development pipeline? That's what's unclear to me.
@feoh
Ignoring AI for a moment:
I know I can do high quality work on my own, in the right context, without review.
I have a lot of thoughts about PR code reviews; I know they slow me down a lot. They run contrary to heavy repeated refactoring to mold things in the right shape. I also tend to prefer earlier points of collaboration, during pairing and such.
But in a team you aren't by yourself, and you need to take others along even if that means slowing down one way or another. The goal of shared understanding is far more important than the review itself.
And I don't understand why AI would let you give up shared understanding.
@simon I have also felt this AI treadmill materialize under my feet.
I need to be more intentional (and okay) with stepping away from managing agents to get some quiet and rest.
@simon Thanks for sharing - will TAL. Read the first couple of paragraphs and this passage: “we found that employees worked at a faster pace, took on a broader scope of tasks, and extended work into more hours of the day,” reminded me of
"I have no reason to believe that gains in understandability (or on factors affecting productivity) would change that. We're just gonna get more software, moving faster, doing more things, always bordering on running out of breath.”
from https://ferd.ca/the-law-of-stretched-cognitive-systems.html
Am curious, is this a recent development? On average before using AI how long would it take before you felt mentally depleted?
“I'm frequently finding myself with work on two or three projects running parallel. I can get so much done, but after just an hour or two my mental energy for the day feels almost entirely depleted.”
@simon
This is sad, as the sustained focus required for programming is also very costly mentally in my experience. One of the benefits of using AI tools could be that you can relax a little and still get big refactorings done.
I wonder whether part of what feeds into this is a grind mindset - I see advice that you should keep agents running at all times, or multiple agents, etc, and wonder how much of this is fundamentally productive in many contexts.