The "AI is gonna make programmers massively more efficient" myth is hitting reality. And not surviving.

https://garymarcus.substack.com/p/sorry-genai-is-not-going-to-10x-computer

Sorry, GenAI is NOT going to 10x computer programming

Here’s Why

Marcus on AI
@tante maybe it will make those who don’t fall for it massively more productive

@tante You cannot deconstruct the origins of "10x" often enough.

> The original study that found huge variations in individual programming productivity was conducted in the late 1960s by Sackman, Erikson, and Grant (1968). They studied professional programmers with an average of 7 years’ experience and found that the ratio of initial coding time between the best and worst programmers was about 20 to 1; the ratio of debugging times over 25 to 1; of program size 5 to 1; and of program execution speed about 10 to 1. They found no relationship between a programmer’s amount of experience and code quality or productivity.

source: https://www.construx.com/blog/productivity-variations-among-software-developers-and-teams-the-origin-of-10x/

Productivity Variations Among Software Developers and Teams The Origin of 10x | Construx

Some people have asked for more background on where the "10x" name of this blog came from. The gist of the name is that researchers have found 10-fold

Construx
@leitmedium @tante Is the study and the other linked sources not supporting the 10x claim rather than deconstructing it? (or did I miss(understand) some relevant context here?)
@simulo @tante It was a bit misleading, yes. The source from tante relies on the core idea of 10x as people with exceptionally high output. The original study also stresses that this phenomenon exists, while you can argue that it might also be a capitalist's dream of measuring people's output. If I remember correctly, the initial studies were based on interviews with managers of coding people...

@leitmedium @tante Briefly looking at the study, it seems it was a psychology-style experiment.

I looked it up because I read your toot just after stumbling on Brooks’ "The Mythical Man Month" citing the study (which might have contributed to the popularity of the concept)

Being a psychological study focussed on output, it will ignore other factors like people supporting each other – teaching someone else will not increase my personal productivity, etc.

Contempt for the glue people

The clip below is from a lecture from 2015 that then-Google CEO Eric Schmidt gave to a Stanford class. Here’s a transcript, emphasis mine. When I was at Novell, I had learned that there were …

Surfing Complexity
@simulo @leitmedium @tante fair, just want to say psychology also studies things like how collaboration etc improves stuff. :D
@flourn0 @leitmedium @tante yes – might have been more precise as: "quantitative study measuring lines of code produced by a single person"
@simulo @tante Going to look it up in The Mythical Man Month, thanks for the reference. I read a lot about 1960s/70s phantasm on coders, productivity and cost of employees in the past and the 10x topic is a good example for the still ongoing struggle with this whole topic.
@leitmedium @simulo @tante and wasn't there this "Coding War Games" survey, started in 1977, which did show individual differences of an order of magnitude not related to language, experience, or salary, and did not find this astounding? Much more of note was that they found "It mattered a lot who your pair mate was." And also that "The top performers' space is quieter, more private, better protected from interruption, and there is more of it." https://www.gwern.net/docs/cs/2001-demarco-peopleware-whymeasureperformance.pdf
@leitmedium What's wild is that it was low-performing outliers that led to this 10x idea! It's like, there are some 1/10x programmers. And that's just a quirk of how these things tend to be distributed!

@tante I find "10x-ing requires deep conceptual understanding" is a very important point. And sadly, I've noticed in others and in myself a tendency, when using an AI assistant, to solve problems that arise from not-so-deep understanding with a lot of code.

I can only guess why this is the case. I could envision it like this: when you hit a problem arising from missing conceptual understanding, coding assistants will help you generate code. But without them you will get stuck and have to understand.

@inw @tante I feel like I've done this repeatedly with little hobby projects. But I think I would overcode at first even without the AI assistant and in both cases go back with the better understanding and refactor to improve the system.

@flourn0 @tante I've observed that the refactoring and going back happens less if an AI assistant is used. But this may be an effect of using one to save time. Which may be an indicator of stress.

However, I certainly did observe complex code constructs created by AI assistants where a simple change in a data structure would have been sufficient. It is so easy to ask the assistant to write code without reading the code and thinking about the change one is making. It is a skill to ask the right questions.
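A toy sketch of the pattern described above (not from the thread; all names are hypothetical): code that works around a poor data structure with loops, versus simply changing the structure.

```python
# Hypothetical example of "complex code where a data-structure
# change would have been sufficient". Neither snippet is from
# the thread; both are illustrations.

# Loop-heavy version: a linear scan over a list on every lookup.
def find_user_verbose(users, name):
    for i in range(len(users)):
        if users[i]["name"] == name:
            return users[i]
    return None

# Simpler alternative: index the users by name once, in a dict,
# and every later lookup is a plain dictionary access.
def build_index(users):
    return {u["name"]: u for u in users}

users = [
    {"name": "ada", "role": "admin"},
    {"name": "bob", "role": "user"},
]
index = build_index(users)

# Both approaches agree; the dict version needs no loop at all.
assert find_user_verbose(users, "ada") == index["ada"]
assert find_user_verbose(users, "eve") is None
assert index.get("eve") is None
```

The point is not the micro-optimization but the reading habit: spotting that the loop exists only to compensate for the list requires actually reading the generated code.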

@flourn0 @tante While the skill of asking the right questions is needed with and without an AI assistant, it may be slightly different in the two cases. When you ask a human a question, the answer will mostly not be a lot of code. When asking an AI assistant, one must remember to critically read the code and the explanation, not only check whether it works.

To be fair, most of these observations were from the time we started to use an AI assistant. It has gotten better now. Not the assistant, but our usage of it.

@tante I would say that AI can be more helpful than even some forum programmers. I found AI more helpful than forum programmers who do not know how to answer a simple question about Java programming.