The "AI is gonna make programmers more massively more efficient myth" is hitting reality. And not surviving.
https://garymarcus.substack.com/p/sorry-genai-is-not-going-to-10x-computer
The "AI is gonna make programmers more massively more efficient myth" is hitting reality. And not surviving.
https://garymarcus.substack.com/p/sorry-genai-is-not-going-to-10x-computer
@tante You cannot deconstruct the origins of "10x" often enough.
> The original study that found huge variations in individual programming productivity was conducted in the late 1960s by Sackman, Erikson, and Grant (1968). They studied professional programmers with an average of 7 years’ experience and found that the ratio of initial coding time between the best and worst programmers was about 20 to 1; the ratio of debugging times over 25 to 1; of program size 5 to 1; and of program execution speed about 10 to 1. They found no relationship between a programmer’s amount of experience and code quality or productivity.
@leitmedium @tante Taking a brief look at the study, it seems to have been a psychology-style experiment.
I looked it up because I read your toot just after stumbling on Brooks’ "The Mythical Man-Month" citing the study (which might have contributed to the popularity of the concept).
Being a psychological study focussed on individual output, it will ignore other factors like people supporting each other – teaching someone else will not increase my personal productivity, etc.
@tante I find "10x-ing requires deep conceptual understanding" is a very important point. And sadly, I realized from others and myself a tendency to solve problems which arise by not-so-deep understanding with a lot of code when using an AI assistant.
I can only guess why this is the case. I imagine it like this: when you hit a problem caused by missing conceptual understanding, a coding assistant will help you generate code around it. Without one, you would get stuck and have to build that understanding.
@flourn0 @tante I observed that refactoring and backtracking happen less when an AI assistant is used. But this may be an effect of using one to save time – which may itself be an indicator of stress.
However, I certainly did observe complex code constructs created by AI assistants where a simple change in a data structure would have been sufficient. It is so easy to ask the assistant to write code without reading the result and thinking about the change one is making. It is a skill to ask the right questions.
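A hypothetical sketch of that pattern (names and data invented, not from any actual assistant session): layering lookup code onto an awkward structure, where switching the structure itself would have been the simpler fix.

```python
# Assistant-style fix: more code on top of an awkward structure –
# a list of (user, role) pairs scanned on every lookup.
def find_role_complex(records, user):
    for name, role in records:
        if name == user:
            return role
    return None

# Simpler data-structure change: store the mapping as a dict,
# and the lookup collapses to a one-liner.
def find_role_simple(roles, user):
    return roles.get(user)

records = [("alice", "admin"), ("bob", "dev")]
roles = dict(records)

assert find_role_complex(records, "bob") == find_role_simple(roles, "bob") == "dev"
```

Trivial on purpose: the point is that the assistant happily extends the first version, while a human pausing to read the code might change the structure instead.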
@flourn0 @tante While the skill of asking the right questions is needed with and without an AI assistant, it differs slightly between the two cases. When you ask a human a question, the answer will mostly not be a lot of code. When you ask an AI assistant, you must remember to critically read the code and the explanation – not only check whether it works.
To be fair, most of these observations are from the time when we first started using an AI assistant. It has gotten better now. Not the assistant, but our usage of it.