The "AI is gonna make programmers massively more efficient" myth is hitting reality. And not surviving.

https://garymarcus.substack.com/p/sorry-genai-is-not-going-to-10x-computer

"Sorry, GenAI is NOT going to 10x computer programming. Here's Why" (Marcus on AI)

@tante I find "10x-ing requires deep conceptual understanding" a very important point. And sadly, I have noticed in others and in myself a tendency, when using an AI assistant, to solve problems that arise from not-so-deep understanding by throwing a lot of code at them.

I can only guess why this is the case. I imagine it like this: when you hit a problem caused by missing conceptual understanding, a coding assistant will happily help you generate more code. Without one, you get stuck and are forced to actually understand.

@inw @tante I feel like I've done this repeatedly with little hobby projects. But I think I would overcode at first even without the AI assistant, and in both cases would later go back with the better understanding and refactor to improve the system.

@flourn0 @tante In my observation, that going back and refactoring happens less often when an AI assistant is used. But this may be an effect of reaching for one to save time in the first place, which may itself be an indicator of stress.

I certainly did, however, observe complex code constructs created by AI assistants where a simple change to a data structure would have sufficed. It is so easy to ask the assistant to write code without reading it or thinking about the change one is making. Asking the right questions is a skill.
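A hypothetical sketch of that pattern, assuming a task like "find duplicate emails in a user list" (the function names and the task are invented for illustration, not taken from the thread): an assistant may produce an elaborate nested-loop construct, while a small change of data structure collapses it to a few lines.

```python
from collections import Counter

# Hypothetical example of the kind of construct an assistant might generate:
# nested loops over a list of (name, email) tuples, O(n^2).
def find_duplicates_generated(users):
    duplicates = []
    for i, (_, email_a) in enumerate(users):
        for j, (_, email_b) in enumerate(users):
            # Compare every pair, avoid self-comparison and repeats.
            if i != j and email_a == email_b and email_a not in duplicates:
                duplicates.append(email_a)
    return duplicates

# The "simple change to a data structure": count emails in a dict-like
# Counter and the whole construct becomes two lines, O(n).
def find_duplicates_simple(users):
    counts = Counter(email for _, email in users)
    return [email for email, n in counts.items() if n > 1]
```

Both return the same result, e.g. `[("Ann", "a@x"), ("Bob", "b@x"), ("Cid", "a@x")]` yields `["a@x"]`; the second version only appears once you stop and think about the data instead of asking for more code.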

@flourn0 @tante While the skill of asking the right questions is needed with and without an AI assistant, it differs slightly between the two cases. Ask a human a question and the answer will mostly not be a pile of code. Ask an AI assistant and one must remember to critically read both the code and the explanation, not merely check whether it works.

To be fair, most of these observations were from the time we started to use an AI assistant. It has gotten better now. Not the assistant, but our usage of it.