We'll see how I feel in the morning, but for now I seem to have convinced myself to actually read that fuckin Anthropic paper.

I just

I'm not actually in the habit of reading academic research papers like this. Is it normal to begin these things by confidently asserting your priors as fact, unsupported by anything in the study?

I suppose I should do the same, because there's no way it's not going to inform my read on this

"AI" is not actually a technology, in the way people would commonly understand that term.

If you're feeling extremely generous, you could say that AI is a marketing term for a loose and shifting bundle of technologies that have specific useful applications.

I am not feeling so generous.

AI is a technocratic political project for the purpose of industrializing knowledge work. The details of how it works are a distant secondary concern to the effect it has, which is to enclose and capture all knowledge work and make it dependent on capital.

So, back to the paper.

"How AI Impacts Skill Formation"
https://arxiv.org/abs/2601.20245

The very first sentence of the abstract:

> AI assistance produces significant productivity gains across professional domains, particularly for novice workers.

1. The evidence for this is mixed, and the effect is small.
2. That's not even the purpose of this study. The design of the study doesn't support drawing conclusions in this area.

Of course, the authors will repeat this claim frequently. Which brings us back to MY priors: namely, that this is largely a political document.

> **How AI Impacts Skill Formation**
>
> AI assistance produces significant productivity gains across professional domains, particularly for novice workers. Yet how this assistance affects the development of skills required to effectively supervise AI remains unclear. Novice workers who rely heavily on AI to complete unfamiliar tasks may compromise their own skill acquisition in the process. We conduct randomized experiments to study how developers gained mastery of a new asynchronous programming library with and without the assistance of AI. We find that AI use impairs conceptual understanding, code reading, and debugging abilities, without delivering significant efficiency gains on average. Participants who fully delegated coding tasks showed some productivity improvements, but at the cost of learning the library. We identify six distinct AI interaction patterns, three of which involve cognitive engagement and preserve learning outcomes even when participants receive AI assistance. Our findings suggest that AI-enhanced productivity is not a shortcut to competence and AI assistance should be carefully adopted into workflows to preserve skill formation -- particularly in safety-critical domains.
@jenniferplusplus I like the fact that their own research doesn't fit their lazy claim you reference, and they spend a lot of time trying to work out how the claim can be true, even though their own evidence is against it (and more in line with the mixed evidence in the literature, as you say).
@jenniferplusplus it reminds me a bit of the famous thing with the Flat Earth Society people who spent $20k on an expensive laser gyroscope to "prove" that the Earth was not a rotating sphere... and then spent a lot of time being very confused and upset when, of course, it measured precisely what you'd expect from a rotating spherical Earth.
@aoanla @jenniferplusplus I was baffled that Anthropic published this paper, let alone promoted it on their blog. Cos even their headline results say "AI coding bots are shit, don't use them, they're no faster and they make you stupid". But yeah, they thought they were saying something about productivity.