I'm not actually in the habit of reading academic research papers like this. Is it normal to begin these things by confidently asserting your priors as fact, unsupported by anything in the study?
I suppose I should do the same, because there's no way my priors aren't going to inform my read of this.

AI assistance produces significant productivity gains across professional domains, particularly for novice workers. Yet how this assistance affects the development of skills required to effectively supervise AI remains unclear. Novice workers who rely heavily on AI to complete unfamiliar tasks may compromise their own skill acquisition in the process. We conduct randomized experiments to study how developers gained mastery of a new asynchronous programming library with and without the assistance of AI. We find that AI use impairs conceptual understanding, code reading, and debugging abilities, without delivering significant efficiency gains on average. Participants who fully delegated coding tasks showed some productivity improvements, but at the cost of learning the library. We identify six distinct AI interaction patterns, three of which involve cognitive engagement and preserve learning outcomes even when participants receive AI assistance. Our findings suggest that AI-enhanced productivity is not a shortcut to competence and AI assistance should be carefully adopted into workflows to preserve skill formation -- particularly in safety-critical domains.
@mikalai @seanwbruno @jenniferplusplus the positive signal is that it *survived* peer review, which implies that multiple knowledgeable, independent scientists in the paper's area of study read it and concluded, "the conclusions stated by this paper are supported by the data and arguments presented in the paper".
This paper would not survive peer review.
It is a flawed system, but it is not worthless.