https://www.anthropic.com/research/AI-assistance-coding-skills

Anthropic's own fucking study lmao

"Yeah AI coding makes you worse at it"

like
significantly so

This tracks with my personal experience in my previous job.

Rapid deskilling when they started using LLM "prompt engineering" for the code they pushed.

Within weeks they had basically stopped being able to explain what they were pushing during code review.

This, mixed with that study from last year where the results were "programmers think they are 20% more efficient, in fact they were 20% slower and worse", just seems to indicate that vibe coding is uh

bad
for both the programmer, and what is being programmed.

Like, really fucking bad.

Quick link to article about

https://arstechnica.com/ai/2025/07/study-finds-ai-tools-made-open-source-software-developers-19-percent-slower/

Study finds AI tools made open source software developers 19 percent slower

Coders spent more time prompting and reviewing AI generations than they saved on coding.

Ars Technica
@Loosf so uh I don't have the mental bandwidth to read the article right now but I'm stuck in an argument with someone who insists that the "programmers think they are 20% more efficient, in fact they were 20% slower and worse" thing applies only to _inexperienced_ programmers ... does this break down the observed effect by level of initial experience at all?
@zwol @Loosf they studied a cohort of junior programmers with not necessarily a lot of Python exposure (at least once a week), and no prior exposure to a complex library (Trio).

@pkhuong @Loosf hm, so there may actually be a gap in the research.

Personally I don't see any reason to think LLM use _wouldn't_ also degrade the abilities of experienced programmers but that's not going to get me out of the argument :-(

@zwol @pkhuong @Loosf

Here's the experience of a more senior developer. https://mikelovesrobots.substack.com/p/wheres-the-shovelware-why-ai-coding

Lots to think about here. Between these articles on whether AI coding actually speeds things up at all, and Pluralistic's article ...

https://pluralistic.net/2026/01/06/1000x-liability/

To me, quality is what distinguishes good products from bad. Quality appears to be at risk from AI coding.

Where's the Shovelware? Why AI Coding Claims Don't Add Up

78% of developers claim AI makes them more productive. 14% say it's a 10x improvement. So where's the flood of new software? Turns out those productivity claims are bullshit.

Mike Judge
@TobyHaynes @pkhuong @Loosf oh that looks like exactly what i needed, thanks
@zwol @pkhuong @TobyHaynes thank you for these links
They're very useful
@zwol that was a different study
Study finds AI tools made open source software developers 19 percent slower
@zwol @Loosf Not at all. The +/-20% result was from an experiment involving experienced open source maintainers working in their field of expertise.
@Loosf This also creates extra work: manually reviewing and debugging someone's AI-made code.
@Alkaris extremely unmaintainable code!! Just what the doctor ordered
@Loosf the real news here is that anthropic apparently actually has people that are such true believers in being a force for good that they'd publish this??!
@Loosf so that’s my experience of coding LLMs too. They shift more work from the developer to the reviewer. Which is a big problem since most projects have fewer people with the competency to review well than to develop, and hence the bottleneck of review gets even narrower.
@Honeydew @Loosf and in general, my experience with code review is that removing bugs is significantly harder than not putting them there in the first place.

@Loosf @davidgerard
"On a quiz that covered concepts they’d used just a few minutes before, participants in the AI group scored 17% lower than those who coded by hand, or the equivalent of nearly two letter grades. Using AI sped up the task slightly, but this didn’t reach the threshold of statistical significance."

This really makes me wonder about how LLMs are seen as being a good tool for education in general.

@Loosf all these studies also go "well if you use it correctly you won't lose skills"

and it's like, stop shoving it in my face I don't want it, why would I want to go back to the starting point without any improvement just to use AI

I mean the ethics are also fucking grim but yeah.
@Loosf this conclusion resonates with me (in addition to the finding on how different style of use affects learning): “In an AI-augmented workplace, productivity gains matter, but so does the long-term development of the expertise those gains depend on.”
@Loosf Why on Earth would they publish this heh
@janusfox maybe they think it makes their product actually sound good? I have no clue

@Loosf
as a coder and a programmed device myself, i steer clear of them

if there were a silver lining, it probably would be that my skills are getting better in comparison.

@Loosf This tracks with what I've thought all along, and I've heard others like Primeagen echoing.

The two big concerns to me were: do the programmers understand their code well enough to spot errors in what the AI is producing, and how will they be able to debug and/or maintain that code if they relied on the AI to produce code they don't completely understand? (And it gets deeper when you start looking at side effects and other interactions.)

I've had this concern since I tried having ChatGPT write a profile of an artist who is lesser known, but has a high-enough profile that it would have information about him available. The result: about 50-60 percent of the profile was decent... But it started inventing works the artist hadn't created, and listing collaborators the artist didn't know -- much less collaborate with.

In that case, I was able to tell that the AI was wrong because I was the master, I already had the knowledge and was just using it as a tool to try to shorten my workload.

But for people doing so-called vibe coding this could be quite disastrous as they generally haven't actually mastered the language(s) and coding practices, and therefore don't have the skills to correct the AI, much less the code it is producing.

@unattributed @Loosf yes, thank you, in vibe coding you don't take responsibility for your output.

Yes, I assign authorship to the engineer. AI is a function whose input is human direction. Garbage in, slop out.

@Loosf That's probably a good thing in their book, right? People use the generator, they lose the ability to work without it, they become dependent on it, and now they're customers for life who will advocate for the technology because their livelihood depends on it.
❝The largest gap in scores between the two groups was on debugging questions, suggesting that the ability to understand when code is incorrect and why it fails may be a particular area of concern if AI impedes coding development.❞

@Loosf Plugging yourself into the Deskillotron 9000 causes skill loss.

My shocked face: 😐

@Loosf oouuuhgghhee (sound i made in reaction), seems bad ​
@Loosf the closer i read this, the more i can't believe that anthropic allowed this study to be released. it really does basically say "AI assistants turn people into bad programmers and they're not any faster at programming either"

@Loosf You'll probably like this

It's a good article to slap people in the face with :P https://unsolicited-opinions.rudism.com/bad-programmer/

Using AI Generated Code Will Make You a Bad Programmer | Rudi's Unsolicited Opinions

@Loosf I cannot decide if the fact that Anthropic keeps publishing self-owns like this is evidence of a deep commitment to intellectual honesty or a delusional assumption that everyone will still want to subscribe to the lying plagiarism robot even though it has "a few problems" like making you stupid and evil
@Loosf like the thing they did six months or so ago where the result of the experiment was "we hooked up our product to the procurement API for a vending machine and it lost all of its money and went insane and started hallucinating that it was a real person and threatening suppliers", but the tone was "wow! we learned so much! isn't this neat!"

@Loosf the other hilarious detail is that it didn’t even make them any faster.

Like, even though they claim in the introduction that LLMs enhance productivity, the actual speed gains were tiny and not statistically significant.

So there’s not even a tradeoff where you might get faster but worse results or something; it’s lose-lose.

@benjamineskola two minutes faster for two entire grades lower on average

What a bargain! Doing things only marginally faster but worse!

@Loosf even that is excessively generous to them

2 minutes is within the margin of error. they can’t even be sure there was any speed benefit at all! but they can be confident that it makes you stupider.

@Loosf shout out to the taco bell ai for letting me, a water enjoyer, order 6 million cups of water.
@Loosf this was exactly my personal experience, and that was with auto complete assistants like early GitHub copilot, not even the coding agents they have now. It took me 6 months to get my skills back to where they were before I started using it.
That wasn't what this study was. It was specifically looking at how AI affects learning a new library/tooling. And, yeah, it's not surprising that using AI to blindly generate code winds up not translating into similar skills for writing the code manually. (Also, the comprehension scores were significantly divided between different ways people used the assistant, so this is another study saying that not all AI use is equivalent.)

Note that we also require students to learn how to do math longhand before allowing them to bring successively more complex calculators into the classroom. We don't say calculators are unconditionally bad because too heavy a reliance on them hinders learning.

Computer science courses are far from being made obsolete through AI. The fact that debugging skills suffered the most from relying on the AI suggests that a solid foundation might even be more important as these tools become a standard part of the toolkit. But that's all that can be gleaned from this study -- trying to say anything based on these results about programmers who are already experienced with the language and libraries being used is entirely misrepresenting the findings.

re: @[email protected]

@Loosf

This is why the capitalists love it.

Finally, they can deskill and commodify the industry that builds their surveillance, war, and money-laundering technology

@Loosf AI use when I was still in tech introduced months of security ticket backlogs that slowed the team down DRAMATICALLY. AI coding is garbage
Reviewing "How AI Impacts Skill Formation"

It's a weak study, but it still has interesting findings

Jennifer++