The thing is, *even if* LLMs made me produce code 25% faster like they claim (and they don’t), it would still be a net negative even before counting all the costs (direct and indirect), simply because a human wouldn’t have the innate understanding of the code that comes with having written it, which short-circuits so much work later. Most of coding time is NOT producing the initial version. We’ve known this for decades. It doesn’t matter how much people want that not to be true https://mikelovesrobots.substack.com/p/wheres-the-shovelware-why-ai-coding
Where's the Shovelware? Why AI Coding Claims Don't Add Up

78% of developers claim AI makes them more productive. 14% say it's a 10x improvement. So where's the flood of new software? Turns out those productivity claims are bullshit.

Mike Judge
So many people desperately want this silver bullet to work but it just doesn’t, at least not to the extent it needs to, to even barely justify its negatives. Sure it’ll bash out boilerplate for you, and if you’re crap at something it’ll give you something plausible that similarly unskilled people will think is good enough. But no matter how much you squint, that isn’t worth the billions it cost you, or the environmental damage, or the mass automated theft. A reckoning is coming
The problem is of course that the besuited c-suite chimps making the decisions are incapable of judging the actual efficacy of these things; both because they desperately want their employees to be more replaceable and lower skilled, and because everything *they* do all day can be replaced by a pattern matching autocomplete machine and no-one could tell the difference

@sinbad This. When considering all the inadequate decisions or poor communication that's come down from c-suiters over the years, it's easy to imagine an AI doing the same but much more efficiently.

Are they naive, or do they really not think AIs are coming for their jobs?

@sinbad a tool that turns *all* of coding into debugging even though we all know debugging is the slowest part of coding with the highest mental overhead and strain

@eniko @sinbad the really shitty part of the whole AI bubble is that putting only a tiny fraction of all that burned money into improving development and debugging tools would give us a real, measurable productivity boost.

Instead, in the best case those tools are abandoned, in the worst case actively sabotaged by integrating 'AI features' >:(

@eniko @sinbad don’t forget that you also don’t know any of the code you’re debugging because you never wrote it
@eniko
A very nice take on "LLM assisted coding". I hadn't thought about it that way yet, but I'll definitely use your idea in upcoming discussions
@sinbad I still can’t decide if this is going to lower the software quality bar even more pathetically and take me out of the industry entirely, or provide me with a decade of gainful employment cleaning up after the bullshit.
@cloudthethings Both, that's the short-term and long-term
@sinbad I think AI can have its place for "massive data analysis" .. like finding patterns and correlations in data sets no human could ever process. I can see it being useful for finding anomalies in star clusters, folding proteins, maybe discovering new molecules "by doing what is good to do, mixing numbers and getting something out". BUT .. as a tool to mimic/replace human creativity? Why? What's the role of humans then? Replacing humans? Nah, that's not good.
@gilesgoat Yep but even then real humans need to review the findings because it still screws up and finds false patterns all the time
@gilesgoat @sinbad Nah, not the LLM kind of AI. We already do use computer programs for all those things, and those programs are way better at those things than any generalised AI could ever be.
@sinbad Gen AI coding is like pair programming but not. PP can't work for everyone or in all situations, but where it works, it can offer better understanding of the problem by 2 coders, each of whom has already been articulating parts out loud (so the work is partially translated to human language for docs or whatever), less coder loneliness/drift (for those prone to it), 2 experienced pairs of eyes on it to avoid design traps. Ofc less successful if one of the 2 is delusional/psychotic.
@sinbad I mostly agree but with two caveats: 1) AIs are improving *a lot* and unless we are very close to a plateau, in a couple of years things might be different and 2) Even if I'm wrong about 1. I think that we are just going into a new equilibrium where AI will be used for boilerplate and writing more idiomatic code. I don't think we are getting a reckoning with huge economic impact.
@sinbad Good point! I don't see all this hype about LLMs making you code faster. Most developers only spend a small portion of their time on actual coding, and even less so for writing the initial version of the code (as you mentioned).

If the only goal is to save time writing code then you'll likely develop your skills/experience less if you blindly rely on LLMs for that.
However, if you instead aim at spending more time on coding by using LLMs, then it's possible they can help you write better code or learn and understand things better. Something that might actually pay off. But unfortunately that's not at the centre of the ongoing hype..

I mainly see two types of use cases for LLMs in code writing these days:
1. I need to write lots of boilerplate code, fast. If that's the case then you have a problem though.
2. I don't understand this language/maths/algorithms well enough, and want the LLM to write the code. Then you have another problem though: Until we have super-advanced AI that can replace human beings in every way, then you don't want your developers to deal with code and languages they don't understand.

You could instead try to use LLMs to help you brainstorm, understand algorithms/maths, search documentation and libraries, and come up with good solutions - but that's not a matter of saving time, which is the only thing people seem to care about.
@sinbad All this focus on coding speed feels about as silly as that "evaluate your developers by number of code commits" philosophy that gained popularity a while back, and then quickly died out because everyone realised how wrong it was.