Just saw an article that said that software development is now worth less per hour than minimum wage because of LLMs. I won’t link it because it’s full of the kind of sweeping boosterism I despise, but let’s assume he (of course it’s a he) is correct.

Joke’s on you pal, I’ve written code for free for literal years, try undercutting my $0/hr rate haha #winning 😛

Maybe the boosters are somewhat right, that loads of tech companies will become just a shallow stack of managers supervising LLMs that repeatedly copy a corpus of previous work in different variations. A lot of tech work today is shitting out copies of what everyone else is doing, following a PM’s ramblings, after all. Maybe all that gets automated away, and apart from the jobs angle maybe that’s somewhat ok (pssst, add basic income and maybe I’ll give you half a pass)
It just means we get a lot more shit software though. It’s bad now, and can only be made worse by PMs not having to go through a practicality filter, or by not having people who *actually* know how to fix things when the shit inevitably hits the fan. I hope there will still be pockets of conscientious developers who actually know their stuff for me to obtain software from; otherwise I’m either going to have to swear off tech entirely or start writing even more of my own tools

I know the booster response to this is “LLMs can write software better than you” - and I think that’s because our concept of “better” is different. Faster is not better to me. I’m sure the model contains more APIs, idioms and algorithms than I have in my head at any one time. But a bigger “database” isn’t better either.

Better for me means that I, or some other responsible person I trust, understands it fully, and can therefore always fix it. I don’t trust any tech where that’s not the case.

A really important differentiating fact to me is that an LLM doesn’t *understand* anything. It does a good job at imitating understanding via an extremely complex statistical model, but it’s not the same thing. Being fooled by its impression of a thing does not mean it’s doing the thing.

I do not trust any statistical model to have my back when the shit hits the fan, the way I trust a responsible person

I used to think open source was the natural answer to this trust model, but that’s being undermined and devalued by the corporate code ingestion machine. I’d say there’s even less chance that open source will be sustainably funded from here, and we’re already seeing projects succumbing to reaching for generated code, or being vibe-coded into a “reimplementation” to break the license. It’s really sad. I feel like the whole premise of responsible software authorship is being killed in front of me
@sinbad it just makes me want to try harder, honestly
@ghosttie @sinbad I'm wavering between doubling down on my existing development commitments ( part of me suspects that once projects get to a certain size the benefits of the new gen of slop may become negligible, particularly given my bias toward R&D ), and signing up for an electrical engineering course ... Really don't want to have to go on the tools, but otherwise my retirement plan best-option looks like performative self immolation :-/
@ghosttie I’m not changing, but I feel like I’m destined to end up like Obi-Wan. Maybe that’s not so bad, I’m the right age for it

@sinbad my hope is that the bubble will pop, everyone will realize that the emperor has no clothes, and LLMs will end up as cool as NFTs

That's not the only way things could go, but that's the one I'm hoping for

@sinbad I'm not super worried long term. LLM-written code can now be studied. The numbers from any individual study are hard to call definitive, but collectively they paint the picture we've been describing the whole time: writing code that "just works" faster is worse than worthless, because it's impossible to maintain. If you take the time to correct the code so that it is maintainable, you lose any speedup you gained from the LLM. Except that now you've paid Anthropic $200 extra.
@HauntedWindow I hope you’re right, but I wouldn’t put it past most tech companies to just let systems limp on, frankenpatched half to death before eventually vibe coding a replacement from scratch because that’s easier
@sinbad I have no doubt that several of them will try. At the end of the day they do have to contend with the reality that operating that way is more expensive. It's already more expensive, and that's while the AI companies are using VC money to subsidize the cost of operating and training their models. Eventually AI subscriptions will cost at least double what they do now.
@HauntedWindow the numbers I’ve seen suggest 3-5x just to break even
@sinbad Yeah, I think those are more accurate. So I have trouble envisioning a future where every software business is willing to pay an extra $1000 per month for each developer to either make broken software that's shedding customers or to spend extra dev time fixing LLM problems.
@sinbad I'm starting to get the impression that having good and open test coverage is a liability now