My experience with generative AI has been that, at its very best, it is subtly wrong in ways that only an expert in the relevant subject would recognise. So I don't worry about us creating super-intelligent AI; I worry about us allowing that expertise to atrophy through laziness and greed. I refuse to use LLMs not because I'm scared of how clever they are, but because I do not wish to become stupider.
I will say one thing for generative AI: since these tools function by remixing/translating existing information, the fact that vibe programming is so popular demonstrates a colossal failure on the part of our industry in not making this stuff easier. If a giant ball of statistics can mostly knock up a working app in minutes, this shows not that gen-AI is insanely clever, but that most of the work in making an app has always been stupid. We have gatekept programming behind vast walls of nonsense.
We seem to have largely stopped innovating on lowering the barriers to programming in favour of creating endless new frameworks and libraries for a vanishingly small number of near-identical languages. It is the mid-2020s and people are wringing their hands over Rust as if it were some inexplicable new thing rather than a C-derivative that incorporates decades-old type theory. You know what I consider to be genuinely ground-breaking programming tools? VisiCalc, HyperCard and Scratch.
You know what? HyperCard was a glorious moment in time that I dearly miss: an army of non-experts were bashing together and sharing weird and wonderful stacks that were part 'zine, part adventure game and part database. Instead of laughing at vibe-coders, maybe we should ask ourselves why the current state-of-the-art in beginner-friendly programming tools is a planet-boiling roulette wheel.
On the gripping hand, if you're a trained programmer using vibe-coding because of a perceived increase in your productivity, or pressure from management to increase your productivity, I would refer you to my first post in this thread…
I've seen lots of posts in the last couple of days about how quickly one can write lots of code with AI. I feel perplexed by this as I hate large programs. The largest thing I have written in the last decade is Flitter. It's only 30k lines and I believe very strongly that it is. Still. Too. Big. Even there, I wrote it purposely to allow the stuff I write *in* it to be very smol: mostly no more than 100 lines. That is the maximum I want to write in a day.
To me, all these people crowing about having written 10k lines of code in a day are idiots. If you need to write that much code in a day, you are manifestly working at the wrong level of abstraction to solve your problem.
@jonathanhogg yep. And if they're working on an operating system (or any related system software, or anything that needs to stay up and running), they're committing malpractice that's going to get a lot of people killed:
https://mastodon.social/@JamesWidman/116133223470110717

@JamesWidman @jonathanhogg

Skynet didn't destroy the world by getting too smart; it actually just started glitching and chasing its own tail in gibbering circles and everything broke.

I mean, it invented time travel, so gibbering circles were pretty much inevitable. As I understood it, the question was not how to destroy the world or eliminate humanity, but how to do so in a way that fails due to time travel while still ending up with the next iteration of Skynet being just a little bit more effective. It was like... playing the villain to motivate the humans to improve it, as the only way to solve the problems it was presented with.

And I mean, if you destroy the world and kill (almost) all humans, then change history so it didn't happen, then it didn't happen! Right?

Like that one Wakfu villain, except it was actually working.

CC: @[email protected] @[email protected]