My experience with generative AI has been that, at its very best, it is subtly wrong in ways that only an expert in the relevant subject would recognise. So I don't worry about us creating super-intelligent AI; I worry about us allowing that expertise to atrophy through laziness and greed. I refuse to use LLMs not because I'm scared of how clever they are, but because I do not wish to become stupider.
I will say one thing for generative AI: since these tools function by remixing/translating existing information, the fact that vibe programming is so popular demonstrates a colossal failure on the part of our industry in not making this stuff easier. If a giant ball of statistics can mostly knock up a working app in minutes, that shows not that gen-AI is insanely clever, but that most of the work in making an app has always been stupid. We have gatekept programming behind vast walls of nonsense.
We seem to have largely stopped innovating on trying to lower barriers to programming in favour of creating endless new frameworks and libraries for a vanishingly small number of near-identical languages. It is the mid-2020s and people are wringing their hands over Rust as if it were some inexplicable new thing rather than a C derivative that incorporates decades-old type theory. You know what I consider to be genuinely ground-breaking programming tools? VisiCalc, HyperCard and Scratch.
You know what? HyperCard was a glorious moment in time that I dearly miss: an army of non-experts were bashing together and sharing weird and wonderful stacks that were part 'zine, part adventure game and part database. Instead of laughing at vibe-coders, maybe we should ask ourselves why the current state-of-the-art in beginner-friendly programming tools is a planet-boiling roulette wheel.
On the gripping hand, if you're a trained programmer using vibe-coding because of a perceived increase in your productivity, or pressure from management to increase your productivity, I would refer you to my first post in this thread…
I've seen lots of posts in the last couple of days about how quickly one can write lots of code with AI. I feel perplexed by this as I hate large programs. The largest thing I have written in the last decade is Flitter. It's only 30k lines and I believe very strongly that it is. Still. Too. Big. Even there, I wrote it purposely to allow the stuff I write *in* it to be very smol: mostly no more than 100 lines. That is the maximum I want to write in a day.
To me, all these people crowing about having written 10k lines of code in a day are idiots. If you need to write that much code in a day, you are manifestly working at the wrong level of abstraction to solve your problem.
@jonathanhogg
We need more done in actually high level languages.

@kirtai @jonathanhogg A large number of those lines of code are probably boilerplate indicative of abysmal library quality.

If a language's entire ecosystem is built on boilerplate and it's seen as normal, that is not okay.

Normalization of deviance

@lispi314 @jonathanhogg @kirtai I mean, we didn't get here unintentionally. It's expensive and self-sacrificial to invest a lot of time in infrastructure like programming languages and libraries (wherever you land in the stack) and make it very simple to solve the problems you are solving (and for others to solve them too). It's also really hard, because the problem you are solving is not only an engineering problem but now also building the infrastructure to solve the problem. And the limiting factor is exploration, not sitting down and hypothesising, researching, and cooking up some crap — which, yes, is also part of the process.
@lispi314 @jonathanhogg @kirtai you have to buy into an ideology of doing things this way. And if you're investing a lot of time in, e.g., libraries for a given programming language, we already kind of know that our programming languages kinda suck and would be better made irrelevant, so it also feels inefficient even when you do buy into it.
@jonathanhogg @kirtai @lispi314 and as someone who's spent a tonne of time investing in a certain stack to make things easier for other people to solve the same problem... things can wind up esoteric and difficult to onboard with (imagine APL). It would unironically be quicker for them to use boilerplate, even if in the long term that may not be true.
@jonathanhogg yep. And if they're working on an operating system (or any related system software, or anything that needs to stay up and running), they're committing malpractice that's going to get a lot of people killed:
https://mastodon.social/@JamesWidman/116133223470110717

@JamesWidman @jonathanhogg

Skynet didn't destroy the world by getting too smart: it actually just started glitching and chasing its own tail in gibbering circles and everything broke.

I mean, it invented time travel, so gibbering circles were pretty much inevitable. As I understood it, the question was not how to destroy the world or eliminate humanity, but how to fail at doing so, due to time travel, in a way that ends up with the next iteration of Skynet being just a little bit more effective. It was like... playing the villain to motivate the humans to improve it, as the only way to solve the problems it was presented with.

And I mean, if you destroy the world and kill (almost) all humans, then change history so it didn't happen, then it didn't happen! Right?

Like that one Wakfu villain, except it was actually working.

CC: @[email protected] @[email protected]
@jonathanhogg In general I agree but the current state of everything-in-React means a 1k line change often makes sense at some level. It's horribly verbose, logically small changes make big diffs, the library ecosystem is full of half-baked projects, it's just a mess. Removing some of the friction to adding code makes it much worse, but it's not the only cause. Now, back to better dev tools, unfortunately Excel has somehow ended up being the go-to for a lot of the world. I dunno.
@mirth It's truly amazing what browsers are able to do now, but unfortunately that doesn't fix that JavaScript and the entire JavaScript ecosystem is a godawful mess
@jonathanhogg Browser engines are amazing. What we have is related to why Flash was never going to work for mobile. Nothing to do with the runtime, everything to do with the target most devs built for (desktop, which 15 years ago was much faster than phones). Now half of web devs have Apple M-series silicon that is at least twice as fast as most of the installed base of PCs. Whatever is just tolerably fast on a dev's machine is guaranteed to suck everywhere else. Sadness ensues.

@jonathanhogg

A heap of 10K lines, and they probably have NO idea what is even going on in there.

@jonathanhogg thank you for this thread!

In recent years, while the AI hype unfolded, I was lucky to get a closer view of Scratch, Snap and MIT App Inventor.

The ease of use, the speed of development and the abstraction of complex concepts into easy-to-use building blocks in all three were amazing.

Ever since AI came up, my brain couldn't stop thinking: if so much code gets generated, then we've been working at the wrong abstraction level all along.

@mainec I teach block-based languages to grownups all the time and I just wish they weren’t viewed as “toys”
@jonathanhogg though watching the team behind them tinker with these tools at FOSDEM sure made the block-based tools look like as much fun as toys. Essentially bringing the joy and ease back to technology.
@jonathanhogg @mainec

Is there a blocks-based interface for working with Arduino/ESP specifically, that you are aware of? Event-driven programming is so powerful yet so impenetrable for beginners.
@the_moth @mainec Microsoft MakeCode supports a range of hardware – can't remember if there are any ESP boards that are supported
@jonathanhogg You have also condemned yourself to spending the next four weeks frustratedly fixing the 10k lines of code. Idiot.
@jonathanhogg I write 10000000 lines a minute.
I run shell script commands.
It generates millions of lines of machinecode to execute my wishes.
Amazing!

@jonathanhogg
On one hand, I'm inclined to agree about the barrier to entry issue - boilerplate sucks, and having more people understand programming would be great.

But on the other hand, it feels like the amount of software in existence is already unmanageable, and the average quality is relatively low.

You say to move a layer up to avoid writing 10k lines, but the current way to do that results in huge dependency trees with 10s of thousands of lines of someone else's code.

1/

@jonathanhogg
All these dependencies have updates which introduce regressions and API breakage. And they also have vulnerabilities.

IME, these things can very quickly become unmanageable - you spend more time updating dependencies than writing your own code - unless you're very picky about your dependencies.

So is more people writing more software what society needs?

@wolf480pl it is the current way of moving up a layer that I object to. We should be thinking of new ways of programming and instead are stuck making new frameworks. We imagine adding more cruft will somehow make it better. E.g., Arduino and Processing imagined that you could take a language wildly unsuited to beginners and make it palatable with a library

@jonathanhogg

"planet-boiling roulette wheel" is the name of my upcoming experimental jazzcore EP

@pikesley @jonathanhogg looking forward to watching them at EMF later this year
@jonathanhogg HyperCard was great.
@StaceyCornelius @jonathanhogg is a descendant/replacement extant now?

@Photo55 @StaceyCornelius @jonathanhogg Livecode is sort of descended from a Hypercard clone (https://livecode.com). And there are a number of runtime engines for old-school Hypercard decks (https://archive.org/details/hypercardstacks?&sort=-downloads). There’s also Decker, which is a spiritual inheritor (https://beyondloom.com/decker/).

Dang I miss Hypercard.


@jonathanhogg That's the kind of talk you usually hear just before someone invents themselves a new language. Just saying.
@jarkman Heh! Most of my programming these days involves creating or using my own languages 😆
@jonathanhogg :-) I would like to hear more about that sometime.

@jarkman I can absolutely bend your ear at EMF, but conveniently I also recently gave a talk about it at Alpaca! 😀

https://www.youtube.com/watch?v=D9khHD9sB7M&list=PLxqmZjMvoVzw773-Fo9ajkujFfOThuFOP&index=9

Flitter: A declarative language for structured visuals by Jonathan Hogg

@jarkman @jonathanhogg Would love to have my ear bent about Flitter at EMF 😀. Are you planning to do your talk there? (I guess there’s that YouTube you posted, but I kind of like live performance 😜)
@gklyne @jarkman I wasn’t planning to. As a team lead I’m not supposed to put myself up for a talk as well, though I think that’s more of a guideline than a rule…
@jonathanhogg @jarkman Ack. Having now watched, I think your Alpaca talk is a pretty good intro. I see some resonance in your approach with OpenSCAD (different goals, of course).
@gklyne @jarkman yes, there’s definite parallels with OpenSCAD that I was unaware of when I originally created it. I am (constantly) on the verge of developing a new take on Flitter and I mean to explore that further
@jarkman @jonathanhogg Several years ago, I played around with using Haskell as a substrate for a DSL. Used a combinator parser (Parsec) to spit out a directly executable “compiled” function. I’ve occasionally thought it would be fun to do something similar for CSG.
@gklyne @jarkman I think CSG is a fantastic fit with functional/declarative languages. I added the support for Manifold to Flitter as a speculative exercise and only then discovered that it was an incredibly powerful tool for things I was trying to achieve
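To illustrate why CSG sits so comfortably in a functional/declarative style: a solid is just an expression tree of unions, intersections and differences over primitives, which maps directly onto nested function composition. A toy sketch using signed distance functions (this is hypothetical illustration, not Flitter or Manifold syntax):

```python
from math import hypot

# Each shape is a function (x, y, z) -> signed distance:
# negative or zero inside the solid, positive outside.
def sphere(r):
    return lambda x, y, z: hypot(x, hypot(y, z)) - r

def box(s):
    # Axis-aligned cube of half-width s, centred at the origin
    return lambda x, y, z: max(abs(x), abs(y), abs(z)) - s

def union(a, b):
    return lambda x, y, z: min(a(x, y, z), b(x, y, z))

def difference(a, b):
    return lambda x, y, z: max(a(x, y, z), -b(x, y, z))

# A cube with a spherical bite taken out of the middle,
# expressed as one nested, declarative expression
solid = difference(box(1.0), sphere(1.2))

print(solid(0, 0, 0) > 0)         # True: the centre was carved away
print(solid(1.0, 1.0, 1.0) <= 0)  # True: the corner survives the sphere
```

Because shapes are plain values, composing them is just function application — no mutable geometry state to sequence, which is the declarative fit being described.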
@jonathanhogg Thanks! I'll absorb that and then I can ask you better questions at EMF.

@jarkman @jonathanhogg I get the broader point here, but at the same time, as computers have moved to encompass more and more of the human sphere, is it actually reasonable to expect any language to be general purpose?

Perhaps for some use cases it's the right choice, but when I look at data-science code written by vernacular developers (experts whose expertise is in a domain other than computer science), I feel the freedom those languages allow just gives more scope for errors, mistakes and poor style that will bite them later. Why can't we embrace more DSLs?

@michael @jarkman Fuck yes! I want a thousand languages to bloom. It seems like once everyone used to write their own language and we fell out of the habit. The Dragon Book used to be required reading for CS…
@jonathanhogg @michael @jarkman I once asked a very senior HPC developer at Red Hat what keeps him up at night and he said, paraphrasing and pulling from memory that's about 15 years old now, "we haven't created new computer science since the 1960s and I fear we'll exhaust what we know before we discover anything new," and I think about that a lot these days.

@thatsten
The 1960s were mostly math because most CS was done on blackboards (as one of my profs put it) because access to machines was very limited. Also, there was a "Cambrian explosion" of ideas in this new field - and after that, evolution slowed down.

@jonathanhogg @michael @jarkman

@michael @jarkman @jonathanhogg (IMO) we can't have more DSLs because everything useful is now plumbed together from a series of heterogeneous parts and we've somehow decided they can only interoperate at the (barbaric) C ABI level, or the (absurdly inefficient) web level. So, we rely on general purpose languages using specialised libraries, instead of the other way around.
I think fixing this boundary/contract problem would fix a lot in s/w engineering.

@tobyjaffey
gRPC is pretty efficient, although Erlang is a better abstraction.

@michael @jarkman @jonathanhogg

@jonathanhogg I'd add that everything is built on frameworks now. Programming has mostly become configuring the framework and coming up with the correct business logic and decent UX / styling. And since most apps these days do the same kind of things, with different data, AI's job should be easy. Humans still manage to mess up the important bits like security, privacy, performance. And AI is even worse at those things.
@bit101 hold on, I've got another post incoming on exactly this… 😉
@jonathanhogg sorry if I spoiled it! :)

@jonathanhogg @bit101 Ha. I asked an LLM to make me a URL shortener website.
I read through the code, and saw "interesting" ways of doing SQL.
Me: "is this code secure?"
ChatGPT: "of course it is not secure"

No vibe coder ever asks that question of their bullshit generator.
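For anyone wondering what "interesting" ways of doing SQL look like: the classic failure is building queries by string interpolation instead of using parameterised placeholders. A minimal sketch with Python's standard `sqlite3` module (the `urls` table and slug/target columns are hypothetical, in the spirit of the URL shortener above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE urls (slug TEXT PRIMARY KEY, target TEXT)")

def add_url_unsafe(slug, target):
    # The vibe-coded style: interpolating user input straight into SQL.
    # A slug like "x', 'evil'); DROP TABLE urls; --" breaks right out.
    conn.execute(f"INSERT INTO urls VALUES ('{slug}', '{target}')")

def add_url_safe(slug, target):
    # Parameterised query: the driver handles escaping the values.
    conn.execute("INSERT INTO urls VALUES (?, ?)", (slug, target))

add_url_safe("abc", "https://example.com")
row = conn.execute(
    "SELECT target FROM urls WHERE slug = ?", ("abc",)
).fetchone()
print(row[0])  # https://example.com
```

The two functions produce identical results on friendly input, which is exactly why the unsafe version survives a casual read-through of generated code.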

So far, the research is in your favour regarding atrophy!