I feel we've reached a point where many if not most of the organizations we rely on are being overrun by toddlers.

We're constantly being sold the belief that a chatbot is a viable substitute for thinking, skill, and hard work, for expertise and competence.

I've had conversations with people about AI agents and coding, about how people want to have made something they've imagined without doing the work of actually learning, designing, and building. It's like a toddler demanding to be allowed to drive the fire truck while not knowing how to drive, not being able to reach the pedals, and not knowing anything about fighting fires.

They want to be what they imagine a firefighter is from the perspective of a toddler. They don't know, they don't care; they just want to be in that truck and make the lights and sirens go.

That outlook is fine for a toddler; it's not acceptable for a grown adult drawing a paycheck.

This isn't just about AI, and I hope my friends who run their own experiments with local LLMs and who use chatbots as a sounding board rather than as an obsequious servant or unpaid robotic coding intern understand I'm not talking about them.

We (the US) have a president who wants all the respect of the position without doing the work of being a national executive or showing competence and vision in leadership, a Secretary of Defense trolled into a half-assed, doomed failure of an intervention, and an HHS chief and all his giblet-brained underlings who fancy themselves health professionals armed with homeopathic levels of ability and overinflated delusions of adequacy. Don't forget the Brilliant Auto Business Genius whose flagship product looks like rejected low-poly concept art for 1997's "Carmageddon" and has sales worse than the Edsel. Every C-suite malingerer whose primary competencies are being tall, white, male, and credulously overconfident, who wants all the monies but doesn't want to have employees or accountability or a product or service anyone wants or needs. "Gig" employers that cosplay as banks, hotels, taxi services, and delivery services. Web search engines that spew randomized text rather than links to authoritative, correct information sources.

Atlas shrugged, then laid off everyone who knew what they were doing because his best friend ChatGPT told him to. It's the same societal endgame, but there is no Galt's Gulch full of Libertarian Übermensch, just hundreds of thousands of idled professionals helplessly watching toddlers "driving" fire trucks, "flying" planes, "writing" software, "creating" art, etc. A societal disaster, a complete civilizational self-own, promulgated by modern-day tulip speculators and assorted fascist-adjacent financiers.

I don't see any of this getting better until the adults among us pick up the toddlers, take away their toys, and put them all down for a nice long nap.

I want to stress I'm not advocating that Puritan BS of "if it was miserable for me to learn, it should be miserable for you too". Better tools are a good thing, provided they are actually better.

A stochastic code generator is not equivalent to a deterministic compiler. A stochastic text generator is not equivalent to a spellchecker. I reject the claim that these generative tools are better, because they do not produce equivalent, better results. Results are more than the artifacts of the work; there's also the increased experience and learning of the producer, which is basically absent from the generative process. Writing and debugging assembly is not substantially different from writing and debugging high-level compiled or interpreted code. Writing a spec and prompting a chatbot aren't remotely like writing and debugging code: there's no understanding of the underlying implementation, of how and why the thing works, what its strengths and weaknesses are, where more effort may be needed in the future to shore up weak or dubious code. You've lost that detailed understanding of both the artifacts and the process of construction. That may not be important if the product is inconsequential and disposable, but it's vital for safety- and mission-critical systems or anything in active maintenance: mature code that non-computer-people depend on to solve their problems.

If you aren't building anything of consequence, it's easy to believe quality, process, and learning aren't important. And if you do this work just for the paycheck, health insurance, and air conditioning, your ability and willingness to care about the long-term effects of your work is seriously diminished or compromised. That's a bigger problem with capitalism and resource allocation; chatbots are just more fuel for that fire. It's a different problem than the one I'm addressing here, important but not the same.

@arclight AI reminds me of when my CS education switched from teaching Modula 2 to Java mid-curriculum.

The idea was to drag-and-drop some applet together in Visual Cafe Whatever and code in the stubs to make it work.

People did _terribly_ on this. Like, out of 120 first-years, 60+ applied voluntarily for out-of-hours extra schooling. (And more needed it.)

Next year, they dropped the Visual Blah, and went back to a barebones JDK, a text editor, and actually teaching people how to code.

@arclight The easy GUI shortcut taught people nothing about the structure of Java code, so people learned nothing about the structure of Java code.

Turns out that was kind of important if your plan involved passing exams, or indeed making anything other than a small graphical "toy" applet. Which is most things.

@arclight re: inconsequential and disposable vs. safety- and mission-critical systems, I'm pretty sure that's been shown to be a distinction the people who drive the production of software are incapable of seeing in advance.

It's not merely that such people are wholly unaccountable, although that's a huge factor. It's because there is such a web of complexity in "modern" (past 35 years or so) software that it's now impossible to tell up front when hooking this piece to that one creates a kill web. That trivial hunk of Javascript that, say, left-pads text turns out to be wound into so many systems that it's not clear until far too late that it's brought down, say, a hospital IT system, or one that does traffic control in a city by its absence.

That's just the bug side. Hostile actors have access to attack surfaces so huge and so varied that they are quite literally indefensible, and those surfaces are only growing.

What to do about this? Well, I've been saying for decades that we're gonna learn the same lessons civil engineers did that led to PE stamps: high-body-count catastrophes. I'm now starting to wonder whether even that glum prediction covers the harms that are coming.

@arclight yes, our culture today epitomises the old phrase: "For every complex problem there is an answer that is clear, simple, and wrong."
@arclight “Homeopathic levels of ability” is a keeper. Definitely going in my word-hoard.

@arclight

rejected low-poly concept art for 1997's "Carmageddon" that has sales worse than the Edsel.

Good grief, if those things didn't already self-combust, they would now.

@max A few days ago we saw one in town with a gray/white/black "urban" camouflage wrap. It looked less like tacticool and more like trying to hide from embarrassment.
@arclight Fantastic! Well said!

@arclight this is a very toddler-like perspective.

I can build insanely more sophisticated things faster, with better tests, comprehensive documentation, and a better guard railed process than ever before.

For those who already can code and build and architect and document, this is a multiplier of monumental proportions.

@luke What are the consequences of your code failing?
@arclight depends on the project - but I trust the 1200 tests I have the agents running against it more than the 60 I rushed together myself.
I don't care about the tests; I want to know what people lose or how they are harmed if the software doesn't work or produces misleading or incorrect results.
@luke @arclight it's interesting that you should bring madness into the discussion. How do you know you're not subject to it just from interacting with the plagiarizing glazebox? How do you know the thing(s) to which you delegated the work--work you literally cannot understand or check the way you would have if you'd done it yourself--aren't doing things that would be madness if you did them?
@arclight Counterpoint: I am seeing a steady stream of useful code being made by people who are not programmers. Most of these are for purely personal use, like an app for a family to coordinate activities. The defense editor at The Economist made a web page for tracking airplane movements in the Middle East. A travel blogger I follow made this: https://randomwire.com/mapthread/ which I found made the posts much more engaging.

@trademark Which is fine for personal use, putting aside all the other unresolved problems with commercial chatbots.

There are problems when you share code you don't understand, built from other people's unattributed work. You have no idea where that code gets used later, and laundering attribution is wrong. It's even worse when you take money for it and call yourself a programmer. That's fraud.

@arclight There's certainly a lot of fraud going on, like what happens in every bubble. However, whenever there's a bubble, it is important to determine whether anything intrinsically valuable is being created. In this case I'm convinced it is, as it was in the dot-com bubble, but unlike cryptocurrencies, which are useless for anything other than criminal behaviour.