I feel we've reached a point where many, if not most, of the organizations we rely on are being overrun by toddlers.

We're constantly being sold the belief that a chatbot is a viable substitute for thinking, skill, and hard work, for expertise and competence.

I've had conversations with people about AI agents and coding, about how people want to have made something they've imagined without doing the work of actually learning, designing, and building. It's like a toddler demanding to be allowed to drive the fire truck without knowing how to drive, not being able to reach the pedals, and not knowing anything about fighting fires.

They want to be what they imagine a firefighter is from the perspective of a toddler. They don't know, they don't care, they just want to be in that truck and make the lights and sirens go.

That outlook is fine for a toddler; it's not acceptable for a grown adult drawing a paycheck.

This isn't just about AI, and I hope my friends who run their own experiments with local LLMs and who use chatbots as a sounding board rather than as an obsequious servant or unpaid robotic coding intern understand I'm not talking about them.

We (the US) have a president who wants all the respect of the position without doing the work of being a national executive or showing competence and vision in leadership, a Secretary of Defense trolled into a half-assed, doomed failure of an intervention, an HHS chief and all his giblet-brained underlings who fancy themselves health professionals armed with homeopathic levels of ability and overinflated delusions of adequacy. Don't forget the Brilliant Auto Business Genius whose flagship project is a low-poly asset that looks like rejected concept art for 1997's "Carmageddon" and has sales worse than the Edsel. Every C-suite malingerer whose primary competencies are being tall, white, male, and credulously overconfident, who wants all the monies but doesn't want to have employees or accountability or a product or service anyone wants or needs. "Gig" employers that cosplay as banks, hotels, taxi services, delivery services. Web search engines that spew randomized text rather than links to authoritative and correct information sources.

Atlas shrugged, then laid off everyone who knew what they were doing because his best friend ChatGPT told him to. The same societal endgame, but there is no Galt's Gulch full of Libertarian Übermenschen, just hundreds of thousands of idled professionals helplessly watching toddlers "driving" fire trucks, "flying" planes, "writing" software, "creating" art, etc. A societal disaster, a complete civilizational self-own, promulgated by modern-day tulip speculators and assorted fascist-adjacent financiers.

I don't see any of this getting better until the adults among us pick up the toddlers, take away their toys, and put them all down for a nice long nap.

I want to stress I'm not advocating that Puritan BS of "if it was miserable for me to learn, it should be miserable for you too." Better tools are a good thing, provided they are actually better.

A stochastic code generator is not equivalent to a deterministic compiler. A stochastic text generator is not equivalent to a spellchecker. I reject the claim that these generative tools are better, because they do not produce equivalent, better results. Results are more than the artifacts of the work; there's also the increased experience and learning of the producer, which is basically absent from the generative process. Writing and debugging assembly is not substantially different from writing and debugging high-level compiled or interpreted code. Writing a spec and prompting a chatbot aren't remotely like writing and debugging code: there's no understanding of the underlying implementation, of how and why the thing works, what its strengths and weaknesses are, where more effort may be needed in the future to shore up weak or dubious code. You've lost that detailed understanding both of the artifacts and of the process of construction. That may not be important if the product is inconsequential and disposable, but it's vital for safety- and mission-critical systems or anything in active maintenance: mature code that non-computer-people depend on to solve their problems.

If you aren't building anything of consequence, it's easy to believe quality, process, and learning aren't important. And if you do this work just for the paycheck, health insurance, and air conditioning, your ability and willingness to care about the long-term effects of your work is seriously diminished or compromised. That's a bigger problem with capitalism and resource allocation; chatbots are just more fuel for that fire. That's a different problem from the one I'm addressing, important but not the same.

@arclight re: inconsequential and disposable vs. safety- and mission-critical systems, I'm pretty sure that's been shown to be a distinction the people who drive the production of software are incapable of seeing in advance.

It's not merely that such people are wholly unaccountable, although that's a huge factor. It's that there is such a web of complexity in "modern" (past 35 years or so) software that it's now impossible to tell up front when hooking this piece to that one creates a kill web. That trivial hunk of Javascript that, say, left-pads text turns out to be wound into so many systems that it's not clear until far too late that its absence has brought down, say, a hospital IT system, or one that does traffic control in a city.
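For scale, the utility at the center of the 2016 npm left-pad incident was only about a dozen lines. A minimal sketch of that kind of function (an approximation for illustration, not the package's actual source):

```typescript
// Pad a string on the left with a fill character until it reaches
// the target length. Strings already at or past the target length
// are returned unchanged.
function leftPad(str: string, len: number, ch: string = " "): string {
  let out = String(str);
  while (out.length < len) {
    out = ch + out;
  }
  return out;
}

console.log(leftPad("7", 3, "0"));   // "007"
console.log(leftPad("abcd", 3));     // "abcd" (already long enough)
```

Trivial as it is, thousands of packages transitively depended on it, so its brief removal from the registry broke builds across the ecosystem.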

That's just the bug side. Hostile actors have access to attack surfaces so huge and so varied that they are quite literally indefensible, and those surfaces are only growing.

What to do about this? Well, I've been saying for decades that we're gonna learn the same lessons civil engineers did that led to PE stamps: high-body-count catastrophes. I'm now starting to wonder whether even that glum prediction covers the harms that are coming.