One of the hot technologies preceding one of the several “AI Winters” we’ve seen since software was even a thing was called Expert Systems. And they were hot for well over a decade!

…but as the hype started cooling and people looked at the cost/benefit analysis and the real capabilities of such a system in the long term, the whole house of cards came crashing down. Below is the summary the Wikipedia page gives on why they failed. Look familiar?

What I’m getting at is that it doesn’t really matter if LLMs improve the quality of their results to match a novice human worker.

When you remove the subsidies from incredibly rich investors and the massive FAANG war chests, and the momentary early adopter hype from people willing to pay for trash for the novelty, when you take into account the training costs in hardware, energy, and environmental costs…

The math just doesn’t add up, and I don’t believe it ever will. VCs and war chests are able to artificially prop stuff up for a very long time in the (false) hope that it’ll pay off but… the math will never add up. And if and when regulation (like copyright law) catches up, it’s going to be a BLOODBATH

@zkat Frustratingly, the hope isn't necessarily false for the VCs — but they're hoping for an acquisition or an IPO, not a working product.
@neia that’s fair. Sometimes it’s easy to forget that scammy early adopters often make out like thieves, even if the thing as a whole is a massive failure (see: cryptocurrency and NFTs, which made a few people extremely rich and many more people much poorer)
@neia What I don’t quite get yet is what FAANG gets out of it. They don’t just get to exit and call it a day. I guess this is just more short-term gains in the form of rising stock prices (and executive compensation based on releasing anything branded AI, regardless of whether it earns anything)? I can see how FAANG companies would take a significant stock price hit if they decided not to pursue this hype cycle, but it’s still coming at a huge cost. The level of sacrifice in all sorts of dimensions these companies are having to make just to be able to keep funding this moonshot is incredible.
@zkat What about individual executives? If I'm a VP and I embrace a bold new AI initiative, that's likely to get my org additional funding. I get more prestige. I could get a bonus from it. I can take this story to my next position, getting even more money and prestige.

But if I say that AI has at most very limited use in my org and we aren't going to invest in it, what happens? I saved my company from a $70mil boondoggle that would have caused a modest dip in stock prices four or five quarters from now. The value I added was in something that didn't happen. Nobody remembers it, I get no prestige, I get no extra bonuses, I get no extra resources for my org.
@neia that’s what I meant by executive compensation yah
@zkat Sorry, my reading comprehension is below par today
@neia No it’s fine I was just clarifying. I’m kinda writing word soup right now
@neia @zkat This.
As Deming said, the manager (or exec) does not work for the benefit of the company.

@zkat @neia it’s because they were able to sustain incredible growth over three decades as software was able to “eat the world.” That process is for the most part complete—e.g. social media companies couldn’t keep adding users indefinitely.

Prior to this, they were desperate—e.g. Facebook going so hard on the Metaverse they renamed themselves.

However, the prospect they could replace workers with LLMs is too alluring and profitable to give up on, even if impossible

@zkat @neia There's also the possibility that they are spending some of the war chest as a form of insurance against their competitors managing to (somehow) pull it off.

The chances are utterly minuscule, but the costs could be existential.

@zkat @neia What did Apple spend on that doomed car project, $15B?
@zkat @neia That's the whole VC/startups model: ponzi schemes. Nobody wants a working product or productive business. That's a pain and liability. They want a story they can sell to the next round of marks.
@zkat also, advanced human workers started out as novice human workers.
@zkat I heard that OpenAI is spending $3 to make every dollar they earn. Will you still pay for prompts when they haven't gotten any better and the price has tripled? How about 5x?
@zkat oh damn, I remember complaining about these parallels a long while back, when the first few LLMs hit the public sphere

@zkat Tangential anecdote: an ex-colleague made a Prolog implementation of an expert system designed for decision making around human euthanasia.

Thankfully it was an experiment rather than something that was going to be used.

Expert systems were too inflexible, and when the flexibility was built in, too inaccurate. The computing power wasn't there to enable the statistical grinding that produces outputs capable of fooling uninformed humans. Wait, also credulous informed humans.

@zkat This is good to remember. I’d add a bit to #2. Our system foundered because some data are cheap to collect. Expert systems get built on those data, regardless of whether they’re the right thing to measure or not.

@zkat For those interested in an accessible summary of AI history we have this: https://www.youtube.com/watch?v=b9chqJ2TgzA

Taking a step back and putting LLMs into the wider perspective of the development of AI really helps in seeing a lot of the flaws in the hype.
