(1/12) The 'AI bubble' is a political economy story I've spent over 20k words unraveling. LLMs aren't just a technology; they're a mirror of the same delusions that drive market fundamentalism: a naive understanding of innovation, deference to capital, and the denigration of human labor.

Our debates are implicitly about all of these and more. A thread 🧵:

misaligned.markets/tag/ai/

#AI #LLMs #tech #AIbubble #bigtech #poliecon

(2/12) To understand the bubble, we first have to demystify the tech. LLMs require staggering amounts of human involvement during training and at runtime. Any “intelligence” is heavily scaffolded. That most discussions don't start from this fact is criminal.

https://misaligned.markets/understanding-large-language-models/

#AI #tech #bubble

A (less) technical guide for understanding large language models

Disambiguating LLMs from other machine learning tech is the first step to making sense of the current AI boom.

Misaligned Markets

(3/12) So much of the magic of LLMs comes from UX designed to meet our expectations. The rest comes from the fact that language itself encodes everything: causality, instructions, and reasoning traces. LLMs are just giant reflections of our culture.

https://misaligned.markets/understanding-large-language-models/

#LLM #BigTech

(4/12) But LLMs hide, by design, the human involvement in their process chain. Their entire marketing and UX depend on this invisibility. The result is no different from any other UX dark pattern: users adopt the patterns of use that designers prefer, designers train on those patterns, and a self-reinforcing loop forms. If this scales, I think it'll be worse than social media.

https://misaligned.markets/llm-intelligence-is-a-dark-pattern/

#LLM #UX #darkpatterns

LLM “intelligence” is a dark pattern

LLMs leverage scaffolding and user psychology to appear as slabs of raw intelligence in service of a costly illusion.

Misaligned Markets

(5/12) Specifically, LLMs benefit from dark patterns that “launder” the idea that a mind is on the other end of the screen. This obfuscates the intentions of AI companies while making invisible the labor of the people lower in the chain.

I don't think there's a “neutral” version of this, but I do think there are less bad versions of it, if LLMs are scoped for specific tasks.

https://misaligned.markets/llm-intelligence-is-a-dark-pattern/

#AI #darkpattern #UI

(6/12) LLM boosters ignore the scaffolding and UX driving LLMs when predicting future capabilities. It’s similar to the blind spot of market fundamentalists who believe in “self-regulating” markets while ignoring the role of institutions and market power.

https://misaligned.markets/ai-hype-market-fundamentalism/

#aibubble #capitalism #tech

AI hype is a mirror of market fundamentalism

Both AI enthusiasts and market fundamentalists gloss over the context needed to understand complex systems.

Misaligned Markets

(7/12) Just as LLMs rely on tooling and governance, market systems rely on rules and laws. There is no “emergent” LLM intelligence, just as there is no “self-regulating” free market. Both are myths of autonomy. https://misaligned.markets/ai-hype-market-fundamentalism/

#LLM #neoliberalism #AI #bubble

(8/12) This leads us to the inevitable: LLMs are a bubble. But “bubble” doesn't mean “disappearing.” It means they are one piece of a much larger, brewing financial crisis. LLMs will likely persist in some form after this passes.

https://misaligned.markets/what-to-expect-ai-bubble/

#aibubble #financialcrisis #recession

What to expect when you’re expecting a (AI) bubble

Don’t look to the dot-com bubble to understand the consequences of the current one.

Misaligned Markets

(9/12) For LLMs to not be a bubble and meet their $1T premise, they need to be more like C-3PO and less like driverless cars. Remember when we were supposed to be able to drive from SF to NY with our feet up? We're being fed similar promises about the future of labor by the very same people.

https://misaligned.markets/what-to-expect-ai-bubble/#the-future-of-productive-llms-use-probably-looks-a-lot-like-driverless-cars

(10/12) I call this the “capital-as-labor” delusion: the idea that buying software is a 1:1 substitute for hiring a human. It's not. It takes a massive amount of labor to maintain the tooling and scaffolding that keep LLMs running, just like driverless cars, whose deployments are managed by engineers.

https://misaligned.markets/what-to-expect-ai-bubble/

#capital #labor #capitalism #aibubble

(11/12) How does the bubble unwind? It already is unwinding. Watch the pricing games and the shift to charging for tokens. AI companies have been operating on Looney Tunes physics, but gravity is catching up.

As the bubble pops, expect the house of cards around data center financing to implode. This is as much a “Data Center bubble” as it is an “AI bubble.”

https://misaligned.markets/what-to-expect-ai-bubble/#1-implosion-of-the-ai-startup-ecosystem

#AI #economics #startups #recession #bubble #datacenters

(12/12) I'm not sure what remains after the #aibubble. My entire bubble post wrestles with this question because experts and the media wouldn't.

Tech incumbents are in a position to absorb the wreckage (sadly) and set the narrative for what's next. On the other hand, scoped LLMs can empower individuals without harm (as folks like @pluralistic have discussed).

Historically, bad tech has been constrained by promoting responsible use. That may prove true here too!

https://misaligned.markets/what-to-expect-ai-bubble/
