This account is a replica from Hacker News. Its author can't see your replies.

How did Uber somewhat break even? They lost $34b before making a profit.

Uber was only on a path to monopoly in the US, not worldwide. It has lost to local competitors in most countries. And it could get disrupted by self-driving cars soon.

OpenAI’s SOTA LLM training smells like a natural monopoly or duopoly to me. The cost to train the smartest models keeps increasing. Most competitors will bow out because they don't have the revenue to keep competing. You can already see this with a few labs looking for a niche instead of competing head-on with Anthropic and OpenAI.

They aren't reporting anything yet. What we're hearing is just from news media, who get their leaks/info from investors, who in turn get some form of IR reports/presentations.

The $24b figure is literally in OpenAI's announcement.

The $19b ARR and the $6b added in February came directly from Anthropic's CEO recently.

And why do you think twenty competitors can stay competitive for years to come?

Industries always consolidate and winners emerge. SOTA LLMs look like a natural monopoly or duopoly to me because the cost to train the next model keeps going up such that it won't make sense for 20 competitors to compete at the very high end.

TSMC is a perfect example of this. Fab costs double every 4 years (Rock’s Law). It's almost impossible to compete against TSMC because no one else has the customer base to generate enough revenue to fund the next generation of fabs - except those propped up by governments, such as Intel and Rapidus. Samsung is basically backed by the South Korean government.
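A back-of-the-envelope sketch of why that doubling compounds so brutally (assuming Rock's Law holds exactly; the $20b starting cost is a hypothetical round number, not a quoted figure):

```python
# Rock's Law sketch: leading-edge fab cost doubles roughly every 4 years.
# base_cost_b ($20b) is a hypothetical starting point for illustration.
def fab_cost(years_from_now, base_cost_b=20, doubling_period=4):
    """Projected fab cost in $billions after `years_from_now` years."""
    return base_cost_b * 2 ** (years_from_now / doubling_period)

for t in (0, 4, 8, 12):
    print(f"year {t:2d}: ${fab_cost(t):.0f}b")
# year  0: $20b
# year  4: $40b
# year  8: $80b
# year 12: $160b
```

Twelve years out, the entry ticket is 8x today's, which is why only players with either a huge customer base or a government behind them stay in the game.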

I don’t see how companies can catch OpenAI or Anthropic without that kind of strong revenue growth.

Why are we treating OpenAI and Anthropic differently than, say, Amazon or Uber? Both of those companies invested in growth for many years before making a profit. Most tech companies in the last 2-3 decades lost money for years before turning profitable.

Why are we saying that OpenAI and Anthropic can't do the same?

It's not as much as you think. Google is spending $185b on data centers this year alone. Amazon is spending $200b. Total capex for big tech is ~$700b in 2026, and that's not including neoclouds, Chinese clouds, and other sovereign data centers.

Since everyone is trying to get compute from anywhere they can, including OpenAI going to Google, it's hard to tell what is used internally vs externally.

For example, it's entirely possible that Google's internal roadmap for Gemini sees it using $600b of compute through 2030 as well. In that case, OpenAI needs to match since compute is revenue.

$2b/month, which is $24b/year. Not as much as I expected, considering they were at $20b by the end of 2025.[0] They only added $4b since?

Anthropic had $19b by the end of February 2026, and they added $6b in February alone.[1] This means if they add another $6b in March, they'll be higher than OpenAI already.
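The run-rate arithmetic above can be sketched as follows (the figures are the reported/leaked numbers from this thread; Anthropic's March growth is the speculative part, assumed to repeat February's):

```python
# OpenAI: reported $2b/month revenue, annualized to an ARR figure.
openai_monthly_rev_b = 2
openai_arr_b = openai_monthly_rev_b * 12       # $24b/year run rate

# Anthropic: reported $19b ARR at end of Feb 2026, having added $6b in Feb.
# Assume (speculatively) the same $6b is added again in March.
anthropic_feb_arr_b = 19
anthropic_march_add_b = 6
anthropic_projected_b = anthropic_feb_arr_b + anthropic_march_add_b

print(openai_arr_b)                            # 24
print(anthropic_projected_b)                   # 25
print(anthropic_projected_b > openai_arr_b)    # True
```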

However, I've heard that OpenAI and Anthropic report revenue differently. OpenAI takes 20% of revenue from Azure sales and reports only that 20%; Anthropic reports the full revenue, including AWS's share.[2]

[0]https://www.reuters.com/business/openai-cfo-says-annualized-...

[1]https://finance.yahoo.com/news/anthropic-arr-surges-19-billi...

[2]https://x.com/EthanChoi7/status/2036638459868385394

I agree. I can totally see open-source LLMs turning into pay-a-lump-sum-for-the-model offerings in the future. Many will shut down. Some will turn into closed-source labs.

When VCs inevitably ask their AI labs to start making money or shut down, those free open-source LLMs will cease to be free.

Chinese AI labs have to release free open-source models because they distill from OpenAI and Anthropic, so they will always be behind and can't charge the same prices as OpenAI and Anthropic. Free open source is how they get attention and how they stay fairly close to OpenAI and Anthropic. They have to distill because they're banned from buying Nvidia chips and from TSMC.

Before people tell me Chinese AI labs do use Nvidia chips: there is a huge difference between using gimped Nvidia chips (the H20, a cut-down H100) or sneaking Blackwell chips through Southeast Asia, and being officially allowed to buy millions of Nvidia's latest chips to build massive gigawatt data centers.

By the time local LLMs get good enough for you to use delightfully, cloud LLMs will have gotten so much smarter that you'll still use them for anything that needs more intelligence.

Local LLMs aren't going to replace cloud LLMs, since cloud LLMs will always be smarter and faster in throughput. Cloud and local LLMs will grow together, not replace each other.

I'm not convinced that local LLMs use less electricity either. Per token at the same level of intelligence, cloud LLMs should run circles around local LLMs in efficiency. If they don't, what are we paying hundreds of billions of dollars for?

I think local LLMs will continue to grow, and there will be a "ChatGPT moment" for them when good-enough models meet good-enough hardware. We're not there yet, though.

Note: this is why I'm big on investing in chip manufacturing companies. Not only are they completely maxed out due to cloud LLMs, but soon they will be doubly maxed out, having to replace local computer chips with ones suited for AI inference. This is a massive transition and will fuel another chip manufacturing boom.