Disappointing to read #stratechery tie itself into a knot trying to defend the Pentagon. For some reason, Ben thinks that the Pentagon is allowed to dictate material terms but Anthropic isn’t. Even a free market, realpolitik libertarian can see the rational inconsistency.

I don’t care for either team, but let’s not pretend that the Pentagon suffers from any real democratic oversight. From the in-auditable budgets to the banal secrets - voters are provided almost nothing https://stratechery.com/2026/anthropic-and-alignment/

Anthropic and Alignment

Anthropic is in a standoff with the Department of War; while the company’s concerns are legitimate, it position is intolerable and misaligned with reality.

Stratechery by Ben Thompson
lol @ AI generated images of actual people. That doesn't look like Dario at all.
#stratechery
Google, Nvidia, and OpenAI

OpenAI and Nvidia are both under threat from Google; I like OpenAI’s chances best, but they need an advertising model to beat Google as an Aggregator.

Stratechery by Ben Thompson
U.S. Intel

The U.S. taking an equity stake in Intel is a terrible idea; it also happens to be the least bad idea to make Intel Foundry viable.

Stratechery by Ben Thompson
https://stratechery.com/2025/tech-philosophy-and-ai-opportunity/ Tech Philosophy and #AI Opportunity – #Stratechery by Ben Thompson
Tech Philosophy and AI Opportunity

Positioning AI contenders — and losers — by their tech philosophy and business potential.

Stratechery by Ben Thompson

Notes on LLMs

As the zeitgeist has moved on from the furore created by “Something is Rotten in the State of Cupertino,” there are some very interesting follow up posts that came through.

There was a great post by Mills Baker – “What Apple’s LLM Fumbles Say About LLMs (Rather Than About Apple)“. Mills is more optimistic than Ben because LLMs are likely overestimating their ability to solve the last mile problem. The summary of the argument goes:

  • personal context is large and varied
  • the surface for controlling apps is equally large
  • LLMs are stochastic

This combination suggests there is a longer time horizon before a competitive platform might present itself. As a result, Apple was both right in removing claims that they have Apple Intelligence and what it could do and they can still win because they are still a super aggregator of personal context.

I think Apple will continue to be a super aggregator of personal information. However, the control surface of apps that the LLM needs to control feels like us thinking from the previous paradigm.

Ben Thompson’s latest addition is a place that resonates most with me:

So no, Apple is not doomed, at least not for now. There is, however, real cause for concern: just as tech success is built years in advance, so is failure, and there are three historical examples of once-great companies losing the future that Apple and its board ought to consider carefully.

Ben’s argument is that the future of these companies are written when they miss a generational event. The reason these groups of people often miss a generational event is because they might have been the architect of the current S-curve.

A summarized version goes: Apple reigned in the current smartphone era of computing. They have optimized themselves into a juggernaut of a business that has incredible margins by selling hardware running optimized, custom software that focuses on the whole widget + privacy. They charge a premium for this. However, this optimization of value generation might be the reason they miss the next generational leap around LLMs.

Specifically, Ben’s article highlights how the container “app” itself is probably not the right paradigm for this world of LLMs.

The new bridge is a user interface that gives you exactly what you need when you need it, and disappears otherwise; it is based on AI, not apps. The danger for Apple is that trying to keep AI in a box in its current paradigm will one day be seen like Microsoft trying to keep the Internet locked to its devices: fruitless to start, and fatal in the end.

There is a very interesting commentary related to this in the latest notes from Alex Komoroske. There were three takeaways:

  • sandboxing – a thing that enabled the modern web and modern mobile experiences might be a “limiter” for the world of infinite software that needs access to the personal context. Put another way, if Apple and Google and Microsoft don’t solve for a different runtime security protection, prompt injection alone will be a large enough vector and will be a ceiling to how useful it can get.
  • If we need a new runtime model (and we do), what’s the role of Apple, Google, and Microsoft? There might be a possibility they can come up with the new model where they can remain the personal information super aggregators. However, in this world they need to establish ways for infinite software to access this personal context in the local device / on the cloud.
  • The new world will likely be worse but better in a unique way. In this case, it likely solves for integration – how might we run snippets of code that cannot be trusted to safely integrate and operate on personal data? Is that better done locally or on the cloud? What’s the business model in this case? One that solves this becomes the new platform for AI.

I don’t question the existing aggregators if asked the question if they want to solve for this future, will say yes and even start projects to find an answer. I doubt if they will be the courageous ones that will do what’s necessary to move us to this new world when they have the world’s best money printer.

#ai #amazon #anthropic #claude #daringfireball #google #komoroske #llm #mcp #openai #stratechery

Something Is Rotten in the State of Cupertino

Who decided these personalized Siri features should go in the WWDC keynote, with a promise they’d arrive in the coming year, when, at the time, they were in such an unfinished state they could not be demoed to the media even in a controlled environment? Three months later, who decided Apple should double down and advertise these features in a TV commercial, and promote them as a selling point of the iPhone 16 lineup?

Daring Fireball
American Disruption

A new take on Trump’s tariffs, including using a disruption lens to understand the U.S.’s manufacturing problem, and why a better plan would leverage demand, not kill it.

Stratechery by Ben Thompson

Just look at the U.S. labs: ...The route of least resistance has simply been to pay Nvidia. DeepSeek, however, just demonstrated that another route is available: heavy optimization can produce remarkable results on weaker hardware and with lower memory bandwidth; simply paying #Nvidia more isn’t the only way to make better models.

@stratechery #Stratechery #AI #Training #China #DeepSeek #Restraints #MotherOfInvention
https://stratechery.com/2025/deepseek-faq/

DeepSeek FAQ

DeepSeek has completely upended people’s expectations for AI and competition with China. What is it, and why does it matter?

Stratechery by Ben Thompson
I hope that the business and tech writers, like #Stratechery, that have breathlessly described #Musk as a “consequential” genius over the past year will now spend as much column space addressing what he’s about to do to gut labor protections and oppress women.

Very interesting article from #stratechery about the US perception of the EU regulations on big companies (DMA). The perception about personal data is very different in the EU from the US. This is the most level-headed article I've read from the US perspective on this topic.
EU is making business very dangerous and expensive for the GAFA companies. It's not sure the intended goals are achieved though.

https://stratechery.com/2024/the-e-u-goes-too-far/

The E.U. Goes Too Far

Recent E.U. regulatory decisions cross the line from market correction to property theft; if the E.U. continues down this path they are likely to see fewer new features and no new companies.

Stratechery by Ben Thompson