MCP is Dead; Long Live MCP!

Understanding the social media zeitgeist around CLIs and the premature death of MCP

As soon as MCP came out I thought it was over-engineered crud and didn't invest any time in it. I have yet to regret this decision. Same thing with LangChain.

This is one key difference between experienced and inexperienced devs: if something looks like crud, it probably is crud. Don't follow or adopt something just because it's popular at the time.

I still don't really understand what LangChain even is.

MCP is a fixed specification/protocol for AI app communication (built on top of an HTTP CRUD app). This is absolutely the right way to go for anything that wants to interoperate with an AI app.

For a long time now, SWEs seem to have been bamboozled into thinking the only way you can connect different applications together is "integrations" (tightly coupling your app into the bespoke API of another app). I'm very happy somebody finally remembered what protocols are for: reusable communications abstractions that are application-agnostic.

The point of MCP is to be a common communications language, in the same way HTTP, FTP, SMTP, and IMAP are. This is absolutely necessary, since you can (and will) use AI for a million different things, but AI has specific kinds of things it might want to communicate, with specific considerations. If you haven't yet, read the spec: https://modelcontextprotocol.io/specification/2025-11-25
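To ground the "common language" claim: MCP messages are JSON-RPC 2.0 envelopes carried over stdio or streamable HTTP. A minimal sketch of building a `tools/call` request (the tool name and arguments here are invented for illustration):

```python
import json

def make_tool_call(request_id: int, tool_name: str, arguments: dict) -> str:
    """Build an MCP tools/call request. MCP messages are JSON-RPC 2.0,
    carried over stdio or streamable HTTP per the spec."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    })

# Hypothetical tool exposed by some MCP server:
msg = make_tool_call(1, "query_table", {"table": "users", "limit": 10})
print(msg)
```

The envelope is deliberately generic, which is exactly the "protocol, not integration" point: every server speaks the same shape, and only `params` varies.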


Why is this the right way to go? It's not solving the problem it looks like it's solving. If your challenge is that you need to communicate with a foreign API, the obvious solution to that is a progressively discoverable CLI or an API specification --- the normal tools developers use.

The reason we have MCP is because early agent designs couldn't run arbitrary CLIs. Once you can run commands, MCP becomes silly.

There is a clear problem that you'd like an "automatic" solution for, but it's not "we don't have a standard protocol that captures every possible API shape", it's "we need a good way to simulate what a CLI does for agents that can't run bash".
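A sketch of what "progressively discoverable" means in practice: the agent asks the tool to describe itself via `--help`, drilling into subcommands only as needed, instead of needing a bespoke protocol schema up front. Python's bundled `json.tool` CLI stands in here for any real tool:

```python
import subprocess
import sys

def discover(command: list[str]) -> str:
    """Progressive discovery: run the tool's own --help to learn its
    surface, rather than shipping a protocol schema describing it."""
    result = subprocess.run(
        command + ["--help"],
        capture_output=True, text=True, timeout=10,
    )
    return result.stdout or result.stderr

# Python's bundled json.tool CLI as a stand-in for any real tool:
help_text = discover([sys.executable, "-m", "json.tool"])
print(help_text.splitlines()[0])
```

An agent that can run commands gets this discovery loop for free, which is the crux of the "once you can run commands, MCP becomes silly" argument.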

It's significantly more difficult to secure random CLIs than those APIs. All LLM tools today bypass their ignore files by running commands their harness can't control.

I'm fuzzy on what makes an LLM work best, because I'm not really an expert. But on this question of securing/constraining CLIs and APIs? No. It is not easier to secure an MCP than it is a CLI. Constraining a CLI is a very old problem, one security teams have been solving for at least two decades. Securing MCPs is an open problem. I'll take the CLI every time.

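Command allowlisting is one of the well-trodden techniques that claim rests on. A toy sketch (the commands and policy are invented for illustration; real deployments reach for seccomp, containers, or restricted shells):

```python
import shlex

# Hypothetical policy: program -> permitted subcommands (None = any args)
ALLOWED = {"git": {"status", "diff", "log"}, "ls": None}

def is_permitted(command_line: str) -> bool:
    """Allowlist check of the kind sandboxes have used for decades:
    parse the command line, match the program and its subcommand."""
    try:
        argv = shlex.split(command_line)
    except ValueError:
        return False  # malformed quoting -> reject
    if not argv or argv[0] not in ALLOWED:
        return False
    subcommands = ALLOWED[argv[0]]
    if subcommands is None:
        return True
    return len(argv) > 1 and argv[1] in subcommands

print(is_permitted("git status"))       # True
print(is_permitted("git push origin"))  # False
print(is_permitted("rm -rf /"))         # False
```

This only gates the entry point, of course; a hardened harness would also constrain what the permitted commands can touch.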
For the agent to use a CLI, don't we have to install the CLI in the runtime environment first? With MCP over streamable HTTP, we don't have to install anything; we just specify the tool call in the context, don't we?

This rolls up to my original point. I get that if you stipulate the agent can't run code, you need some kind of systems solution to the problem of "let the agent talk to an API". I just don't get why that's a network protocol coupling the agent to the API and attempting to capture the shape of every possible API. That seems... dumb.

Environments where it’s hard to install and configure cli tools and where you want oauth flows.

That is, MCP for coding agents is dumb. For GUI apps running on non-CLI, non-permissive endpoints, it makes a lot of sense.

Right, this is the part of the problem I get, but a universal meta-API seems like they're overshooting it by a lot.

No, it's all about auth. MCP lets less-technical people plug their existing tools into agents: they can click through the auth flow in about ten seconds and everything just works. They cannot run CLIs because they're not running anything locally; they're just using some web app. The creator of the app just needed to support MCP, and they got connectivity with just about everything else that supports MCP.

Write better CLIs for the agents of the less-technical people. The MCPs you're talking about don't exist yet either. This doesn't seem complicated; MCP seems like a real dead end.

A lot of the reasons to use MCP are contained in the architecture document (https://modelcontextprotocol.io/specification/2025-11-25/arc...) and others. Among them, chief is security, but then there's standardization of AI-specific features, and all the features you need in a distributed system with asynchronous tasks and parallel operation. There is a lot of stuff that has nothing to do with calling tools.

For any sufficiently complex set of AI tasks, you will eventually need to invent MCP. The article posted here talks about those cases and reasons. However, there are cases when you should not use MCP, and the article points those out too.

If the chief reason to use MCP is security, I'm sold: it's a dead letter, and we're not going to be using it a couple years from now.

MCPs are great for some use cases

In v0, people can add e.g. Supabase, Neon, or Stripe to their projects with one click. We then auto-connect and auth to the integration’s remote MCP server on behalf of the user.

v0 can then use the tools the integration provider wants users to have, on behalf of the user, with no additional configuration. Query tables, run migrations, whatever. Zero maintenance burden on the team to manage the tools. And if users want to bring their own remote MCPs, that works via the same code path.

We also use various optimizations, like a search_tools tool, to avoid overfilling the context.

I can add Supabase or Stripe to my project with zero clicks just by setting up a .envrc.

Ask yourself: what kind of tool would I love to have to accomplish the work I'm asking the LLM agent to do? Often, what is practical for humans to use is practical for LLMs too. And the answer is almost never the kind of thing MCP exports.

You interact with REST APIs (analogue of MCP tools) and web pages (analogue of MCP resources) every day.

I'd recommend that you take a peek at MCP prompts and resources spec and understand the purpose that these two serve and how they plug into agent harnesses.

So you love interacting with web sites by sending requests with curl?
And if you need the price of an AWS service, do you love guessing the service name (querying some other endpoint), then asking some tool for its price, getting JSON back, and so forth? Or are you better served by a small .md file you pre-compiled with the services you use the most, reading a couple of lines from it?
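The "small .md file" approach is trivial to sketch. The file contents and prices below are invented for illustration; the point is that a grep-style lookup over a pre-compiled note costs a couple of lines of context instead of a chain of tool calls:

```python
# Pre-compiled notes file (contents invented for illustration):
PRICES_MD = """\
# AWS prices I actually use (us-east-1, on-demand)
- t3.medium: $0.0416/hr
- s3 standard: $0.023/GB-month
- lambda: $0.20 per 1M requests
"""

def lookup(service: str) -> list[str]:
    """Grep-style lookup: return the lines mentioning the service."""
    return [line for line in PRICES_MD.splitlines()
            if service.lower() in line.lower()]

print(lookup("t3.medium"))
```

No service-name guessing, no JSON round-trips: the agent reads exactly the lines it needs.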

> I'd recommend that you take a peek at MCP prompts and resources spec

Don't assume that if somebody does not like something, they don't know what it is. MCP makes developers happy when they need the illusion of "hooking" things into the agent, but it does not make LLMs happy.