TL;DR: MCP isn’t a revolution. It’s a repackaging of control. Cool tech, solid idea, dangerous narrative. Build your own stack, keep your autonomy, and don’t mistake convenience for freedom.
Everyone’s talking about the Model Context Protocol (MCP) like it’s the next big step in AI interoperability.
“Finally,” they say, “a standard way for models to talk to tools and data.”
That sounds noble. It even feels open.
But peel back the hype and you’ll see what’s really going on: a cleverly rigged game that keeps the big AI vendors in control while selling developers the illusion of freedom.
The Elevator Pitch
MCP is a specification that defines how large language models (LLMs) can communicate with external tools, APIs, and data sources.
In plain English, it’s supposed to be a “universal translator” between AI models and the outside world.
- You write a small “MCP server” that exposes a set of tools or endpoints (say, a database, a weather API, or your company’s internal systems).
- A model like ChatGPT or Claude (more precisely, the app hosting it) acts as an “MCP client,” connecting to that server and using those tools safely through a common schema.
In theory, it’s beautiful.
Instead of every vendor inventing its own plugin format, we finally get a single, standardized way to integrate with AI systems.
That’s the upside — and it’s not trivial.
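To make that concrete, here’s roughly what the server side looks like. This is a minimal sketch using the official Python SDK’s FastMCP helper (`pip install mcp`); the tool name and the forecast logic are made up for illustration.

```python
# A toy MCP server exposing a single tool, built on the official Python
# SDK's FastMCP helper. The tool name and its logic are illustrative only.
from mcp.server.fastmcp import FastMCP

server = FastMCP("demo-weather")

@server.tool()
def get_forecast(city: str) -> str:
    """Return a (fake) weather forecast for a city."""
    # A real server would call your weather API, database, or internal system here.
    return f"Forecast for {city}: sunny, 21°C"

if __name__ == "__main__":
    # stdio transport: any MCP-compatible client can spawn this process and talk to it.
    server.run(transport="stdio")
```

Any client that speaks the protocol can now discover `get_forecast` and call it.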
What’s Actually Cool About MCP
Let’s give credit where it’s due. There are genuine technical benefits here:
- It’s open (on paper). The spec is published. Anyone can implement it. You can host your own MCP server without asking permission.
- It’s modular. You can swap one model for another (in theory) without rebuilding all your integrations.
- It’s safer than letting a model run raw code. Every tool call follows a declared schema, so the model can only invoke what the server explicitly exposes; it can’t just reach into your file system or execute arbitrary commands.
- It’s cleaner for developers. One interface: JSON in, JSON out. No weird plugin packaging, no vendor-specific headers. (See the wire-format sketch after this list.)
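On that last point, it helps to see how small the wire format really is. MCP rides on JSON-RPC 2.0, so a tool invocation is just a tiny message. Here’s a sketch in Python of the request a client sends (reusing the made-up tool from the toy server above):

```python
import json

# What an MCP client actually sends to invoke a tool: a plain JSON-RPC 2.0
# request. No plugin manifests, no vendor-specific headers.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_forecast",           # a tool the server declared
        "arguments": {"city": "Berlin"},  # validated against the tool's schema
    },
}
print(json.dumps(request, indent=2))
```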
From a technical design perspective, MCP makes sense.
It solves the sprawl of ad-hoc integrations that have been multiplying since the ChatGPT plugin days.
So yes — it’s a good idea.
But it’s also a strategic move, and that’s where the story changes.
The “Open” Trap
Let’s be blunt: MCP is open the same way your phone’s app store is open.
Anyone can develop for it… but the gatekeepers still control distribution, access, and visibility.
Here’s the game:
- You build and publish your own MCP server.
- You implement the spec perfectly.
- Technically, anyone with an MCP-compatible client can connect to it.
But there’s a catch — a big one.
The clients that matter (ChatGPT, Claude, Gemini, etc.) are all vendor-controlled.
OpenAI, Anthropic, and friends decide:
- Which MCP servers get listed or approved.
- Which ones ordinary users can actually connect to.
- How permissions, security, and “trust” are defined.
That means your “open” server only works freely if the gatekeepers allow it — or if your users run their own self-hosted clients.
In other words: it’s “freedom with an asterisk.”
It’s the same pattern the big cloud players perfected years ago:
“You can use any tool you like… as long as it runs inside our ecosystem.”
The Lock-In Beneath the Logo
MCP isn’t just a technical standard.
It’s an ecosystem control layer disguised as a handshake protocol.
Think about what happens when a few major models dominate the market.
They become the default clients.
They control discovery, UX, and distribution.
So even if the spec is “open,” they still decide:
- Whose integrations get surfaced.
- How authentication works.
- When the spec evolves — and which extensions get priority.
It’s the oldest trick in the book:
embrace, extend, extinguish.
Embrace openness to attract developers.
Extend it with proprietary hooks.
Extinguish true independence by making those hooks “required for compatibility.”
Once everyone’s building for your “version” of open, you own the ecosystem.
The Illusion of Control
Let’s say you do build your own MCP server.
You expose your tools, publish your docs, maybe even open-source the code.
Then what?
If you connect it to ChatGPT or Claude, it still runs inside their sandbox.
They can throttle calls, restrict visibility, or change the interaction model overnight — and you have zero say.
You’re building on rented land.
The only way to fully control your own MCP stack is to:
- run your own client (not ChatGPT or Claude),
- and your own models, run locally (open-weights models like Mistral, served via Ollama, LM Studio, etc.).
Then your server and your model talk directly, locally, with no vendor middleman.
That’s when MCP becomes what it was supposed to be — a clean, open protocol for tool integration.
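Here’s what that looks like in practice: a self-hosted client that spawns your own server over stdio and calls a tool directly, with no vendor sandbox in the loop. A minimal sketch, assuming the official Python SDK and the toy `server.py` from earlier; exact result shapes may vary between SDK versions.

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main() -> None:
    # Spawn your own MCP server as a subprocess and talk to it over stdio.
    params = StdioServerParameters(command="python", args=["server.py"])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print("available tools:", [t.name for t in tools.tools])
            # Call a tool directly; no vendor decides whether this is allowed.
            result = await session.call_tool("get_forecast", {"city": "Berlin"})
            print(result.content)

asyncio.run(main())
```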
Until then, it’s mostly a marketing shield — a way for vendors to say “we’re open” while keeping all the levers in their own hands.
Why This Matters
Because the framing is deliberate.
MCP is being sold as “open infrastructure,” but it’s really a standardized funnel.
It pulls the developer community closer to the model vendors’ platforms while maintaining the optics of decentralization.
They get the data, the usage analytics, the user loyalty — and the developer goodwill that comes from pretending to be open.
Meanwhile, the actual interoperability — the freedom to move your tools and integrations between ecosystems — remains theoretical.
We’ve seen this movie before:
- Cloud APIs in the 2000s.
- App stores in the 2010s.
- “Open” AI plugin ecosystems in the 2020s.
Every time, the pattern repeats:
- Promise openness.
- Capture the ecosystem.
- Control distribution.
- Monetize the choke point.
So What Should We Do Instead?
If you care about true autonomy — build and host your own MCP clients and servers.
That means:
- Run your own models locally (e.g., open-weights models served through Ollama or LM Studio).
- Implement your own MCP client logic in Python, Node, or Rust (a rough Python sketch follows this list).
- Expose your internal tools (like your APIs, n8n flows, or data pipelines) through your own servers.
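The last piece is the glue between a local model and your tools. A rough sketch, assuming the `ollama` Python package’s tool-calling support and the toy server from earlier; the exact response fields vary by version, so treat this as a shape, not gospel.

```python
import asyncio

import ollama  # assumes a local Ollama install with a tool-capable model pulled
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# JSON-schema description of the MCP tool, handed to the local model.
FORECAST_TOOL = {
    "type": "function",
    "function": {
        "name": "get_forecast",
        "description": "Get a weather forecast for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}

async def main() -> None:
    params = StdioServerParameters(command="python", args=["server.py"])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Ask the local model; it may answer directly or request a tool call.
            response = ollama.chat(
                model="llama3.1",
                messages=[{"role": "user", "content": "What's the weather in Berlin?"}],
                tools=[FORECAST_TOOL],
            )
            # If the model requested a tool, dispatch it through MCP ourselves.
            for call in response.message.tool_calls or []:
                result = await session.call_tool(
                    call.function.name, call.function.arguments
                )
                print(result.content)

asyncio.run(main())
```

Every hop in that loop runs on your machine: the model, the client logic, and the server.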
Then you decide what connects to what.
Your data doesn’t leave your system.
Your tools stay yours.
And the protocol becomes what it should have been from day one:
a bridge, not a leash.
The Hard Truth
MCP is technically elegant but politically loaded.
It’s not evil — but it’s not altruistic either.
It’s a clever way for the major AI vendors to maintain control while claiming openness.
Yes, it could unify integrations.
Yes, it could make development smoother.
But until the power balance changes, it’s still a closed ecosystem wrapped in open standards.
So if you’re building for the future, do it eyes open.
Use MCP where it helps.
Ignore the buzzwords.
And remember that freedom in tech isn’t something vendors give you — it’s something you build for yourself.
And as always …
StayFrosty!
~ James
Q&A Summary:
Q: What is the Model Context Protocol (MCP)?
A: MCP is a specification that defines how large language models (LLMs) can communicate with external tools, APIs, and data sources. It serves as a 'universal translator' between AI models and the outside world.
Q: What are the technical benefits of MCP?
A: The technical benefits of MCP include its open specification, modularity, schema-enforced safety, and clean developer interface. It tames the sprawl of ad-hoc integrations that has been multiplying across the AI industry.
Q: Why is MCP considered a form of control?
A: While MCP is technically open, the big AI vendors control distribution, access, and visibility. They decide which MCP servers get listed or approved, which ones ordinary users can actually connect to, and how permissions, security, and 'trust' are defined. This means your 'open' server only works freely if the gatekeepers allow it, making it a form of control.
Q: What is the 'embrace, extend, extinguish' strategy?
A: The 'embrace, extend, extinguish' strategy involves embracing openness to attract developers, extending it with proprietary hooks, and extinguishing true independence by making those hooks 'required for compatibility'. Once everyone's building for your 'version' of open, you own the ecosystem.
Q: What should one do to maintain autonomy while using MCP?
A: To maintain autonomy while using MCP, one should build and host their own MCP clients and servers. This involves running your own local models, implementing your own MCP client logic, and exposing your internal tools through your own servers. Then, your data doesn’t leave your system, your tools stay yours, and the protocol serves as a bridge, not a leash.

