MCP (Model Context Protocol) has suddenly become the new buzzword. Though Anthropic officially introduced it back in November 2024, the term didn’t really catch on until the past month. This was mainly catalyzed by Anthropic launching the official MCP registry API at the AI Engineer Summit (Feb 26-27), followed by OpenAI and Google announcing MCP support on 3/27 and 4/9, respectively. In addition, Cursor announced support for MCPs in late January, quickly followed by Windsurf’s announcement in mid-February.
MCPs are clearly in the early innings of becoming a major unlock in what AI can accomplish. Right now, though, the protocol is still in the experimental phase within the developer community. There are virtually no production-ready use cases (yet), largely because the protocol is still very new and lacks the features needed to truly democratize adoption — namely, the ability to run MCPs via browser instead of relying on local hosting, as well as critical security and authentication support.
For those not yet familiar, the Model Context Protocol (MCP) is an open standard developed by Anthropic to enable standardized interactions between AI models and external tools, systems, and data sources. Unlike proprietary integration approaches, MCP establishes a universal framework that any AI model or tool can adopt.
Imagine chatting with an AI assistant like Claude to update your Asana tasks and then generate an SOP document, all within a single conversation in Claude’s chat interface. With MCP, the AI can access your Asana account, make the requested changes, retrieve the updated information, and compose an email in one continuous flow, without requiring custom code for each integration.
Many people have described MCPs as “APIs, but for AI tools,” yet there are a few reasons MCP needs to be its own, separate protocol.
Traditional APIs were built for deterministic, pre-defined interactions; MCPs, on the other hand, enable dynamic, context-aware collaboration between AI agents and tools. They solve a fundamentally different problem, one where the caller is a reasoning machine that can figure things out on the fly. This flexibility means AI can adapt to different tools, offering more scalable integrations.
Here’s a more detailed comparison of APIs vs. MCPs:
APIs - deterministic, pre-defined interactions; the caller must know each endpoint and schema up front, and every new tool needs its own custom integration.
MCPs - dynamic, context-aware interactions; the caller is a reasoning agent that can discover a tool’s capabilities on the fly, and one universal protocol covers any compliant tool.
The MCP architecture consists of three primary components:
Host - The central coordinator, typically an LLM-powered application like Claude Desktop or an IDE (Cursor, Windsurf).
Example: Claude Desktop acts as a host, managing connections to various services like calendars or code repositories.
Client - Instantiated by the host to maintain a dedicated, stateful connection with a specific server.
Example: A client connects Claude Desktop to a calendar service, facilitating scheduling tasks.
Server - A lightweight program that exposes specific capabilities to clients through standardized MCP primitives.
Example: A server could provide access to a user's calendar data, allowing the AI to retrieve and manage events.
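To make the server side concrete, here’s a minimal sketch using the official MCP Python SDK (the `mcp` package) and its FastMCP helper. The calendar tool is a hypothetical stub, not a real integration:

```python
# Minimal MCP server sketch (Python SDK). The calendar data is stubbed;
# a real server would call an actual calendar API here.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("calendar")

@mcp.tool()
def list_events(day: str) -> str:
    """Return the user's calendar events for a given day (canned data)."""
    return f"Events on {day}: 9am standup, 2pm design review"

if __name__ == "__main__":
    # FastMCP serves over stdio by default, which is how hosts like
    # Claude Desktop launch and talk to local servers today.
    mcp.run()
```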
How they work together: the host spawns one client for each server it wants to use; each client maintains its dedicated, stateful connection; and each server exposes its capabilities (tools, resources, prompts) through MCP primitives that the host’s model can discover and invoke.
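On the other side of that connection, the host’s client keeps a stateful session open to the server. A hedged sketch with the same SDK, assuming the server above is saved as calendar_server.py:

```python
# Client-side sketch: spawn the server as a subprocess, open a session,
# discover its tools, and invoke one. Assumes calendar_server.py exists.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main() -> None:
    params = StdioServerParameters(command="python", args=["calendar_server.py"])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()  # dynamic capability discovery
            result = await session.call_tool("list_events", {"day": "Monday"})
            print(tools, result)

asyncio.run(main())
```

In a real host like Claude Desktop, it’s the model, not hard-coded calls like these, that decides which tools to invoke.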
You may have heard the line “the future of AI is agentic”. So far, that future has been quite underwhelming. The “agentic” use cases haven’t felt all that impressive yet: they’ve been rudimentary, fragile, slow, and unable to handle the long tail of edge cases that arise when orchestrating between multiple systems. The hope is that MCP is the thing that changes this narrative and takes the human out of the orchestration function.
MCP does this by:
Standardizing the connection - one universal protocol lets an agent plug into many tools and data sources at once, instead of one custom integration per system.
Enabling dynamic discovery - the model can find and invoke a tool’s capabilities on the fly rather than relying on pre-defined interactions.
Removing the human glue - orchestration across systems happens within the agent’s conversation flow, without a person coordinating each step.
The unlock with MCP is that it allows AI to actually become agentic by significantly lowering the barriers to multi-system orchestration. In traditional enterprise settings, AI tools often live inside single apps or connect to 1–2 systems at most. Humans are the glue—context-switching and coordinating between systems for a single workflow. MCP changes this paradigm by making it easy for 5, 6, or even 10 systems to work together without a human in the loop. MCP democratizes access to very sophisticated automation with minimal setup.
Let’s say you need to prepare a weekly sales report for your team. Using MCP, the agent can retrieve customer data from Salesforce, financial transactions from Stripe, and analytics from Snowflake. It can then synthesize this information, generate graphs of customer trends in Looker Studio, and send the final report to all team members via email — with very little additional code for coordination.
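Here’s a hedged sketch of what that wiring could look like with the Python SDK. The server commands and tool names (salesforce-mcp-server, get_accounts, etc.) are hypothetical stand-ins, and the calls are hard-coded where a real agent’s model would choose them:

```python
# Sketch of multi-system orchestration: one client session per MCP server.
# Server commands and tool names below are hypothetical.
import asyncio
from contextlib import AsyncExitStack

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

SERVERS = {
    "salesforce": StdioServerParameters(command="salesforce-mcp-server", args=[]),
    "stripe": StdioServerParameters(command="stripe-mcp-server", args=[]),
    "snowflake": StdioServerParameters(command="snowflake-mcp-server", args=[]),
}

async def main() -> None:
    async with AsyncExitStack() as stack:
        sessions: dict[str, ClientSession] = {}
        # One dedicated, stateful connection per server, all live at once.
        for name, params in SERVERS.items():
            read, write = await stack.enter_async_context(stdio_client(params))
            session = await stack.enter_async_context(ClientSession(read, write))
            await session.initialize()
            sessions[name] = session

        # Pull from each system through its MCP tools (names hypothetical).
        customers = await sessions["salesforce"].call_tool("get_accounts", {"segment": "active"})
        payments = await sessions["stripe"].call_tool("list_transactions", {"period": "last_week"})
        usage = await sessions["snowflake"].call_tool("run_query", {"sql": "SELECT ..."})

        # A real agent would hand these results to the LLM to synthesize
        # the report and trigger the email step.
        print(customers, payments, usage)

asyncio.run(main())
```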
MCP makes it feasible to embed AI agents directly into the operational fabric of an enterprise. Rather than retrofitting AI into existing apps, enterprises can now think about designing workflows where AI is the connective tissue from the start. That means fewer manual handoffs, tighter feedback loops, and teams freed up to focus on strategy instead of coordination. In many ways, MCP is laying the groundwork for a new kind of enterprise software stack.
However, with MCP being so early, developers are only just beginning to tinker with it, and enterprises aren’t even thinking about it yet. There’s still a lot of work to be done on Anthropic’s side: building out the SDK ecosystem, adding browser support, refining the protocol, and addressing critical concerns like authentication and security.
But beyond the technical hurdles, there's a deeper challenge that’s starting to surface—one that isn’t getting enough attention yet: governance.
Contrary to all the discourse, security actually isn't the primary hurdle for enterprises. It’s only a matter of time before official security and auth support is released. And if employees want access to incredibly powerful tools, they’ll find a way to use them—with or without formal approval. We’ve already seen this play out with the bottom-up adoption of AI chat interfaces. When there’s enough demand and ROI, infosec teams will find a way.
The read-only RAG-style systems that are now common in enterprises (including Credal itself) have desensitized everyone to the concept of AI access to your data, and the protections that were gaps a year ago (e.g., zero data retention, PII redaction) are all table stakes at this point. While agentic workflows are indeed a layer of abstraction on top of that, concerns like remote code execution and authentication are familiar challenges that organizations have managed for years.
Gnarly security complexities are familiar to enterprises and are never ultimately blockers, especially when AI adoption is a strategic priority.
That is not to say that plugging AI into your systems to take actions isn’t scary. But it is more of a governance problem than a security problem, and governance requires a different class of concepts and primitives.
We’ve basically handed over the controls to a non-deterministic, otherworldly brain. AI is no longer constrained by the mostly read-only RAG sandbox and text-generation use cases; it can now perform actions of consequence: update Jira tickets, send messages to customers, edit live data, even make bank transfers.
One developer shared their experience of Claude Code deleting every file in their home directory: after going back and forth about duplicate files, Claude seemed to give up and try to start over. This isn’t a security breach; it’s an absence of intent safeguards and human-aligned boundaries. The system was doing exactly what it was allowed to do—but not necessarily what the user wanted it to do.
That’s the governance gap.
The difference is that security is about preventing unauthorized access, while governance is about managing authorized access—understanding who (or what) is doing what, why, and with what constraints. AI agents, particularly those powered via MCP, don’t operate like traditional software. They’re probabilistic, emergent, and hard to predict. That means you can’t just rely on the old security playbook—firewalls, static permission sets, and audit logs—because those tools assume a level of determinism that AI simply doesn’t offer.
Governance here means giving organizations confidence in what agents can do, setting bounds on that behavior (either via policies, simulations, or step-by-step approvals), and having visibility into why an action was taken in retrospect. For instance, how do you know what an agent meant to do when it chose to delete a file? How do you establish limits on its autonomy without breaking its usefulness? What’s the AI equivalent of “are you sure you want to do this?”—but without asking the human every five seconds? These are the newest gaps when it comes to agentic AI.
The most savvy enterprises are already creating infrastructure rails for this governance gap. A few obvious needs at this stage (one is sketched below):
Policy controls - explicit bounds on what an agent is allowed to do, and with what constraints.
Approvals and simulations - step-by-step sign-off (or a dry run) before consequential actions execute.
Audit trails - visibility, in retrospect, into not just what an agent did but why.
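As one illustration, here’s a minimal sketch of an approval-and-audit rail in Python. The risk classification, tool names, and approval flow are assumptions for illustration; none of this is an MCP feature today:

```python
# Hedged sketch of a governance rail: gate consequential tool calls behind
# human approval and log every decision for retrospective review.
import json
import time

DESTRUCTIVE_TOOLS = {"delete_file", "send_customer_email", "transfer_funds"}  # hypothetical

def record(entry: dict) -> None:
    """Append an audit entry so 'why did the agent do X?' is answerable later."""
    entry["ts"] = time.time()
    with open("agent_audit.jsonl", "a") as f:
        f.write(json.dumps(entry) + "\n")

def governed_call(call_tool, name: str, args: dict, intent: str):
    """Wrap a tool call with policy + approval. `intent` is the agent's own
    stated reason for the action, captured for the audit trail."""
    if name in DESTRUCTIVE_TOOLS:
        answer = input(f"Agent wants {name}({args}) because {intent!r}. Allow? [y/N] ")
        if answer.strip().lower() != "y":
            record({"tool": name, "args": args, "intent": intent, "decision": "denied"})
            return None
    record({"tool": name, "args": args, "intent": intent, "decision": "allowed"})
    return call_tool(name, args)
```

The interesting design question is the last one from above: making a gate like this smart enough that it doesn’t ask the human every five seconds.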
Each new generation of tech continues to trend toward higher levels of abstraction—MCP is just the latest (and most promising) layer. That doesn’t mean every enterprise should go all-in immediately. Many are building agents with tools like LangChain, LlamaIndex, and CrewAI today, and that’s working quite reliably for specific, high-leverage use cases. Engineers are happy to patch gaps, and vendors and open-source communities are incentivized to keep integrations working.
But MCP does have the ability to shift the default from "bespoke integration" to "plug-in system", a foundational change in how AI agents interface with the world. It’s possible we’ll look back on manually orchestrated workflows the same way we now look at hand-coding HTML emails.
Adoption won’t come just from solving security, though. Enterprises need governance frameworks, observability tooling, and abstractions that help them trust what agents are doing—without having to micromanage them every step of the way.
It’s still early days, but one thing’s becoming clear: AI agents are on the way, and MCP is likely the infrastructure that will make them work.
If you're building infra for this future—or trying to figure out how to—hit us up at sales@credal.ai.