Google’s Agent2Agent (A2A) Protocol came out with a bang last year, backed by partners such as Atlassian, Box, Salesforce, and Intuit. The protocol was released with the intention of being a universal language for AI agents, and when Google donated it to the Linux Foundation, it seemed to be on its way to overtaking MCP as the de facto communication standard. So why isn’t anyone talking about it anymore?
At a high level, A2A is an open protocol that standardizes how agents communicate, agnostic of the underlying vendor or framework. The key unlock is that A2A focuses on explicit communication and task completion: agents advertise their capabilities via agent cards, and unique task IDs carry specific context, artifacts, and user instructions between them.
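Concretely, discovery starts with the agent card: a small JSON document an agent serves at a well-known URL so clients can learn what it does and how to reach it. A simplified sketch (the field names follow the public A2A spec, but the agent and skill shown here are hypothetical):

```json
{
  "name": "expense-report-agent",
  "description": "Files and tracks expense reports",
  "url": "https://agents.example.com/expenses",
  "version": "1.0.0",
  "capabilities": {
    "streaming": true,
    "pushNotifications": false
  },
  "defaultInputModes": ["text/plain"],
  "defaultOutputModes": ["text/plain", "application/json"],
  "skills": [
    {
      "id": "file-expense",
      "name": "File an expense report",
      "description": "Creates an expense report from a receipt and routes it for approval"
    }
  ]
}
```

A client agent reads this card, picks a skill, and then opens a task — identified by a unique task ID — against the advertised URL.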
This structure was created so that enterprises could build a multi-agent ecosystem, where individual agents operate with expertise within specific niches and communicate necessary context and tasks to each other. This would prevent context rot and instead, introduce a microservice-like architecture for AI operations where each agent remains focused and constrained to a clearly defined role.
A2A’s lack of widespread adoption came down to two things: use cases and implementation burden. On use cases, A2A advertised features that went beyond MCP's capabilities, such as management of long-running tasks, stateful communication, enterprise-grade security, and agent discovery. But the reality was that MCP already supported many of these. A2A promoted its statefulness as a key advantage; however, while the MCP protocol itself is stateless, MCP servers are inherently stateful: they can retain context over time and relay it to an agent via a task ID. A2A emphasized the need for specialized agents with multi-turn communication between them, but MCP also provided this through tools (which can themselves act as agents) and through bidirectional communication and adjustments between user and agent during task execution. A2A’s security measures and agent discovery were the features genuinely novel to AI ecosystems, but there remain open questions around whether these could simply be implemented as extensions to MCP.
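The "stateless protocol, stateful server" distinction is easy to see in miniature. The toy below is not real MCP code — it's a bare illustration of the pattern: each request is self-contained at the protocol level (it carries everything the server needs, including a task ID), yet the server accumulates context per task across requests.

```python
# Toy illustration (not real MCP code): a "stateless" request interface
# over a server that retains per-task context keyed by a task ID.
import uuid


class ToyTaskServer:
    def __init__(self):
        # task_id -> accumulated context for that task
        self._tasks: dict[str, list[str]] = {}

    def start_task(self) -> str:
        """Create a new task and return its unique ID."""
        task_id = uuid.uuid4().hex
        self._tasks[task_id] = []
        return task_id

    def handle_request(self, task_id: str, message: str) -> list[str]:
        """Each request carries the task ID; the server looks up and
        extends whatever context it has retained for that task."""
        context = self._tasks[task_id]
        context.append(message)
        return list(context)


server = ToyTaskServer()
tid = server.start_task()
server.handle_request(tid, "fetch Q3 revenue")
history = server.handle_request(tid, "now break it down by region")
# The second call sees the context from the first, even though each
# request stood alone at the protocol level.
```

This is the shape of the argument above: statefulness lives in the server's bookkeeping, so a stateless wire protocol doesn't preclude it.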
Given the limited impact on use cases and MCP’s growing adoption, A2A’s implementation burden ended up outweighing its benefits. Adopting the new protocol would mean managing two protocols, their compatibilities across all dependencies, and an A2A-specific communication layer. And while A2A was built for the enterprise from its release, MCP was ready for any developer who wanted to experiment with adding agentic capabilities to their ecosystem. A2A’s advanced features meant developers would need to spend days learning complex agent orchestration concepts, then build and manage an entire communication layer for the new protocol. MCP let you connect Claude to Notion, Jira, and GitHub in under 10 minutes. In the race for adoption, MCP shone because it had a low learning curve and delivered utility to developers quickly, connecting to tools and AI assistants they were already using and familiar with.
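That "under 10 minutes" claim is plausible because, for most MCP clients, wiring in a server is a few lines of JSON config. A sketch of the common Claude Desktop pattern (the server package and token placeholder here are illustrative):

```json
{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": {
        "GITHUB_PERSONAL_ACCESS_TOKEN": "<your-token>"
      }
    }
  }
}
```

No orchestration layer, no discovery endpoint, no agent cards — the client launches the server as a local process and starts calling its tools. That asymmetry in setup cost is what the adoption race came down to.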
The irony is that the enterprises A2A was built for have begun adopting MCP at a rapid rate, but without the security and infrastructure that A2A deemed indispensable. A host of MCP vulnerabilities and exploits have been discovered, with companies experiencing production data breaches, system compromises, and more.
At Credal, we recognize the need for utility and convenience when it comes to agentic development, but we also want to stress the need for security when using it in an enterprise context. That's why we've built a secure MCP and agent registry that integrates with every service and data source you use in your daily workflows while letting you inherit and sync data source permissions, audit data access, and safeguard PII.
As the number of agents scales across an organization, governance complexity grows rapidly. For example, consider the difference between an agent updating someone's personal spreadsheet versus one updating the financial model that feeds the board's P&L. Both are "update a spreadsheet" actions, but the risk profiles are wildly different. Credal's governance is purpose-built for this: role-based monitoring visibility, admin approval for accessing query logs on sensitive agents, rate limiting to prevent agents from overwhelming internal systems, and so on.
Any agent built in Credal can be exposed as an MCP server and consumed from Claude, Slack, ChatGPT, or internal applications, all without rebuilding the agent for each surface. All agent traffic to downstream systems is routed through a single governed point, where Credal can enforce rate limits, audit access, and prevent any single agent or surface from overwhelming internal infrastructure. And unlike read-only connector models, Credal's action providers support write operations: updating spreadsheets, sending messages, creating tickets, pushing status updates.
We’re focused on building a platform developers want to use, while also making sure it’s one that enterprises can trust without a second thought.
Credal gives you everything you need to supercharge your business using generative AI, securely.