What is Agent2Agent (A2A) Protocol?

Agent2Agent (A2A) Protocol is an open protocol that provides a standard way for agents to collaborate, agnostic of the underlying framework or vendor. A2A is not a replacement for Anthropic's popular Model Context Protocol (MCP), which connects agents to tools and data; instead, A2A focuses on how agents communicate and work with each other.

A2A Protocol is developed and maintained by Google.

Why did Google design the A2A protocol?

Companies aren't designed to be small machines; they're not run by one omniscient human who decides and executes on every problem. Instead, they're built from thousands of people with specialized roles, specific permissions, and deep contextual knowledge of their jobs.

Accordingly, a winning AI strategy isn't to have one corporate agent that does everything. Instead, companies need specific agents that can tackle tasks requiring niche but critical contextual information. For example, a company might employ an AI agent that manages outbound emails, another that maintains the company's Salesforce, and another that handles code pull requests.

Previously, these agents were siloed. They had to operate individually, which meant feeding them extra context to handle tasks outside their domain, diluting their ability to stay focused and effective on one specific task. A2A instead allows agents to communicate and work together toward a shared outcome. For example, with A2A, an orchestrating agent could delegate to individual agents the work of scheduling interviews with prospects, provisioning a sandboxed interview environment, and managing email communication, all toward a shared goal of automated recruiting.

A2A’s Core Design Principles

The goal of A2A is to embrace an agent's natural capabilities. That means enabling agents to collaborate while allowing each agent to retain autonomy and privacy over its memory, tools, and context. A2A treats agents much the way managers treat employees: as diverse individuals with the capacity to work together.

A2A builds on existing standards such as HTTP, SSE, and JSON-RPC, making it easy to integrate with existing IT stacks. A2A also has security baked in, with authentication and authorization schemes that mirror those of OpenAPI (the predominant specification for describing APIs).
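
To make that concrete, here is a minimal sketch of what an A2A call can look like on the wire: a standard JSON-RPC 2.0 envelope POSTed over HTTPS with an OpenAPI-style bearer token. The endpoint URL and token are placeholders, and the "message/send" method name and message/parts shape are assumptions drawn from public A2A documentation; they may differ between protocol versions.

```python
import requests

# Hypothetical A2A endpoint and token; a real deployment advertises its URL
# and supported auth schemes in its Agent Card.
A2A_ENDPOINT = "https://agents.example.com/a2a"
API_TOKEN = "replace-with-a-real-token"

# A standard JSON-RPC 2.0 envelope sent over plain HTTPS. The "message/send"
# method name and the message/parts shape follow public A2A documentation,
# but treat them as assumptions; they may differ between protocol versions.
payload = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "message/send",
    "params": {
        "message": {
            "role": "user",
            "parts": [{"kind": "text", "text": "Summarize today's open pull requests."}],
        }
    },
}

# OpenAPI-style bearer authentication: just a normal Authorization header.
headers = {"Authorization": f"Bearer {API_TOKEN}"}

response = requests.post(A2A_ENDPOINT, json=payload, headers=headers, timeout=30)
print(response.json())  # a JSON-RPC "result" (e.g., a Task) or "error" object
```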

A2A has support for long-running tasks, including deep-research tasks that might take hours and human-in-the-loop tasks that can stretch over days. It's also modality agnostic, supporting agents that exchange text, audio, and video streams.
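
As a rough illustration of the streaming side, the sketch below subscribes to server-sent events from a hypothetical A2A server. The "message/stream" method name and the event contents are assumptions based on public A2A documentation; the JSON-RPC envelope and SSE wire format themselves are standard.

```python
import json
import requests

A2A_ENDPOINT = "https://agents.example.com/a2a"  # hypothetical server URL

# "message/stream" is the assumed streaming method name from public A2A docs;
# the JSON-RPC envelope and the SSE wire format themselves are standard.
payload = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "message/stream",
    "params": {
        "message": {
            "role": "user",
            "parts": [{"kind": "text", "text": "Research our top three competitors."}],
        }
    },
}

# Long-running work arrives as a stream of server-sent events (SSE), so the
# client can surface progress instead of blocking for hours on one response.
with requests.post(A2A_ENDPOINT, json=payload, stream=True, timeout=None) as resp:
    for line in resp.iter_lines(decode_unicode=True):
        if line and line.startswith("data:"):
            event = json.loads(line[len("data:"):].strip())
            print(event)  # e.g., task status updates or partial artifacts
```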

A2A has industry backing

Naturally, A2A generated a lot of market hype given that it is being developed by Google, a company that needs no introduction. But the launch wasn't backed by Google alone: it was supported by 50+ technology partners, including Atlassian, Box, Cohere, Intuit, Salesforce, SAP, ServiceNow, UKG, and Workday, as well as service providers such as Accenture, BCG, Capgemini, Cognizant, Deloitte, HCLTech, Infosys, KPMG, McKinsey, PwC, TCS, and Wipro.

How A2A Works in Practice

A2A uses a simple three-step communication flow to accomplish work (a rough Python sketch of the flow follows the list). In order, the steps are:

  1. The client agent fetches the Agent Card from the server's publicly known URL
  2. The client agent formulates a task and sends an initial message with a unique Task ID
  3. The remote agent acts on the task to provide information or take action, and the client waits for the task to reach a terminal state
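
A minimal sketch of this three-step flow is below. The well-known Agent Card path, the "message/send" and "tasks/get" method names, and the task state names are assumptions taken from public A2A examples and may vary by protocol version.

```python
import time
import uuid
import requests

BASE_URL = "https://agents.example.com"  # hypothetical remote agent

# Step 1: fetch the Agent Card from the server's well-known URL.
# The exact path and field names are assumptions based on public A2A examples.
card = requests.get(f"{BASE_URL}/.well-known/agent.json", timeout=10).json()
endpoint = card.get("url", f"{BASE_URL}/a2a")

def rpc(method, params):
    """POST a JSON-RPC 2.0 request to the remote agent's A2A endpoint."""
    body = {"jsonrpc": "2.0", "id": str(uuid.uuid4()), "method": method, "params": params}
    return requests.post(endpoint, json=body, timeout=30).json()["result"]

# Step 2: formulate a task and send the initial message with a unique Task ID.
task_id = str(uuid.uuid4())
task = rpc("message/send", {
    "message": {
        "taskId": task_id,
        "role": "user",
        "parts": [{"kind": "text", "text": "Schedule screens with the shortlisted candidates."}],
    },
})

# Step 3: the remote agent acts on the task; the client waits for a terminal state.
TERMINAL_STATES = {"completed", "failed", "canceled"}
while task["status"]["state"] not in TERMINAL_STATES:
    time.sleep(2)
    task = rpc("tasks/get", {"id": task_id})

print(task["status"]["state"], task.get("artifacts"))
```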

A2A has some notable capabilities for modern agentic work. These include:

  • Capability Discovery: Agents can advertise their capabilities using an “Agent Card” in JSON format, making them simultaneously readable to other agents for collaboration and to humans for debugging (a sketch of an Agent Card follows this list).
  • Task Management: A2A’s communication is oriented towards task completion, with a defined lifecycle and “artifact” outputs. This prevents agents from waffling around and instead focuses them on getting work done.
  • Collaboration: Agents can send messages to communicate context, replies, artifacts, or user instructions. This helps underlying models carry out accurate actions.
  • User Experience Negotiation: Each message includes “parts” with specified content types, so the client and remote agent can negotiate the format the end user ultimately sees.
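
As a rough sketch, here is what an Agent Card can look like, expressed as a Python dict that serializes to the JSON a server would host at its well-known URL. The field names follow public A2A examples but should be treated as assumptions, since they may differ between spec versions.

```python
import json

# A minimal Agent Card as a Python dict. Field names follow public A2A
# examples but are assumptions; they may differ between spec versions.
agent_card = {
    "name": "Recruiting Scheduler Agent",
    "description": "Schedules interviews and manages candidate communication.",
    "url": "https://agents.example.com/a2a",
    "version": "1.0.0",
    "capabilities": {"streaming": True, "pushNotifications": False},
    "defaultInputModes": ["text/plain"],
    "defaultOutputModes": ["text/plain"],
    "skills": [
        {
            "id": "schedule-interview",
            "name": "Schedule interview",
            "description": "Finds a mutually free slot and sends calendar invites.",
        }
    ],
}

print(json.dumps(agent_card, indent=2))
```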

A real-world example of this is an agentic system for hiring software engineers. A user would task an orchestrating agent with candidate sourcing, interview scheduling, and background checks; that agent would then employ other agents to get each piece done.
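
Under the same assumptions as the earlier sketches (hypothetical specialist URLs, the well-known Agent Card path, and the "message/send" method name), the fan-out from an orchestrating agent to its specialists might look like this:

```python
import requests

# Hypothetical directory of specialist agents the orchestrator delegates to.
SPECIALISTS = {
    "sourcing": "https://sourcing.example.com",
    "scheduling": "https://scheduling.example.com",
    "background-checks": "https://checks.example.com",
}

def delegate(base_url: str, instruction: str) -> dict:
    """Discover a specialist via its Agent Card and hand it one piece of the work."""
    # Well-known path and method name are assumptions; see the earlier sketches.
    card = requests.get(f"{base_url}/.well-known/agent.json", timeout=10).json()
    payload = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "message/send",
        "params": {
            "message": {"role": "user", "parts": [{"kind": "text", "text": instruction}]}
        },
    }
    return requests.post(card.get("url", f"{base_url}/a2a"), json=payload, timeout=30).json()

# The orchestrating agent fans the hiring workflow out to its specialists.
results = {
    name: delegate(url, f"Handle {name} for the senior backend engineer role.")
    for name, url in SPECIALISTS.items()
}
print(results)
```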

Credal is A2A ready

Credal is a native AI product that provides a robust environment for AI, including governance through human approvals and inherited permissions, auditability through access logs, and native support for multi-agent workflows from Day 1. Credal is more robust than Google's own products like Google Agentspace, which promises lofty features but is mostly designed to wrap third-party agents rather than provide tooling for building truly custom, first-party agents.

Give your team agents to get work done anywhere

Credal gives you everything you need to supercharge your business using generative AI, securely.

Ready to dive in?

Get a demo