Apr 2026 · 7 min

MCP and A2A in the same stack

Two open protocols are settling into different layers (tools and context versus agent-to-agent tasks), and your orchestration code still owns the boring parts.

Most "orchestration" threads I read still lump everything into one bucket: prompts, tools, subagents, and vendor SDKs. That makes it harder to decide what to standardize and what to keep in application code. Two protocols are now discussed often enough that they are worth separating cleanly: Model Context Protocol (MCP) for how an application or agent reaches data and tools, and Agent2Agent (A2A) for how one autonomous agent delegates work to another. They solve different problems. You can use both without pretending either one is a full production story.

MCP: one agent, many capabilities

MCP is an open standard for connecting AI applications to external systems: files, databases, product APIs, and other tools. The official MCP site describes it as a shared way for clients (Claude, ChatGPT, VS Code, Cursor, and others) to talk to servers that expose resources and actions. Anthropic's introduction frames the same idea: help models reach the systems where data actually lives so answers are grounded in the right place.

That traffic is mostly client-to-server from the model's host: your agent runtime decides when to call a tool; MCP standardizes how that call is shaped and discovered. It is not, by itself, your eval suite, your tenancy model, or your incident runbook. It is wiring with a spec, which still beats ad-hoc REST shims that only one client understands.
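To make "wiring with a spec" concrete: MCP messages are JSON-RPC 2.0, and tool invocation goes through a `tools/call` method with a tool name and arguments. A minimal sketch of the request shape, using only the standard library; the tool name and arguments here are illustrative, and the exact surface belongs to the versioned MCP spec, not this snippet:

```python
import json

def mcp_tool_call(request_id: int, tool_name: str, arguments: dict) -> str:
    """Build a JSON-RPC 2.0 request in the shape MCP uses for tool calls.

    Simplified: a real client also handles initialization, capability
    negotiation, and transport framing per the spec.
    """
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    })

# Hypothetical tool exposed by some server on your infra.
msg = mcp_tool_call(1, "query_invoices", {"customer_id": "c_123"})
parsed = json.loads(msg)
```

The point is not the five lines of JSON; it is that every client that speaks the protocol can discover and shape this call the same way, instead of each one learning a bespoke REST shim.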

A2A: agent to agent, task-shaped

Google announced Agent2Agent (A2A) in April 2025 as an open protocol for agents built by different teams or vendors to work together without sharing memory, tools, or internal prompts. The post states explicitly that A2A complements MCP: MCP supplies tools and context to an agent; A2A focuses on coordination between agents.

Mechanically, A2A leans on HTTP, SSE, and JSON-RPC ideas that already fit enterprise stacks. A client agent assigns work; a remote agent executes the task through a defined lifecycle and returns artifacts as outputs. Discovery uses an Agent Card (JSON) so a caller can see what a remote agent claims it can do. Long-running work and human-in-the-loop paths are first-class in the design goals, not afterthoughts. Spec and samples live in the open A2A repository; treat the exact API surface as versioned like any other dependency.
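Here is roughly what that discovery-plus-lifecycle pair looks like. The field names follow the shapes published in the A2A materials, but they are illustrative; check the versioned spec before depending on exact keys, and treat the state names the same way:

```python
import json

# Illustrative Agent Card: what a remote agent claims it can do.
# A hypothetical billing specialist owned by another team.
agent_card = {
    "name": "billing-specialist",
    "description": "Answers billing questions for the invoicing product",
    "url": "https://agents.example.com/billing",
    "capabilities": {"streaming": True},
    "skills": [
        {"id": "refund-status", "description": "Look up the state of a refund"},
    ],
}

# Task lifecycle states in the A2A design; names may drift across versions.
TASK_STATES = [
    "submitted", "working", "input-required",
    "completed", "failed", "canceled",
]

card_json = json.dumps(agent_card)
```

The "input-required" state is the part worth noticing: human-in-the-loop pauses are modeled in the lifecycle itself, not bolted on by each integrator.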

How I place them in one system

Inside a single product surface, MCP is usually the right layer for "this model may call these capabilities on our infra." Between products or between a generalist router and a specialist owned by another team, A2A is closer to "hand off a task, track status, get artifacts back" without forcing everyone into one framework.

Neither protocol replaces:

Auth and scope for every tool and every remote agent. Same instincts as least-privilege tool design: narrow credentials, allowlists, logging that does not leak secrets.

Evals and replay when routing or tools change. Those still deserve a named, CI-backed bar.

Your harness for timeouts, retries, and truncation. Protocols do not replace runtime policy.

Protocols give you interoperability at the boundary. Operations still live in your code.
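What "operations still live in your code" means in practice: the retry, backoff, and deadline policy around any tool call or task handoff is yours to write. A minimal sketch of such a wrapper; the function and parameter names are made up for illustration, and real deadline enforcement should come from your HTTP client's own timeout setting:

```python
import time

def call_with_retries(fn, *, attempts=3, backoff_s=0.5):
    """Retry a zero-arg callable with exponential backoff.

    Application-owned runtime policy: neither MCP nor A2A supplies
    this for you. Pair it with your client's timeout configuration
    for actual deadline enforcement.
    """
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of budget; surface the last error
            time.sleep(backoff_s * (2 ** attempt))

# Usage: wrap any tool call or remote-agent handoff.
result = call_with_retries(lambda: "ok", attempts=2, backoff_s=0.0)
```

Keeping this wrapper in application code also keeps it testable in CI, which is exactly where the eval-and-replay bar above wants it.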

What I watch when the hype is loud

Ecosystem posts love big adoption numbers and "USB-C for AI" analogies. Useful for intuition; useless as a design argument. I care whether a server or remote agent has a clear security story, whether discovery metadata stays honest as versions drift, and whether we can trace a task across hops when something fails three steps in. If a slide deck cannot answer those, the protocol choice does not matter yet.
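"Trace a task across hops" is cheap to get if every boundary propagates one id. A sketch under the assumption that you control a metadata dict at each handoff; the field names are mine, not either protocol's:

```python
import uuid

def new_trace_context() -> dict:
    """Start a trace at the first hop of a delegated task."""
    return {"trace_id": uuid.uuid4().hex, "hop": 0}

def forward(ctx: dict) -> dict:
    """Copy the context when delegating, so logs at every hop
    share one trace_id and record their depth in the chain."""
    return {"trace_id": ctx["trace_id"], "hop": ctx["hop"] + 1}

root = new_trace_context()
downstream = forward(forward(root))  # two delegations deep
```

If a remote agent drops or rewrites this context, you have your answer about whether its observability story is real before anything ships.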

That is the stack as of early 2026: MCP for tool and context plumbing at the model edge, A2A for agent-to-agent task routing where ownership is split, and your own orchestration for everything that must stay boring and testable.