Exposing Your Application Through Multiple Protocols

A common misconception, particularly among developers who arrive at Beach from agent-framework backgrounds, is that the application must choose between A2A and MCP, or between either of those and a conventional REST API, when planning how it will be reached by other software. The misconception is understandable; it is also wrong. Beach is a protocol-agnostic interior surrounded by replaceable edges, and a single Beach application can be reached through any number of protocols at the same time. The orchestrator at the centre does not know — and need not know — which transport delivered a given request.

The right question is therefore not "which protocol do I pick?" but "which capabilities of my application should be exposed through which protocol surfaces, and to whom?" That is a deliberation about audiences, integration economics, and how each capability is most naturally consumed. It is not a deliberation about the architecture of the application.

This article walks through the question requirements-first.

The shape of the problem

Suppose you are building an internal travel-planning application for a corporate travel function. The application has, broadly, three sorts of integration partner:

  1. End-user touchpoints — a web chat panel that internal travellers use to plan trips; eventually a mobile application; possibly a voice agent in time. These are the consumer-facing channels.
  2. Agent peers — a partner travel-agency's booking agent that you collaborate with on bookings; an internal expense-policy agent that confirms whether a proposed booking is within the traveller's allowance; a future on-call concierge agent that handles emergency rebookings.
  3. Conventional system integrations — the internal HR system that confirms employment status; the company CRM that wants to know when a traveller has booked a substantial trip; a finance dashboard that wants to query "what is the company's current outstanding flight commitment?" via REST.

A team without Beach would be tempted to build three different applications, one for each kind of partner. The travel orchestrator would be duplicated across all three; the partner-booking integration would be reimplemented as needed; the audit log would fragment.

Beach's answer is to build the application once and expose it through whatever combination of protocol surfaces each integration partner expects.

The protocol surfaces, and what each is good for

Beach's @cool-ai/beach-transport package ships four classes of inbound adapter at the time of writing — A2A, MCP, REST/webhook, and SSE — each of which wires into the same orchestrator without modification, along with four corresponding classes of outbound adapter. The sections that follow take each protocol in turn and name the integration patterns it suits best.

A2A — for collaborative, conversational work between agents

The Agent-to-Agent protocol is a JSON-RPC-shaped specification, originating in the Google A2UI/A2A working group, that has emerged as the consensus method for one autonomous agent to call another. It is conversational: a Message arrives; a Message is returned; the conversation may extend across several turns; results may stream; the called agent may declare itself "suspended" pending human approval and resume later.

The shape is appropriate when:

  • Both sides are themselves agents, with their own reasoning and tool surfaces.
  • The interaction is genuinely multi-turn — clarification, follow-up, partial results delivered asynchronously.
  • Each side wishes to expose its full agent capability, rather than a curated subset of stateless tools.

In the travel-planning example, the integration with the partner-agency's booking agent is a textbook A2A case. The application says "I have a traveller who needs a quiet boutique hotel in Rome for the third week of September; the company's preferred-supplier list takes precedence; budget is up to £400 per night." The partner agent reasons, asks a clarifying question, perhaps holds an option pending an approval from its own internal-supplier team, and returns a structured offer. None of that is a single function call.
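
As a sketch, that opening turn might be serialised as a JSON-RPC message/send call. The field names below follow the published A2A message shape as commonly described, but the exact payload, ids, and nesting are assumptions for illustration, not the adapter's guaranteed wire format.

```typescript
// A sketch of the opening A2A turn for the partner-agency request.
// Field names approximate the A2A spec's message/send method; the ids
// and text are illustrative only.
interface TextPart { kind: "text"; text: string }
interface A2AMessage { role: "user" | "agent"; messageId: string; parts: TextPart[] }

const openingTurn = {
  jsonrpc: "2.0" as const,
  id: "req-1",
  method: "message/send",
  params: {
    message: {
      role: "user",
      messageId: "msg-001",
      parts: [{
        kind: "text",
        text: "I have a traveller who needs a quiet boutique hotel in Rome " +
              "for the third week of September; the company's preferred-supplier " +
              "list takes precedence; budget is up to £400 per night.",
      }],
    } satisfies A2AMessage,
  },
};

// The partner agent's clarifying question arrives as a further Message in
// the same task, so the conversation extends across turns rather than
// completing in one call.
```

The point of the shape is the multi-turn envelope: the reply is itself a Message (or a Task in a "suspended" state), not a terminal function result.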

The wiring, both as an exposing peer and as a consumer, is covered in "Being consumed by other applications" and "Consuming other agents". What matters at the point of decision is that A2A is the protocol of choice when the integration partner is itself an agent and the interaction has the shape of a conversation.

MCP — for typed, function-shaped tools

The Model Context Protocol exposes individual functions — typed inputs, typed outputs, no conversational state — that any compatible client can invoke. An agent (yours, third-party, an IDE plug-in, a desktop assistant) reads your MCP server's tool list and decides when to call each tool.

The shape is appropriate when:

  • The integration is RPC-shaped: the caller has decided what it wants and is asking the application to do that single thing.
  • The caller is heterogeneous: the tools should be reachable not only from a custom partner agent but from Claude Desktop, Cursor, Copilot Studio, internal tooling, and any future MCP client.
  • The work is stateless from the caller's perspective; when continuation is needed, it is reified as another tool call rather than as session state.

For the travel application, the suite of stateless functions one might naturally expose — lookup-current-traveller-tier, query-supplier-availability, search-historic-bookings-by-traveller — is MCP-shaped. The functions are predictable and well-typed, and the caller's relationship to them is "I have a question, please answer it." There is no continuation; a caller that needs five lookups makes five MCP calls.
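
The statelessness is visible in the code shape. The sketch below uses the tool names from the article; the schemas, registry structure, and stubbed handler are illustrative assumptions, not the package's actual API.

```typescript
// A sketch of the stateless, MCP-shaped tool surface: each tool is a typed
// function, and a call carries everything the tool needs.
type ToolHandler = (args: Record<string, unknown>) => unknown;

interface ToolDef {
  description: string;
  inputSchema: object; // JSON Schema, as MCP tool listings expect
  handler: ToolHandler;
}

const tools: Record<string, ToolDef> = {
  "lookup-current-traveller-tier": {
    description: "Return the loyalty tier for a traveller.",
    inputSchema: {
      type: "object",
      properties: { travellerId: { type: "string" } },
      required: ["travellerId"],
    },
    handler: ({ travellerId }) => ({ travellerId, tier: "gold" }), // stubbed lookup
  },
};

// Each invocation is independent: no session state survives between calls.
function callTool(name: string, args: Record<string, unknown>): unknown {
  const tool = tools[name];
  if (!tool) throw new Error(`unknown tool: ${name}`);
  return tool.handler(args);
}

const result = callTool("lookup-current-traveller-tier", { travellerId: "t-42" });
```

A caller needing five lookups simply calls callTool five times; there is no handle or token linking the calls, which is exactly the contract MCP advertises.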

A useful rule of thumb: anything an internal-engineering team would have built as a microservice with a thin OpenAPI surface is naturally MCP-shaped. The MCP serialisation gives the same RPC contract with the further advantage that any compliant agent runtime understands it without bespoke integration code.

REST — for conventional system integrations

Whether a Beach application can have a REST API on top of its LLM-backed orchestrator and its deterministic handlers deserves a direct answer: yes, of course it can. A REST API is another inbound adapter. It translates HTTP requests into routed events the same way the SSE adapter does, and translates the resulting reply back into an HTTP response. The orchestrator does not know.

This matters because many integration partners are not agents and have no interest in becoming agents. The finance dashboard mentioned earlier will be a Power BI tile or an internal Vue front-end whose only requirement is "GET this URL, JSON returns, render in a table". The CRM that wants to know about substantial bookings will accept a webhook POST and write a row.

For both the inbound and outbound REST cases:

  • Inbound REST. The application exposes a conventional GET /api/v1/bookings or POST /api/v1/quotes. The handler translates the HTTP request into a routed event (source: 'rest', eventType: 'get_bookings'); the orchestrator runs (or, for a simple lookup, a deterministic handler runs); the reply is serialised back as JSON and returned. The request and response are synchronous from the caller's point of view, even though Beach internally treats them as routed events.
  • Outbound REST. The application calls another service's REST API. This is most naturally implemented as a tool the orchestrator can invoke through the ToolRegistry, where the tool's handler issues an axios or fetch call. There is nothing Beach-specific about the outbound; it is an ordinary HTTP client call from inside a tool.
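
The outbound case really is that ordinary. Below is a minimal sketch of a quote-request tool with the HTTP client injected so nothing depends on the network; the tool shape, names, and partner URL are assumptions for illustration, not the ToolRegistry's actual interface.

```typescript
// Outbound REST as an ordinary tool: the handler just makes an HTTP call.
// A fetch-like client is injected so the tool can be exercised offline.
type Fetcher = (url: string, init?: { method?: string; body?: string }) =>
  Promise<{ status: number; json(): Promise<unknown> }>;

function makeQuoteTool(fetchImpl: Fetcher) {
  return {
    name: "request_partner_quote",
    async handler(args: { city: string; maxNightlyRate: number }) {
      const res = await fetchImpl("https://partner.example/api/quotes", {
        method: "POST",
        body: JSON.stringify(args),
      });
      if (res.status !== 200) throw new Error(`partner returned ${res.status}`);
      return res.json();
    },
  };
}

// Offline, a stub fetcher stands in for the real HTTP client.
const stubFetch: Fetcher = async () => ({
  status: 200,
  json: async () => ({ offer: "Hotel Aurora", nightlyRate: 380 }),
});
```

Registered with the orchestrator's tool registry, this is indistinguishable from any other tool; the HTTP call is an implementation detail of the handler.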

A worked example of an inbound REST adapter is in the @cool-ai/beach-transport documentation; the SSE inbound uses the same pattern internally. A REST adapter is on the order of fifty lines of TypeScript: parse the request, build a routed event, dispatch, await the manifest's settled reply, serialise.
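
Those five steps can be sketched in miniature. The RoutedEvent fields mirror those named above; the dispatch function stands in for Beach's real event pipeline, which this sketch does not reproduce.

```typescript
// A minimal inbound REST adapter: parse → routed event → dispatch →
// await reply → serialise. `dispatch` is a stand-in for the real pipeline.
interface RoutedEvent { source: "rest"; eventType: string; payload: unknown }
interface HttpRequest { method: string; path: string; body?: unknown }
interface HttpResponse { status: number; body: string }

type Dispatch = (event: RoutedEvent) => Promise<unknown>;

async function restAdapter(req: HttpRequest, dispatch: Dispatch): Promise<HttpResponse> {
  // e.g. GET /api/v1/bookings → eventType "get_bookings"
  const resource = req.path.split("/").pop() ?? "";
  const eventType = `${req.method.toLowerCase()}_${resource}`;
  const reply = await dispatch({ source: "rest", eventType, payload: req.body ?? null });
  return { status: 200, body: JSON.stringify(reply) };
}

// A stubbed dispatch, showing the synchronous request/response the caller
// sees even though the interior treats the call as a routed event.
const dispatchStub: Dispatch = async (e) =>
  e.eventType === "get_bookings" ? { bookings: [] } : { error: "unhandled" };
```

Everything protocol-specific, the path parsing and the JSON serialisation, lives in the adapter; the dispatch side is identical for every surface.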

Nothing about Beach commits an adopting team to a non-traditional API surface. Many adopters will find that their application's most conventional consumers — the finance dashboard, the CRM, the BI layer — interact entirely through REST, while only their newer or more experimental integration partners arrive over A2A or MCP. Beach accommodates all of this without architectural compromise.

Webhooks — for fire-and-forget event delivery

Webhooks are the simplest case: an HTTP POST arrives carrying a payload from another system; your application acknowledges with a 200 and processes asynchronously, with no expected response beyond the acknowledgement. The corresponding outbound case is sending fire-and-forget notifications to other systems' webhook endpoints.
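
The acknowledge-then-process split can be sketched directly. The names below are illustrative; in a real Beach application the queued event would feed the orchestrator's pipeline rather than a local array.

```typescript
// Acknowledge first, process later: the caller only ever needs the 200.
interface WebhookEvent { topic: string; payload: unknown }

const queue: WebhookEvent[] = [];
const handled: string[] = [];

// Runs in the HTTP request path: enqueue and acknowledge, nothing more.
function receiveWebhook(topic: string, payload: unknown): { status: number } {
  queue.push({ topic, payload });
  return { status: 200 };
}

// Runs outside the request path (a worker loop, a timer, a queue consumer).
function drainQueue(): void {
  while (queue.length > 0) {
    handled.push(queue.shift()!.topic);
  }
}
```

The receiving side never blocks on downstream work, which is what makes webhooks safe to accept from systems with aggressive delivery timeouts.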

In the travel example, the inbound webhook surface might receive payment-status events from a finance provider, calendar events from the corporate diary system, or HR events when an employee's status changes. None of these are calls expecting an answer; each is a notification that may, downstream, cause the orchestrator to act.

Outbound, the same fire-and-forget shape covers the "deploy completed" Slack notification, the "booking confirmed" message into the company's collaboration tool, and the audit row sent to a centralised observability pipeline. Webhooks are universally understood and require no agent-framework cooperation on the receiving end.

The consequence: one application, many surfaces

The architecture that emerges for the travel-planning application is one orchestrator (and one set of deterministic handlers), exposed through all of the following at once:

  • A2A at /.well-known/agent-card.json and /a2a, for the partner-booking peer and the internal expense-policy peer.
  • MCP at /mcp, exposing a curated tool set for any agent runtime that wishes to query the application.
  • SSE at /chat (inbound) and /events (outbound), for the corporate web chat panel.
  • REST at /api/v1/..., for the finance dashboard, the BI layer, and any internal back-office consumer.
  • Webhook receivers at /webhooks/finance, /webhooks/hr, /webhooks/calendar, for upstream system notifications.
  • Webhook senders to the company's Slack, the centralised observability collector, and the partner-agency's booking-confirmation webhook.

Each of these is a thin adapter. None of them touches the orchestrator's prompt, its tools, its session state, or its audit log. The only place protocol-specific knowledge lives is at the edge.
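
That convergence can be shown in a few lines. This is a deliberately toy sketch under assumed names: every edge stamps its source and hands the same event shape to the same dispatch, which writes the same audit log.

```typescript
// Every adapter converges on one dispatch and one audit log; the only
// protocol-specific knowledge is the `source` each edge stamps.
interface RoutedEvent { source: string; eventType: string; payload: unknown }

const auditLog: RoutedEvent[] = [];

function dispatch(event: RoutedEvent): string {
  auditLog.push(event); // one audit trail, regardless of surface
  return `handled ${event.eventType} from ${event.source}`;
}

// Thin edges: each reshapes only its own protocol's envelope.
const fromRest = dispatch({ source: "rest", eventType: "get_bookings", payload: null });
const fromA2A  = dispatch({ source: "a2a", eventType: "plan_trip", payload: { city: "Rome" } });
const fromHook = dispatch({ source: "webhook", eventType: "payment_settled", payload: {} });
```

Remove any one adapter and the others, and the interior, are untouched; add a new one and the audit trail simply gains a new source value.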

Six surfaces, one application; one orchestrator, one audit trail. This is what Beach is for.

Choosing what to expose where

Once it is accepted that the application can speak through every protocol at the same time, the deliberation that remains is: which capabilities should be exposed through which protocol, and to whom?

The criteria, in approximate order of importance:

Audience. A capability that other agents will consume should generally be exposed through A2A; one that other engineering teams' systems will consume should additionally — or instead — be exposed through REST or MCP. When both audiences exist, expose the capability through both.

Conversation versus invocation. A capability that benefits from clarification, partial results, or asynchronous delivery is naturally A2A. One whose contract is "I ask, you answer once" is naturally MCP or REST.

Approval requirements. Capabilities that touch external-world state — booking, payment, deployment — should be marked approval: 'required' on the Agent Card and gated behind the application's HITL approval flow regardless of which protocol surfaces them. The protocol does not change the approval requirement; it is a property of the capability itself.
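
One way to read "a property of the capability itself" is that the approval check consults the capability record, never the transport. The field names below are illustrative assumptions, not the Agent Card's actual schema.

```typescript
// Approval is attached to the capability, not to the protocol that
// delivered the request.
interface Capability { name: string; approval: "required" | "none" }

const capabilities: Capability[] = [
  { name: "search_hotels", approval: "none" },
  { name: "book_hotel", approval: "required" }, // touches external-world state
];

// The same answer whether the call arrived over A2A, MCP, or REST.
function requiresApproval(name: string): boolean {
  const cap = capabilities.find((c) => c.name === name);
  if (!cap) throw new Error(`unknown capability: ${name}`);
  return cap.approval === "required";
}
```

A caller that reaches book_hotel over a REST endpoint meets exactly the same HITL gate as one that reaches it over A2A.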

Authentication model. A2A and MCP have lightweight authentication patterns (bearer tokens, mTLS); REST endpoints typically need to fit into the application's existing API-key or session-cookie framework. When one capability must satisfy a stricter authentication regime than the others, it is often most cleanly exposed through the surface that already has that regime configured.

Stability of the interface. A REST API typically commits the application to a versioned endpoint that downstream consumers will pin against, with the attendant migration cost when the time comes to change it. An MCP tool can be added or deprecated more freely, because the typical consumer is an agent that re-fetches the tool list on each session. A2A capabilities, similarly, are rediscovered on each Agent Card fetch.

In practice, most production Beach applications expose their full capability surface through both A2A and MCP (the marginal cost of doing so is small) and additionally publish a small REST surface — typically read-only — for the conventional consumers who need it.

What to avoid

A recurring mistake is to fit a fundamentally conversational integration into MCP on the grounds that "MCP is more universally understood." When the work has the shape of a conversation, forcing it through MCP means inventing out-of-band session state, hand-rolling continuation tokens, and ultimately reimplementing A2A poorly. The energy is better spent advertising A2A and accepting that not all consumers will speak it on day one.

It is equally a mistake to expose every internal tool as MCP-callable simply because the option exists. MCP tools are part of the public contract; once another agent depends on a tool, the application cannot freely change it. Curate the exposed list to capabilities the team genuinely intends to support across versions, and keep specialist-scope internal tools private.

Finally, it is a mistake to assume that REST will always be the safest fallback when an integration partner cannot be persuaded onto A2A or MCP. When the integration's natural shape is multi-turn — a research peer that genuinely needs to ask clarifying questions and stream partial results — forcing it into REST means inventing a session model on top of HTTP. That model will be worse, not better, than what A2A already provides. The decision is not "REST is always safer"; it is "match the protocol to the shape of the work".

Related