Your AI application has the wrong shape
Most AI-enabled applications being built today will need to be rebuilt in eighteen months. Not because the code is bad. Not because the engineers made poor choices. Because the shape is wrong — and a shape that is wrong at the foundation cannot be fixed without starting again.
This is not a prediction. It is already happening. Teams that built around GPT-3 rebuilt for GPT-4. Teams that built around direct API calls are rebuilding for MCP. Teams that built for MCP are now looking at A2A and A2UI and wondering what to do. The problem is not that the protocols change — protocols always change. The problem is that the applications are built around the protocols, so when the protocols change, the applications change with them.
There is a different way to build. It is not new; the principles are decades old. But almost nobody building AI applications today is applying them, because nothing in the ecosystem makes it the default. This article is about what the right shape is, why it is rare, and what it takes to build it deliberately.
The shape almost everyone is building
A typical AI-enabled application looks like this: there is an LLM at the centre, and around it there is a collection of integrations — MCP servers for tools, an A2A client for peer agents, a REST API for external data, a WebSocket for the UI. The integrations talk directly to the LLM layer. The LLM layer talks directly to the business logic. Business logic reaches back out to the integrations when it needs data.
This shape is natural. Every tutorial, every quick-start guide, every framework points this way. It is the shape that emerges when you follow the path of least resistance.
It has one fatal property: the inside of the application knows what the outside looks like. When MCP changes — and it will — every part of the application that knows about MCP has to change. When A2A adds a new message type — and it will — every part of the application that handles A2A messages has to change. The blast radius of a protocol change is the entire application, because the protocol reaches all the way in.
This is not a theoretical concern. MCP was published in November 2024. A2A was published in April 2025. A2UI followed shortly after. ACP arrived from IBM. Each of these touched production applications. Each required changes that, in a well-shaped application, would have been confined to an edge component. In a typical application, they required changes everywhere.
It is worth keeping the early nineteenth century in mind here. Canal companies were, for a decade or two, the obviously correct investment in transporting goods over long distances. Engineers built canals; financiers financed canals; entire towns were laid out around canals. The canal companies were right about the problem — goods needed to move — and wrong about the shape of the answer. Railways arrived, and the canal investments were not bad; they were simply beside the point. A great deal of the AI infrastructure being built today will age the same way. The question worth asking is not which protocol is the canal and which is the railway. The question is whether your architecture lets you use whichever turns out to be which.
The shape that survives
The right shape is not complicated. It is an application of principles that experienced architects apply instinctively to every other kind of integration problem.
The interior of the application — the business logic, the domain model, the participants that do the actual work — should know nothing about the protocols. Protocols are an edge concern. At the inbound edge, a component translates the external protocol into the application's internal language. At the outbound edge, a component translates the application's internal results into the external protocol. The interior speaks only in the application's own terms.
When a protocol changes, only the edge component changes. The interior does not know that anything happened. When a new protocol arrives, a new edge component is added. The interior does not change. This is what "permanently at the edge of the standard" means in practice: the application's position relative to current protocols is always current, because the current protocol is always in an edge component that can be replaced.
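The edge/interior split above can be sketched in a few lines of TypeScript. Everything here is illustrative — the names (`InboundAdapter`, `AppEvent`, the simplified MCP shapes) are assumptions for the sketch, not the API of any particular library or of the MCP specification:

```typescript
// The interior's own language. Nothing in it mentions MCP, A2A,
// or any other protocol.
interface AppEvent {
  kind: string;            // e.g. "tool.invoke"
  payload: unknown;
  correlationId: string;
}

interface AppResult {
  correlationId: string;
  payload: unknown;
}

// An inbound adapter translates one external protocol into AppEvents
// and translates AppResults back out. It is the only place that
// protocol's wire format appears.
interface InboundAdapter<WireRequest, WireResponse> {
  toEvent(request: WireRequest): AppEvent;
  toResponse(result: AppResult): WireResponse;
}

// Hypothetical, heavily simplified MCP-shaped messages for illustration.
type McpToolCall = {
  method: "tools/call";
  params: { name: string; arguments: unknown };
  id: string;
};
type McpToolResult = { id: string; result: unknown };

const mcpAdapter: InboundAdapter<McpToolCall, McpToolResult> = {
  toEvent: (req) => ({
    kind: "tool.invoke",
    payload: { tool: req.params.name, args: req.params.arguments },
    correlationId: req.id,
  }),
  toResponse: (res) => ({ id: res.correlationId, result: res.payload }),
};
```

If MCP changes its wire format, only `McpToolCall`, `McpToolResult`, and the two translation functions change; `AppEvent` and everything behind it are untouched.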
This is separation of concerns applied to protocols. It is not a new idea. What is new is that the AI protocol landscape is moving fast enough that the cost of ignoring this principle is being paid immediately, on a timescale of months rather than years.
Why participants, not just protocols
Separating protocols from the interior is necessary but not sufficient. The interior also needs a stable shape for the things that do the work.
In a well-shaped application, the participants — the LLM actors, the deterministic handlers, the long-running processes, the human-in-the-loop approvals — are all first-class, interchangeable components against a common interface. The application does not have an LLM at the centre with other things bolted around it. It has a routing core that delivers events to participants and collects results, and the participants are whatever the application needs them to be.
This matters because the right approach to AI changes. In 2024, the right answer for many tasks was a large frontier model. In 2025, for many of those same tasks, the right answer is three small fast models in a pipeline with a classifier at one step. In 2026, for some of those tasks, the right answer may be a deterministic workflow with no model at all. An application built with an LLM at the centre has to be restructured each time the right answer changes. An application built with a routing core and interchangeable participants does not: you replace the participant, or add one, or remove one. The shape of the application stays the same.
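The participant idea can be made concrete with a minimal sketch. Again, every name here is an assumption for illustration — a real routing core would handle concurrency, retries, and held-open calls, and a real LLM actor would call a provider SDK rather than return a placeholder:

```typescript
interface AppEvent { kind: string; payload: unknown; correlationId: string }
interface AppResult { correlationId: string; payload: unknown }

// The common interface: the routing core sees only this, so an LLM
// actor, a deterministic handler, and a human-approval step are
// interchangeable from its point of view.
interface Participant {
  canHandle(event: AppEvent): boolean;
  handle(event: AppEvent): Promise<AppResult>;
}

// A deterministic handler: no model involved.
class UppercaseHandler implements Participant {
  canHandle(e: AppEvent) { return e.kind === "text.normalize"; }
  async handle(e: AppEvent): Promise<AppResult> {
    return { correlationId: e.correlationId, payload: String(e.payload).toUpperCase() };
  }
}

// Stand-in for a model-backed participant; the summary here is a
// placeholder, not a real model call.
class LlmActor implements Participant {
  canHandle(e: AppEvent) { return e.kind === "text.summarize"; }
  async handle(e: AppEvent): Promise<AppResult> {
    const summary = `summary of: ${String(e.payload).slice(0, 20)}`;
    return { correlationId: e.correlationId, payload: summary };
  }
}

// A trivial routing core: deliver the event to the first willing participant.
async function route(participants: Participant[], event: AppEvent): Promise<AppResult> {
  const target = participants.find((candidate) => candidate.canHandle(event));
  if (!target) throw new Error(`no participant for ${event.kind}`);
  return target.handle(event);
}
```

Replacing the `LlmActor` with a three-model pipeline, or with a deterministic workflow, changes one class in the participant list; `route` and the edge adapters never change.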
The LLM is not the centre of the application. It is one kind of participant that is useful today and may be replaced. Building as if it were the centre is the second most common mistake in AI application architecture, after coupling the protocol to the interior.
What makes this hard in practice
The principles are simple. Applying them consistently is hard, for two reasons.
The first is that the path of least resistance points the other way. Every framework, every SDK, every tutorial shows you how to get an LLM talking to a tool as quickly as possible. The fastest path always couples the protocol to the interior. The applications that result work immediately and become unmaintainable after six months.
The second is that individually reasonable exceptions accumulate. A well-shaped application starts with protocol-agnostic participants. Then a feature request arrives that would be easier to implement if the participant knew which protocol the request came from. The request is reasonable. The exception is granted. Then another. Then another. Six months later, the participant is full of protocol-specific conditionals, the interior is coupled to the edge, and the shape is gone. No single exception was wrong; the accumulation was.
Avoiding this requires two things: a structural commitment and a governance commitment. The structural commitment is the shape itself — the inbound adapter, the outbound plugin, the protocol-agnostic interior. The governance commitment is the willingness to refuse exceptions to the rule even when the individual exception seems reasonable.
Neither of these can be enforced by good intentions. They have to be built into the shape the development team inherits, and maintained by whoever governs the architecture.
What this looks like concretely
An application built with the right shape has the following properties, which are verifiable by inspection rather than trust:
Inbound adapters are thin. Each one knows one protocol and nothing else. An MCP adapter knows MCP. An A2A adapter knows A2A. A webhook adapter knows webhooks. None of them knows anything about the application's business logic or the other adapters.
Outbound plugins are thin. Each one knows one external system and nothing else. A REST plugin knows REST. A GraphQL plugin knows GraphQL. A SQL plugin knows SQL. None of them knows anything about where the request came from.
The interior speaks in events. The router receives events. Participants receive events and produce results. The manifest registry holds calls open until results arrive. None of these components has any concept of a protocol. A reader of the interior code cannot tell, from the code itself, which protocols the application supports. That is a feature, not a defect.
Protocol support is additive. Adding A2A to an application that speaks MCP is adding an inbound adapter and potentially an outbound plugin. The interior does not change. Removing a protocol is removing those components. The interior does not change.
Participant types are interchangeable. Replacing an LLM actor with a deterministic handler, or adding a human-approval step, or inserting a classifier at one point in the flow — all of these are participant changes. The routing core and the edge components do not change.
If an application has all five of these properties, it has the right shape. If it does not, it will pay the cost of protocol churn.
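The "protocol support is additive" property in particular can be checked mechanically: protocols should appear only in a composition list at the edge. A minimal sketch of what that composition might look like — all names here are illustrative, not the API of any scaffold:

```typescript
interface AppEvent { kind: string; payload: unknown; correlationId: string }

// An edge adapter knows one protocol. Its only obligation to the
// application is to deliver AppEvents inward.
interface EdgeAdapter {
  protocol: string;   // e.g. "mcp", "a2a", "webhook"
  start(deliver: (e: AppEvent) => void): void;
}

class Application {
  private adapters: EdgeAdapter[] = [];

  // Adding or removing a protocol is a change to this list.
  // The interior (the deliver function) never learns which
  // adapters exist.
  addAdapter(adapter: EdgeAdapter): this {
    this.adapters.push(adapter);
    return this;
  }

  start(deliver: (e: AppEvent) => void): void {
    for (const a of this.adapters) a.start(deliver);
  }
}
```

Under this shape, adding A2A to an MCP-speaking application is one more `addAdapter` call; removing a protocol is deleting that call. The interior code is identical in both cases, which is exactly the inspection criterion described above.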
The honest objection
The honest objection to this approach is that it is slower to build initially than the path of least resistance. The fastest way to get an LLM talking to a tool is not to build an inbound adapter and an outbound plugin first. It is to wire the tool directly to the LLM and ship.
This is true. The right shape takes more up-front work. The question is not whether to pay the cost but when. Pay it up front, in the form of building the shape correctly, or pay it later, repeatedly, as each protocol change requires touching the entire application. For an application that will be in production for a year or more, the up-front cost is almost always worth it. For a prototype that will be thrown away in three months, it may not be.
The mistake is building as if you are building a prototype and finding yourself, twelve months later, maintaining production code that has a prototype's shape.
The scaffold, not the discipline
The structural commitment is easier to make if the shape is already there. An application built on a scaffold that makes the inbound adapter, the outbound plugin, and the protocol-agnostic interior the default — rather than something the team has to design and enforce from scratch — starts with the right shape and has to work to lose it.
This is what separates a scaffold from a framework. A framework tells you how to build your application. A scaffold gives your application the right shape, then gets out of the way. Inside the shape, the application is yours: your domain logic, your participants, your routing rules, your choice of LLM provider, your choice of durable execution runtime. The scaffold contributes the structure. Everything else is yours.
The AI protocol landscape will continue to change. The applications that survive those changes without being rebuilt are the ones that treat protocols as edges rather than as centres. Building that shape deliberately — or adopting a scaffold that makes it the default — is the decision that separates the applications still running in 2028 from the ones that will need to be rebuilt in 2026.
The scaffold described in this article is Beach, an open-source project from Cool AI. Beach carries no vendor relationship, has no runtime dependency, and charges nothing to use — if it disappears, applications built on it keep running. Packages are published at npmjs.com/~cool-ai.