Exposing a REST API on Top of a Beach Application
A reasonable question arises early in any conversation about Beach with a developer who has been building conventional web applications for some years: "can I just put a REST API in front of all of this?" The answer is yes, without qualification, and this article is the practical walkthrough.
A REST API is another inbound adapter, of the same family as the SSE adapter that ships with the canonical pipeline. It receives HTTP requests, translates each into a routed event, and waits for the resulting reply (or replies) before serialising it back as JSON. Nothing in the orchestrator changes. Nothing in the handlers changes. Nothing in the audit trail changes. The REST surface is, quite literally, an additional set of app.get() and app.post() route handlers that hand work to the router.
What follows covers the requirement, the wiring, two distinct shapes (synchronous lookup, orchestrator-driven action), and the operational concerns that arise once such an API is in production.
The requirement
Consider an internal expense-management application built on Beach. The orchestrator (an LLM actor with a small set of finance tools) handles the conversational interface for employees who want to submit, query, or amend their expense claims. The application has been running for some weeks via the corporate chat channel, and the finance team has been satisfied.
Three new integration requirements have arrived from elsewhere in the organisation:
- The company's BI team would like to pull a daily report of expense submissions, broken down by department and category, into Power BI for senior-leadership dashboards. They expect a REST endpoint that returns JSON they can ingest on a schedule.
- The accounts-payable team would like to mark expense claims as paid via a small internal admin console; they would prefer not to use a chat interface for this. They expect a POST /api/v1/claims/:id/mark-paid they can call from their existing Vue admin tool.
- The internal-audit team would like to query the audit log for any claim above a certain threshold over the past quarter. They have an existing internal-audit dashboard that consumes REST endpoints and would prefer one more endpoint there to adopting a new tool.
None of these consumers wishes to learn A2A, install an MCP client, or open a chat panel. They expect HTTP and JSON, with the same authentication model that the rest of the company's internal applications use.
This is straightforwardly addressable in Beach. We shall expose three REST endpoints — GET /api/v1/expense-summary, POST /api/v1/claims/:id/mark-paid, and GET /api/v1/claims/audit — backed by the same orchestrator and handlers that already power the chat interface.
The two shapes
Most REST endpoints in a Beach application fall into one of two shapes:
- Synchronous-lookup endpoints, where the request maps to a single deterministic handler and the response is the handler's result. These are RPC-shaped and need no orchestrator involvement.
- Orchestrator-driven endpoints, where the request triggers a full agent turn — perhaps because the work involves reasoning about typed input, perhaps because side-effecting tools require approval interception, perhaps because the same orchestrator already encapsulates the business logic and one would prefer not to duplicate it.
Both shapes are wired the same way at the inbound; they differ only in what the routed event is dispatched to.
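Concretely, both shapes hand the router the same envelope. A plausible TypeScript shape for it, inferred from the examples later in this article (treat the exact type as illustrative rather than the package's published one):

```typescript
// The envelope a REST route hands to the router. Field names follow the
// examples in this article; the precise type in @cool-ai/beach-core may differ.
interface RoutedEvent {
  source: string;                // the inbound adapter, e.g. 'rest'
  eventType: string;             // e.g. 'expense_summary_requested'
  data: Record<string, unknown>; // payload, including the manifestId to reply to
}

const example: RoutedEvent = {
  source: 'rest',
  eventType: 'expense_summary_requested',
  data: { from: '2025-01-01', to: '2025-03-31', manifestId: 'rest:summary:abc' },
};
```

The only thing the two shapes change is which handler the routing rules send this envelope to.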
Step 1 — The Express adapter
Whether the application is built on Express, Fastify, Hono, or anything else, the principle is identical: a route handler in the HTTP framework dispatches to the router and awaits a result.
The example below uses Express for clarity.
import express from 'express';
import { randomUUID } from 'node:crypto';
import { ManifestRegistry, Manifest } from '@cool-ai/beach-core';

// `router` in the handlers below is the application's existing Beach router
// instance, constructed wherever the rest of the pipeline is wired up.

const app = express();
app.use(express.json());

const manifestRegistry = new ManifestRegistry();

app.use(authenticateMiddleware); // your existing auth
app.use(rateLimitMiddleware); // your existing rate limiter
The manifestRegistry is what allows the HTTP route to wait for the orchestrator's reply; the next section shows how.
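For readers who want a mental model before the routes appear, a minimal stand-in with the same surface as the Manifest and ManifestRegistry used in this article might look like the sketch below. This is an illustration of the semantics, not the implementation in @cool-ai/beach-core:

```typescript
// A minimal stand-in for the Manifest/ManifestRegistry surface used in this
// article: a manifest waits for a set of named slots, fires onComplete when
// all are filled, and fires onTimeout if the deadline passes first.
type ManifestOptions = {
  id: string;
  expected: string[]; // slot names to wait for
  timeoutMs: number;
  onComplete: (filled: Map<string, unknown>) => void;
  onTimeout: () => void;
};

class Manifest {
  readonly id: string;
  private readonly pending: Set<string>;
  private readonly filled = new Map<string, unknown>();
  private readonly timer: ReturnType<typeof setTimeout>;
  private settled = false;

  constructor(private readonly opts: ManifestOptions) {
    this.id = opts.id;
    this.pending = new Set(opts.expected);
    this.timer = setTimeout(() => {
      if (!this.settled) {
        this.settled = true;
        opts.onTimeout();
      }
    }, opts.timeoutMs);
  }

  deliver(slot: string, value: unknown): void {
    if (this.settled || !this.pending.has(slot)) return;
    this.pending.delete(slot);
    this.filled.set(slot, value);
    if (this.pending.size === 0) {
      this.settled = true;
      clearTimeout(this.timer);
      this.opts.onComplete(this.filled);
    }
  }
}

class ManifestRegistry {
  private readonly manifests = new Map<string, Manifest>();
  register(manifest: Manifest): void {
    this.manifests.set(manifest.id, manifest);
  }
  deliver(manifestId: string, slot: string, value: unknown): void {
    this.manifests.get(manifestId)?.deliver(slot, value);
    // A production registry would also evict settled manifests.
  }
}
```

The real registry has more to it (eviction, multi-slot replies), but this is enough to make sense of every route in the rest of the article.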
Step 2 — A synchronous-lookup endpoint
The simplest case is an endpoint that maps to a single deterministic handler. The summary endpoint for the BI team is exactly this: read from the database, compute, return.
app.get('/api/v1/expense-summary', async (req, res) => {
  const { from, to } = req.query as { from?: string; to?: string };
  if (!from || !to) {
    return res.status(400).json({ error: 'from and to are required' });
  }

  // Open a manifest with one expected slot.
  const manifestId = `rest:summary:${randomUUID()}`;
  try {
    const reply = await new Promise<unknown>((resolve, reject) => {
      const manifest = new Manifest({
        id: manifestId,
        expected: ['result'],
        timeoutMs: 10_000,
        onComplete: (filled) => resolve(filled.get('result')),
        onTimeout: () => reject(new Error('timeout')),
      });
      manifestRegistry.register(manifest);
      router.routeEvent({
        source: 'rest',
        eventType: 'expense_summary_requested',
        data: { from, to, manifestId },
      });
    });
    res.json(reply);
  } catch {
    // Without this, a timeout surfaces as an unhandled rejection and the
    // client hangs until it gives up.
    res.status(504).json({ error: 'timed out computing the expense summary' });
  }
});
Wired with a routing rule:
router.loadRoutingConfig({
  rules: [
    /* ... */
    { source: 'rest', eventType: 'expense_summary_requested', handler: 'expense-summary-handler' },
  ],
});
And a handler:
router.register('expense-summary-handler', async (event) => {
  const { from, to, manifestId } = event.data as ExpenseSummaryRequest;
  const summary = await db.expenses.summarise({ from, to });
  manifestRegistry.deliver(manifestId, 'result', summary);
});
The HTTP route opens a manifest, dispatches an event, awaits settlement, and serialises. The handler does the actual work and fills the manifest's slot. The orchestrator was not involved; this is purely a deterministic lookup, faster and cheaper than going via an LLM.
The pattern here — open a manifest, dispatch, await, serialise — is the canonical shape for any synchronous REST handler in Beach. It is mildly more verbose than calling the database directly from inside app.get, and the temptation to skip the router on grounds of efficiency is real. Resist it. Bypassing the router skips audit, replay, observability, and downstream filtering. The marginal cost of routing is microseconds; the architectural value is real.
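Because every synchronous endpoint repeats this shape, it is worth factoring into a helper. The sketch below uses structural stand-in types rather than the real Beach API; in an actual application you would adapt the register call to construct a Manifest and pass your real registry and router:

```typescript
import { randomUUID } from 'node:crypto';

// Structural stand-ins for the pieces the helper touches. The real registry
// takes `new Manifest(options)`; here register accepts the options directly
// to keep the sketch self-contained.
interface RoutedEvent { source: string; eventType: string; data: Record<string, unknown> }
interface ManifestOptions {
  id: string;
  expected: string[];
  timeoutMs: number;
  onComplete: (filled: Map<string, unknown>) => void;
  onTimeout: () => void;
}
interface RegistryLike { register(options: ManifestOptions): void }
interface RouterLike { routeEvent(event: RoutedEvent): void }

// The canonical shape in one place: open a manifest, dispatch, await settlement.
function dispatchAndAwait<T>(
  registry: RegistryLike,
  router: RouterLike,
  eventType: string,
  data: Record<string, unknown>,
  timeoutMs = 10_000,
): Promise<T> {
  const manifestId = `rest:${eventType}:${randomUUID()}`;
  return new Promise<T>((resolve, reject) => {
    registry.register({
      id: manifestId,
      expected: ['result'],
      timeoutMs,
      onComplete: (filled) => resolve(filled.get('result') as T),
      onTimeout: () => reject(new Error('timeout')),
    });
    // The manifestId travels in the event data so the handler can reply.
    router.routeEvent({ source: 'rest', eventType, data: { ...data, manifestId } });
  });
}
```

With a helper like this, the summary route collapses to a validation check, one dispatchAndAwait call, and res.json.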
Step 3 — An orchestrator-driven endpoint
The mark-paid endpoint is more interesting. Marking a claim as paid is a side effect with real consequences: the ledger entry must be created, the employee notified, the audit log updated. The endpoint could be implemented as a deterministic handler — and many teams will choose that — but in this application the orchestrator already encapsulates the policy ("only finance-team members may mark a claim paid; only claims in approved status may be marked; the notification to the employee should match their preferred channel"). Reimplementing that in a REST handler would duplicate work the orchestrator already does correctly for the chat channel.
The REST request therefore routes through the orchestrator as though it were any other channel:
app.post('/api/v1/claims/:id/mark-paid', async (req, res) => {
  const { id: claimId } = req.params;
  const { paidAt, reference } = req.body as { paidAt?: string; reference?: string };
  const userId = req.user.id;
  const sessionId = `rest:user:${userId}`; // one rolling session per REST caller
  const turnId = randomUUID();
  const manifestId = `rest:mark-paid:${turnId}`;

  try {
    const reply = await new Promise<{ parts: unknown[]; turnState: string }>((resolve, reject) => {
      const manifest = new Manifest({
        id: manifestId,
        expected: ['main_reply'],
        timeoutMs: 30_000,
        onComplete: (filled) =>
          resolve(filled.get('main_reply') as { parts: unknown[]; turnState: string }),
        onTimeout: () => reject(new Error('timeout')),
      });
      manifestRegistry.register(manifest);
      router.routeEvent({
        source: 'rest',
        eventType: 'message_received',
        data: {
          channelId: 'rest',
          sessionId,
          turnId,
          manifestId,
          userId,
          parts: [{
            partType: 'user-message',
            text: `Mark expense claim ${claimId} as paid${reference ? ` with reference ${reference}` : ''}${paidAt ? ` (paid on ${paidAt})` : ''}.`,
          }],
        },
      });
    });

    // The orchestrator's reply may include a structured result we can return verbatim.
    const data = (reply.parts.find((p: any) => p.partType === 'response') as any)?.data ?? {};
    res.json({ status: reply.turnState, ...data });
  } catch {
    res.status(504).json({ error: 'timed out waiting for the orchestrator' });
  }
});
Two important details:
First, the channelId: 'rest' value is the only place protocol-specific information is named in the routing data; the orchestrator never reads it. The reply-dispatcher in the canonical pipeline needs a batchedChannels entry (or its REST analogue) so the reply lands back in the manifest the HTTP route is awaiting, but otherwise this is identical to the email-channel pattern.
Second, the orchestrator may emit interim respond() calls during its tool loop — the chat channel sees these streamed. A REST endpoint discards interim parts and returns only the settled reply, exactly as email does. See Streaming versus batched edges for the underlying reasoning.
A small "REST output" handler that listens for assistant:reply_ready events with channelId === 'rest' and fills the manifest closes the loop:
router.register('rest-output', async (event) => {
  const { manifestId, parts, turnState } = event.data as RestOutputData;
  manifestRegistry.deliver(manifestId, 'main_reply', { parts, turnState });
});
Wired to the routing as a destination of assistant:reply_ready for the REST channel.
Step 4 — The audit-query endpoint
The audit endpoint sits between the two shapes. The work is a database query — no orchestrator reasoning required — but the response shape is rich and the query parameters are awkward enough to invite a small LLM-driven natural-language query layer in front. The first version, kept deterministic:
app.get('/api/v1/claims/audit', async (req, res) => {
  const filter = parseAuditFilter(req.query);
  const result = await db.expenseAudit.query(filter);
  res.json(result);
});
If, in time, the audit team begins to ask for "show me anything that looks unusual this quarter" rather than precise filters, the endpoint can be refactored to dispatch through an audit-specialist actor in the same shape as the mark-paid endpoint. The HTTP signature does not change; the implementation behind it does.
Authentication, rate limiting, and CORS
A Beach application with a REST surface has the same authentication concerns as any other internal HTTP API. Beach does not impose an authentication model; authenticate at the Express middleware layer (or your hosting platform's reverse proxy) and attach a typed user object to the request. The handler receives the user on event.data if you place it there.
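A minimal sketch of such a middleware, with a placeholder verifyToken standing in for whatever your IdP, JWT library, or session store provides (the loose Req/Res types below exist only to keep the sketch self-contained; in an Express app these are Request, Response, and NextFunction):

```typescript
type User = { id: string; roles: string[] };

// Loose structural types so the sketch stands alone.
interface Req { headers: Record<string, string | undefined>; user?: User }
interface Res { status(code: number): Res; json(body: unknown): void }

// Placeholder: substitute your real token verification (JWT, session, IdP).
function verifyToken(token: string): User | null {
  return token === 'valid-token' ? { id: 'u1', roles: ['finance'] } : null;
}

function authenticateMiddleware(req: Req, res: Res, next: () => void): void {
  const header = req.headers['authorization'] ?? '';
  const token = header.startsWith('Bearer ') ? header.slice('Bearer '.length) : null;
  const user = token ? verifyToken(token) : null;
  if (!user) {
    res.status(401).json({ error: 'unauthenticated' });
    return;
  }
  req.user = user; // handlers then read req.user.id and place it on event.data
  next();
}
```

The important property is that authentication finishes before the router is involved: by the time an event is dispatched, the user identity is plain data on the event.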
Rate limiting follows the same advice. Beach's router does not rate-limit; if your REST endpoints could be hammered, fit them with express-rate-limit or equivalent at the HTTP layer.
CORS, similarly, is not a Beach concern. Configure it at the Express layer for the origins that should be permitted to call your REST endpoints from a browser.
Versioning
The advice given in API design textbooks holds: prefix your endpoints with /api/v1/..., plan for the day when /api/v2/... exists, and treat your REST surface's contract as a published version that consumers will pin against. The orchestrator behind the endpoint can change freely; the REST contract is the part that has stability commitments to external consumers.
If a REST endpoint's response shape needs to change, prefer adding fields (which will be ignored by consumers that do not know about them) over changing or removing fields. When a breaking change is unavoidable, expose it under /api/v2/ and keep /api/v1/ working until consumers have migrated.
What to avoid
It is occasionally tempting to expose every routed event as a REST endpoint on the grounds of "completeness." This is a poor discipline. REST endpoints are part of a public contract; once an endpoint exists, removing it is a breaking change. Expose only those capabilities that genuinely warrant external HTTP access; keep everything else internal to the routing layer.
It is similarly tempting to use a single endpoint that takes a free-text query and dispatches to the orchestrator. This collapses to "we have a REST API that is a thin wrapper around the chat interface." If that is what the consumer wants, give it to them, but be precise: it is a REST shape over a conversational backend, and its contract is necessarily looser than a typed REST endpoint. Document the response shape carefully and do not pretend it is more deterministic than it is.
It is a mistake to skip the router when implementing a REST handler "for performance." The router's overhead is negligible; the loss of audit, replay, and observability is not.
Finally, a REST endpoint that runs the orchestrator should set a sensible timeoutMs on its manifest. An orchestrator that hangs because a downstream specialist is slow will hold the HTTP connection open until the client gives up; no consumer team enjoys that experience. Pick a timeout, decide what to return on timeout, and document the behaviour.
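One concrete policy, sketched as a small helper (the error shape is illustrative; it matches the `new Error('timeout')` rejection used in the routes above):

```typescript
// Map a failed manifest wait onto an HTTP response. A 504 signals that the
// orchestrator did not settle within the manifest's timeoutMs and the caller
// may retry; anything else is treated as an internal error.
interface HttpError { status: number; body: { error: string; retryable: boolean } }

function toHttpError(err: unknown): HttpError {
  if (err instanceof Error && err.message === 'timeout') {
    return { status: 504, body: { error: 'orchestrator timed out', retryable: true } };
  }
  return { status: 500, body: { error: 'internal error', retryable: false } };
}
```

An Express route would catch the rejected promise, apply this mapping, and respond with `res.status(mapped.status).json(mapped.body)`; whatever policy you choose, write it down in the endpoint's documentation.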
What you have at this point
A Beach application that simultaneously exposes a chat channel for end users, an A2A peer surface for collaborating agents, and a REST API for conventional system integrations — all driven by the same orchestrator, the same handlers, and the same audit log. The REST surface is not a second-class citizen; it is simply another adapter at the edge.
Where you take this next depends on your integration roadmap. The MCP surface is the natural next addition for tool-shaped capabilities; the webhook surface is the natural addition for fire-and-forget event delivery from upstream systems.
Related
- Exposing your application through multiple protocols — the broader framing for protocol choice.
- Manifests — the synchronous-lookup pattern via Delivery Manifests.
- Streaming versus batched edges — why REST treats interim respond() parts the way email does.
- Being consumed by other applications — the A2A and MCP cases alongside REST.