Getting Started with Beach
Beach is designed to make AI applications durable. It does this by being event-routed, with a protocol-agnostic interior: every cross-component message passes through a router, every asynchronous flow is a manifest awaiting its result, and every channel translates protocol at the edge rather than inside. Most Beach applications place an LLM actor at the centre as the orchestrator, and that is what we will build here. Beach itself does not require one — a purely deterministic Beach application is possible, and the same primitives serve it — but the design intent, and the typical application, is AI-shaped.
The walkthrough below takes about five minutes. It assumes Node 20 or later, an Anthropic API key in the environment, and a Redis instance reachable at localhost:6379 for the streaming reply path. By the end, an HTTP request to a local Express server will have reached the orchestrator, the orchestrator will have called Claude, and the reply will have streamed back over Server-Sent Events.
Install
npm install \
@cool-ai/beach-core \
@cool-ai/beach-session \
@cool-ai/beach-llm \
@cool-ai/beach-starter \
@cool-ai/beach-missives \
@anthropic-ai/sdk \
ioredis
The five Beach packages are the layers we touch in this guide: the router, the session manager, the LLM driver, a starter scaffold of canonical handlers, and a store for the application's missive log. Beach defers the underlying API and pub/sub work to Anthropic's SDK and to ioredis.
The shape we are building
The canonical Beach pipeline reads as follows:
SSE inbound → EventRouter → message-matcher
→ channel-inbound (creates the turn)
→ YOUR ORCHESTRATOR (calls runTurn)
→ reply-dispatcher
→ chat-collector (Redis pub/sub → SSE stream)
@cool-ai/beach-starter ships every handler in that chain except the orchestrator. The orchestrator is the one piece that Beach cannot write for the application, because it embeds the prompt, the tool list, and the domain model. Everything else is generic.
Step 1 — The router and the session manager
import { EventRouter } from '@cool-ai/beach-core';
import { SessionTurnManager } from '@cool-ai/beach-session';
import { InMemoryMissiveStore } from '@cool-ai/beach-missives/stores';
const router = new EventRouter();
const manager = new SessionTurnManager({ router });
const store = new InMemoryMissiveStore();
The EventRouter is the only path along which components in Beach talk to each other; the SessionTurnManager runs LLM turns inside that routing model; the MissiveStore keeps a permanent record of every inbound and outbound message. Three of Beach's central primitives in three lines. A fourth, the ManifestRegistry, joins them as soon as the application needs to gate a batched outbound or assemble parallel research — see Manifests.
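To make the routing model concrete before we use the real thing, here is a toy sketch of the idea: handlers register under a name, rules map an event's source and type to a handler name, and everything goes through a single routeEvent call. This is an illustration only, not Beach's actual EventRouter, whose real surface (contexts, cascades, predicates) lives in @cool-ai/beach-core.

```typescript
// Toy model of the routing idea only. All names here are illustrative;
// Beach's real EventRouter is richer than this.
type BeachEvent = { source: string; eventType: string; data: unknown };
type Handler = (event: BeachEvent) => Promise<void> | void;

class ToyRouter {
  private handlers = new Map<string, Handler>();
  // Maps a "source:eventType" key to a registered handler name.
  private rules = new Map<string, string>();

  register(name: string, handler: Handler): void {
    this.handlers.set(name, handler);
  }

  addRule(source: string, eventType: string, handlerName: string): void {
    this.rules.set(`${source}:${eventType}`, handlerName);
  }

  async routeEvent(event: BeachEvent): Promise<void> {
    const name = this.rules.get(`${event.source}:${event.eventType}`);
    const handler = name ? this.handlers.get(name) : undefined;
    if (handler) await handler(event);
  }
}
```

The point of the sketch is the invariant, not the implementation: no component holds a reference to another component, only to the router, which is what lets the pipeline be rewired declaratively in Step 4.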
Step 2 — Register the orchestrator
import { AnthropicProvider, ToolRegistry } from '@cool-ai/beach-llm';
import Anthropic from '@anthropic-ai/sdk';
import type { TurnRequestedData } from '@cool-ai/beach-starter';
const provider = new AnthropicProvider(new Anthropic());
const tools = new ToolRegistry();
const conciergeConfig = {
id: 'concierge',
model: 'claude-haiku-4-5',
systemPrompt: 'You are Concierge, a friendly assistant. Reply concisely.',
tools: [],
};
router.register('my-orchestrator', async (event, context) => {
const { channelId, sessionId, turnId, inboundMessage } = event.data as TurnRequestedData;
const respond = await manager.runTurn({
sessionId, turnId, slotKey: 'main',
actorId: 'concierge',
actorConfig: conciergeConfig,
provider,
registry: tools,
inboundMessage,
});
await context.routeEvent({
source: 'assistant',
eventType: 'reply_ready',
data: {
channelId, sessionId, turnId,
parts: respond.parts,
turnState: respond.turnState,
},
});
});
Notice what the orchestrator does not do. It does not parse HTTP requests; it does not format responses for the wire; it does not own the SSE connection; it does not write to the missive store. All of that is the responsibility of the canonical pipeline. The orchestrator's job is the part that no framework could write on its behalf: take a turn, run the LLM against the right prompt and tools, and route the result onward.
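The orchestrator's whole contract with the rest of the pipeline is the shape of the two events it touches. The field names below are taken directly from the handler above; the type definitions themselves are a sketch for orientation, and the nested shape of inboundMessage in particular is an assumption for illustration. The authoritative types ship with @cool-ai/beach-starter.

```typescript
// Sketch of the two event payloads the orchestrator touches. Field names match
// the handler above; treat the structure as illustrative, not authoritative.
interface MessagePart {
  partType: string; // e.g. 'user-message', 'response'
  text: string;
}

// What the orchestrator receives on session:turn_requested.
interface TurnRequestedData {
  channelId: string;
  sessionId: string;
  turnId: string;
  inboundMessage: { parts: MessagePart[] }; // nested shape assumed here
}

// What it routes onward as reply_ready for the reply-dispatcher.
interface ReplyReadyData {
  channelId: string;
  sessionId: string;
  turnId: string;
  parts: MessagePart[];
  turnState: string;
}
```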
Step 3 — Register the canonical pipeline
import { registerCanonicalHandlers } from '@cool-ai/beach-starter';
import Redis from 'ioredis';
const redis = new Redis();
registerCanonicalHandlers(router, {
orchestratorHandler: 'my-orchestrator',
resolveSession: (data) => data.sessionId as string,
streamingChannels: ['sse'],
chatPublish: async (sessionId, parts) => {
await redis.publish(`reply:${sessionId}`, JSON.stringify(parts));
},
store,
});
registerCanonicalHandlers registers message-matcher, channel-inbound, reply-dispatcher, and chat-collector under the names that the routing configuration expects. The application decides which channels stream (sse) and which batch (email, file-mailbox). One streaming channel is enough for now.
Step 4 — Load the routing configuration
import routingConfig from '@cool-ai/beach-starter/templates/routing.json' with { type: 'json' };
const rules = routingConfig.rules.map((r) =>
r.handler === 'YOUR_ORCHESTRATOR' ? { ...r, handler: 'my-orchestrator' } : r,
);
router.loadRoutingConfig({ rules });
The routing configuration is what wires channel:message_received events to message-matcher, channel:message_matched to channel-inbound, session:turn_requested to the orchestrator, and so on. It is declarative: a reader of the repository can see, without running the code, exactly where every event goes. The template uses the placeholder YOUR_ORCHESTRATOR, and we substitute the handler name we registered above.
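For orientation, a rule set in this style might look like the fragment below. This is an illustrative sketch consistent with the event names above, not the exact template shipped in @cool-ai/beach-starter/templates/routing.json.

```json
{
  "rules": [
    { "when": { "source": "channel", "eventType": "message_received" }, "handler": "message-matcher" },
    { "when": { "source": "channel", "eventType": "message_matched" }, "handler": "channel-inbound" },
    { "when": { "source": "session", "eventType": "turn_requested" }, "handler": "YOUR_ORCHESTRATOR" },
    { "when": { "source": "assistant", "eventType": "reply_ready" }, "handler": "reply-dispatcher" }
  ]
}
```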
Step 5 — Wire the SSE inbound and outbound
import express from 'express';
const app = express();
app.use(express.json());
app.post('/chat', async (req, res) => {
const { sessionId, text } = req.body;
await router.routeEvent({
source: 'channel',
eventType: 'message_received',
data: {
channelId: 'sse',
sessionId,
parts: [{ partType: 'user-message', text }],
},
});
res.json({ ok: true });
});
app.get('/events', async (req, res) => {
const { sessionId } = req.query;
res.setHeader('Content-Type', 'text/event-stream');
res.setHeader('Cache-Control', 'no-cache');
res.setHeader('Connection', 'keep-alive');
res.flushHeaders(); // send the SSE headers now, so the client sees the stream open before the first reply
const sub = new Redis();
await sub.subscribe(`reply:${sessionId}`);
sub.on('message', (_channel, payload) => {
res.write(`data: ${payload}\n\n`);
});
req.on('close', () => sub.disconnect());
});
app.listen(3000, () => console.log('Beach running on :3000'));
The inbound POST /chat route translates an HTTP request into a routed event and returns at once. The outbound GET /events route subscribes to Redis and streams whatever chat-collector publishes back to the browser. Both routes are thin: the protocol-handling code lives at the edge and the orchestration logic lives in the interior, separated by the router.
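The wire format the GET /events route emits is plain SSE framing: each message is a `data:` line followed by a blank line. A small, self-contained parser for that framing, independent of Beach entirely, is a useful sanity check when debugging the stream from a client that buffers chunks:

```typescript
// Parse complete SSE frames ("data: <payload>\n\n") out of a text buffer.
// Returns the decoded payloads plus whatever trailing partial frame remains,
// so the caller can carry the remainder into the next chunk.
function parseSseFrames(buffer: string): { payloads: string[]; rest: string } {
  const payloads: string[] = [];
  let rest = buffer;
  let idx: number;
  while ((idx = rest.indexOf('\n\n')) !== -1) {
    const frame = rest.slice(0, idx);
    rest = rest.slice(idx + 2);
    for (const line of frame.split('\n')) {
      if (line.startsWith('data: ')) payloads.push(line.slice(6));
    }
  }
  return { payloads, rest };
}
```

Feeding it the stream from Step 6, `parseSseFrames('data: [{"partType":"response","text":"Hi, friend!"}]\n\n')` yields one payload ready for `JSON.parse`, with an empty remainder.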
Step 6 — Send a message
# Terminal 1: subscribe to replies for session 'demo'
curl -N "http://localhost:3000/events?sessionId=demo"
# Terminal 2: send a message
curl -X POST http://localhost:3000/chat \
-H "Content-Type: application/json" \
-d '{"sessionId":"demo","text":"Say hello in three words."}'
Terminal 1 should print something like:
data: [{"partType":"response","text":"Hi, friend!"}]
That is the full pipeline at work. HTTP arrives, the router hands it through the canonical handlers to the orchestrator, the orchestrator calls Claude, and the response routes back through reply-dispatcher and chat-collector, across Redis, and out through the SSE response to the terminal.
What we have built
The application now has an event-routed core where every message passes through the router; a session manager that holds the turn lifecycle; a missive store that records every message for audit and replay; and a streaming SSE outbound that a future consumer could replace with WebSockets, with voice, or with anything else without the orchestrator's prompt or tool list changing by a single character.
That last point is worth dwelling on. The orchestrator does not know that it is being talked to by SSE. It would not know if the channel were swapped for IMAP, or for an A2A peer, or for an MCP client. The protocol-agnostic interior is one of the things Beach is for; we have just exercised it without thinking about it.
Where to go from here
- Using the starter scaffold — the full options surface for registerCanonicalHandlers, including batched channels and specialist research.
- Creating routing rules — the shape of routing.json, when predicates, and cascade rules.
- Setting up email — swap SSE for IMAP and SMTP, gated by a Delivery Manifest.
- Pushing results to the user — the inject pattern for background research that the user should see.
- When to use deterministic handlers and when to use LLMs — for the next thing added to the pipeline.