Using the Starter Scaffold

@cool-ai/beach-starter is a small package that wraps the canonical Beach pipeline as a set of reusable handlers. The application registers its orchestrator; the starter handles everything around it.

What the scaffold gives you

Six handlers, registered under canonical names:

Handler                  Consumes                    Emits
message-matcher          channel:message_received    channel:message_matched
channel-inbound          channel:message_matched     session:turn_requested
reply-dispatcher         assistant:reply_ready       streaming:deliver or batched:deliver
chat-collector           streaming:deliver           publishes to the application's streaming transport
response-collector       batched:deliver             writes a missive draft to the store
filter-and-distribute    researcher:results_ready    calls manager.inject()

Alongside the handlers, the package ships a routing.json template that wires those names together.
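The template's exact schema is not shown here, so the key names below (routes, event, handler) are guesses for illustration; the wiring itself follows the handler table, and the YOUR_ORCHESTRATOR placeholder is the one the application substitutes before loading:

```json
{
  "routes": [
    { "event": "channel:message_received", "handler": "message-matcher" },
    { "event": "channel:message_matched",  "handler": "channel-inbound" },
    { "event": "session:turn_requested",   "handler": "YOUR_ORCHESTRATOR" },
    { "event": "assistant:reply_ready",    "handler": "reply-dispatcher" },
    { "event": "streaming:deliver",        "handler": "chat-collector" },
    { "event": "batched:deliver",          "handler": "response-collector" },
    { "event": "researcher:results_ready", "handler": "filter-and-distribute" }
  ]
}
```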

The minimum wiring

import { EventRouter } from '@cool-ai/beach-core';
import { SessionTurnManager } from '@cool-ai/beach-session';
import { registerCanonicalHandlers } from '@cool-ai/beach-starter';
import { InMemoryMissiveStore } from '@cool-ai/beach-missives/stores';
import routingConfig from '@cool-ai/beach-starter/templates/routing.json' with { type: 'json' };

const router = new EventRouter();
const manager = new SessionTurnManager({ router });
const store = new InMemoryMissiveStore();

router.register('my-orchestrator', /* the orchestrator handler */);

registerCanonicalHandlers(router, {
  orchestratorHandler: 'my-orchestrator',
  resolveSession: (data) => data.sessionId as string,
  streamingChannels: ['sse'],
  chatPublish: async (sessionId, parts) => { /* publish to the streaming transport */ },
  store,
});

router.loadRoutingConfig(/* the routing template with YOUR_ORCHESTRATOR substituted */);

The orchestrator is the only handler the application writes itself. It receives session:turn_requested events and emits assistant:reply_ready once runTurn() returns.
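A minimal sketch of that one handler, against the contract just described. The handler signature, the Emit callback, and the runTurn stand-in are assumptions for illustration; only the two event names come from the pipeline contract.

```typescript
// The payload shape of session:turn_requested and the emit callback
// are assumptions; consult @cool-ai/beach-core for the real types.
type TurnRequested = { sessionId: string; inboundMessage: string };
type Emit = (event: string, data: unknown) => void;

// Stand-in for the application's own turn runner (hypothetical).
async function runTurn(sessionId: string, message: string): Promise<string[]> {
  return [`echo: ${message}`]; // placeholder reply parts
}

async function myOrchestrator(data: TurnRequested, emit: Emit): Promise<void> {
  // Consume session:turn_requested, run the turn to completion...
  const parts = await runTurn(data.sessionId, data.inboundMessage);
  // ...and emit assistant:reply_ready once runTurn() returns.
  emit('assistant:reply_ready', { sessionId: data.sessionId, parts });
}
```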

All registerCanonicalHandlers options

interface CanonicalHandlerOptions {
  orchestratorHandler: string;
  resolveSession: (data: InboundEventData) => string | Promise<string>;

  streamingChannels?: string[];
  batchedChannels?: string[];
  chatPublish?: (sessionId: string, parts: MissivePart[]) => Promise<void>;
  store?: MissiveStore;

  filterAndDistribute?: {
    manager: SessionTurnManager;
    summarize: (data: unknown) => Message | Promise<Message>;
  };
}

The fields are independent. A chat-only application reaches for orchestratorHandler, resolveSession, streamingChannels, and chatPublish. An email-only application drops streamingChannels and chatPublish, and adds batchedChannels and store. A multi-channel application sets all of them.
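The email-only configuration, for instance, reduces to four fields; a sketch, reusing the thread-by-In-Reply-To strategy shown below and a SQLite store (the file path is illustrative):

```typescript
import { registerCanonicalHandlers } from '@cool-ai/beach-starter';
import { SqliteStore } from '@cool-ai/beach-missives/stores';

// Email-only: no streaming options, batched delivery plus a durable store.
registerCanonicalHandlers(router, {
  orchestratorHandler: 'my-orchestrator',
  resolveSession: (data) => (data.inReplyTo ?? data.origin?.messageId) as string,
  batchedChannels: ['email'],
  store: new SqliteStore('./missives.db'),
});
```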

resolveSession

Beach asks the application to map an inbound event to a sessionId, because the strategy is genuinely application-specific. A few examples illustrate the spread:

// SSE — the sessionId arrives from the HTTP session, embedded in the inbound payload
resolveSession: (data) => data.sessionId as string

// Email — thread by RFC 5322 In-Reply-To, fall back to the origin messageId
resolveSession: (data) => (data.inReplyTo ?? data.origin?.messageId) as string

// File-mailbox — derive from the filename
resolveSession: (data) => path.basename(data.filename as string, '.txt')

// Anything async — load from a database
resolveSession: async (data) => {
  const userId = data.userId as string;
  return await db.getActiveSession(userId) ?? createSession(userId);
}

streamingChannels and chatPublish

Streaming channels — SSE, WebSockets, voice — deliver the reply as it is produced. The reply-dispatcher routes their replies to streaming:deliver, and chat-collector invokes chatPublish to forward parts onto whichever transport the application uses.

The common pattern is Redis pub/sub:

chatPublish: async (sessionId, parts) => {
  await redis.publish(`reply:${sessionId}`, JSON.stringify(parts));
}

The SSE endpoint subscribes to reply:<sessionId> and forwards each payload as an event-stream chunk.
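On the endpoint side, the forwarding step reduces to wrapping each published payload in an event-stream frame. The helper below is hypothetical, not part of the starter; it only encodes the SSE wire format (data: lines terminated by a blank line, one data: prefix per payload line).

```typescript
// Wrap a payload received from reply:<sessionId> as a Server-Sent
// Events frame. The subscriber callback would pass the result to
// res.write() on the open event-stream response.
function toSseFrame(payload: string, event = 'reply'): string {
  const dataLines = payload
    .split('\n')
    .map((line) => `data: ${line}`)
    .join('\n');
  return `event: ${event}\n${dataLines}\n\n`;
}
```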

batchedChannels and store

Batched channels — email, SMS, file-mailbox — hold the reply until the turn settles, then send a single message. The reply-dispatcher routes to batched:deliver, and response-collector writes a missive draft to the store. The outbound edge (the IMAP/SMTP wrapper, say) reads the store inside Manifest.onComplete and sends from there.

The choice of store depends on the application's durability needs:

import { InMemoryMissiveStore } from '@cool-ai/beach-missives/stores'; // tests
import { JsonFileStore }     from '@cool-ai/beach-missives/stores';    // prototypes
import { SqliteStore }       from '@cool-ai/beach-missives/stores';    // single-node production
import { RedisStore }        from '@cool-ai/beach-missives/stores';    // multi-node production
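The read-on-complete step at the outbound edge can be illustrated with an in-memory stub. Everything here is a stand-in: the saveDraft/takeDraft accessors and the onComplete signature are assumptions for illustration, not the real MissiveStore interface from @cool-ai/beach-missives.

```typescript
// Illustrative only: the accessor names and onComplete shape are
// stand-ins for whatever MissiveStore and the channel wrapper expose.
interface MissiveDraft { sessionId: string; subject: string; body: string }

class StubStore {
  private drafts = new Map<string, MissiveDraft>();
  saveDraft(draft: MissiveDraft): void { this.drafts.set(draft.sessionId, draft); }
  takeDraft(sessionId: string): MissiveDraft | undefined {
    const draft = this.drafts.get(sessionId);
    this.drafts.delete(sessionId); // send-once: clear the draft after reading
    return draft;
  }
}

// The outbound edge (the IMAP/SMTP wrapper, say) reads the settled
// draft once the turn completes and hands it to its send function.
function onComplete(sessionId: string, store: StubStore, send: (d: MissiveDraft) => void): void {
  const draft = store.takeDraft(sessionId);
  if (draft) send(draft);
}
```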

filterAndDistribute

Optional. When the orchestrator hands work off to specialist research that runs out of band — a background agent, an MCP peer call, a durable workflow — each result fires researcher:results_ready. The filter-and-distribute handler does two things with it:

  1. It persists the full result to the missive store, so the audit trail keeps the unsummarised data.
  2. It calls manager.inject() with a summarised version, which feeds back into the orchestrator's awaiting turn.

filterAndDistribute: {
  manager,
  summarize: async (raw) => ({
    role: 'user',
    content: `Research result: ${JSON.stringify(raw).slice(0, 500)}`,
  }),
}

The orchestrator's LLM sees the summary as a tool result and decides whether to reply, to ask for more, or to keep waiting. See pushing results to the user for the longer story on why the result routes through the orchestrator rather than landing in the user's view directly.

Multi-channel example

registerCanonicalHandlers(router, {
  orchestratorHandler: 'my-orchestrator',
  resolveSession: (data) => {
    if (data.channelId === 'sse')   return data.sessionId as string;
    if (data.channelId === 'email') return (data.inReplyTo ?? data.origin?.messageId) as string;
    throw new Error(`Unknown channelId: ${data.channelId}`);
  },
  streamingChannels: ['sse'],
  batchedChannels:   ['email'],
  chatPublish: async (sessionId, parts) => { await redis.publish(`reply:${sessionId}`, JSON.stringify(parts)); },
  store: new SqliteStore('./missives.db'),
  filterAndDistribute: {
    manager,
    summarize: async (raw) => ({ role: 'user', content: summarise(raw) }),
  },
});

One application, two channels, one orchestrator. The orchestrator remains channelId-blind: it sees an inboundMessage and emits parts, with no idea whether the reply is going to a chat panel or to an inbox.

Extending the scaffold

The scaffold is the canonical pipeline. Use it as it stands. Replacing one of the six canonical handlers with a custom implementation is, almost always, the wrong move: the handler is the contract Beach asks every application to keep, and an application that swaps it out has chosen to discard the architecture. At that point there is little reason to be using Beach at all.

The instinct to replace a canonical handler is, more often than not, a misdiagnosis. Ask first whether what is wanted is an edge component the pipeline does not yet have — a Channel Formatter at the outbound edge, a Composer specialist that writes channel-shaped prose around the orchestrator's structured output, a fresh inbound adapter for a new transport. Those are additions; the pipeline keeps its shape.

Three cases that look at first sight like reasons to leave the scaffold behind, and the additions that handle them properly:

  • Human-in-the-loop approvals. The session manager already supports a 'suspended' turn state for tools marked requiresApproval: true. The orchestrator suspends, the UI renders the approval request, the user's decision routes back as an approval-response event, and the existing scaffold resumes the turn. No custom orchestrator is needed; the discipline is in the four-part flow described in Reference: tool-registry.
  • Channel-specific reply formatting. A reply that needs to read as a chat message in one channel and as a formal email in another should not be reformatted upstream of the outbound edge. The orchestrator emits structured parts; a Channel Formatter at the edge — a small deterministic component, registered alongside the SMTP wrapper — converts those parts into the channel-native artefact. The orchestrator stays channel-blind. See the channel-aware-orchestrators entry in Anti-patterns.
  • An application that is both a chat agent and an A2A peer. Add an A2A inbound adapter alongside the chat inbound; both feed the same router and the same canonical handlers. No replacement is required, and the orchestrator does not learn what protocol the request arrived on. See Being consumed by other applications.
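The Channel Formatter from the second case reduces to a pair of pure, deterministic functions over the orchestrator's structured parts. The MissivePart shape below is an assumption for illustration; the point is that the same parts go in and a channel-native artefact comes out, with the orchestrator none the wiser.

```typescript
// Assumed part shape for illustration; the real type ships with
// @cool-ai/beach-missives.
interface MissivePart { kind: 'text' | 'heading'; content: string }

// Chat: compact, headings folded into inline emphasis markers.
function formatForChat(parts: MissivePart[]): string {
  return parts
    .map((p) => (p.kind === 'heading' ? `*${p.content}*` : p.content))
    .join('\n');
}

// Email: the same parts wrapped in a formal salutation and signature.
function formatForEmail(parts: MissivePart[]): string {
  const body = parts
    .map((p) => (p.kind === 'heading' ? p.content.toUpperCase() : p.content))
    .join('\n\n');
  return `Hello,\n\n${body}\n\nRegards,\nThe assistant`;
}
```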

In each case the addition sits at the edge and the canonical handlers stay where they are. The shape of the pipeline is the architecture; the desire to reach into the centre is, in itself, the diagnosis that an edge component is missing.

Related