Migrating to ActorFn

ActorFn is the canonical shape for any turn-advancing piece of work in Beach. Triage classifiers, rules engines, queue runners, database-backed responders, single-call structured-output classifiers — anything that decides what happens next — runs through registerActor and gets the same lifecycle as an LLM actor (cancellation signal, delivery fan-out, observability) without any per-actor engineering.

Beach applications written before registerActor shipped tend to carry one of three pre-migration patterns: a hand-rolled orchestrator that bridges session:turn_requested events to manual actor invocations; a deferred-init wiring dance that resulted from circular imports; a raw provider SDK call inside an event handler that re-implements forced-tool-use against the provider directly. All three migrate cleanly. The benefits — uniform lifecycle, replay-via-evals, mechanical cancellation — fall out of the migration without per-handler engineering.

Why this migration matters

Three failure modes recur in pre-registerActor code:

  • Hand-rolled orchestration plumbing collapses under deferred initialisation. Code that wires session:turn_requested → custom-runner → manual reply events typically has the runner instantiated separately from the router, and consumes it via a deferred-init function to dodge circular imports. The deferred init is fragile under module-load-order edge cases — the structural cause is the construction shape, not the consumer's code.
  • Raw provider SDK callers cannot be replayed. A handler that calls provider.complete(...) or Anthropic.messages.create(...) directly from inside an event handler bypasses Beach's actor pipeline. The call doesn't pass through callActor, doesn't emit events to the audit log, and doesn't register as an actor. beach-evals cannot replay it; beach-inspect cannot show it; cancellation signals don't reach it.
  • Lifecycle plumbing duplicated per handler. Cancellation signals, delivery fan-out, observability — every hand-rolled runner re-implements what the router already does when an ActorFn is registered. The duplication is invisible until a deploy interrupts a handler mid-call and the replay code doesn't quite match what the router expects.

ActorFn + registerActor collapses all three. The migration is mechanical for most call sites; the few that aren't reveal architectural decisions worth surfacing explicitly.

The shape

import { ActorFn, ActorInvokeOptions, RespondCall } from '@cool-ai/beach-core';

const triage: ActorFn = async (opts: ActorInvokeOptions): Promise<RespondCall> => {
  const last = opts.messages[opts.messages.length - 1];
  const text = typeof last?.content === 'string' ? last.content : '';
  const decision = text.length < 20 ? 'too-short' : 'route-to-specialist';
  return {
    parts: [{ partType: 'response', text: `Triage: ${decision}` }],
    turnState: 'complete',
  };
};

router.registerActor('task-triage', triage);

The function signature is (opts: ActorInvokeOptions) => Promise<RespondCall>. The router wraps the call with the same lifecycle every actor gets: cancellation signal, session lookup, and delivery fan-out to session destinations. After settlement, the router emits one delivery:* event per destination in the session — no manual routeEvent call needed.
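To make that lifecycle concrete, here is a minimal, self-contained sketch of what the router does around an ActorFn: invoke, then fan out one delivery event per session destination after settlement. The types and the invokeWithLifecycle helper are local stand-ins for illustration, not the real @cool-ai/beach-core exports.

```typescript
// Stand-in types modelled on the shapes above — assumptions, not the
// actual @cool-ai/beach-core definitions.
type Part = { partType: string; text?: string };
type RespondCall = { parts: Part[]; turnState: 'complete' | 'awaiting_input' };
type ActorInvokeOptions = {
  sessionId: string;
  turnId: string;
  messages: { role: string; content: string }[];
  signal?: AbortSignal;
};
type ActorFn = (opts: ActorInvokeOptions) => Promise<RespondCall>;

// Hypothetical mini-router: invoke the actor, then emit one delivery:*
// event per session destination once the call settles.
async function invokeWithLifecycle(
  fn: ActorFn,
  opts: ActorInvokeOptions,
  destinations: string[],
  emit: (eventType: string, data: unknown) => void,
): Promise<RespondCall> {
  const result = await fn(opts);
  for (const dest of destinations) {
    emit(`delivery:${dest}`, { turnId: opts.turnId, parts: result.parts });
  }
  return result;
}
```

The point of the sketch is the shape, not the implementation: the actor returns a RespondCall and never touches routing; fan-out is the wrapper's job.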

Migration paths

Three starting shapes, three recipes.

Migration 1 — hand-rolled orchestrator

The pre-migration pattern bridges session:turn_requested → custom-runner → manual reply routing via an EventRouter handler.

Before (hand-rolled):

// orchestrator.ts — bridges turn-requested to a custom runner
router.register('orchestrator', async (event, ctx) => {
  const { sessionId, turnId, inboundMessage } = event.data as TurnRequestedData;

  const result = await runConciergeTurn({
    sessionId, turnId,
    actorConfig: conciergeConfig,
    provider,
    registry: tools,
    inboundMessage,
  });

  // Silent recovery if the LLM forgot to call respond()
  const parts = result.parts ?? [{ partType: 'response', text: 'Done.' }];
  const turnState = result.turnState ?? 'complete';

  await ctx.routeEvent({
    source: 'assistant', eventType: 'reply_ready',
    data: { sessionId, turnId, parts, turnState },
  });
});

After (ActorFn via registerActor):

import { createLLMActor } from '@cool-ai/beach-llm';

const conciergeActor = createLLMActor({ actorConfig: conciergeConfig, provider, registry: tools });
router.registerActor('concierge', conciergeActor);

A routing rule ({ source: 'session', eventType: 'turn_requested', handler: 'concierge' }) replaces the custom bridging handler. The router emits delivery events after the actor settles; no manual routeEvent call is needed.

What goes away: the silent-recovery ?? fallback, the manual reply-routing call, the per-turn signal forwarding. The router handles all of it.

What stays: the actor's prompt, tool list, and system instructions. Those are properties of the actor, not the orchestrator.
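The rule-based dispatch that replaces the bridging handler can be pictured with a tiny matcher. The RoutingRule and BeachEvent types and the resolveHandler helper below are illustrative stand-ins; the real rule-registration API may differ.

```typescript
// Stand-in types for illustration — not the real @cool-ai/beach-core shapes.
type RoutingRule = { source: string; eventType: string; handler: string };
type BeachEvent = { source: string; eventType: string; data: unknown };

// The rule from the text above: turn requests go to the 'concierge' actor.
const rules: RoutingRule[] = [
  { source: 'session', eventType: 'turn_requested', handler: 'concierge' },
];

// Resolve which registered actor a given event routes to, or undefined
// when no rule matches.
function resolveHandler(event: BeachEvent): string | undefined {
  return rules.find(
    (r) => r.source === event.source && r.eventType === event.eventType,
  )?.handler;
}
```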

Migration 2 — raw provider SDK classifier

The pre-migration pattern instantiates the provider SDK directly inside an event handler and runs a forced-tool-use loop for a single structured-output decision.

Before (raw provider call):

import Anthropic from '@anthropic-ai/sdk';

router.register('email_triage', async (event, ctx) => {
  const anthropic = new Anthropic();
  const result = await anthropic.messages.create({
    model: 'claude-sonnet-4-6',
    max_tokens: 4096,
    tools: [{ name: 'triage_decision', description: 'Classify the email',
      input_schema: { type: 'object', properties: { class: { type: 'string' } } } }],
    tool_choice: { type: 'tool', name: 'triage_decision' },
    messages: [{ role: 'user', content: event.data.text }],
  });
  // … parse result.content for the tool_use block, regex-extract the decision …
  await ctx.routeEvent({ source: 'triage', eventType: 'classified',
    data: { /* extracted decision */ } });
});

After (ActorFn with callActor):

import { ActorFn } from '@cool-ai/beach-core';
import { callActor } from '@cool-ai/beach-llm';

const emailTriageActor: ActorFn = async (opts) => {
  const result = await callActor({
    config: triageActorConfig,
    provider,
    registry: triageRegistry,
    messages: opts.messages,
    sessionId: opts.sessionId,
    turnId: opts.turnId,
    slotKey: opts.slotKey,
  });
  return {
    parts: [{ partType: 'domain-data', dataType: 'triage-decision',
      data: result.respond.parts.find(p => p.partType === 'domain-data')?.data }],
    turnState: 'complete',
  };
};

router.registerActor('email-triage', emailTriageActor);

The callActor call passes through Beach's actor pipeline (audit, replay, cancellation) rather than going around it. If the classifier really does not need an LLM at all (a regex-and-rules classifier, a database lookup), the ActorFn does the work directly without callActor. Either way the shape is the same — the rest of the system does not know whether the actor ran an LLM or not.
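As a sketch of the non-LLM variant, here is a regex-and-rules classifier in the same ActorFn shape. The types are local stand-ins for the real @cool-ai/beach-core exports, and the rules themselves are invented for illustration.

```typescript
// Stand-in types — assumptions, not the actual framework definitions.
type Part = { partType: string; dataType?: string; data?: unknown };
type RespondCall = { parts: Part[]; turnState: string };
type ActorInvokeOptions = { messages: { role: string; content: string }[] };
type ActorFn = (opts: ActorInvokeOptions) => Promise<RespondCall>;

// Deterministic triage: no provider, no callActor — just rules. The rest of
// the system cannot tell this actor never ran an LLM.
const rulesTriageActor: ActorFn = async (opts) => {
  const text = opts.messages[opts.messages.length - 1]?.content ?? '';
  const decision = /unsubscribe|refund/i.test(text)
    ? 'route-to-billing'
    : /error|crash|bug/i.test(text)
      ? 'route-to-support'
      : 'route-to-inbox';
  return {
    parts: [{ partType: 'domain-data', dataType: 'triage-decision', data: { decision } }],
    turnState: 'complete',
  };
};
```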

Migration 3 — deferred-init wiring (wireCanonicalPipeline)

The pre-migration pattern wraps canonical handler registration in a deferred-init function because the canonical handlers depend on a session manager + chat publisher + orchestrator that transitively close on the router.

Before (deferred init):

// event-router/index.ts — circular import workaround
let pipelineWired = false;
export function wireCanonicalPipeline(deps: {
  chatPublish: ChatPublisher;
  orchestratorHandler: string;
}): void {
  if (pipelineWired) return;
  registerCanonicalHandlers(router, {
    chatPublish: deps.chatPublish,
    orchestratorHandler: deps.orchestratorHandler,
    resolveSession: (data) => data.threadId,
  });
  pipelineWired = true;
}

// index.ts — has to call this at the right moment
wireCanonicalPipeline({ chatPublish, orchestratorHandler: 'concierge' });

After (createBeachStack):

import { createBeachStack } from '@cool-ai/beach-starter';
import { createLLMActor }   from '@cool-ai/beach-llm';

const conciergeActor = createLLMActor({ actorConfig, provider, registry });

const stack = createBeachStack({
  router,
  orchestrator: { id: 'concierge', fn: conciergeActor },
  resolveSession: (data) => data.threadId,
  chatPublish,
});
stack.mount();

The orchestrator.id field names the actor that the canonical inbound pipeline routes session:turn_requested events to; orchestrator.fn is the ActorFn registered and invoked per turn. createBeachStack calls router.registerActor(id, fn) internally on mount(). The construction-order problem disappears because createBeachStack accepts all its dependencies upfront; the deferred-init function and its pipelineWired guard go away.

Failure modes the migration absorbs

The silent-recovery fallback for missing respond()

Pre-migration orchestrators routinely synthesise a fallback when the LLM produces text-only output without a respond() tool call:

const parts = result.parts ?? [{ partType: 'response', text: 'Done.' }];
const turnState = result.turnState ?? 'complete';

The fallback hides a real failure mode — the LLM has dropped its discipline — by inventing parts that the user never got. After the migration, a missing respond() propagates as ActorMissingRespondError, which carries the LLM's actual textBlocks so the consumer can recover the produced content rather than substituting a placeholder. Combined with a tool_choice strategy that forces respond() from iteration 2, the failure is either prevented or recoverable — never silently masked.
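Consumer-side recovery might look like the following sketch. The ActorMissingRespondError class is stubbed locally here and the recoverParts helper is hypothetical; only the error name and its textBlocks payload come from the description above.

```typescript
// Local stand-in for the error described above — the real class ships with
// the framework; textBlocks carries the LLM's actual text output.
class ActorMissingRespondError extends Error {
  constructor(public textBlocks: string[]) {
    super('actor finished without calling respond()');
  }
}

// Hypothetical consumer: recover the content the LLM actually produced
// instead of substituting a placeholder like the old `?? 'Done.'` fallback.
function recoverParts(run: () => { parts: { partType: string; text: string }[] }) {
  try {
    return run().parts;
  } catch (err) {
    if (err instanceof ActorMissingRespondError) {
      // Surface the real text — the failure stays visible in logs while the
      // user still gets what was produced.
      return err.textBlocks.map((text) => ({ partType: 'response', text }));
    }
    throw err; // anything else is a genuine failure
  }
}
```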

The deferred-init circular-import dance

The wireCanonicalPipeline pattern (Migration 3 above) is structural — circular imports between the router and the canonical handlers are unavoidable when each is constructed separately and they reference each other. createBeachStack collapses the construction by accepting all dependencies upfront.

Migration order — recommendation

  1. The orchestrator first. The biggest reduction is here; the pattern is the most visible to operators. Register the LLM actor via registerActor and replace the hand-rolled bridge with a routing rule.
  2. createBeachStack if the codebase has a wireCanonicalPipeline or equivalent deferred-init dance. Closes the circular-import root cause; everything else gets easier afterwards.
  3. Classifier callers that today use raw provider SDK calls. These migrate to ActorFn either with callActor (for LLM-shaped classifiers) or as direct deterministic logic (for non-LLM classifiers). Both shapes are first-class.
  4. Long-running specialists. Last because the migration is more involved (concurrent turn coordination, separate session ids) and the existing pattern is usually already isolating the specialist enough to be replay-safe.

What the migration does NOT change

  • The actor's prompt, tool list, and system instructions. Properties of the actor, not the orchestrator.
  • Routing rules. After the actor settles, the router emits delivery events. Routing rules react to those events exactly as before.
  • Cancellation semantics. The signal cascade is now uniform across LLM and non-LLM actors; no per-actor wiring needed.
  • Filter-and-distribute, manifests, durable execution. All compose with the new shape unchanged.
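On the cancellation point specifically, a long-looping ActorFn only has to check the signal it is handed between units of work. This sketch assumes the invoke options expose a standard AbortSignal and that 'cancelled' is a valid turn state; both are assumptions about the API, not confirmed details.

```typescript
type RespondCall = { parts: { partType: string; text: string }[]; turnState: string };
// Assumption: the invoke options carry a standard AbortSignal; the real
// ActorInvokeOptions field may be named differently.
type ActorInvokeOptions = { signal?: AbortSignal };

const batchActor = async (opts: ActorInvokeOptions): Promise<RespondCall> => {
  for (let i = 0; i < 1000; i++) {
    // Honour the router's cancellation cascade between units of work.
    if (opts.signal?.aborted) {
      return {
        parts: [{ partType: 'response', text: `cancelled after ${i} items` }],
        turnState: 'cancelled', // assumed state name, for illustration
      };
    }
    // ... process item i ...
  }
  return { parts: [{ partType: 'response', text: 'done' }], turnState: 'complete' };
};
```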

Pitfalls

Migrating one classifier in isolation while leaving the orchestrator hand-rolled. Both are still pre-migration shapes; you have cleaned a leaf without cleaning the root. The orchestrator migration is the highest-leverage move; do it first.

Treating ActorFn as "for non-LLM only". It is the canonical shape for any turn-advancer, deterministic or LLM-backed. A classifier that calls an LLM through callActor from inside its function body is a perfectly canonical shape.

Keeping the silent-recovery fallback "just in case". After the migration, a missing respond() should propagate as an error, not a placeholder. The visible failure is the property that lets you fix the prompt or wire tool_choice enforcement.

Skipping createBeachStack because the deferred-init dance "still works". It works until the next deploy trips a module-load-order edge case. The construction shape is the structural cause; the recipe collapses it. Do not carry it forward.

Related