Actors vs Handlers — When to Use Each

A Beach application is a collection of participants — handlers and actors — coordinated by the router. Most pieces of work could plausibly be either an LLM actor (an ActorConfig invoked through runTurn or callActor) or a deterministic handler (a function registered with router.register). The choice has real consequences for latency, cost, predictability, and how easily the application can be debugged.

This guide gives criteria for each, with worked examples.

Quick test

Three questions, asked of the work in front of you:

  1. Is the input shape stable? A typed payload with predictable fields leans toward a handler; free text with implicit intent leans toward an actor.
  2. Is the output a single right answer or a judgement call? A single right answer (database lookup, calculation, sort, filter, format) leans toward a handler; a judgement (which is best, what should we do, is this a complaint or a request) leans toward an actor.
  3. Could a non-AI engineer implement this in 50 lines of TypeScript? If yes, handler. If they would be writing prompts, validators, and prompt-injection checks, actor.

Three handler-leaning answers point at a handler; three actor-leaning ones at an actor; a mix usually means the work needs to be split into two participants.

When to use a deterministic handler

Scheduling and routing decisions

A user message arrives with priority: 'high'. Should it go to the urgent triage actor or the normal one? That is a routing rule with a when predicate, not an LLM call.

router.loadRoutingConfig({
  rules: [
    { source: 'user', eventType: 'message', handler: 'urgent-triage',
      when: { payload: { priority: { equals: 'high' } } } },
    { source: 'user', eventType: 'message', handler: 'concierge' },
  ],
});

If the priority field is a structured input — set by a webhook or a UI control — this is deterministic. If "is this urgent?" requires reading the message body and judging, that is an actor's job: a small fast classifier. See "Hybrid" below.

Data fetches and lookups

router.register('fetch-user-bookings', async (event) => {
  const userId = (event.data as { userId: string }).userId;
  return await db.bookings.findByUser(userId);
});

Database queries, REST calls, file reads. The orchestrator may call this as a tool via the respond() flow, but the implementation is deterministic.

Aggregation, formatting, computation

A handler that takes 12 hotel objects and returns the 3 with the best price-to-rating ratio. A handler that converts a Missive into a CRM activity record. A handler that calculates a quote total.

Side-effecting work is fine here too — sending an SMS, writing a row, marking a record as confirmed — provided the inputs are typed and the outcome is predictable.
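
A minimal sketch of the first example, assuming the same router.register signature used above and a hypothetical hotel shape:

router.register('rank-hotels', async (event) => {
  const { hotels } = event.data as {
    hotels: Array<{ name: string; price: number; rating: number }>;
  };
  // Lower price per rating point is better; return the top three.
  return [...hotels]
    .sort((a, b) => a.price / a.rating - b.price / b.rating)
    .slice(0, 3);
});

Typed input, a single right answer, well under 50 lines of TypeScript: all three quick-test questions point at a handler.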

Approval policy enforcement

router.register('check-spend-policy', async (event) => {
  const { amount, currency, userId } =
    event.data as { amount: number; currency: string; userId: string };
  const limit = await db.policies.getLimit(userId, currency);
  return { approved: amount <= limit };
});

Do not ask an LLM "is this within policy?" — ask the LLM to propose the action, and then check policy deterministically. The actor is the orchestrator; the handler is the gate.
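
One way to wire the gate, assuming the orchestrator emits a hypothetical charge_proposed event for every charge it wants to make:

router.loadRoutingConfig({
  rules: [
    // Every proposed charge passes through the deterministic gate before anything is spent.
    { source: 'assistant', eventType: 'charge_proposed', handler: 'check-spend-policy' },
  ],
});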

When to use an LLM actor

Free-text understanding

A user typed a paragraph. What did they actually want? An LLM is the only thing that handles that without a brittle parsing tower. This is the orchestrating actor's job.

const conciergeConfig = {
  id: 'concierge',
  model: 'claude-haiku-4-5',
  systemPrompt: '…',
  tools: ['fetch-user-bookings', 'search-flights', 'create-quote'],
};

The LLM reads "I want a quiet hotel near a metro station for under £200" and decides what tool calls to make. None of that decomposition can be expressed as a routing rule.
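
Invoking it might look like the sketch below; the exact runTurn signature is assumed here, not taken from the reference.

const turn = await runTurn(conciergeConfig, {
  source: 'user',
  eventType: 'message',
  data: { text: 'I want a quiet hotel near a metro station for under £200' },
});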

Composing prose

The user asks "what tasks do I have?" and the application has a list of twelve. A deterministic handler can produce JSON; only an LLM can write "You've got two due today, four scheduled this week, and six older items I'd suggest you triage." Prose-shaped output is an actor's work.

The constraint: the LLM must call respond() with structured parts — text plus optional structured data — never raw text. Beach enforces this; see Reference: respond-tool.

Triage and classification (when the input is unstructured)

"Is this email a complaint, a question, or junk?" can be a small fast actor (Haiku-class, low temperature, narrow prompt). The output is a structured decision the next handler reads.

const triageConfig = {
  id: 'email-triage',
  model: 'claude-haiku-4-5',
  systemPrompt: 'Classify the following email as: complaint | question | junk | other. Reply only with the class.',
  tools: [],
  domainDataSchema: {
    type: 'object',
    properties: { class: { enum: ['complaint', 'question', 'junk', 'other'] } },
    required: ['class'],
  },
};

The output goes through respond() with domainDataSchema enforcing the shape. Downstream handlers can do if (event.data.class === 'complaint') deterministically.
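
A downstream handler reading that decision is purely deterministic. A sketch, with ticketQueue standing in for whatever escalation system the application uses:

router.register('handle-triaged-email', async (event) => {
  const { class: emailClass } = event.data as { class: string };
  if (emailClass === 'complaint') {
    // Hypothetical escalation target; any typed side effect fits here.
    await ticketQueue.escalate(event.data);
  }
});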

Multi-step planning

"Find me a holiday based on what you remember from previous chats." The actor consults memory tools, picks a search strategy, decides which research specialists to dispatch in parallel, and decides how to react to their results. None of that planning can be a deterministic flow.

When to split the work

The most common application shape is: orchestrating LLM at the top, deterministic handlers as its tools. The LLM decides what to do; the handlers do it.

user message
  → concierge actor (LLM, the orchestrator)
    → search_flights tool       (handler — REST call to a supplier)
    → check_user_preferences    (handler — DB lookup)
    → format_quote              (handler — typed transform)
  → respond() with the assembled answer

The actor is one participant. Each tool is a participant. Beach treats them identically — routeEvent, audit log, replay, all the same.

You can also split by stage:

inbound email
  → triage actor              (LLM — small, fast, narrow prompt)
  → routing rule on `class`   (deterministic — when:equals)
  → concierge actor (complaint path)  OR  archive handler (junk path)

The triage actor turns unstructured input into a structured decision. From that point on, a deterministic routing rule takes over.

Hybrid: LLM-judged inputs to deterministic handlers

interface ScheduleArgs { task: string; deadline: string; priority: string } // deadline is an ISO timestamp

router.register('schedule-task', async (event) => {
  const { task, deadline, priority } = event.data as ScheduleArgs;
  // All three fields came from an actor's structured respond() output.
  // Deterministic from here.
  await taskStore.create({ task, deadline, priority });
});

The LLM did the hard part (interpreting "remind me on Friday morning" → deadline: <ISO timestamp>). The handler stores it. Don't put taskStore.create() in an actor's tool loop with the deadline parsing — separate concerns.
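
The structured output that feeds this handler might look like the following; the values are illustrative:

const exampleArgs: ScheduleArgs = {
  task: 'Send the quarterly report',
  deadline: '2026-03-06T09:00:00.000Z', // "Friday morning", resolved by the actor
  priority: 'normal',
};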

Migration paths

Starting deterministic, becoming LLM-driven

A team starts with a regex that classifies emails. It works for 70% of cases. Edge cases pile up. Replace the regex with an actor, keeping the same input and output shapes and swapping only the implementation. Routing rules and downstream handlers do not change.
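
The before state might look like the sketch below; the migration keeps the { class } output shape and swaps only what sits behind it:

// Before: a regex classifier that covers the easy 70%.
router.register('email-triage', async (event) => {
  const { body } = event.data as { body: string };
  const isComplaint = /refund|broken|unacceptable|complaint/i.test(body);
  return { class: isComplaint ? 'complaint' : 'other' };
});

// After: the email-triage actor from "Triage and classification" returns the same
// { class } shape via respond() and domainDataSchema, so routing rules and
// downstream handlers stay untouched.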

Starting LLM-driven, becoming deterministic

A team starts with an LLM doing the spend-policy check. It hallucinates limits. Replace the LLM with a handler that queries the policy table. The orchestrator still decides what to charge; the deterministic handler decides whether to allow it.

Cost optimisation: extracting work from an expensive actor into cheaper ones

The orchestrator (Sonnet) handles every email. Most of them are simple confirmations that don't need the big model. Add a Haiku triage actor in front that classifies each message as "simple-acknowledge" or "needs-orchestrator"; a routing rule then sends it down the matching path. The simple cases never reach Sonnet.

router.loadRoutingConfig({
  rules: [
    // Triage first
    { source: 'email', eventType: 'received', handler: 'triage-actor' },
    // Triage's output routes to one of two paths
    { source: 'assistant', eventType: 'triage_complete',
      handler: 'auto-acknowledge',
      when: { payload: { class: { equals: 'simple-acknowledge' } } } },
    { source: 'assistant', eventType: 'triage_complete',
      handler: 'concierge',
      when: { payload: { class: { equals: 'needs-orchestrator' } } } },
  ],
});

One cheap LLM call per simple email (the Haiku triage) instead of one expensive one (the Sonnet orchestrator). A large cost win at a small accuracy cost.
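
The cheap path stays a plain handler; emailClient and the payload fields here are placeholders:

router.register('auto-acknowledge', async (event) => {
  const { from, subject } = event.data as { from: string; subject: string };
  // Templated reply, no LLM involved: the simple path costs one Haiku call and nothing else.
  await emailClient.send({ to: from, subject: `Re: ${subject}`, template: 'acknowledgement' });
});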

Hybrid example — task management

A productivity assistant that schedules tasks from email:

| Stage | Implementation | Why |
| --- | --- | --- |
| Email arrives | Inbound adapter | Channel concern |
| "Is this actionable?" | Triage actor (LLM, Haiku) | Free text, judgement call |
| If actionable, schedule | Orchestrator (LLM, Sonnet) | Multi-step planning, calendar reasoning |
| Find user's calendar | Handler | DB lookup |
| Pick a time | Tool call from orchestrator | Judgement — informed by calendar handler |
| Create the task | Handler | Typed transform, side effect |
| Send confirmation email | Composer (LLM, Haiku) + Formatter (handler) | Prose + structured |

Six participants in one flow. Each does one thing. Each is the right kind for its job.

Pitfalls

One giant actor that does everything. The "concierge" that fetches data, formats output, applies policy, and writes records works for demos and breaks in production: token costs spiral, the audit trail is opaque, one prompt change breaks half the application. Decompose.

Pretending an LLM is deterministic. "I'll just give it temperature 0 and ask it to multiply." Use a calculator. That LLMs can do arithmetic does not make them the right tool.

Pretending a handler is intelligent. Regex-based intent classification stays charming until the application ships to a second language or a third user persona. Where the input is unstructured prose, an LLM is the right tool.

Skipping the structured-output discipline. An actor whose output is parsed by regex is broken by definition. Use respond() with a domainDataSchema, so the structure is enforced at the API boundary rather than by hopeful string-parsing downstream.

Over-decomposing into too many small actors. Three Haiku calls in a row for triage → routing → formatting can cost more than one Sonnet call that does all three. Measure before splitting.

Related