Your First Actor

An actor in Beach is what other systems would call an LLM agent: a particular model, configured with a particular system prompt, given a particular list of tools, and expected to produce its output by calling a tool whose schema Beach defines. The discipline that distinguishes a Beach actor from any other LLM invocation is that the actor does not produce free text. It produces structured parts and a turn-state marker, through a tool called respond(), which Beach injects into the actor's tool list automatically.

The reasons for that discipline are set out elsewhere — chiefly in Reference: design principles under principles 2.3 and 2.8 — and we will not rehearse them here. What this article does is walk through the construction of a minimum working actor, with enough surrounding context that you can decide where in your application a similar actor would belong.

For the broader question of when to reach for an actor at all rather than a deterministic handler, see Actors versus handlers.

The minimum

import { ToolRegistry, AnthropicProvider, callActor } from '@cool-ai/beach-llm';
import { respondToolSnippet, turnStatesSnippet } from '@cool-ai/beach-llm';
import Anthropic from '@anthropic-ai/sdk';

const tools    = new ToolRegistry();
const provider = new AnthropicProvider(new Anthropic());

const concierge = {
  id: 'concierge',
  model: 'claude-haiku-4-5',
  systemPrompt: [
    respondToolSnippet,
    turnStatesSnippet,
    'You are Concierge, a friendly assistant for a travel-planning application. Reply concisely.',
  ].join('\n\n'),
  tools: [],
};

const result = await callActor({
  config:    concierge,
  messages:  [{ role: 'user', content: 'Suggest a quiet beach destination in Europe.' }],
  provider,
  registry:  tools,
  sessionId: 'demo',
  slotKey:   'main',
});

console.log(result.respond.parts);
console.log(result.respond.turnState);

That is a working actor. The two snippets at the top of the system prompt — respondToolSnippet and turnStatesSnippet — teach the LLM the contract that Beach expects every actor to follow. The first explains the shape of the respond() tool and the parts the LLM must produce by calling it; the second enumerates the valid turnState values and what each implies for the application around the actor. Both are required: an actor whose system prompt omits them will, sooner or later, emit free text or an invalid turn state, and Beach's parser will reject the result.

The output of callActor is a RespondCall whose parts array carries the structured reply and whose turnState field declares whether the actor has finished ('complete'), is waiting on background work ('awaiting'), is asking the user a question ('clarifying'), or is in one of the other documented states. Downstream code branches on the turn state to decide what happens next.
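The branching that the last sentence describes can be sketched as a plain switch. This is a hypothetical application-side helper, not part of the Beach API, and it covers only the three states named above:

```typescript
// Hypothetical downstream helper: decide what the application does next
// based on the turn state an actor returned. Only the three states named
// in the text are covered; real code would handle the full documented set.
type TurnState = 'complete' | 'awaiting' | 'clarifying';

function nextAction(turnState: TurnState): string {
  switch (turnState) {
    case 'complete':
      return 'deliver-reply';   // the actor has finished; send its parts onward
    case 'awaiting':
      return 'hold-turn-open';  // background work is pending; keep the turn alive
    case 'clarifying':
      return 'prompt-user';     // the actor asked the user a question
  }
}
```

The action names are illustrative; the point is only that each turn state maps to one well-defined next step in the surrounding application.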

Adding tools to the actor

A tool, here, is a function that the actor may call partway through its turn — to fetch data, to perform a calculation, to consult a partner agent, or to take any other action that the LLM cannot do on its own. Tools are registered with the ToolRegistry and listed by name in the actor's tools array.

tools.register({
  name: 'search-destinations',
  description: 'Search the destination database for matches by climate and price band.',
  scope: 'specialist',
  inputSchema: {
    type: 'object',
    properties: {
      climate:     { type: 'string', enum: ['warm', 'cold', 'mild'] },
      maxPriceGbp: { type: 'number' },
    },
    required: ['climate'],
  },
  handler: async (args, ctx) => {
    // `db` stands in for your application's own data layer.
    return await db.destinations.find(args);
  },
});

const concierge = {
  /* ... */
  tools: ['search-destinations'],
};

When the actor decides to call the tool, callActor invokes the handler, feeds the result back into the actor's loop as a tool result, and continues until the actor calls respond(). The actor may call as many tools as it likes — and the same tool repeatedly — before settling.
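Conceptually, the loop that callActor runs can be sketched like this. The names and types are hypothetical; the real implementation is internal to @cool-ai/beach-llm:

```typescript
// Conceptual sketch of the actor loop, not the library's real code.
// Each round trip either executes a tool the model asked for and feeds
// the result back, or terminates when the model calls respond().
type Step =
  | { kind: 'tool'; name: string; args: unknown }
  | { kind: 'respond'; parts: unknown[] };

async function actorLoop(
  askModel: (toolResults: unknown[]) => Promise<Step>,
  runTool: (name: string, args: unknown) => Promise<unknown>,
): Promise<unknown[]> {
  const toolResults: unknown[] = [];
  for (;;) {
    const step = await askModel(toolResults);
    if (step.kind === 'respond') {
      return step.parts; // the actor has settled
    }
    // The actor asked for a tool: run it, record the result, go round again.
    toolResults.push(await runTool(step.name, step.args));
  }
}
```

Nothing in the sketch bounds the number of iterations, which matches the text: the actor may call as many tools as it likes before settling.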

The scope field deserves a word. A tool registered with scope: 'specialist' is private to the calling actor; the call does not pass through the router, and so it is invisible to filtering rules, to audit observers, and to any downstream handlers that might wish to react. A tool registered with scope: 'router' does pass through the router, with all the observability that follows. The choice depends on whether the tool's invocation is something other parts of the application care about. Reference: tool-registry has the longer story.

Structuring the actor's output

The respond() tool will, by default, accept any structured parts the LLM produces. That latitude is appropriate for a general conversational actor whose replies vary in shape; it is wrong for a triage actor, a classifier, or any actor whose output downstream code must consume as typed data. For those cases, tighten the contract with a domainDataSchema on the actor's configuration.

const triage = {
  id:     'triage',
  model:  'claude-haiku-4-5',
  systemPrompt: [
    respondToolSnippet,
    turnStatesSnippet,
    '… the prompt that teaches classification …',
  ].join('\n\n'),
  tools:  [],
  domainDataSchema: {
    type: 'object',
    properties: {
      class:     { enum: ['question', 'complaint', 'junk'] },
      reasoning: { type: 'string' },
    },
    required: ['class'],
  },
};

const result   = await callActor({ config: triage, /* … */ });
const decision = result.respond.parts.find((p) => p.partType === 'response')?.data;
// decision satisfies { class: 'question' | 'complaint' | 'junk'; reasoning?: string }

Beach embeds the schema in the respond() tool's input schema at invocation time, so the LLM is forced to produce conforming output. Downstream handlers can then read the typed data field directly, without recourse to free-text parsing. If the LLM somehow produces output that does not conform, callActor returns a parse error rather than silently delivering ill-shaped data; what happens next depends on the onParseError strategy that has been configured. The default retries once, with a corrective tool result.
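Even with the schema enforced at generation time, a cheap runtime guard can be worth having at the boundary where the decision crosses into code that must trust its shape. A sketch, mirroring the domainDataSchema above; the guard is application code, not part of Beach:

```typescript
// Application-side type guard mirroring the triage domainDataSchema.
// Beach constrains the LLM's output against the schema; this is a
// belt-and-braces check before downstream code relies on the shape.
type TriageDecision = {
  class: 'question' | 'complaint' | 'junk';
  reasoning?: string;
};

function isTriageDecision(value: unknown): value is TriageDecision {
  if (typeof value !== 'object' || value === null) return false;
  const v = value as Record<string, unknown>;
  const classOk =
    v.class === 'question' || v.class === 'complaint' || v.class === 'junk';
  const reasoningOk =
    v.reasoning === undefined || typeof v.reasoning === 'string';
  return classOk && reasoningOk;
}
```

After the guard, TypeScript narrows the value to TriageDecision, so the branch on decision.class needs no further casting.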

What not to do

A handful of mistakes recur often enough to merit naming.

Do not parse free text from the actor's reply. If you find yourself reaching for a regular expression against result.respond.parts[0].text to extract a field that the actor "should have" included, the contract is broken. Either the field belongs in domainDataSchema and the actor should be required to produce it as typed data, or the work the actor is doing should be split — a specialist actor for the typed extraction, a generalist actor for the conversational reply that follows.

Do not list respond() in the actor's tools array. Beach injects it automatically; listing it again throws an error at registration time. The same applies to registering a tool whose name is respond.

Do not omit the respondToolSnippet and turnStatesSnippet from the system prompt. The LLM has no innate knowledge of Beach's discipline; the snippets are how the LLM learns it. The failure mode when they are absent is easy to miss at authoring time: the actor produces what looks like a reasonable reply, but as free text, and callActor returns a parse error at runtime.

Do not set temperature: 0 and assume that the actor's behaviour is now deterministic. LLMs at temperature zero are more consistent, but they are not deterministic in any rigorous sense, and downstream code should not rely on the assumption.

Wiring an actor into the canonical pipeline

The callActor example above is fine for one-off invocations and for tests. A production application that handles real channel traffic wires the actor into the canonical pipeline as the orchestrator. The orchestrator handler is the small piece of code that calls runTurn — the session-aware equivalent of callActor — and emits the resulting reply as a routed event.

import { SessionTurnManager } from '@cool-ai/beach-session';
import { EventRouter } from '@cool-ai/beach-core';
import type { TurnRequestedData } from '@cool-ai/beach-starter';

const router         = new EventRouter();
const sessionManager = new SessionTurnManager({ router });

router.register('my-orchestrator', async (event, context) => {
  const { sessionId, turnId, channelId, inboundMessage } = event.data as TurnRequestedData;

  const respond = await sessionManager.runTurn({
    sessionId,
    turnId,
    slotKey:    'main',
    actorId:    concierge.id,
    actorConfig: concierge,
    provider,
    registry:   tools,
    inboundMessage,
  });

  await context.routeEvent({
    source:    'assistant',
    eventType: 'reply_ready',
    data: {
      sessionId,
      turnId,
      channelId,
      parts:     respond.parts,
      turnState: respond.turnState,
    },
  });
});

runTurn is callActor with the session-state machinery added: it records the turn in the session manager, it holds the turn open while turnState is 'awaiting', and it supports inject() for re-injecting background results. For the full pipeline that surrounds this orchestrator handler, see Getting Started.

Related