Agents

An agent is an LLM-powered orchestrator that binds a model, system prompt, and set of tools into a callable unit.

Declaration

cleat
agent Reviewer {
    model: "claude-sonnet-4-20250514"
    system: "You are a code reviewer. Analyze diffs and provide structured verdicts."
    tool: [fetch_diff, analyze_complexity]
    max_turns: 10
    needs { llm, net }
}

Fields

| Field | Required | Description |
|-------|----------|-------------|
| model: | yes | LLM model identifier string |
| system: | yes | System prompt string |
| tool: | yes | List of tool names the agent can call |
| max_turns: | no | Maximum LLM call + tool cycles (default: 10) |
| needs | yes | Effect requirements; must include llm |

Calling an agent

cleat
let result = Reviewer.run("Review this PR")?
// result is a string (the agent's final text response)

Agent.run(prompt) enters a loop:

  1. Sends the conversation to the LLM with available tool schemas
  2. If the LLM returns text → done, returns the text
  3. If the LLM returns a tool call → dispatches the tool, captures the result, continues
  4. If max_turns is exceeded → returns an error
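
The max_turns field caps that loop. As a sketch, an agent expected to chain many tool calls can raise the budget; the DeepReviewer name and values below are illustrative, not from this page:

cleat
agent DeepReviewer {
    model: "claude-sonnet-4-20250514"
    system: "You are a thorough code reviewer. Investigate every file in the diff."
    tool: [fetch_diff, analyze_complexity]
    max_turns: 25   // more headroom than the default of 10
    needs { llm, net }
}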

Structured output

Structured output is the agent system's distinctive feature. When Agent.run() is called with a type annotation, the agent returns a typed struct instead of a raw string:

cleat
type ReviewResult = struct {
    @description("Whether the code is safe to merge")
    approved: bool,
    @description("Specific issues found during review")
    issues: []string,
    summary: string,
    notes: string = "",  // optional field
}

let review: ReviewResult = Reviewer.run("Review PR #42")?
// review.approved is bool
// review.issues is []string
// review.summary is string

At compile time, the compiler:

  1. Extracts a JSON Schema from ReviewResult
  2. Turns @description annotations into description fields in the schema
  3. Excludes fields with defaults from required
  4. Generates enum constraints for enum fields
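
As a sketch of step 4: a struct field restricted to a fixed set of values contributes an enum constraint to the generated schema. The field name verdict and its values here are illustrative, not from this page:

json
"verdict": {"type": "string", "enum": ["approve", "request_changes", "comment"]}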

At runtime:

  1. The agent runs its normal tool-calling loop → gets text
  2. The text is sent through a structured extraction pass with the JSON Schema
  3. The response is deserialized into the Go struct

Schema example

The ReviewResult struct generates:

json
{
  "type": "object",
  "properties": {
    "approved": {"type": "boolean", "description": "Whether the code is safe to merge"},
    "issues": {"type": "array", "items": {"type": "string"}, "description": "Specific issues found during review"},
    "summary": {"type": "string"},
    "notes": {"type": "string"}
  },
  "required": ["approved", "issues", "summary"]
}

Note: notes is not in required because it has a default value.

Compiler enforcement

An agent's needs clause must include the llm effect; the compiler rejects agent declarations that omit it.

Testing agents

cleat
test "mock the agent's tools" {
    using fetch_diff = fn(pr: int) -> Result[string, string] {
        return Ok("mock diff content")
    }
    let r = fetch_diff(42)
    assert(r.IsOk())
}

Agents require ANTHROPIC_API_KEY to run with a real LLM. In tests, mock the tools via using to verify behavior without API calls.
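
The same using mechanism can stub a failure path. A sketch, assuming Err constructs the error variant of Result (the counterpart of the Ok shown above) and IsErr() tests for it; neither is confirmed by this page:

cleat
test "tool failure is surfaced" {
    using fetch_diff = fn(pr: int) -> Result[string, string] {
        return Err("network unavailable")
    }
    let r = fetch_diff(42)
    assert(r.IsErr())
}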

Supervisor integration

For production use, wrap agent calls with the supervisor for retry, timeout, and tracing:

cleat
import "std/supervisor"

let result = supervisor.run_with_retry("Reviewer", "Review PR #42", 3)?
let summary = supervisor.summary()?
io.println(summary)  // "turns: 5, tool_calls: 3, duration: 2340ms, status: ok"