# Agents
An agent is an LLM-powered orchestrator that binds a model, system prompt, and set of tools into a callable unit.
## Declaration

```cleat
agent Reviewer {
  model: "claude-sonnet-4-20250514"
  system: "You are a code reviewer. Analyze diffs and provide structured verdicts."
  tool: [fetch_diff, analyze_complexity]
  max_turns: 10
  needs { llm, net }
}
```
## Fields

| Field | Required | Description |
|---|---|---|
| `model:` | yes | LLM model identifier string |
| `system:` | yes | System prompt string |
| `tool:` | yes | List of tool names the agent can call |
| `max_turns:` | no | Maximum LLM call + tool cycles (default: 10) |
| `needs` | yes | Effect requirements; must include `llm` |
## Calling an agent

```cleat
let result = Reviewer.run("Review this PR")?
// result is a string (the agent's final text response)
```
`Agent.run(prompt)` enters a loop:

- Sends the conversation to the LLM along with the available tool schemas
- If the LLM returns text → done, returns the text
- If the LLM returns a tool call → dispatches the tool, captures the result, continues
- If `max_turns` is exceeded → returns an error
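The loop can be sketched in pseudocode (illustrative only; the actual loop lives in the runtime, and none of the names below are real cleat API):

```cleat
// Pseudocode sketch of the Agent.run loop, not real cleat source.
// conversation = [system prompt, user prompt]
// for turn in 1..max_turns:
//     reply = call_llm(conversation, tool_schemas)
//     if reply is text:
//         return Ok(reply.text)              // terminal answer
//     if reply is tool_call:
//         result = dispatch(reply.tool_call) // run the named tool
//         conversation.append(result)        // feed the result back
// return Err("max_turns exceeded")
```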
## Structured output

This is the distinctive feature. When `Agent.run()` is called with a type annotation, the agent returns a typed struct instead of a raw string:

```cleat
type ReviewResult = struct {
  @description("Whether the code is safe to merge")
  approved: bool,
  @description("Specific issues found during review")
  issues: []string,
  summary: string,
  notes: string = "",  // optional field
}

let review: ReviewResult = Reviewer.run("Review PR #42")?
// review.approved is bool
// review.issues is []string
// review.summary is string
```
At compile time, the compiler:

- Extracts a JSON Schema from `ReviewResult`
- `@description` annotations become `description` fields in the schema
- Fields with defaults are excluded from `required`
- Enum fields generate `enum` constraints
At runtime:
- The agent runs its normal tool-calling loop → gets text
- The text is sent through a structured extraction pass with the JSON Schema
- The response is deserialized into the Go struct
## Schema example

The `ReviewResult` struct generates:
```json
{
  "type": "object",
  "properties": {
    "approved": {"type": "boolean", "description": "Whether the code is safe to merge"},
    "issues": {"type": "array", "items": {"type": "string"}, "description": "Specific issues found during review"},
    "summary": {"type": "string"},
    "notes": {"type": "string"}
  },
  "required": ["approved", "issues", "summary"]
}
```

Note: `notes` is not in `required` because it has a default value.
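The enum rule is not exercised by `ReviewResult`. In standard JSON Schema, an enum constraint is an `enum` keyword listing the allowed values, so a hypothetical enum-typed field with variants `low`, `medium`, and `high` would presumably generate a property like:

```json
{"type": "string", "enum": ["low", "medium", "high"]}
```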
## Compiler enforcement

- Agent must declare `needs { llm }` (it uses an LLM)
- All tools' effects must be covered by the agent's `needs`
- Every tool in `tool:` must exist as a declared `tool` (not just a function)
- Duplicate tools are rejected
- Empty model or system prompt → error
- For structured output: the target type must have JSON-serializable fields
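For example, a declaration like the following (hypothetical agent, reusing the syntax from the declaration above) would be rejected because its `needs` clause omits `llm`:

```cleat
// Rejected at compile time: every agent uses an LLM, so it must
// declare `needs { llm }`.
agent Broken {
  model: "claude-sonnet-4-20250514"
  system: "You are a code reviewer."
  tool: [fetch_diff]
  needs { net }  // error: missing the llm effect
}
```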
## Testing agents

```cleat
test "mock the agent's tools" {
  using fetch_diff = fn(pr: int) -> Result[string, string] {
    return Ok("mock diff content")
  }

  let r = fetch_diff(42)
  assert(r.IsOk())
}
```
Agents require `ANTHROPIC_API_KEY` to run with a real LLM. In tests, mock the tools via `using` to verify behavior without API calls.
## Supervisor integration

For production use, wrap agent calls with the supervisor for retry, timeout, and tracing:

```cleat
import "std/supervisor"

let result = supervisor.run_with_retry("Reviewer", "Review PR #42", 3)?
let summary = supervisor.summary()?
io.println(summary)
// "turns: 5, tool_calls: 3, duration: 2340ms, status: ok"
```