---
title: Agents
description: Run the default agent loop with tools.
type: guide
summary: Create agents, run the default loop, inspect results, and understand multi-turn tool use.
---

# Agents

Use an agent when the model needs to call tools and continue with the tool
results. The default agent loop is built from the same primitives you can use
directly: `ai.stream`, `ToolRunner`, messages, and events.

## Create an agent

An agent wraps `ai.stream` in a loop. It streams model output, executes requested
tools, appends tool results to history, and repeats until the model returns a
final assistant message.

```python
agent = ai.agent(tools=[contact_mothership])

async with agent.run(model, messages) as stream:
    async for event in stream:
        if isinstance(event, ai.events.TextDelta):
            print(event.chunk, end="", flush=True)
```

Use `ai.agent` for the default loop. Subclass `ai.Agent` and override
`async def loop()` when you need to change control flow.
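The override pattern can be pictured with a plain-Python stand-in; the base class, method signatures, and return types below are illustrative only, not the real `ai.Agent` API:

```python
import asyncio


class Agent:
    """Plain-Python stand-in for the SDK base class (illustrative only)."""

    async def loop(self) -> str:
        # Default control flow: keep taking turns until one is final.
        while True:
            text, final = await self.turn()
            if final:
                return text

    async def turn(self) -> tuple[str, bool]:
        return "final answer", True


class SingleTurnAgent(Agent):
    """Custom control flow: take exactly one turn, final or not."""

    async def loop(self) -> str:
        text, _ = await self.turn()
        return text


print(asyncio.run(SingleTurnAgent().loop()))  # prints: final answer
```

The point is the shape, not the details: the default loop lives in one overridable coroutine, so a subclass replaces control flow without touching streaming or tool execution.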

## Run the default loop

Create an agent with tools, then call `agent.run`:

```python title="agent_loop.py"
import asyncio
import ai


@ai.tool
async def contact_mothership(query: str) -> str:
    """Contact the mothership for important decisions."""
    return "Soon."


async def main() -> None:
    model = ai.get_model("anthropic/claude-sonnet-4")
    agent = ai.agent(tools=[contact_mothership])
    messages = [
        ai.system_message(
            "Use the contact_mothership tool when asked about the future."
        ),
        ai.user_message("When will the robots take over?"),
    ]

    async with agent.run(model, messages) as stream:
        async for event in stream:
            if isinstance(event, ai.events.TextDelta):
                print(event.chunk, end="", flush=True)

    print(stream.output)


if __name__ == "__main__":
    asyncio.run(main())
```

## Inspect the run result

The stream yields model events and agent events. After the run finishes,
`stream.messages` contains the updated history, and `stream.output` contains
the final assistant output.

## Understand multi-turn behavior

Each loop turn streams one assistant message. If the message contains tool
calls, the agent executes them, appends one tool-result message, and starts the
next model turn.

```python
async with agent.run(model, messages) as stream:
    async for event in stream:
        if isinstance(event, ai.events.ToolCallResult):
            for result in event.results:
                print(result.tool_name, result.result)
        elif isinstance(event, ai.events.TextDelta):
            print(event.chunk, end="", flush=True)

history = stream.messages
```

Use `stream.messages` when you want to persist the complete conversation after
the run.
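The turn-by-turn mechanics can be sketched in plain Python. Here `fake_model`, `run_loop`, and the message dicts are illustrative stand-ins, not the SDK's actual types:

```python
import asyncio


async def fake_model(history: list[dict]) -> dict:
    """Pretend model: requests a tool on the first turn, then answers."""
    if any(m["role"] == "tool" for m in history):
        return {"role": "assistant", "content": "Soon."}
    return {"role": "assistant", "tool_calls": [("contact_mothership", "future?")]}


async def run_loop(history: list[dict]) -> list[dict]:
    """One assistant message per turn; tool results trigger another turn."""
    while True:
        message = await fake_model(history)
        history.append(message)
        calls = message.get("tool_calls")
        if not calls:
            return history  # final assistant message: stop looping
        # Execute each requested tool and append one tool-result message.
        results = [(name, "Soon.") for name, _query in calls]
        history.append({"role": "tool", "content": results})


history = asyncio.run(run_loop([{"role": "user", "content": "When?"}]))
print(len(history), history[-1]["content"])  # prints: 4 Soon.
```

The run ends with four messages: the user prompt, the assistant's tool-call turn, one tool-result message, and the final assistant answer.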

## Pass params and structured output

Pass provider options with `params`. Pass a Pydantic model via `output_type`
when the final assistant text should be parsed and validated as JSON:

```python
import pydantic


class Forecast(pydantic.BaseModel):
    answer: str
    eta: str


async with agent.run(
    model,
    [ai.user_message("Return a JSON mothership forecast.")],
    output_type=Forecast,
    params={"temperature": 0},
) as stream:
    async for event in stream:
        if isinstance(event, ai.events.TextDelta):
            print(event.chunk, end="", flush=True)

forecast = stream.output
print(forecast.eta)
```
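The validation step implied by `output_type` is equivalent to plain Pydantic parsing, shown here directly outside the SDK (the raw JSON string is an invented example of what the model might emit):

```python
import pydantic


class Forecast(pydantic.BaseModel):
    answer: str
    eta: str


# The kind of JSON the model is expected to emit as its final message.
raw = '{"answer": "The mothership says soon.", "eta": "unknown"}'

forecast = Forecast.model_validate_json(raw)
print(forecast.eta)  # prints: unknown

# Output missing a required field raises instead of returning a bad object.
try:
    Forecast.model_validate_json('{"answer": "yes"}')
except pydantic.ValidationError as exc:
    print(type(exc).__name__)  # prints: ValidationError
```

This is why `stream.output` can be used as a typed object (`forecast.eta`) rather than a raw string: malformed model output fails validation loudly instead of surfacing later.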

## Choose agents or direct streaming

Use `ai.stream` when you want one model response and you will handle any tool
calls yourself. Use an agent when the SDK should execute Python tools, append
tool results, and keep looping until the assistant returns a final answer.

```python
# Direct stream: inspect tool calls yourself.
async with ai.stream(model, messages, tools=[get_weather.tool]) as stream:
    async for event in stream:
        ...

# Agent: execute registered tools.
agent = ai.agent(tools=[get_weather])
async with agent.run(model, messages) as stream:
    async for event in stream:
        ...
```


---

For a semantic overview of all documentation, see [/sitemap.md](/sitemap.md)

For an index of all available documentation, see [/llms.txt](/llms.txt)