The Agent.run() function runs the agent and generates a response, either as a RunResponse object or, when streaming, as a stream of RunResponseEvent objects.
Many of our examples use agent.print_response() which is a helper utility to print the response in the terminal. It uses agent.run() under the hood.
## Running your Agent

Here's how to run your agent. The response is captured in the `response` variable.
```python
from agno.agent import Agent, RunResponse
from agno.models.openai import OpenAIChat
from agno.utils.pprint import pprint_run_response

agent = Agent(model=OpenAIChat(id="gpt-4o-mini"))

# Run agent and return the response as a variable
response: RunResponse = agent.run("Tell me a 5 second short story about a robot")

# Print the response in markdown format
pprint_run_response(response, markdown=True)
```
## RunResponse
The Agent.run() function returns a RunResponse object when not streaming. See the RunResponse documentation for the full list of attributes.

Understanding Metrics: For a detailed explanation of how metrics are collected and used, refer to the Metrics documentation.
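As a sketch of working with those attributes, here is a stand-in object used in place of a real RunResponse (the `content` and `metrics` attribute names follow the RunResponse docs; the stand-in class and its values are illustrative only — in real code the object comes back from agent.run()):

```python
from dataclasses import dataclass, field


@dataclass
class StubRunResponse:
    """Stand-in for agno's RunResponse, for illustration only."""

    content: str  # the model's final text
    metrics: dict = field(default_factory=dict)  # per-run usage data


# In real code: response = agent.run("...")
response = StubRunResponse(
    content="A robot dreamed for five seconds, then woke up smiling.",
    metrics={"input_tokens": [12], "output_tokens": [14]},
)

print(response.content)
print(sum(response.metrics["output_tokens"]))  # → 14
```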
## Streaming Responses
To enable streaming, set stream=True when calling run(). This will return an iterator of RunResponseEvent objects instead of a single response.
As of agno version 1.6.0, a streaming Agent.run() call returns an iterator of RunResponseEvent objects, not of RunResponse objects.
```python
from typing import Iterator

from agno.agent import Agent, RunResponseEvent
from agno.models.openai import OpenAIChat
from agno.utils.pprint import pprint_run_response

agent = Agent(model=OpenAIChat(id="gpt-4o-mini"))

# Run agent and return the response as a stream
response_stream: Iterator[RunResponseEvent] = agent.run(
    "Tell me a 5 second short story about a lion",
    stream=True,
)

# Print the response stream in markdown format
pprint_run_response(response_stream, markdown=True)
```
For even more detailed streaming, you can enable intermediate steps by setting stream_intermediate_steps=True. This will provide real-time updates about the agent’s internal processes.
```python
# Stream with intermediate steps
response_stream: Iterator[RunResponseEvent] = agent.run(
    "Tell me a 5 second short story about a lion",
    stream=True,
    stream_intermediate_steps=True,
)
```
### Handling Events
You can process events as they arrive by iterating over the response stream:
```python
response_stream = agent.run("Your prompt", stream=True, stream_intermediate_steps=True)

for event in response_stream:
    if event.event == "RunResponseContent":
        print(f"Content: {event.content}")
    elif event.event == "ToolCallStarted":
        print(f"Tool call started: {event.tool}")
    elif event.event == "ReasoningStep":
        print(f"Reasoning step: {event.content}")
    ...
```
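An if/elif chain like the one above can also be organized as a dispatch table keyed on the event name. A self-contained sketch, using a stand-in event class instead of real RunResponseEvent objects (the `describe` helper and its formatting are illustrative, not part of agno's API):

```python
from dataclasses import dataclass


@dataclass
class StubEvent:
    """Stand-in for a streamed RunResponseEvent, for illustration only."""

    event: str
    content: str = ""


def describe(ev: StubEvent) -> str:
    # Map event names (as compared against above) to formatters
    handlers = {
        "RunResponseContent": lambda e: f"Content: {e.content}",
        "ToolCallStarted": lambda e: f"Tool call started: {e.content}",
        "ReasoningStep": lambda e: f"Reasoning step: {e.content}",
    }
    return handlers.get(ev.event, lambda e: f"Unhandled event: {e.event}")(ev)


for ev in [StubEvent("RunResponseContent", "Once upon"), StubEvent("RunCompleted")]:
    print(describe(ev))
# → Content: Once upon
# → Unhandled event: RunCompleted
```

A dispatch table keeps each handler small and makes it easy to register new event types without growing a single long conditional.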
You can see this behavior in action in our Playground.
### Storing Events
You can store all the events that happened during a run on the RunResponse object.
```python
from agno.agent import Agent
from agno.models.openai import OpenAIChat
from agno.utils.pprint import pprint_run_response

agent = Agent(model=OpenAIChat(id="gpt-4o-mini"), store_events=True)

response = agent.run("Tell me a 5 second short story about a lion", stream=True, stream_intermediate_steps=True)
pprint_run_response(response)

for event in agent.run_response.events:
    print(event.event)
```
By default, the RunResponseContentEvent event is not stored. You can change which events are skipped by setting the events_to_skip parameter.
For example:
```python
from agno.run.response import RunEvent  # import path may vary by agno version

agent = Agent(model=OpenAIChat(id="gpt-4o-mini"), store_events=True, events_to_skip=[RunEvent.run_started.value])
```
### Event Types
The following events are yielded by the Agent.run() and Agent.arun() functions depending on the agent’s configuration:
#### Core Events

| Event Type | Description |
|---|---|
| RunStarted | Indicates the start of a run |
| RunResponseContent | Contains the model's response text as individual chunks |
| RunCompleted | Signals successful completion of the run |
| RunError | Indicates an error occurred during the run |
| RunCancelled | Signals that the run was cancelled |
#### Control Flow Events

| Event Type | Description |
|---|---|
| RunPaused | Indicates the run has been paused |
| RunContinued | Signals that a paused run has been continued |

#### Tool Events

| Event Type | Description |
|---|---|
| ToolCallStarted | Indicates the start of a tool call |
| ToolCallCompleted | Signals completion of a tool call, including tool call results |
#### Reasoning Events

| Event Type | Description |
|---|---|
| ReasoningStarted | Indicates the start of the agent's reasoning process |
| ReasoningStep | Contains a single step in the reasoning process |
| ReasoningCompleted | Signals completion of the reasoning process |
#### Memory Events

| Event Type | Description |
|---|---|
| MemoryUpdateStarted | Indicates that the agent is updating its memory |
| MemoryUpdateCompleted | Signals completion of a memory update |
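When iterating over stored or streamed events, the tables above can double as a simple filter. A self-contained sketch that tallies event names by category (event names are taken from the tables; the `tally` helper and the stand-in strings are illustrative, not part of agno's API):

```python
from collections import Counter

# Category sets mirror the tables above
CORE = {"RunStarted", "RunResponseContent", "RunCompleted", "RunError", "RunCancelled"}
TOOL = {"ToolCallStarted", "ToolCallCompleted"}
REASONING = {"ReasoningStarted", "ReasoningStep", "ReasoningCompleted"}


def tally(event_names):
    """Count how many events fall into each category."""
    counts = Counter()
    for name in event_names:
        if name in CORE:
            counts["core"] += 1
        elif name in TOOL:
            counts["tool"] += 1
        elif name in REASONING:
            counts["reasoning"] += 1
        else:
            counts["other"] += 1
    return dict(counts)


print(tally(["RunStarted", "ToolCallStarted", "ToolCallCompleted", "RunCompleted"]))
# → {'core': 2, 'tool': 2}
```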
See detailed documentation in the RunResponseEvent documentation.
## Structured Input

An agent can be given structured input (i.e., a Pydantic model) by passing it to Agent.run() or Agent.print_response() as the message parameter.
```python
from typing import List

from agno.agent import Agent
from agno.models.openai import OpenAIChat
from agno.tools.hackernews import HackerNewsTools
from pydantic import BaseModel, Field


class ResearchTopic(BaseModel):
    """Structured research topic with specific requirements"""

    topic: str
    focus_areas: List[str] = Field(description="Specific areas to focus on")
    target_audience: str = Field(description="Who this research is for")
    sources_required: int = Field(description="Number of sources needed", default=5)


# Define agents
hackernews_agent = Agent(
    name="Hackernews Agent",
    model=OpenAIChat(id="gpt-4o-mini"),
    tools=[HackerNewsTools()],
    role="Extract key insights and content from Hackernews posts",
)

hackernews_agent.print_response(
    message=ResearchTopic(
        topic="AI",
        focus_areas=["AI", "Machine Learning"],
        target_audience="Developers",
        sources_required=5,
    )
)
```