```python
import asyncio

from agno.agent import Agent
from agno.models.meta import Llama


async def main():
    agent = Agent(
        model=Llama(id="Llama-3.3-70B"),
        markdown=True,
    )
    # Stream the response asynchronously as it is generated.
    await agent.aprint_response(
        "Share a two-sentence horror story.",
        stream=True,
    )


if __name__ == "__main__":
    asyncio.run(main())
```
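With `stream=True`, the agent prints tokens as they arrive rather than waiting for the full completion. The underlying consumption pattern can be sketched with a plain async generator, no agno or API key required (the generator and its tokens here are illustrative stand-ins, not agno APIs):

```python
import asyncio


async def fake_token_stream():
    # Illustrative stand-in for a model's streamed output.
    for token in ["Two", " sentences", " of", " horror."]:
        await asyncio.sleep(0)  # yield control, as real network I/O would
        yield token


async def consume():
    # Accumulate streamed chunks into the final response text.
    chunks = []
    async for token in fake_token_stream():
        chunks.append(token)
    return "".join(chunks)


print(asyncio.run(consume()))
```

`aprint_response` handles this loop for you and renders the chunks to the terminal as they arrive.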
### Create a virtual environment

Open the Terminal and create a Python virtual environment:

```shell
python3 -m venv .venv
source .venv/bin/activate
```
### Set your LLAMA API key

```shell
export LLAMA_API_KEY=YOUR_API_KEY
```
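If the key is missing, the client fails only once a request is made. A small pre-flight check can fail fast instead (the helper name below is hypothetical, not part of agno):

```python
import os


def check_llama_key(env=os.environ):
    # Hypothetical helper: raise early if LLAMA_API_KEY is unset.
    key = env.get("LLAMA_API_KEY")
    if not key:
        raise RuntimeError(
            "LLAMA_API_KEY is not set; run `export LLAMA_API_KEY=YOUR_API_KEY` first."
        )
    return key
```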
### Install libraries

```shell
pip install llama-api-client agno
```
### Run Agent

```shell
python cookbook/models/meta/async_stream.py
```