Apart from running your evaluations on the CLI, you can also track them on the Agno platform. This is useful for keeping a record of results and sharing them with your team.
Follow these steps:
### 1. Authenticate

You can authenticate using the CLI or an API key.

Using the CLI:

```shell
ag setup
```

Using your API key: get your API key from the Agno App and use it to link your locally running agents to the Agno platform.

```shell
export AGNO_API_KEY=your_api_key_here
```
### 2. Track your evaluations

When running an evaluation, set `monitoring=True` to track all its runs on the Agno platform:

```python
from agno.agent import Agent
from agno.eval.accuracy import AccuracyEval
from agno.models.openai import OpenAIChat

evaluation = AccuracyEval(
    model=OpenAIChat(id="gpt-4o"),
    agent=Agent(model=OpenAIChat(id="gpt-4o")),
    input="What is 10*5 then to the power of 2? do it step by step",
    expected_output="2500",
    monitoring=True,  # This activates monitoring
)

# This run will be tracked on the Agno platform
result = evaluation.run(print_results=True)
```
You can also set the `AGNO_MONITOR` environment variable to `true` to track all evaluation runs.
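For example, exporting the variable in your shell before launching your evaluation script enables tracking for every run without setting `monitoring=True` on each evaluation (a minimal sketch, assuming your evaluations live in a script of your own naming):

```shell
# Track all evaluation runs on the Agno platform,
# without passing monitoring=True to each one
export AGNO_MONITOR=true
```

Run your evaluation script as usual in the same shell session; the variable applies to every evaluation executed there.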