
Install Agno v2.5+ [2026]: Get Multi-Agent Framework Running

Deploy Agno's Python framework for multimodal AI agents. Covers Python 3.12+ setup, pip vs uv install methods, first agent code, and AgentOS connection - plus 3 install gotchas the docs skip.

7 min read · Intermediate

Multimodal agents in production, not dependency hell. pip or uv? Python 3.12 required or will 3.9 work? Why does the five-minute install turn into an hour of GitHub issue hunting?

What you need to know.

What You’re Installing (and Why Version Numbers Matter)

Agno is an open-source Python framework for building and running AI agents. Runtime, control plane, security architecture – all there for deploying agents that process text, images, audio, and video.

Latest stable release: v2.5.17 (as of April 2026). On September 9, 2025, Agno v2.0 dropped and completely rewrote the Agent, Team, and Workflow classes. Following a mid-2025 tutorial? Code won’t work with v2.x. Period.

The framework runs in your infrastructure. No data leaves your environment unless you connect to the optional AgentOS UI for monitoring.

System Requirements: The Actual Minimums

Check this first:

  • Python: 3.7+ supported, but Python 3.12+ recommended for v2.x. Official one-liners specify --python 3.12.
  • OS: Linux, macOS, Windows. Windows native execution has known script issues – WSL2 recommended.
  • RAM: 2GB minimum for basic agents. Multi-agent systems with vector databases? 8GB+.
  • Disk: 500MB for core framework. Installing model-specific extras and vector databases? 2-5GB.

LLM provider API keys aren’t required for installation. You’ll need at least one to run agents – OpenAI, Anthropic, Google, or a local Ollama setup.
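Worth a preflight before picking an install method. This check only assumes python3 is on your PATH; the 3.12 threshold matches the official one-liners, not a hard minimum.

```shell
# Preflight: check the interpreter before installing anything.
# Agno v2.x one-liners pin 3.12; treat anything older as "works, but risky".
python3 --version
python3 -c 'import sys; print("3.12+" if sys.version_info >= (3, 12) else "older than 3.12 - consider upgrading")'
```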

Method 1: pip Install (Virtual Environment Recommended)

Most people start here. Agno’s official docs recommend installing in a Python virtual environment.

# Create and activate virtual environment
python3 -m venv ~/.venvs/agno
source ~/.venvs/agno/bin/activate # macOS/Linux
# OR on Windows:
# ~\.venvs\agno\Scripts\activate

# Install core framework
pip install -U agno

# Verify installation
python -c "import agno; print(agno.__version__)"

The -U flag pulls the latest version. See version output? Core framework is installed.

Now a model provider. OpenAI:

pip install openai
export OPENAI_API_KEY="sk-your-actual-key-here"

Anthropic Claude:

pip install anthropic
export ANTHROPIC_API_KEY="your-key"

Agno supports plug-and-play LLM integrations including OpenAI, Claude, Gemini, and open-source models via Ollama. Pick one.
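A common failure mode is a missing or mistyped environment variable, not a broken install. This sketch checks which provider keys Python can actually see; the variable names are the providers' conventional ones (some Gemini client libraries also accept GEMINI_API_KEY).

```python
# Check which provider keys are visible to Python before blaming the
# framework. Env var names are the providers' conventional defaults.
import os

providers = {
    "OPENAI_API_KEY": "OpenAI",
    "ANTHROPIC_API_KEY": "Anthropic",
    "GOOGLE_API_KEY": "Google Gemini",
}
configured = [name for var, name in providers.items() if os.environ.get(var)]
print("configured providers:", ", ".join(configured) or "none - export a key first")
```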

Pro tip: Don’t name your test script agno.py. ModuleNotFoundError incoming – Python imports your local file instead of the installed package. Name it agent_test.py or my_agent.py.
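The shadowing gotcha is easy to reproduce without Agno installed at all. This sketch uses the stdlib json module as a stand-in for agno: Python resolves imports against sys.path in order, and the script's own directory comes first.

```python
# Reproduces the "don't name your script agno.py" bug with stdlib json
# standing in for agno: the local file wins over the installed package.
import os
import sys
import tempfile

with tempfile.TemporaryDirectory() as d:
    # A local json.py, like an accidental local agno.py
    with open(os.path.join(d, "json.py"), "w") as f:
        f.write("SHADOWED = True\n")

    sys.path.insert(0, d)  # mimics running a script from inside d
    saved = {}
    for k in list(sys.modules):
        if k == "json" or k.startswith("json."):
            saved[k] = sys.modules.pop(k)  # force a fresh import

    import json  # resolves to d/json.py, not the stdlib package
    shadowed = getattr(json, "SHADOWED", False)

    # Undo the damage so the rest of the process sees the real stdlib
    sys.path.pop(0)
    for k in list(sys.modules):
        if k == "json" or k.startswith("json."):
            del sys.modules[k]
    sys.modules.update(saved)

print("local file shadowed the package:", shadowed)  # True
```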

Method 2: uv Install (Faster Dependency Resolution)

Official docs recommend uv for faster dependency resolution. pip works too. No uv yet? Grab it from astral-sh/uv on GitHub.

# Initialize project with uv
uv init my-agent-project
cd my-agent-project

# Add Agno and dependencies
uv add agno openai

# Activate the environment uv created
source .venv/bin/activate

# Verify
python -c "import agno; print(agno.__version__)"

uv auto-generates pyproject.toml and pins exact dependency versions. This works better for projects you’ll deploy later.
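For reference, the generated pyproject.toml looks roughly like this. The exact pins depend on when you run `uv add`, so treat every version here as a placeholder, not a recommendation.

```toml
# Illustrative sketch of what `uv init` + `uv add agno openai` produce.
[project]
name = "my-agent-project"
version = "0.1.0"
requires-python = ">=3.12"
dependencies = [
    "agno>=2.5",
    "openai>=1.0",
]
```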

Method 3: uvx One-Liner (AgentOS Instant Start)

Want to avoid venv setup and run AgentOS immediately? The official GitHub README provides a one-liner using uvx:

export ANTHROPIC_API_KEY="your-key-here"

uvx --python 3.12 \
 --with "agno[os]" \
 --with anthropic \
 --with mcp \
 fastapi dev agno_assist.py

What this does: uses Python 3.12, installs agno[os] (includes FastAPI backend), adds Anthropic Claude support, includes MCP (Model Context Protocol) tools, starts a FastAPI development server on http://localhost:8000.

One problem: you need to create agno_assist.py first. The uvx command doesn’t generate it.

Write Your First Agent (Verification Step)

Create test_agent.py:

from agno.agent import Agent
from agno.models.openai import OpenAIChat

agent = Agent(
 model=OpenAIChat(id="gpt-4o-mini"),
 description="You are a helpful assistant.",
 markdown=True
)

agent.print_response("Say hello in 10 words.", stream=True)

Run it:

python test_agent.py

Streaming response? Good. ImportError or ModuleNotFoundError? Something broke. Check: virtual environment is activated (which python should point to your venv), Agno version is 2.x (pip show agno), model provider package is installed (pip show openai).
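Those three checks can be scripted instead of run by hand. This sketch uses importlib.metadata, which works for any installed distribution, so it runs cleanly even when agno is missing (it just prints None).

```python
# Diagnose a broken install: interpreter location, agno version,
# and provider package presence, all from one script.
import sys
from importlib.metadata import PackageNotFoundError, version


def dist_version(name):
    """Return the installed version of a distribution, or None."""
    try:
        return version(name)
    except PackageNotFoundError:
        return None


print("interpreter:", sys.executable)     # should live inside your venv
print("agno:", dist_version("agno"))      # expect '2.x.y', not None
print("openai:", dist_version("openai"))  # or anthropic, whichever you chose
```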

Install AgentOS for Production Deployment

Core framework runs agents locally. AgentOS serves your agent as a stateless, session-scoped FastAPI backend with monitoring built in.

Install AgentOS extras:

pip install "agno[os]"

Create agno_assist.py:

from agno.agent import Agent
from agno.db.sqlite import SqliteDb
from agno.models.anthropic import Claude
from agno.os import AgentOS
from agno.tools.mcp import MCPTools

agent = Agent(
 name="Agno Assist",
 model=Claude(id="claude-sonnet-4-6"),
 db=SqliteDb(db_file="agno.db"),
 tools=[MCPTools(url="https://docs.agno.com/mcp")],
 add_history_to_context=True,
 num_history_runs=3,
 markdown=True
)

agent_os = AgentOS(agents=[agent], tracing=True)
app = agent_os.get_app()

Start the server:

fastapi dev agno_assist.py

Server runs on http://localhost:8000 by default. Check http://localhost:8000/docs for auto-generated FastAPI endpoints.

Those endpoints loading? AgentOS installed correctly.
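That verification can also be automated, which is handy in CI or a deploy script. A minimal sketch using only the stdlib, assuming the default port 8000; it returns False instead of crashing when nothing is listening.

```python
# Health check for a local AgentOS server; safe to run even when
# the server is down (assumes the default FastAPI dev port).
import urllib.error
import urllib.request


def agentos_up(url="http://localhost:8000/docs", timeout=2):
    """True if the endpoint answers with HTTP 200, False otherwise."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False


print("AgentOS reachable:", agentos_up())
```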

Think about it: most install tutorials stop at “pip install succeeded.” But does the framework actually work? Running the FastAPI server and seeing endpoints is the real verification – not just importing a module. One debugging session taught me that.

Connect to AgentOS UI (Optional Monitoring)

Open os.agno.com and sign in. Click “Add new OS” in the top navigation, select “Local”, then enter http://localhost:8000.

The UI connects directly to your local AgentOS instance. Your data stays in your system – no egress, no retention costs. Telemetry streams from your machine to the browser; sessions are never uploaded to a cloud service.

Connection fails? Check that your FastAPI server is running and port 8000 isn’t blocked.

Common Install Errors and Fixes

ModuleNotFoundError: No module named ‘agno.agent’
You named your script agno.py. Rename it. Python prioritizes local files over installed packages.

ImportError: cannot import name ‘ToolExecution’
Happens when Docker or cache systems serve an outdated agno version. Force reinstall: pip install --no-cache-dir -U agno.

Missing anthropic/azure-ai-foundry when importing AzureOpenAI
Agno’s Azure imports trigger dependency errors for unrelated packages. Workaround: pip install anthropic azure-ai-inference even if you’re not using them. Known issue (#2371) as of March 2025.

Windows: [WinError 193] script execution fails
Python scripts executed directly fail on Windows without explicitly invoking the interpreter. Use WSL2 or patch agno/skills/utils.py to invoke sys.executable before script paths.

Upgrade to Latest Version

To upgrade, run this in your virtual environment:

pip install -U agno --no-cache-dir

--no-cache-dir forces pip to download fresh packages instead of reusing its local cache. This matters because frequent Agno updates have broken projects that kept resolving to stale cached versions.

Check version after upgrading:

pip show agno | grep Version

Not 2.5.x or higher as of April 2026? Upgrade didn’t take. Delete your venv and reinstall.

Uninstall and Cleanup

Remove Agno completely:

pip uninstall agno agno-infra agno-cli
rm -rf ~/.venvs/agno # if you used the recommended venv path
rm -rf agno.db # removes local SQLite session storage

Installed with uv? Delete the project directory. uv keeps everything isolated there.

What to Build Next

Framework is installed and verified. Now add tools. The agno.tools module includes 100+ integrations out of the box – DuckDuckGo search, GitHub API, SQL databases, vector stores, file processing.

Start with a simple tool-enabled agent, then move to multi-agent teams. Official cookbook at docs.agno.com has 50+ examples.

Your next step: pick a real task, not a demo. Install works when agents solve actual problems.

FAQ

Do I need Docker to run Agno?

No. Docker only matters if you’re deploying AgentOS to production on AWS/cloud infrastructure. For local development and testing? A virtual environment is all you need.

Can I use Agno v1.x code with v2.5?

Agno v2 completely rewrote Agent, Team, and Workflow classes. Old code won’t run without changes. Official docs include a migration guide, but expect to rewrite agent initialization logic. Following a tutorial from before September 2025? Install v1.7.x instead: pip install agno==1.7.5. I tried running v1 code on v2 once – spent 3 hours debugging import errors before realizing the class signatures changed completely. Save yourself the time.

Why is my agent using 50x more memory than the benchmarks claim?

Agno’s ~3.75 KiB memory footprint applies to agent instantiation, not runtime execution. Once your agent loads a model, runs tools, or stores knowledge in a vector database, memory usage scales with those operations. The benchmark compares the framework’s overhead – not total agent memory under load. If your agent is memory-heavy, profile which tools or knowledge stores are consuming RAM.

The “10,000x faster than LangGraph” claim? That’s measuring how quickly the Agent class instantiates – not how fast your agent responds to queries. Marketing teams love picking the one number that looks impressive. Actual performance depends on your model’s latency, tool execution time, and how much context you’re passing. A local Llama 3 70B model on a laptop? Slow regardless of framework. GPT-4o mini via API? Fast regardless of framework. The framework overhead is usually negligible compared to model inference time.