There are two ways to deploy the AutoGPT open source AI agent platform locally, and most tutorials online still describe the wrong one. The classic ./autogpt.sh CLI from 2023 now lives in classic/original_autogpt/ – kept around for historical reasons but no longer the active project. The current product is the AutoGPT Platform: a Next.js frontend plus a Dockerized backend stack inside the autogpt_platform/ folder. That folder is Polyform Shield licensed; the rest of the repository (classic Agent, Forge, agbenchmark) stays MIT – a split most tutorials don’t mention. (AutoGPT GitHub README)
If you’re following a guide that tells you to git checkout stable and edit .env.template, close it. That’s the old path. As of early 2026, we’re installing the platform – version autogpt-platform-beta-v0.6.56.
## System requirements (don’t skip this)
The platform itself isn’t the heavy part – the LLM cost lives elsewhere. The Docker stack is: Postgres, Redis, RabbitMQ, a full Supabase setup, plus the backend and frontend. Seven-ish containers running constantly. Per the official README, hardware requirements break down like this:
| Component | Minimum | Recommended |
|---|---|---|
| CPU | – | 4+ cores |
| RAM | 8 GB | 16 GB |
| Disk | 10 GB free | 20 GB+ SSD |
| OS | Ubuntu 20.04 / macOS 10.15 / Win10 + WSL2 | Ubuntu 22.04+ |
Six ports need to be free: 3000 (frontend), 8006 (REST API), 8001 (WebSocket), 5432 (Postgres), 6379 (Redis), and 5672 (RabbitMQ). If you’re already running a local Postgres on 5432, stop it before starting AutoGPT or you’ll waste twenty minutes wondering why the migrate container keeps exiting. Software floor: Docker Engine 20.10+, Docker Compose 2.0+, Git 2.30+, Node.js 16.x+, and npm 8.x+.
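Before the first build, it’s worth confirming all six ports are actually free. A minimal sketch (the port list is taken from the table above; anything already listening will show as IN USE):

```python
import socket

# Ports the AutoGPT platform stack expects to be free (per the README).
PORTS = {
    3000: "frontend",
    8006: "REST API",
    8001: "WebSocket",
    5432: "Postgres",
    6379: "Redis",
    5672: "RabbitMQ",
}

def port_is_free(port: int, host: str = "127.0.0.1") -> bool:
    """Try to bind the port; success means nothing is listening on it."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        try:
            s.bind((host, port))
            return True
        except OSError:
            return False

if __name__ == "__main__":
    for port, name in PORTS.items():
        status = "free" if port_is_free(port) else "IN USE"
        print(f"{port:>5} ({name}): {status}")
```

If 5432 comes back IN USE, that’s almost certainly your local Postgres.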
## The fast path: official installer script
The AutoGPT team ships an installer that handles the boring parts – clones the repo, copies the env file, runs docker compose. For macOS/Linux, per the official docs:
```shell
curl -fsSL https://setup.agpt.co/install.sh -o install.sh && bash install.sh
```
A PowerShell variant exists for Windows (see the official docs page – the command differs slightly by shell version). Use the script if you want a working instance fast and don’t need to tweak ports, point at a remote Postgres, or run on an ARM board. Those cases need the manual flow below.
## Manual install for the open source AI agent platform
Five steps. Run them from a directory you don’t mind cluttering with a large checkout.
```shell
# 1. Clone the repo
git clone https://github.com/Significant-Gravitas/AutoGPT.git
cd AutoGPT/autogpt_platform

# 2. Copy the default env file
cp .env.default .env

# 3. Generate a real encryption key (replace the default)
python -c "from cryptography.fernet import Fernet; print(Fernet.generate_key().decode())"
# Paste the output into ENCRYPTION_KEY= inside backend/.env

# 4. Build and start the backend stack
docker compose up -d --build

# 5. Start the frontend separately
cd frontend
cp .env.example .env
npm install
npm run dev
```
Step 3 deserves a second look. The default ENCRYPTION_KEY in .env.default is the same across every fresh clone on Earth. If you ever expose the instance to anything beyond localhost, that shared key is a real problem – the docs flag this explicitly but the installer doesn’t enforce it.
The docker compose up step is where patience matters. First build on a fresh machine: up to 15 minutes, mostly pulling Supabase images. (DataCamp’s install walkthrough and DeepWiki both report a 2-15 minute range for first-run builds.) Ctrl+C halfway through leaves a broken half-built state and the next run is more painful, not less. Let it finish.
## Verify it actually works
```shell
docker compose ps
```
Give services 2-5 minutes to settle. Everything should show Up or Up (healthy). The migrate service should show Exited (0) – that one is supposed to exit; it’s not a failure.
Then hit http://localhost:3000. You should land on a sign-up screen served by Supabase Auth. If the page loads but auth returns 400, your encryption key probably wasn’t set correctly in step 3. Fix the key, run docker compose down -v (the -v wipes volumes, including any data encrypted under the bad key), and rebuild.
## The two install failures nobody warns you about
Almost every failed install traces back to one of these two things.
### Hyper-V on Windows kills supabase-db
This is the big one. Docker on Windows must use the WSL 2 backend, not Hyper-V. Using Hyper-V causes compatibility problems with Supabase – the supabase-db container gets marked unhealthy, and the error message (dependency failed to start: container supabase-db is unhealthy) gives no hint at the actual cause. It’s documented in GitHub issue #9846, but users typically burn through two failed installs before finding it.
Fix: Docker Desktop → Settings → General → check Use the WSL 2 based engine → restart. Check Docker Desktop’s own docs before switching – the engine change can affect existing containers and build cache.
### Raspberry Pi 5 ships the wrong page size
Run getconf PAGESIZE before you start. The Pi 5 ships a 16K kernel page size by default, so the command returns 16384. Postgres won’t start at 16K; the supabase-db container needs 4096 (4K). The AutoGPT docs reference the underlying Supabase issue (#33816) and note this explicitly: 16384 is incorrect, 4096 is correct. The fix is switching to a 4K page-size kernel via /boot/firmware/config.txt; the Raspberry Pi OS forums document the exact kernel= line for each image.
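The same check is available from Python if you’re scripting a preflight — a sketch; `os.sysconf` is POSIX-only, so this works on Linux/macOS but not native Windows:

```python
import os

# supabase-db's Postgres build expects a 4K (4096-byte) kernel page size.
page_size = os.sysconf("SC_PAGE_SIZE")
if page_size == 4096:
    print(f"page size {page_size}: OK for supabase-db")
else:
    print(f"page size {page_size}: Postgres in supabase-db will fail to start")
```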
## Upgrade and uninstall
Data lives in named Docker volumes, so upgrading between platform-beta versions is usually painless:
```shell
cd AutoGPT
git fetch && git pull
cd autogpt_platform
docker compose down
docker compose up -d --build
```
The migrate container handles schema changes on the next start. That said – in prior beta versions, breaking env-var renames have shown up in release notes. Always check the releases page before pulling.
To uninstall completely:
```shell
cd AutoGPT/autogpt_platform
docker compose down -v   # -v removes volumes (your data)
docker system prune -a   # optional: clear images
cd ../.. && rm -rf AutoGPT
```
No global state, no system services, no leftover registry entries. That’s one of the genuine upsides of a Docker-first design.
## FAQ
### Is AutoGPT actually open source?
Partly. The Classic agent, Forge, and agbenchmark are MIT. The platform you’ll actually use is Polyform Shield, which restricts offering it as a competing commercial service but imposes no limits on self-hosting or personal use.
### Do I still need an OpenAI API key?
You need some LLM provider to run agents, but not necessarily OpenAI. The platform’s block system lets you configure which LLM each agent uses directly in the UI – not hard-coded in .env. Which providers are supported may expand over time; check the UI’s provider settings after you log in rather than relying on any list here, since the platform is still in active beta development (as of early 2026).
### Can I run this on a $5 VPS?
No – and not even close. Seven containers including Postgres and Redis need real memory. The official README puts the minimum RAM at 8 GB; in practice, anything under that will likely OOM-kill containers before the stack finishes booting. Save yourself the debugging session: start with the minimum spec, not below it.
Next: once localhost:3000 loads, head to the official getting-started guide and run a Marketplace agent before building your own – it’s the fastest way to learn the block model without staring at an empty canvas.