Here’s what most OpenClaw Docker tutorials won’t tell you: the official setup script runs flawlessly, your containers start green, and then at 2 AM your agent stops responding because the node user can’t write to a config file that root created when you exec’d in to install a package.
In my experience, the large majority of OpenClaw permission errors trace back to this one UID mismatch.
You’re running an AI that can execute shell commands – why you actually want Docker
OpenClaw gives your AI agent real file access, shell execution, and API calls. Per the official Docker documentation, containerization provides the security boundary between your agent’s workspace and your host system.
Running it directly on your Mac? You’re trusting the model to never rm -rf the wrong directory.
Docker doesn’t make it foolproof – agents can still do damage inside the container – but it materially reduces blast radius. Your ~/.ssh keys stay out of reach. Your /etc stays untouched. When the agent creates files, they’re scoped to mounted volumes you control.
Pro tip: The official image runs as the `node` user (UID 1000), not root. This is deliberate. If you see permission errors, resist the urge to `chmod 777` your way out – that defeats the isolation. Fix ownership instead.
The two-folder contract Docker setup creates (and why the second one matters more)
Per the setup walkthrough by Simon Willison, OpenClaw’s Docker Compose config bind-mounts two directories:
- ~/.openclaw – Configuration, memory, API keys, session state
- ~/openclaw/workspace – The agent’s working directory where it reads/writes files
The workspace is where things get interesting. Any file the agent creates appears here. Any file you drop here becomes visible to the agent. This is the shared surface.
The catch: if your host user is UID 1000 and the container runs as UID 1000, everything works. If your host files are owned by UID 501 (common on macOS) or you create files inside the container as root, OpenClaw can’t touch them.
| Scenario | Host UID | Container UID | Result |
|---|---|---|---|
| Linux default user | 1000 | 1000 (node) | ✓ Works |
| macOS user | 501 | 1000 (node) | ✗ Permission denied |
| Files created by `docker exec -u root` | N/A | 0 (root) | ✗ node can’t write |
Fix: run `sudo chown -R 1000:1000 ~/.openclaw ~/openclaw/workspace` on the host before starting containers.
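You can check for a mismatch before anything breaks. A minimal sketch – `check_uid_match` is my own helper, not part of OpenClaw, and the commented docker commands assume the official image's `node` user:

```shell
# Decide whether bind-mounted files will be writable by the container user.
# check_uid_match is a hypothetical helper, not an OpenClaw command.
check_uid_match() {
  if [ "$1" = "$2" ]; then
    echo "match: bind mounts writable"
  else
    echo "mismatch: chown the mounts to UID $2 on the host"
  fi
}

# On the host:           host_uid=$(id -u)
# Inside the container:  container_uid=$(docker compose run --rm openclaw-gateway id -u)
check_uid_match 501 1000   # a macOS host (UID 501) vs. the node user (UID 1000)
```

If it prints a mismatch, apply the `chown` fix above before starting the containers.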
Running the official setup (and the onboarding questions that trip people up)
According to the official documentation, the fastest path is the provided docker-setup.sh script. It builds the image, runs onboarding, generates tokens, and starts the gateway.
git clone https://github.com/openclaw/openclaw.git
cd openclaw
export OPENCLAW_IMAGE="ghcr.io/openclaw/openclaw:latest"
./scripts/docker/setup.sh
During onboarding, OpenClaw asks configuration questions. Two are non-obvious:
Model provider authentication: If you choose OpenAI Codex OAuth, it opens a browser URL that redirects to a non-running localhost service and shows an error. You copy that entire localhost:... URL (including the code parameter) and paste it back into the wizard. This is expected behavior per the official Docker guide.
Tailscale mesh networking: Saying yes here enables remote access but complicates local-only setups. If you’re testing locally first, say no. You can enable it later in openclaw.json.
After setup completes, verify the gateway:
docker compose ps
# Should show openclaw-gateway-1 running
docker compose logs -f openclaw-gateway
# Look for "Gateway listening on port 18789"
The token mismatch that breaks CLI commands (even when the UI works fine)
You run docker compose run --rm openclaw-cli devices list and get: unauthorized: device token mismatch.
The Control UI at http://127.0.0.1:18789 works. Your Telegram bot responds. But every CLI command fails.
What happened: per troubleshooting reports on community sites, the setup script writes a 64-character token to .env, but the onboarding wizard generates its own 48-character token and saves it to ~/.openclaw/openclaw.json. They don’t match.
The gateway uses the token from openclaw.json. The CLI reads OPENCLAW_GATEWAY_TOKEN from .env. Mismatch.
Fix option 1: Copy the token from ~/.openclaw/openclaw.json (look for gateway.token) into your .env file, then restart:
docker compose restart
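Fix option 1 can be scripted. A sketch, assuming the token lives at `gateway.token` in openclaw.json and the CLI reads `OPENCLAW_GATEWAY_TOKEN` from .env (both per the description above); `sync_token` is my own helper, and python3 is used for JSON parsing to avoid a jq dependency:

```shell
# Copy gateway.token from openclaw.json into .env so the CLI and
# gateway agree on one token. sync_token is a hypothetical helper.
sync_token() {
  config="$1"; envfile="$2"
  token=$(python3 -c 'import json,sys; print(json.load(open(sys.argv[1]))["gateway"]["token"])' "$config")
  # Drop any stale OPENCLAW_GATEWAY_TOKEN line, then append the correct one
  grep -v '^OPENCLAW_GATEWAY_TOKEN=' "$envfile" > "$envfile.tmp" 2>/dev/null || true
  echo "OPENCLAW_GATEWAY_TOKEN=$token" >> "$envfile.tmp"
  mv "$envfile.tmp" "$envfile"
}

# Usage on the host, from the directory containing .env:
#   sync_token "$HOME/.openclaw/openclaw.json" .env
#   docker compose restart
```

The helper only rewrites the one variable, so any other settings in .env are preserved.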
Fix option 2: Use the alternative command path that Simon Willison documents – bypass the CLI container entirely:
docker compose exec openclaw-gateway node dist/index.js devices list
This works because it runs inside the gateway container where the correct config is already loaded.
Why version 2026.3.2 might be why your container keeps restarting
Not all OpenClaw Docker images are stable. According to notes on the Docker Hub alpine/openclaw page, versions 2026.3.2 and 2026.2.26 have known issues and higher failure rates.
As of early April 2026, v2026.3.7 is confirmed working.
OpenClaw releases every 2 days on average. Community update guides recommend waiting 3-5 days after a release before pulling it into production. Let others surface the breakage first.
If your container is crash-looping:
docker logs openclaw-gateway --tail 50
Common exit codes:
- Exit 1: Application error – check logs for missing API keys or bad config
- Exit 137: OOM killed – container needs more memory (minimum 2 GB per official docs)
- Exit 127: Command not found – image or entrypoint issue
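If you triage this often, the lookup can be scripted. `explain_exit` below is a hypothetical helper, not an OpenClaw or Docker command; the real exit code comes from `docker inspect`:

```shell
# Map a container exit code to its likely cause (codes per the list above).
# explain_exit is a hypothetical helper, not an OpenClaw command.
explain_exit() {
  case "$1" in
    1)   echo "application error: check logs for missing API keys or bad config" ;;
    127) echo "command not found: image or entrypoint issue" ;;
    137) echo "OOM killed: give the container more memory (2 GB minimum)" ;;
    *)   echo "unrecognized exit code: $1" ;;
  esac
}

# Fetch the real exit code of a stopped container, then interpret it:
#   explain_exit "$(docker inspect --format '{{.State.ExitCode}}' openclaw-gateway-1)"
explain_exit 137
```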
If the issue appeared after an update, roll back to the previous working image tag and check community channels (r/openclaw, Discord) for known issues with the new version.
Setting up Telegram (the least-annoying channel integration)
OpenClaw supports 20+ messaging platforms. Telegram is the fastest to configure.
Per the setup guide:
- Open Telegram and message @BotFather
- Send `/newbot` and follow the prompts to create a bot
- Copy the API token BotFather gives you
- Add the token to OpenClaw during onboarding, or manually: `docker compose run --rm openclaw-cli channels add --channel telegram --token "YOUR_TOKEN"`
- Pair your account: OpenClaw will send you a Telegram message with a pairing code. Run `docker compose run --rm openclaw-cli pairing approve telegram <CODE>`
Now you can message your bot from your phone. The agent runs in the container; you control it from Telegram.
What the docs don’t emphasize: you’re also opting into rapid breaking changes
OpenClaw had 13 releases in March 2026 alone. One release disabled tools by default (v2026.3.2). Another broke Matrix channels for weeks.
The Docker image tracks these changes. When you docker compose pull, you get the latest – including any breaking changes that shipped 6 hours ago.
If stability matters:
- Pin a specific image tag in `docker-compose.yml`: `image: ghcr.io/openclaw/openclaw:2026.3.7`
- Test updates in a staging container before promoting to production
- Keep backups of your `~/.openclaw` directory before updates
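Pinning is a one-line change in `docker-compose.yml`. A sketch – the service name and tag are illustrative, so match them to your generated compose file:

```yaml
services:
  openclaw-gateway:
    # Pin an exact tag instead of :latest so `docker compose pull`
    # can't silently move you onto a broken release
    image: ghcr.io/openclaw/openclaw:2026.3.7
```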
If you can’t fix an issue within 30 minutes of updating, roll back. A known-good version behind is better than a broken production agent.
When browser automation silently fails (and how to actually install Chromium)
OpenClaw uses Playwright for browser automation. The official Docker image is based on node:24-bookworm, which doesn’t include browsers by default.
Per the Docker documentation, you must install them manually:
docker compose run --rm openclaw-cli \
  node /app/node_modules/playwright-core/cli.js install chromium
If you skip this, browser-based tools will fail silently or throw Executable doesn't exist errors buried in logs.
To persist browsers across container recreations, set PLAYWRIGHT_BROWSERS_PATH=/home/node/.cache/ms-playwright and ensure that path is covered by OPENCLAW_HOME_VOLUME or mounted explicitly.
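One way to wire that up is a named volume in `docker-compose.yml`. A sketch under the assumption that your setup doesn't already cover this path via `OPENCLAW_HOME_VOLUME`; the volume name is my own:

```yaml
services:
  openclaw-gateway:
    environment:
      # Keep Playwright's browser cache under the node user's home
      - PLAYWRIGHT_BROWSERS_PATH=/home/node/.cache/ms-playwright
    volumes:
      # Named volume so installed browsers survive container recreation
      - playwright-cache:/home/node/.cache/ms-playwright

volumes:
  playwright-cache:
```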
Accessing the Control UI when “unauthorized” blocks you
You visit http://127.0.0.1:18789 and see: Unauthorized - authentication required.
The gateway enforces token authentication even on localhost. You need the dashboard URL with the ?token=... parameter.
Get it:
docker compose run --rm openclaw-cli dashboard --no-open
This prints the full URL. Copy and paste into your browser.
If that still shows “pairing required,” approve your browser as a device:
docker compose run --rm openclaw-cli devices list
# Find the requestId for your browser
docker compose run --rm openclaw-cli devices approve <requestId>
Refresh the browser. You’re in.
How to update without breaking your running setup
Pull the latest image:
docker compose pull
docker compose up -d
Your config and workspace volumes persist. The container recreates with the new image.
Clean up old images:
docker image prune -f
If the update breaks something, roll back to the previous tag. You did record which version was working, right?
If not, check the image history:
docker images ghcr.io/openclaw/openclaw
Pull the previous digest or tag, update your docker-compose.yml, and restart.
What actually matters: your agent works, your data persists, your system stays isolated
Docker setup looks like extra complexity. It is. But when your agent accidentally runs npm install -g with bad permissions or tries to read a file it shouldn’t touch, you’ll appreciate the boundary.
The permission errors are fixable. The token mismatch is a known bug with a known workaround. The version instability is real but manageable if you pin versions and test before promoting.
Once it’s running: your config lives in ~/.openclaw, your workspace is scoped, your gateway listens on 18789, and your AI is one Telegram message away.
Start by verifying your current setup works. Then add one integration at a time. Test each before adding the next. If something breaks, you know exactly what changed.
FAQ
Do I need Node.js installed on my host machine if I’m using Docker?
No. The Docker image includes Node 24 (the recommended runtime). You only need Docker and Docker Compose on your host. Node is already inside the container.
Why does my container keep restarting with exit code 137?
Exit 137 means the container was OOM-killed – it ran out of memory. OpenClaw’s Docker image needs at least 2 GB RAM to build and run. Increase Docker’s memory limit in Docker Desktop settings (macOS/Windows) or check available memory with docker stats on Linux. You can also set mem_limit: 2g in your docker-compose.yml to cap runaway usage.
Can I run OpenClaw with a local LLM instead of paying for API credits?
Yes. OpenClaw supports Ollama for local models. You’ll need significantly more resources – 16 GB RAM minimum, 32 GB recommended per community guides. The Docker setup stays the same; you just configure agents.defaults.models.chat to point to ollama/qwen3:30b or your chosen model. Expect slower responses and higher memory usage compared to cloud APIs. For experimentation, local models work. For production, cloud APIs are more reliable.
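As a sketch, the relevant openclaw.json fragment might look like this – the key path and model name come from the community guides mentioned above, and the surrounding configuration is omitted:

```json
{
  "agents": {
    "defaults": {
      "models": {
        "chat": "ollama/qwen3:30b"
      }
    }
  }
}
```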