AutoML Open Source: Install AutoGluon 1.5.0 (Guide)

Deploy AutoGluon 1.5.0, the open source AutoML framework from AWS. Real install commands, the conda vs pip trap, GPU setup, and fixes for common errors.

8 min read · Intermediate

So you searched for an open source AutoML framework and AutoGluon kept coming up. The next question every developer asks: is this actually a clean install, or am I about to spend three hours fighting CUDA mismatches?

Honest answer – it’s somewhere in between. This guide walks through deploying AutoGluon 1.5.0 the way the maintainers themselves recommend, plus the install pitfalls that aren’t on the docs page but live in the GitHub issue tracker.

Why AutoGluon for an AutoML open source stack

AutoGluon is developed by AWS AI and ships under Apache 2.0. The selling point isn’t the 3-line API everyone quotes – it’s the benchmark results. In the AutoML Benchmark 2025, AutoGluon 1.2 with a 5-minute training budget outperformed all other AutoML systems running with a full 1-hour budget.

Version 1.5.0 (the current stable as of this writing) pushes that further. According to the AutoGluon dev release notes, AutoGluon-Tabular v1.5 Extreme achieves a 70% win rate over v1.4 Extreme across 51 TabArena datasets, with a 2.8% reduction in mean relative error. If you’re choosing between TPOT, H2O, FLAML, and AutoGluon for tabular work, the benchmarks favor AutoGluon. That’s why we’re installing it.

System requirements (and the version trap)

The official install docs confirm Python 3.10, 3.11, 3.12, or 3.13 on Linux, macOS, and Windows. Hardware specs below are community-reported estimates – the official docs don’t specify RAM or disk minimums, so treat these as reasonable starting points rather than hard guarantees:

| Spec           | Minimum                 | Recommended                     |
|----------------|-------------------------|---------------------------------|
| OS             | Linux, macOS, Windows   | Linux (Ubuntu 22.04+)           |
| Python         | 3.10                    | 3.11 or 3.12                    |
| RAM            | ~8 GB (community est.)  | 16+ GB (32 GB for Multimodal)   |
| Disk           | ~5 GB free              | 20 GB if installing all extras  |
| GPU (optional) | CUDA 11.8+              | CUDA 12.1, 8+ GB VRAM           |

Watch the Python version range carefully. AutoGluon 1.4.0 supported only Python 3.9-3.12; 1.5.0 expands that to 3.10-3.13. Copy-paste a six-month-old tutorial into a Python 3.13 environment and you’ll get a dependency resolver loop instead of a clear error – the resolver just can’t find matching wheels for 1.4.x on 3.13 and doesn’t tell you why.
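To make the trap concrete, here's a small illustrative helper (not part of AutoGluon) that encodes the supported ranges stated above, so you can see exactly which combinations send the resolver into a loop:

```python
def autogluon_supports(python_version: tuple[int, int], autogluon_series: str) -> bool:
    """Illustrative check: does an AutoGluon release series support a Python minor version?

    Ranges taken from the release notes: 1.4.x supports Python 3.9-3.12,
    1.5.x supports Python 3.10-3.13.
    """
    supported = {"1.4": (9, 12), "1.5": (10, 13)}
    low, high = supported[autogluon_series]
    major, minor = python_version
    return major == 3 and low <= minor <= high


# The six-month-old-tutorial trap: 1.4.x pins on a Python 3.13 environment
print(autogluon_supports((3, 13), "1.4"))  # False - resolver loop territory
print(autogluon_supports((3, 13), "1.5"))  # True
```

Run `python3 --version` before you copy any install command, and check it against the series you're about to pin.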

The four install methods, ranked by what the docs actually recommend

Most tutorials default to pip install autogluon and stop there. The official guidance is more specific. Per the install page, the maintainers recommend uv or pip – and the uv install is the version they actively benchmark and test on. The conda install may have specific gaps in installed dependencies (Ray, notably) that affect performance and stability. Their advice: try uv or pip first, fall back to conda only if you have an environment constraint that forces it.

So the ranking, in plain terms:

  1. uv – fastest, what AWS benchmarks against
  2. pip – fine, well-tested
  3. conda / mamba – works, but with caveats (more on this below)
  4. source build – only if you’re testing a PR or contributing

Install AutoGluon 1.5.0 (CPU, recommended path)

Create a fresh virtual environment first. AutoGluon pulls in 250+ packages – never install it into your base Python.

# Create and activate a venv
python3.11 -m venv ag-env
source ag-env/bin/activate  # Windows: ag-env\Scripts\activate

# Update build tools
pip install -U pip setuptools wheel

# Install uv, then AutoGluon
pip install -U uv
python -m uv pip install autogluon --extra-index-url https://download.pytorch.org/whl/cpu

The --extra-index-url flag pins PyTorch to the CPU wheels, which are roughly 200 MB instead of 2+ GB. Skip it only if you actually want the GPU build pulled automatically.
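To confirm which build actually landed, check torch's local version tag after the install (`python -c "import torch; print(torch.__version__)"`). A small helper, assuming the standard PyTorch wheel-tag convention of a `+cpu` or `+cuXXX` suffix:

```python
def wheel_variant(torch_version: str) -> str:
    """Classify a torch version string by its local build tag.

    PyTorch wheels from download.pytorch.org carry a local suffix,
    e.g. "2.5.1+cpu" or "2.5.1+cu121"; plain PyPI wheels have none.
    """
    _, sep, local = torch_version.partition("+")
    if not sep:
        return "default"
    if local == "cpu":
        return "cpu"
    return "cuda" if local.startswith("cu") else local


print(wheel_variant("2.5.1+cpu"))    # cpu
print(wheel_variant("2.5.1+cu121"))  # cuda
```

If you expected the 200 MB CPU build and see a `cuda` tag, the `--extra-index-url` flag was dropped somewhere.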

GPU install

For NVIDIA GPUs, install PyTorch with CUDA first, then AutoGluon:

pip install -U pip setuptools wheel
pip install torch torchvision --index-url https://download.pytorch.org/whl/cu121
python -m uv pip install autogluon

Apple Silicon

M1/M2/M3 Macs: conda is the path of least resistance here. The official docs confirm Apple Silicon is supported via conda, and conda-forge will automatically pull the GPU version if the machine supports it.

conda create -n ag python=3.11 -y
conda activate ag
conda install -c conda-forge mamba
mamba install -c conda-forge autogluon

Verify it works

Two quick checks. First, confirm the version:

python -c "import autogluon; print(autogluon.__version__)"
# Expected: 1.5.0

Then a smoke test that actually exercises the training loop:

python -c "
from autogluon.tabular import TabularDataset, TabularPredictor
data = TabularDataset('https://autogluon.s3.amazonaws.com/datasets/Inc/train.csv').sample(500)
p = TabularPredictor(label='class').fit(data, time_limit=30)
print(p.leaderboard(silent=True).head(3))
"

If that finishes in under two minutes and prints a leaderboard, you’re done. Crashes immediately on import? Jump to the errors section.

Common install errors and what actually fixes them

“No matching distribution found for autogluon”

Almost always a Python version mismatch. Fix: use Python 3.11 in a clean venv, and upgrade pip itself first (pip install -U pip) before anything else. The resolver on older pip versions fails silently on version constraints rather than telling you which package is incompatible.

numpy wheel build failure on pip install --pre

Pip tries to build numpy from source because it can’t find a matching wheel for the nightly build. The fix is to upgrade pip and setuptools before running the AutoGluon install, not after – order matters here because the wheel index gets cached on first resolution.

Docker build hangs, then fails on optimum/onnx

This one caught several Docker users. As documented in GitHub issue #4515, AutoGluon looked for an older version of optimum (<1.19,>=1.17) which in turn looked for an older version of onnx, failing on version 1.10.0. Workaround: pin explicitly (pip install autogluon==1.5.0) and use python:3.11-slim as the base image – not the full python:3.11, which carries system libraries that confuse the resolver.
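Putting that workaround together, a minimal Dockerfile sketch (the slim base image and the `==1.5.0` pin come from the workaround above; the CPU wheel index is an assumption to keep the image small, drop it for GPU images):

```dockerfile
# Slim base avoids the extra system libraries that confused the resolver
FROM python:3.11-slim

# Upgrade build tooling before resolving AutoGluon's dependency tree
RUN pip install -U pip setuptools wheel

# Pin the exact version; CPU torch wheels keep the image ~2 GB smaller
RUN pip install autogluon==1.5.0 --extra-index-url https://download.pytorch.org/whl/cpu
```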

Conda install runs but training is mysteriously slow

This is the gotcha nobody mentions. An AutoGluon maintainer documented in issue #3876 that conda installs roughly 400 packages while pip installs around 250 – and conda skips Ray entirely. No Ray means much slower fit times on best_quality. The attempted fix of running pip install ray afterward corrupts the Ray install further. Switch to uv or pip.

Which raises a fair question: if the preset matters this much, how do you even know which one to pick before you’ve trained? There’s no clean answer in the docs – best_quality is slower but more accurate, medium_quality is faster but you don’t know how much you’re leaving on the table until you run both. For most teams, that means running the smoke test twice with a time budget you can actually afford, before committing to a preset for production.
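One way to run that two-preset comparison without copy-pasting: a small timing harness. The harness itself is generic stdlib code; the AutoGluon call shown in the docstring is the assumed usage, not a verbatim API guarantee.

```python
import time


def time_presets(train_fn, presets=("medium_quality", "best_quality")):
    """Run train_fn once per preset; record wall-clock seconds and the score it returns.

    With AutoGluon, train_fn would be something like (illustrative):
        lambda p: TabularPredictor(label="class").fit(
            data, presets=p, time_limit=600).evaluate(holdout)["accuracy"]
    """
    results = {}
    for preset in presets:
        start = time.perf_counter()
        score = train_fn(preset)
        results[preset] = {"seconds": time.perf_counter() - start, "score": score}
    return results


# Dummy stand-in so the harness is demonstrable without training anything
report = time_presets(lambda p: len(p))
print({k: round(v["seconds"], 4) for k, v in report.items()})
```

Swap the lambda for a real fit on your data and the dict tells you, per preset, what the extra accuracy costs in minutes.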

Pro tip: Pin your AutoGluon version in production. Per the 1.5.0 release notes, loading models trained on older versions is not supported – you must retrain on 1.5.0. A surprise minor-version bump in CI will silently break model loading. Always specify autogluon==1.5.0 in requirements.
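A belt-and-braces guard for the CI-bump scenario: fail at service startup instead of at model-load time. The guard function is illustrative, not an AutoGluon API:

```python
def assert_pinned_version(installed: str, pin: str = "1.5.0") -> None:
    """Fail fast if the runtime AutoGluon doesn't match the training pin.

    Models trained on one version can't be loaded on another, so raising
    here beats a cryptic unpickling error deeper in the stack.
    """
    if installed.strip() != pin:
        raise RuntimeError(
            f"AutoGluon {installed} installed, but models were trained on {pin}; "
            "retrain or reinstall the pinned version."
        )


# At service startup (assumes autogluon is importable):
# import autogluon
# assert_pinned_version(autogluon.__version__)
```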

Upgrade and uninstall

Upgrading from 1.4.x to 1.5.0 is straightforward – but every saved model needs retraining. There is no migration path for the pickled model files; the incompatibility is intentional, since the model registry changes between minor releases.

# Upgrade
python -m uv pip install --upgrade autogluon

# Verify
python -c "import autogluon; print(autogluon.__version__)"

# Clean uninstall (removes all subpackages)
pip uninstall -y autogluon autogluon.core autogluon.features \
  autogluon.tabular autogluon.multimodal autogluon.timeseries autogluon.common

The uninstall has to list every subpackage because autogluon is a meta-package. Just removing it leaves the modules behind. If you used a venv, the cleanest path is to delete the entire environment directory and start fresh.

What about AutoGluon-Assistant?

Separate product. The team shipped MLZero (also called AutoGluon-Assistant) alongside 1.4.0 – a multi-agent LLM-driven framework (arXiv:2505.13941) that handles perception, memory, code generation, execution, and iterative debugging to turn raw multimodal inputs into ML pipelines. Install: pip install autogluon-assistant. Catch: it needs an LLM provider key – AWS Bedrock or OpenAI. If you just want the core AutoML library, skip it for now. Different infrastructure, different bill.

FAQ

Can I run AutoGluon on Windows without WSL?

Yes. Use the Anaconda path from the official docs – pip-only installs on Windows hit Visual C++ build tool issues with packages like ConfigSpace, and conda sidesteps that cleanly.

Why does the docs page list four install methods if uv is best?

Because “best” depends on your environment. uv is fastest and what the team benchmarks against – but enterprise Python shops often standardize on conda for dependency reproducibility across teams, and Apple Silicon GPU support specifically flows through conda-forge. Pip is the universal fallback when neither of those constraints applies. Source builds exist for contributors testing PRs. The docs cover all four because it’s not a speed contest; it’s a constraint-matching problem.

Is the 3-line code claim real or marketing?

Real, for tabular. TabularPredictor(label='y').fit('train.csv').predict('test.csv') genuinely works. The marketing part is implying that’s all you’ll ever write – in production you’ll add presets, time limits, eval metrics, and feature engineering. Treat the 3 lines as a fair smoke test, not a blueprint for anything that matters.

Next step: run the smoke test command above on a real dataset of yours – something with 10k+ rows. Time how long presets='best_quality' takes versus the default. That number tells you whether AutoGluon fits your latency budget before you commit to it.