
Install Stable Diffusion WebUI (AUTOMATIC1111) – 2026 Guide

AUTOMATIC1111 stopped releasing updates in early 2025 - v1.10.1 is its last official version. Here's how to actually install it without breaking your Python environment.

6 min read · Intermediate

AUTOMATIC1111’s Stable Diffusion WebUI hit v1.10.1 in February 2025 and hasn’t moved since. Development slowed dramatically while Forge and ComfyUI picked up momentum. But it’s still the entry point for most people running Stable Diffusion locally – 163,000 GitHub stars don’t lie.

Three things break most 2026 installs: master branch fails on fresh setups, Python 3.10.6 is mandatory (not a suggestion), and pkg_resources errors stop you before the first image generates.

Think of this guide as the install path with guardrails already in place. The gotchas are handled before they break your setup.

System Requirements: What You Actually Need

The “4GB VRAM minimum” you see everywhere? Technically true. Misleading in practice.

Minimum to run: Expect slow, low-resolution images.

  • GPU: NVIDIA with 4GB VRAM (GTX 1050 Ti). AMD GPUs work via ROCm – you’ll troubleshoot more than generate.
  • RAM: 16GB. (Official docs say 8GB + 8GB page file works, but 16GB is the real target for smooth operation.)
  • Storage: 12GB free. SSD strongly recommended – model loading from HDD is painful.
  • OS: Windows 10/11, or Linux (Ubuntu/Debian/Arch).

Recommended for smooth 512×512 generation:

  • GPU: NVIDIA RTX with 6-12GB VRAM. RTX 3060 12GB: sweet spot for hobbyists.
  • RAM: 16-32GB.
  • Storage: 20-50GB on SSD. Models, LoRAs, outputs add up fast.
  • CPU: Quad-core minimum (Intel i5/Ryzen 5). Doesn’t bottleneck generation – all GPU – but you need it to not choke on dependency installs.

RTX 50 series (Blackwell)? Switch to dev branch after cloning. Master branch doesn’t support it yet. Pinned issue #16824 has details.

Download Dependencies: Python and Git

AUTOMATIC1111 needs Python 3.10.6. Not 3.10.x. Not 3.11. 3.10.6.

Python 3.11+ breaks torch. You’ll get cryptic errors about missing modules or CUDA failures.

Windows:

  1. Download Python 3.10.6 (64-bit installer).
  2. Run installer. Check “Add Python 3.10 to PATH” before Install Now.
  3. Download Git for Windows. Defaults work.

Linux (Debian/Ubuntu):

sudo add-apt-repository ppa:deadsnakes/ppa
sudo apt update
sudo apt install python3.10 python3.10-venv git wget

Linux (Arch/Manjaro):

sudo pacman -S git wget

Python 3.10 isn't in Arch's official repos. Install the python310 package from the AUR instead (e.g. yay -S python310).

Verify:

python --version

You should see Python 3.10.6. On Linux, plain python may point to the system default – check python3.10 --version instead. If 3.11 or higher shows up, your PATH points to the wrong install. Fix that first.
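If PATH insists on resolving to the wrong interpreter, you don't have to fight it: webui-user.bat (the launcher file edited later in this guide) has a PYTHON variable that pins the launcher to an exact executable. A sketch – the install path below is an example, adjust it to wherever 3.10.6 actually lives on your machine:

```shell
@echo off
:: webui-user.bat - pin the launcher to a specific Python 3.10.6
:: (example path; match it to your machine)
set PYTHON=C:\Users\YourName\AppData\Local\Programs\Python\Python310\python.exe
set COMMANDLINE_ARGS=
call webui.bat
```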

Install AUTOMATIC1111 WebUI

Master branch has a dependency issue – tries to clone the official Stability-AI repo, which has breaking changes (as of January 2026). Dev branch uses a community-maintained fork that works.

Windows:

  1. File Explorer → where you want the install (e.g., C:\Users\YourName). Avoid paths with spaces or special permissions (not Program Files).
  2. Address bar: type cmd, press Enter. Command prompt opens.
  3. Clone:
    git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git
    cd stable-diffusion-webui
    
  4. Switch to dev:
    git checkout dev
    

Linux:

cd ~
git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git
cd stable-diffusion-webui
git checkout dev

One-liner option (uses master – riskier for fresh installs):

bash <(wget -qO- https://raw.githubusercontent.com/AUTOMATIC1111/stable-diffusion-webui/master/webui.sh)

First-Time Configuration: VRAM Optimization

Before running the launcher, edit webui-user.bat (Windows) or webui-user.sh (Linux) to add flags. Prevents out-of-memory errors.

Windows: Right-click webui-user.bat, Edit (or Notepad).
Linux: Open webui-user.sh in nano/vim.

Find set COMMANDLINE_ARGS= (Windows) or export COMMANDLINE_ARGS="" (Linux). Add flags based on VRAM:

VRAM    Flags                   Why
4GB     --lowvram --xformers    Aggressive memory saving – slow but works.
6-8GB   --medvram --xformers    Balanced. For RTX 2060/3060.
10GB+   --xformers              Speed optimization only.
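If you'd rather script the choice, the table collapses to a small helper – a sketch, and vram_flags is a made-up name, not part of the WebUI:

```shell
# Map whole GB of VRAM to the flags from the table above (illustrative helper)
vram_flags() {
  if [ "$1" -lt 6 ]; then
    echo "--lowvram --xformers"
  elif [ "$1" -lt 10 ]; then
    echo "--medvram --xformers"
  else
    echo "--xformers"
  fi
}

vram_flags 8    # prints: --medvram --xformers
```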

Example (8GB, Windows):

set COMMANDLINE_ARGS=--medvram --xformers

Example (8GB, Linux):

export COMMANDLINE_ARGS="--medvram --xformers"

Save. Done.

Run the Installer and Launch WebUI

Windows: Double-click webui-user.bat.
Linux: Run ./webui.sh.

Command prompt opens. Wall of text as it:

  • Creates Python venv (venv folder)
  • Installs PyTorch, torchvision, xformers, 50+ dependencies
  • Downloads Stable Diffusion 1.5 model (~4GB)

20-30 minutes on fast internet. Don’t close the window. Looks frozen around “Installing torch” – it’s not. Torch is huge.

Finished:

Running on local URL: http://127.0.0.1:7860

Copy URL → paste in browser. WebUI loads.

Verify the Install Works

Generate test image:

  1. WebUI → Prompt box.
  2. Type: a red apple on a wooden table, photorealistic
  3. Click Generate.

10-60 seconds (depends on GPU). Image appears? You’re done.

Check the command prompt for speed – look for a figure like 5.2 it/s (iterations per second). Above 3 it/s is acceptable for a mid-range GPU.
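To turn it/s into wall-clock time, divide sampling steps by speed – a rough sketch that assumes the default 20 sampling steps; seconds_per_image is a made-up helper:

```shell
# Rough seconds per image: sampling steps / it-per-second (default 20 steps)
seconds_per_image() {
  awk -v its="$1" -v steps="${2:-20}" 'BEGIN { printf "%.1f", steps / its }'
}

seconds_per_image 5.2    # prints: 3.8
```

So 5.2 it/s means roughly four seconds of sampling per image; model loading and VAE decode add a little on top.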

Common Errors

These stop 30%+ of installs.

Error: ModuleNotFoundError: No module named 'pkg_resources'

Means: Venv missing setuptools. Started happening February 2026 for fresh installs – 50+ reports in GitHub discussion #17276.

Fix:

cd stable-diffusion-webui
venv\Scripts\python.exe -m ensurepip --upgrade
venv\Scripts\python.exe -m pip install --upgrade pip setuptools wheel

(Linux: venv/bin/python instead of venv\Scripts\python.exe)

Re-run webui-user.bat. Fixes 80% of cases.

Error: RuntimeError: Torch is not able to use GPU

Means: Outdated NVIDIA drivers, or torch installed CPU-only.

Fix:

  1. Update drivers: NVIDIA’s site.
  2. Delete venv folder inside stable-diffusion-webui.
  3. Re-run webui-user.bat. Forces clean torch reinstall with CUDA.

Error: CUDA out of memory

Resolution or batch size exceeds VRAM.

Fix: Add --medvram or --lowvram to COMMANDLINE_ARGS. In WebUI: reduce Width/Height to 512×512, Batch size to 1.

Error: fatal: not a git repository

Downloaded ZIP from GitHub instead of cloning. ZIP lacks git metadata.

Fix: Delete folder. Clone with git clone.

How to Update AUTOMATIC1111

No auto-update. Manual pull:

cd stable-diffusion-webui
git pull

Delete venv, re-run webui-user.bat.

Auto-update every launch: Edit webui-user.bat, add git pull before call webui.bat:

@echo off
set COMMANDLINE_ARGS=--medvram --xformers
git pull
call webui.bat

Updates every start.

Uninstall / Clean Removal

No system-wide install. To remove: delete stable-diffusion-webui folder.

Keep models and outputs? Back up first:

  • models/Stable-diffusion/ (checkpoint files)
  • outputs/ (generated images)
  • embeddings/ and models/Lora/ (if added)
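Those folders can be copied out with a short script before you delete – a sketch; backup_sd and both paths are illustrative, not part of the WebUI:

```shell
# Copy the keep-worthy folders out of an install before deleting it.
# Usage: backup_sd <install-dir> <backup-dir>
backup_sd() {
  src="$1"; dest="$2"
  for d in models/Stable-diffusion outputs embeddings; do
    if [ -d "$src/$d" ]; then
      mkdir -p "$dest/$d"
      cp -r "$src/$d/." "$dest/$d/"
    fi
  done
}

backup_sd ~/stable-diffusion-webui ~/sd-backup
```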

No registry entries. No hidden system files.

Worth noting: A1111 development has stalled, but the install base is massive. If you hit a wall with this guide, chances are someone else already documented the fix in the GitHub discussions or on Reddit’s r/StableDiffusion. The community troubleshooting is more active than the repo itself at this point.

Should You Still Use AUTOMATIC1111 in 2026?

Depends.

New to Stable Diffusion and want easiest on-ramp? Yes – A1111’s interface is most beginner-friendly. Complex workflows (ControlNet + multiple LoRAs + iterative refinement)? ComfyUI is faster, more flexible. Want better performance on same hardware? Forge (maintained A1111 fork) is the drop-in replacement.

A1111 isn’t dead. It’s on life support. Use it to learn. Migrate when you outgrow it.

FAQ

Can I run AUTOMATIC1111 on AMD GPUs?

Yes via ROCm on Linux. Realistically? You’ll troubleshoot driver issues more than generate images. NVIDIA’s CUDA support is vastly better. If you already have AMD, try it – but expect frustration.

Why does the first launch take 30 minutes?

PyTorch (~2GB), Stable Diffusion 1.5 model (~4GB), 50+ Python packages downloading. Subsequent launches: 10-15 seconds. Long wait happens once. Delete venv? Forces reinstall, wait repeats.

What’s the difference between --medvram and --lowvram?

--medvram splits model layers between VRAM and system RAM – 6-8GB VRAM. --lowvram aggressively offloads to system RAM, slower but necessary for 4GB VRAM. 10GB+? Skip both, just use --xformers for speed. The flags aren’t just performance tweaks – they determine whether your generation completes or crashes mid-render. Watch your command prompt during first generation: if you see CUDA memory errors, you need the next flag down (none → medvram → lowvram).
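The none → medvram → lowvram ladder described above can be written down directly – a sketch, where next_flag_down is a made-up helper:

```shell
# Given the current memory flag, return the next step down the ladder
next_flag_down() {
  case "$1" in
    "")          echo "--medvram" ;;
    "--medvram") echo "--lowvram" ;;
    *)           echo "$1" ;;  # already at --lowvram: lower resolution instead
  esac
}

next_flag_down "--medvram"    # prints: --lowvram
```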

Run webui-user.bat. Generate your first image. Command prompt shows errors if something breaks – don’t ignore them. Fix as they appear using Common Errors above. Once you see that local URL, you’re in.