Install Fooocus AI: The 3-Click Image Generator (v2.5.5)

Install Fooocus v2.5.5 in under 3 clicks - Stable Diffusion XL with zero configuration. Runs on 4GB VRAM. Official setup, common failures, and swap workarounds.

7 min read · Beginner

Fooocus claims you can go from download to first generated image in under 3 mouse clicks. Technically true. Catch: on first launch, the software silently downloads 6-7GB of AI models while you stare at a terminal window.

What Fooocus Actually Is (and Isn’t)

Fooocus is a local Stable Diffusion XL interface that strips out parameter tweaking. Created by Lvmin Zhang (the same Stanford researcher behind ControlNet), it trades flexibility for speed. Type a prompt, hit generate, get photorealistic images. No sampling steps, CFG scales, or model swapping.

One problem: as of version 2.5.5 (August 2024), the project is in long-term support mode. Bug fixes only. No new features. No support for newer architectures like Flux. Want latest models? The developers now recommend WebUI Forge or ComfyUI instead.

Think of it this way: Fooocus is a 2024 sports car with a permanently locked hood. Fast, polished, but you’re not swapping the engine.

The Swap Space Problem Nobody Warns You About

Every Fooocus tutorial lists the same minimum specs: 4GB VRAM (Nvidia), 8GB system RAM. Good for marketing. Useless in practice.

The actual requirement from the official troubleshooting docs: 40GB of system swap space. Without it? RuntimeError: CPUAllocator crashes mid-generation. Windows 10/11 usually handles this via Virtual Memory, but if you’ve ever manually disabled it (or you’re on Linux/Mac), you’re dead in the water.

Why 40GB? The official troubleshoot guide says “it does not need so much Swap, but 40GB should be safe for you to run Fooocus in 100% success.” In other words: the developers don’t know the exact minimum, so they picked a number that works.

Do you really need that much? Probably not if you have 64GB+ RAM. Nobody’s tested it systematically. The swap is insurance against random memory spikes during model loading.

Watch out: Put your swap file on an SSD, not an HDD. Model loading is I/O-bound – mechanical drives turn that 3-second launch into a 30-second crawl.
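Before first launch, it's worth checking what you actually have. A minimal sketch (Linux-only, since it reads /proc/meminfo; the 40GB threshold is the official docs' "safe" figure, not a hard minimum):

```shell
# Compare configured swap (in kB) against the 40GB the official docs call safe.
# 40GB = 40 * 1024 * 1024 kB = 41943040 kB.
swap_ok() {
    required_kb=41943040
    if [ "$1" -ge "$required_kb" ]; then
        echo "ok"
    else
        echo "too small"
    fi
}

# On Linux, read the live value from /proc/meminfo (defaults to 0 if unreadable):
swap_kb=$(awk '/^SwapTotal:/ {print $2}' /proc/meminfo 2>/dev/null)
swap_ok "${swap_kb:-0}"
```

If it prints "too small", grow your swap file (or re-enable Virtual Memory on Windows) before launching, not after the first CPUAllocator crash.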

Get the Real Download (Not the Fake Ones)

Google “Fooocus” and you’ll see fooocus.com, fooocus.ai, fooocus.net – all fake. The official repo warns about this: “Those websites are ALL FAKE. They have ABSOLUTELY no relationship to us.”

The only legitimate source: github.com/lllyasviel/Fooocus/releases.

Windows: The Official 7z Package

Download Fooocus_win64_2-5-0.7z (or the latest version). Need 7-Zip to extract it – Windows’ built-in unzipper chokes on .7z files.

  1. Download from GitHub releases
  2. Extract to a folder with no spaces in the path (C:\Fooocus works, C:\My Projects\Fooocus breaks some scripts)
  3. Inside: run.bat, run_anime.bat, run_realistic.bat

Don’t move files around. Fooocus expects Python, models, and config in specific relative paths.
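If in doubt about a path, a throwaway check like this flags the problem before Fooocus does (path_ok is a hypothetical helper, not part of Fooocus):

```shell
# Flag install paths containing spaces, which trip up some Fooocus scripts.
path_ok() {
    case "$1" in
        *" "*) echo "has spaces" ;;
        *)     echo "ok" ;;
    esac
}

path_ok "C:/Fooocus"              # safe
path_ok "C:/My Projects/Fooocus"  # breaks some scripts
```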

Mac: Conda Install (Experimental)

Apple Silicon (M1/M2) works via PyTorch MPS acceleration. Support is “very experimental” per the official docs. Need:

git clone https://github.com/lllyasviel/Fooocus.git
cd Fooocus
conda env create -f environment.yaml
conda activate fooocus
pip install -r requirements_versions.txt
python entry_with_update.py

Some people report needing the --disable-offload-from-vram flag to avoid slow model loading. Others don’t. Mac support is a dice roll.

Docker: If You’re Already Running Containers

Official Docker image: CUDA 12.4, PyTorch 2.1. If you’ve got nvidia-docker set up:

git clone https://github.com/lllyasviel/Fooocus.git
cd Fooocus
docker compose up

Models and outputs land in the fooocus-data volume. On Linux that’s usually /var/lib/docker/volumes/. Mac/Windows? Bind mounts are slow – use named volumes instead.

Docker doesn’t support Apple Silicon MPS yet. CPU mode only for Mac containers.

First Run: What ‘Auto-Download’ Really Means

Double-click run.bat. Terminal window appears. Text scrolls. Your browser should open to localhost:7865.

Should.

First launch? Fooocus silently fetches two SDXL models from Hugging Face: sd_xl_base_1.0_0.9vae.safetensors (~4GB) and sd_xl_refiner_1.0_0.9vae.safetensors (~2.5GB). The terminal shows progress bars. The browser tab just says “loading.” Takes 3-10 minutes depending on your connection.

Once downloaded, models live in Fooocus/models/checkpoints/. Already have these files from another Stable Diffusion install? Copy them there before first run. Skips the download.
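A sketch of that pre-seeding step, using temp directories as stand-ins for real paths (substitute your actual model folder and install location; the touch line stands in for the real 4GB file):

```shell
# Reuse an existing SDXL checkpoint so Fooocus skips its first-run download.
src=$(mktemp -d)   # stand-in for e.g. an Automatic1111 models folder
dst=$(mktemp -d)   # stand-in for the extracted Fooocus folder

touch "$src/sd_xl_base_1.0_0.9vae.safetensors"   # placeholder for the real file

mkdir -p "$dst/Fooocus/models/checkpoints"
cp "$src/sd_xl_base_1.0_0.9vae.safetensors" "$dst/Fooocus/models/checkpoints/"

ls "$dst/Fooocus/models/checkpoints"
```

Filenames must match exactly – Fooocus checks for the specific .safetensors names before deciding whether to download.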

Common First-Run Failures (and Fixes)

  • MSVCP140.dll not found – missing Visual C++ runtime. Fix: install the Microsoft Visual C++ Redistributable 2015-2022 (both x64 and x86).
  • RuntimeError: CPUAllocator – insufficient system swap. Fix: enable Virtual Memory (Windows) or increase swap to 40GB (Linux/Mac).
  • MetadataIncompleteBuffer – corrupted model download. Fix: delete the files in models/checkpoints/ and relaunch; Fooocus re-downloads automatically.
  • Browser opens but shows a blank page – port 7865 blocked or already in use. Fix: check whether another app (Jupyter, another SD instance) is using 7865; kill it or launch with --port 8080.
  • CUDA not available – outdated Nvidia drivers. Fix: update drivers from nvidia.com.

AMD users: Windows support is “very experimental” according to the official docs – you have to edit run.bat by hand to swap PyTorch for the DirectML build. Linux with ROCm works better, but minimum VRAM jumps to 8GB.

Verify It’s Working

localhost:7865 loads and you see a text box labeled “Describe your image”? You’re in.

Type something simple: red apple on a wooden table. Hit Generate. Terminal shows sampling progress. After 20-60 seconds (depends on your GPU), an image appears.

Check the terminal output for this line:

Device: cuda:0 NVIDIA GeForce RTX [YOUR GPU]
Using xformers cross attention

Says Device: cpu? Your GPU isn’t detected. Drivers, CUDA, or hardware issue.

Generation times on tested hardware (from official docs and community reports as of August 2024):

  • RTX 3060 laptop (6GB VRAM): ~1.35s per iteration
  • RTX 4070 (12GB VRAM): ~0.8s per iteration
  • CPU mode (no GPU): 30-90s per iteration
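Multiply per-iteration speed by step count to get wall-clock time. Assuming the default “Speed” preset’s 30 sampling steps (presets vary, so treat this as a rough estimate, not a benchmark):

```shell
# Estimate generation time: sampling steps x seconds per iteration.
# Assumes Fooocus's default "Speed" preset (30 steps).
est_seconds() {
    awk -v steps="$1" -v spi="$2" 'BEGIN { printf "%.1f\n", steps * spi }'
}

est_seconds 30 1.35   # RTX 3060 laptop
est_seconds 30 0.8    # RTX 4070
```

That puts the 3060 laptop around 40 seconds per image and the 4070 around 24 – before the one-time model-loading cost on the first generation.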

Upgrade from Previous Versions (v2.5.0+ Pitfall)

Auto-update via git pull works most of the time. When it doesn’t? no module named 'supervision' errors.

The fix from the official upgrade discussion:

  1. Open terminal in the Fooocus folder (type cmd in the address bar on Windows)
  2. Run: ..\python_embeded\python.exe -m pip install -r .\requirements_versions.txt (yes, the shipped folder really is spelled python_embeded)
  3. Restart run.bat

That fails? Download the latest 7z package. Manually move your models/, outputs/, and config.txt to the new install.

Skip Gradio warnings. Fooocus requires an older Gradio version – updating breaks compatibility.

Why Docker Might Be Better (Or Worse)

The zip package is simpler. Most people should use it.

Docker makes sense if you already run containers for other AI tools (ComfyUI, Automatic1111), want to share models across tools via bind mounts, or you’re on a headless Linux server.

But Docker adds complexity: volume paths, GPU passthrough, port mapping. And on Mac/Windows, bind mount performance is terrible – model loading can be 3-5x slower than the native install.

Turns out, the extra abstraction layer costs you. For a tool that’s supposed to be “3 clicks,” Docker defeats the purpose unless you’re already deep in the container ecosystem.

Uninstall / Cleanup

No registry entries. No hidden files. Delete the Fooocus folder. Done.

Models live in Fooocus/models/ – want to keep them for other Stable Diffusion tools? Move that folder elsewhere first.

Docker users: docker compose down stops the container. To purge models too: docker volume rm fooocus-data.

FAQ

Can I use Fooocus without an Nvidia GPU?

Yes, but it’s slow. CPU mode: 30-90 seconds per image instead of 1-3 seconds. AMD GPUs are experimentally supported on Linux (8GB VRAM minimum), not recommended on Windows.

Why does my first image take forever but later ones are fast?

Model loading. The first generation loads SDXL base + refiner into VRAM (~7GB). Subsequent images reuse the loaded models. Switch models or restart Fooocus? You pay the loading cost again. One debugging session on an HDD-based swap: ~20 seconds. Same setup on SSD: ~5 seconds. The difference compounds when you’re iterating on prompts.

Should I install this if I already use Automatic1111 or ComfyUI?

Depends. Fooocus is faster for throwaway images – no UI tweaking, just prompt and go. But it’s locked to SDXL. Won’t get Flux/SD3 support. Need ControlNet, LoRA stacking, or custom samplers? Stick with A1111/ComfyUI. Want “Midjourney but local and free”? Fooocus nails that niche. The “minimal interface” thing isn’t marketing – the entire UI is literally one text box and a generate button. Some people find that liberating. Others find it limiting. They’re not mutually exclusive – you can run both and share model files between them.

Next step: open Fooocus, type a prompt, and see what 3 clicks actually gets you. The models are already downloaded. The swap is configured. The hard part’s done.