ComfyUI Install Guide v0.19.3: Node-Based Image AI Setup

Install ComfyUI Desktop v0.8.33 (ComfyUI 0.19.3) for node-based image AI. Real commands, VRAM math, error fixes, and uninstall paths.

8 min read · Intermediate

By the end of this guide you’ll have ComfyUI Desktop running locally – node graph open in your browser, a checkpoint loaded, your first image rendering. We’re targeting Desktop v0.8.33, which ships with ComfyUI core 0.19.3 (per the official release page, as of early 2026). If you’ve installed any node-based image AI tool before, this is faster than you remember. If you haven’t – the trap isn’t the install itself. It’s the maintenance loop on first launch. We’ll get to that.

What you actually need before installing

Disk space is the real constraint, not GPU muscle. PyTorch alone is roughly 15 GB – the Desktop installer pulls it on first run, and if your C: drive is tight, the install fails without a clear error message. The official Windows install docs are upfront about this (as of early 2026). Sort out the drive before you touch the installer.
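Since a tight drive fails without a clear error, it's worth checking headroom before launching the installer. This is a pre-flight sketch for Linux/macOS, not part of the installer itself; `INSTALL_DIR` is a placeholder and the 15 GB threshold matches the PyTorch payload mentioned above.

```shell
# Pre-flight disk check (a sketch; INSTALL_DIR is a placeholder, defaults to $HOME).
REQUIRED_GB=15
# df -kP: POSIX output, 1 KiB blocks; column 4 is available space
AVAIL_KB=$(df -kP "${INSTALL_DIR:-$HOME}" | awk 'NR==2 {print $4}')
AVAIL_GB=$((AVAIL_KB / 1024 / 1024))
if [ "$AVAIL_GB" -lt "$REQUIRED_GB" ]; then
  echo "Only ${AVAIL_GB} GB free - the PyTorch pull will likely fail."
else
  echo "OK: ${AVAIL_GB} GB free."
fi
```

On Windows, the equivalent check is the drive's free-space figure in Explorer; the point is the same, do it before the wizard, not after it dies mid-download.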

| Component | Minimum | Recommended |
|---|---|---|
| OS | Windows 10, macOS (Apple Silicon), Linux | Windows 11 / macOS 14+ / Ubuntu 22.04 |
| GPU | NVIDIA 4 GB VRAM (discrete) | NVIDIA 12 GB+ (RTX 30/40/50 series) |
| RAM | 16 GB | 32 GB for video / large workflows |
| Disk | ~15 GB free (PyTorch alone) | SSD, 100 GB+ for a real model library |
| Python (manual install) | 3.12 | 3.13 |

The floor is actually lower than that table implies. Per the ComfyUI README (as of early 2026), smart memory management lets ComfyUI run on GPUs with as little as 1 GB VRAM, and there’s a --cpu flag if you have no GPU at all. Both work. Neither is fast. For Desktop on Windows, 4 GB VRAM is where it becomes practical.
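The VRAM numbers above fall out of simple arithmetic: fp16 weights cost 2 bytes per parameter. The parameter counts below are rough public figures, and real usage adds activations and sampler overhead on top, so treat the results as floors, not budgets.

```shell
# Back-of-envelope VRAM math: fp16 = 2 bytes per parameter.
# Parameter counts are approximate; real usage is higher than weights alone.
weights_gb() {
  # $1 = parameters in millions; 2 bytes each, /1000 lands in whole GB
  echo $(( $1 * 2 / 1000 ))
}
echo "SD 1.5 (~860M params): ~$(weights_gb 860) GB of weights"
echo "SDXL (~3500M params):  ~$(weights_gb 3500) GB of weights"
```

That overhead gap is why the 1 GB floor only works with aggressive memory management and offloading, and why 4 GB is where Desktop becomes practical rather than merely possible.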

Node graphs take a minute to click with. Think of each node as a single step in a recipe – one node loads the model, the next encodes your prompt, the next runs the sampler – and the wires between them are just the output of one step becoming the input of the next. Once that model is in your head, reading other people’s workflows stops being confusing. It’s worth sitting with a default workflow for five minutes before adding anything.
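The recipe analogy maps directly onto ComfyUI's API-format JSON: each key is a node, and each `["node_id", output_index]` pair is a wire. The sketch below is deliberately trimmed (a runnable KSampler also needs a negative prompt and a latent image, among other inputs), and the checkpoint filename is a placeholder; the node class names themselves are real ComfyUI node types.

```shell
# The three-step recipe as a (trimmed) API-format graph. Each ["id", index]
# value wires one node's output to another node's input.
cat > minimal_graph.json <<'EOF'
{
  "1": {"class_type": "CheckpointLoaderSimple",
        "inputs": {"ckpt_name": "sd15.safetensors"}},
  "2": {"class_type": "CLIPTextEncode",
        "inputs": {"clip": ["1", 1], "text": "a lighthouse at dusk"}},
  "3": {"class_type": "KSampler",
        "inputs": {"model": ["1", 0], "positive": ["2", 0],
                   "seed": 42, "steps": 20, "cfg": 7.0}}
}
EOF
grep -c '"class_type"' minimal_graph.json   # one class_type per node: 3
```

Reading someone else's workflow is just this structure at larger scale: find the loader, follow the wires forward.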

Picking the right install method

Three real paths exist:

  • Desktop app – installer with managed Python venv, auto-updates. Easiest. Locked to one machine.
  • Windows portable – zip file, drag and run, fully self-contained. Best if you want to move the folder around or update commits manually.
  • Manual git clone – full control, needed for Linux and AMD/Intel/Apple GPU work.

Two portable builds exist – most tutorials don’t mention this. One targets Nvidia 20-series and above (Python 3.13, CUDA 13.0). The other ships with CUDA 12.6 and Python 3.12 specifically for 10-series and older GPUs (per the README, as of early 2026). Grab the wrong one for a GTX 1080 and it’ll boot – but generation will be sluggish, no error, no warning. Check the release filename before downloading.
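The decision above can be sketched as a simple lookup. The build labels here are descriptive, not actual release filenames (match against the real names on the releases page), and the `nvidia-smi` detection line is commented out since it only works once a driver is installed.

```shell
# Sketch of the series-to-build decision. Labels are illustrative, not
# real release filenames - check the releases page for the actual names.
# GPU_NAME=$(nvidia-smi --query-gpu=name --format=csv,noheader)
pick_build() {
  case "$1" in
    *"GTX 10"*|*"GTX 9"*) echo "cuda12.6-py3.12 (legacy build)" ;;
    *"RTX"*)              echo "cuda13.0-py3.13 (standard build)" ;;
    *)                    echo "unknown - check the README" ;;
  esac
}
pick_build "GeForce GTX 1080"
pick_build "GeForce RTX 4070"
```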

Install ComfyUI Desktop (recommended path)

Go to comfy.org/download and grab the Windows or macOS installer. Run it. Three wizard questions matter:

  1. GPU type – pick NVIDIA. CPU mode exists for edge cases but generation times make it impractical for regular use.
  2. Install location – put it on an SSD. Model load times are noticeably faster on solid-state storage.
  3. Migrate existing install – if you have a portable ComfyUI already, the installer detects it and offers to import custom nodes. Models stay where they are.

The download phase is where installs die. The installer pulls Python, then PyTorch, then dependencies. Network drop mid-download? You get the maintenance page. That section’s coming up.

Install via git clone or comfy-cli (manual route)

# Clone
git clone https://github.com/comfyanonymous/ComfyUI.git
cd ComfyUI

# Create venv (Python 3.12 or 3.13)
python -m venv venv
source venv/bin/activate   # Linux/Mac
# venv\Scripts\activate    # Windows

# PyTorch - NVIDIA (cu124 is a common stable target for manual installs)
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu124

# AMD on Linux (ROCm - check pytorch.org for current version)
# pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/rocm6.2

# Dependencies
pip install -r requirements.txt

# Run
python main.py

Or skip all of that with the official CLI (as of early 2026):

pip install comfy-cli
comfy install
comfy launch

Python version choice: 3.13 is fully supported by core ComfyUI. Custom node dependencies are a different story – if you hit import errors on 3.13, switch to 3.12 and recreate the venv. Python 3.14 runs but some custom nodes break, and the free-threaded variant re-enables the GIL on certain deps, so it’s not a clean upgrade path. Default: 3.12 if you plan to install many community nodes; 3.13 if you stay close to core. (Per ComfyUI README, as of early 2026.)

First launch and verification

Desktop opens its own window. Manual installs print a localhost URL – usually http://127.0.0.1:8188. Drop a checkpoint into ComfyUI/models/checkpoints (a Stable Diffusion 1.5 or SDXL .safetensors file). Reload the browser tab, click Queue Prompt, and watch the nodes light up in sequence. If they complete without error, the install is healthy.

Colourful noise instead of a coherent image? The VAE didn’t load – double-check your checkpoint file isn’t corrupted or partially downloaded.

Already have an A1111 or Forge install? Don’t copy the models over. Rename extra_model_paths.yaml.example in the ComfyUI root to extra_model_paths.yaml and point it at your existing model folders. Per the ComfyUI README, the config file lets you set search paths for models outside the install directory – it avoids duplicating your existing model library entirely.
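As a sketch, the file looks like this. The `a111` section mirrors the structure of the shipped `.example` file; `base_path` below is a placeholder you'd swap for your own webui directory, and the subpaths are relative to it.

```shell
# Sketch of extra_model_paths.yaml pointing at an existing A1111 install.
# base_path is a placeholder - substitute your actual webui directory.
cat > extra_model_paths.yaml <<'EOF'
a111:
  base_path: /path/to/stable-diffusion-webui/
  checkpoints: models/Stable-diffusion
  vae: models/VAE
  loras: models/Lora
EOF
grep -c ':' extra_model_paths.yaml   # five keyed lines written
```

Restart ComfyUI after editing; the paths are read at startup, not live.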

The maintenance loop and other real errors

Most installs die here. The maintenance page opens automatically when Desktop detects a problem – network drop during the PyTorch pull, missing git, corrupted venv. It offers to reinstall all missing core dependencies. That works for most straightforward cases. When it doesn’t:

  • PyTorch download stalls forever: kill the installer, switch to a wired connection, restart. Restarting from scratch is typically required – the download doesn’t resume.
  • “Git not found”: install Git from git-scm.com first, then re-run the maintenance page.
  • 50-series Blackwell card not detected: the bundled CUDA build may lag new hardware. Update Nvidia drivers – the portable ships with PyTorch CUDA 13.0, and driver support for Blackwell isn’t always present on older driver versions.
  • Custom node throws on import: almost always a Python version mismatch. Switch to 3.12, recreate the venv, reinstall.
  • Existing Python in PATH conflicts: leftover system Python references can cause PyTorch to pick up the wrong interpreter. Purge them or use the portable build, which sandboxes its environment.
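For the last bullet, a quick diagnostic is to list every interpreter the shell can see. This is a rough sketch; more than one hit ahead of the managed venv's own interpreter is the usual culprit.

```shell
# Count the python executables visible on PATH. `type -a` prints one
# "<name> is <path>" line per hit; zero or one is healthy, more is suspect.
count_pythons() {
  type -a python3 python 2>/dev/null | grep -c ' is ' || true
}
N=$(count_pythons)
echo "python executables visible on PATH: $N"
```

On Windows the equivalent is `where.exe python` in a terminal; the portable build sidesteps the problem entirely by sandboxing its own interpreter.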

Updating and uninstalling

Updating Desktop is automatic – it prompts when a release lands, and you can also trigger it via Menu → Help → Check for Updates (as of early 2026). Portable users run update/update_comfyui.bat. Manual installs: git pull + pip install -r requirements.txt.

Release pace is fast. Per the ComfyUI README, the project targets Monday weekly releases – though that shifts when major model drops land – with three interconnected repos (core, Desktop/Electron, frontend) releasing major stable versions roughly every two weeks since v0.4.0. Don’t update mid-project.

The Windows uninstaller does a simple uninstall – models and custom nodes remain. Complete wipe? After running the uninstaller, manually delete these two folders the docs flag (as of early 2026, per the ComfyUI Wiki desktop guide):

  • C:\Users\<user>\AppData\Local\@comfyorg\comfyui-electron-updater
  • C:\Users\<user>\AppData\Local\Programs\@comfyorg\comfyui-electron

macOS: delete the app from Applications, then clear the matching ~/Library folders.

FAQ

Can I run ComfyUI without an NVIDIA GPU?

Yes. AMD works on Linux via ROCm, Apple Silicon works natively, and CPU mode exists. That said – generation that takes seconds on an RTX 4070 can take several minutes on CPU. It’s a fallback for testing, not daily use.

Should I use Desktop or Portable in 2026?

Desktop if you just want to make images and forget the plumbing exists. Auto-updates, managed venv, GUI installer – you never touch a terminal. Portable if you mess with custom nodes weekly, want multiple ComfyUI versions side by side, or need to copy your install to another machine. The functional ceiling is identical. What differs is how much maintenance posture you want. Most newcomers start with Desktop and switch to Portable once they hit a workflow that needs it – and that’s fine, because models transfer without any re-downloading.

Why does my install need 15 GB just for PyTorch?

PyTorch ships precompiled CUDA binaries inside the wheel – one multi-gigabyte payload per CUDA version. The wheel ComfyUI pulls includes everything it might call during inference. Network failures mid-download are the #1 install failure mode for exactly this reason. Wired connection beats Wi-Fi for first run.

Next step: while ComfyUI is running, install ComfyUI-Manager from inside the UI’s menu – it’s the gateway to every custom node you’ll want next, and the only way to avoid hand-installing each one via git clone.