
AnimeGAN AI Cartoon Filter: Install AnimeGANv3 (ONNX Guide)

Install the AnimeGAN AI cartoon filter via the AnimeGANv3 ONNX route. Real commands, real GitHub URLs, and the version traps nobody warns you about.

8 min read · Intermediate

So you searched for an AI cartoon filter and landed on AnimeGAN. Now you’re staring at three different GitHub repos, two TensorFlow versions, an EXE, an ONNX folder, and a Python port – and the question every newcomer asks is: which one do I actually install in 2026?

Short answer: not the one with the most stars. The original AnimeGAN repo is a TensorFlow 1.x museum piece, and AnimeGANv2 inherits the same problem. The ONNX route from AnimeGANv3 is what actually runs on a 2026 machine. Here’s how to deploy it.

Why the TensorFlow path is a trap

Every “how to install AnimeGAN” tutorial points you at pip install tensorflow-gpu==1.15.0. The official requirements doc specifies Ubuntu 18.04/20.04, an Nvidia GPU, Miniconda, then conda install cudatoolkit=10 plus cudnn 7.6.0, plus tensorflow-gpu 1.15.0 via pip.

Here’s the catch nobody mentions upfront: TF 1.8 only supports CUDA up to 9.x, and TF 1.15 + CUDA 10 won’t see an RTX 30/40 card. And if you try modern TensorFlow instead, you get the legendary error: ModuleNotFoundError: No module named 'tensorflow.contrib'. Per AnimeGANv2 issue #28, the maintainer’s response is that the project was built with TensorFlow 1.x.x – and that module was removed entirely in TensorFlow 2.x.x. The fix is to downgrade, except TF 1.15 wheels were never published past Python 3.7, so on a fresh 2026 system you can’t install it cleanly anyway.

Before you install anything: If you only want to cartoon-ify a few photos and don’t need a local deploy, the AnimeGAN.js browser demo runs in your tab with zero install. Use it as a sanity check – if the style doesn’t suit your use case, you’ve saved yourself an hour of setup.

What you actually want: AnimeGANv3 + ONNX Runtime

AnimeGANv3 is officially called DTGAN (double-tail GAN). Two output tails – one for coarse anime style, one that refines it – plus linearly adaptive denormalization (LADE) to prevent the color-block artifacts that plagued v1. The practical result: according to the project’s paper page, it can process a 1920×1080 frame on GPU in 115.50 ms as of the 2023 release.

The deploy story is much simpler than the TF path: pre-trained .onnx files, run through onnxruntime. No CUDA pinning, no TensorFlow version archaeology. Same code runs on CPU if you don’t have a GPU – though you’ll feel it, which is covered below.
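
Once onnxruntime is installed (the steps are in the next section), you can ask it directly whether it will use a GPU. A minimal sketch – the model path assumes you’ve already downloaded the Hayao file described below:

# Ask onnxruntime which execution providers this install can actually use.
import onnxruntime as ort

available = ort.get_available_providers()
print(available)   # e.g. ['CUDAExecutionProvider', 'CPUExecutionProvider'] with onnxruntime-gpu

# Prefer CUDA when present, otherwise fall back to plain CPU inference.
providers = [p for p in ("CUDAExecutionProvider", "CPUExecutionProvider") if p in available]
session = ort.InferenceSession("models/AnimeGANv3_Hayao_36.onnx", providers=providers)
print("running on:", session.get_providers()[0])

The CUDA provider only shows up if you installed the onnxruntime-gpu wheel; the plain onnxruntime package used in the install steps is CPU-only.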

System requirements for the AI cartoon filter

  • Python – minimum 3.8; recommended 3.10 or 3.11
  • RAM – 8 GB+ (CPU inference is RAM-hungry)
  • Disk – ~2 GB if you grab several style ONNX files
  • GPU – none required (CPU works); any CUDA GPU with onnxruntime-gpu recommended
  • OS – Linux / macOS / Windows (ONNX is platform-agnostic)

The Python range is the one spec worth pinning. Per the working community implementations, Python ≥3.8 and ≤3.11 is the safe zone – newer onnxruntime wheels exist for 3.12, but a few opencv builds still lag, so 3.10 is the safest pick as of mid-2025.
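
If you’re not sure what a given box is running, a two-line check against that range (pure standard library, nothing to install first):

# Confirm the interpreter sits in the 3.8-3.11 range discussed above.
import sys

assert (3, 8) <= sys.version_info[:2] <= (3, 11), f"outside the safe zone: {sys.version}"
print("Python", sys.version.split()[0], "is in the safe zone")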

The CPU-only honesty check

Planning to run this on a laptop without a GPU? The Python port’s own README is upfront about it: inference on CPU consumes a large amount of CPU and memory. Single images at high resolution will be slow. Video stylization? Plan in minutes per second of footage, not seconds. A cloud GPU rental for batch work is worth it.
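
If you want a number instead of a warning, time a few dummy frames once the packages and a model from the next section are in place. A rough sketch – the figure it prints is indicative only, and the dummy input simply matches whatever layout the model declares:

# Rough CPU latency check: stylize a random 512x512 frame a few times and
# report the average time per frame.
import time
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("models/AnimeGANv3_Hayao_36.onnx")
inp = session.get_inputs()[0]
shape = (1, 3, 512, 512) if inp.shape[1] == 3 else (1, 512, 512, 3)  # NCHW vs NHWC
x = np.random.uniform(-1.0, 1.0, shape).astype(np.float32)

session.run(None, {inp.name: x})                      # warm-up pass, not timed
runs = 5
t0 = time.perf_counter()
for _ in range(runs):
    session.run(None, {inp.name: x})
print(f"{(time.perf_counter() - t0) / runs * 1000:.0f} ms per 512x512 frame")

Multiply that per-frame time by the frame rate and length of your clip and the “minutes per second of footage” warning stops being abstract.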

Install AnimeGANv3 step-by-step

This is the path that actually works on a fresh machine in 2026. We’ll build a minimal Python script that loads an ONNX model and outputs a stylized image.

  1. Create a clean virtual environment.
  2. Install the four required pip packages.
  3. Download an ONNX model file from the official release.
  4. Run inference.
# 1. Create environment
python -m venv animegan-env
source animegan-env/bin/activate # Windows: animegan-env\Scripts\activate

# 2. Install dependencies
pip install "Pillow>=10" numpy opencv-contrib-python onnxruntime

# 3. Download a model (Hayao style)
mkdir models && cd models
curl -L -O https://github.com/TachibanaYoshino/AnimeGANv3/releases/download/v1.1.0/AnimeGANv3_Hayao_36.onnx
cd ..

The package list – Pillow≥10, numpy, opencv-contrib-python, onnxruntime – comes from the working community implementations. The model URL pattern is the official v1.1.0 release: swap the filename for whichever style you want.
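
If you’d rather script the downloads than repeat the curl line per style, the same URL pattern works from Python. Only Hayao_36 is listed because that’s the file used throughout this guide; any other filename you add must actually exist as an asset on the releases page, so check there first:

# Fetch style models from the v1.1.0 release using the URL pattern above.
import urllib.request
from pathlib import Path

BASE = "https://github.com/TachibanaYoshino/AnimeGANv3/releases/download/v1.1.0/"
styles = [
    "AnimeGANv3_Hayao_36.onnx",
    # add filenames from the style list below, if they exist as release assets
]

Path("models").mkdir(exist_ok=True)
for name in styles:
    dest = Path("models") / name
    if not dest.exists():
        print("downloading", name)
        urllib.request.urlretrieve(BASE + name, str(dest))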

Pick the right model number

This is the gotcha that wrecks first-time users. There isn’t one model – there’s a matrix of style + version pairs. Picking Hayao_36 when you wanted face-portrait stylization will produce mush, because Hayao_36 is trained on landscapes.

  • AnimeGANv3_Hayao_36.onnx – Studio Ghibli landscape look
  • AnimeGANv3_Shinkai_37.onnx – Makoto Shinkai vibrant skies
  • AnimeGANv3_PortraitSketch_25.onnx – face-only sketch
  • AnimeGANv3_JP_face_v1.0.onnx – Japanese anime face conversion
  • Plus Arcane, Disney, USA cartoon, Nordic myth, Pixar, and Ghibli-c1 variants (the last added August 2025)

The tiny Nordic myth model is 2.4 MB and hits up to 50 FPS on an iPhone 14 at 512×512 – useful context if you’re targeting mobile rather than desktop batch processing.

Here’s the honest question most tutorials skip: which style actually fits your content? Hayao reads as warm and painterly on nature shots but makes faces look flat. JP_face is the reverse – sharp on portraits, weird on landscapes. There’s no universal pick. Run your actual test image through two or three models before committing to a pipeline.

First-time configuration and verification

Save this as cartoonify.py. It’s a minimal ONNX inference loop that adapts to whatever input layout and size the model file declares.

import cv2, numpy as np, onnxruntime as ort

session = ort.InferenceSession("models/AnimeGANv3_Hayao_36.onnx")
inp = session.get_inputs()[0]
name, shape = inp.name, inp.shape
nchw = (shape[1] == 3)                       # channels-first if dim 1 is 3, else NHWC
H, W = (shape[2], shape[3]) if nchw else (shape[1], shape[2])

img = cv2.imread("input.jpg")
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
if isinstance(H, int) and isinstance(W, int):
    h, w = H, W                              # fixed-size model: use its declared size
else:
    h, w = (max(256, s - s % 32) for s in img.shape[:2])  # dynamic dims: multiple of 32
img = cv2.resize(img, (w, h))

x = img.astype(np.float32) / 127.5 - 1.0            # scale pixels to [-1, 1]
x = x.transpose(2, 0, 1)[None] if nchw else x[None] # add batch dimension

out = session.run(None, {name: x})[0][0]
out = out.transpose(1, 2, 0) if nchw else out
out = np.clip((out + 1.0) * 127.5, 0, 255).astype(np.uint8)
cv2.imwrite("output.jpg", cv2.cvtColor(out, cv2.COLOR_RGB2BGR))
print("Done.")

Drop any input.jpg next to the script and run python cartoonify.py. A working install produces an output.jpg with visibly softer textures and a shifted color palette. To verify versions: python -c "import onnxruntime; print(onnxruntime.__version__)".
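
Once the single-image script works, the style comparison suggested earlier is just a loop over model files. A sketch that reuses the same preprocessing – it assumes the .onnx files you list are already sitting in models/:

# Run one test photo through several styles and write one output per model.
import cv2, numpy as np, onnxruntime as ort

def stylize(model_path, img_rgb):
    session = ort.InferenceSession(model_path)
    inp = session.get_inputs()[0]
    nchw = (inp.shape[1] == 3)
    h, w = (max(256, s - s % 32) for s in img_rgb.shape[:2])
    x = cv2.resize(img_rgb, (w, h)).astype(np.float32) / 127.5 - 1.0
    x = x.transpose(2, 0, 1)[None] if nchw else x[None]
    out = session.run(None, {inp.name: x})[0][0]
    out = out.transpose(1, 2, 0) if nchw else out
    return np.clip((out + 1.0) * 127.5, 0, 255).astype(np.uint8)

img = cv2.cvtColor(cv2.imread("input.jpg"), cv2.COLOR_BGR2RGB)
for model in ["models/AnimeGANv3_Hayao_36.onnx", "models/AnimeGANv3_Shinkai_37.onnx"]:
    name = model.split("/")[-1].replace(".onnx", "")
    styled = stylize(model, img)
    cv2.imwrite(f"compare_{name}.jpg", cv2.cvtColor(styled, cv2.COLOR_RGB2BGR))
    print("wrote", f"compare_{name}.jpg")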

For video, the official one-liner from the AnimeGANv3 README is: python tools/video2anime.py -i inputs/vid/1.mp4 -o output/results -m deploy/AnimeGANv3_Hayao_36.onnx – run it after cloning the repo for its tools folder.
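
Under the hood, video stylization is the same per-frame loop plus a VideoWriter. If you’d rather not clone the repo, a rough DIY sketch – no audio handling, unlike the official tool:

# Per-frame video stylization: the same inference loop applied frame by frame.
import cv2, numpy as np, onnxruntime as ort

session = ort.InferenceSession("models/AnimeGANv3_Hayao_36.onnx")
inp = session.get_inputs()[0]
nchw = (inp.shape[1] == 3)

cap = cv2.VideoCapture("input.mp4")
fps = cap.get(cv2.CAP_PROP_FPS) or 24
writer = None

while True:
    ok, frame = cap.read()
    if not ok:
        break
    h, w = (max(256, s - s % 32) for s in frame.shape[:2])
    x = cv2.cvtColor(cv2.resize(frame, (w, h)), cv2.COLOR_BGR2RGB)
    x = x.astype(np.float32) / 127.5 - 1.0
    x = x.transpose(2, 0, 1)[None] if nchw else x[None]
    out = session.run(None, {inp.name: x})[0][0]
    out = out.transpose(1, 2, 0) if nchw else out
    out = np.clip((out + 1.0) * 127.5, 0, 255).astype(np.uint8)
    out = cv2.cvtColor(out, cv2.COLOR_RGB2BGR)
    if writer is None:
        writer = cv2.VideoWriter("output.mp4", cv2.VideoWriter_fourcc(*"mp4v"),
                                 fps, (out.shape[1], out.shape[0]))
    writer.write(out)

cap.release()
if writer:
    writer.release()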

Common errors and what they actually mean

The three errors that account for most of the support traffic on this project:

1. ModuleNotFoundError: No module named 'tensorflow.contrib' – you cloned an old AnimeGAN/AnimeGANv2 repo. Don’t bother fixing it. Switch to the ONNX route above.

2. InvalidGraph or shape mismatch on session.run – your input tensor doesn’t match the layout or size the ONNX file declares. The cartoonify.py script reads the declared input shape and adapts to it for exactly that reason; don’t hardcode 256×256 or a fixed channel order. A quick way to inspect what a model expects is shown after this list.

3. cuBLAS execution failures – this is a TensorFlow + new-GPU error from the legacy repo. Same fix: leave the TF path behind and use the ONNX runtime instead.
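
To see what a given .onnx file actually expects before debugging anything else, print its declared inputs:

# Print the declared input name, shape, and dtype of a model file.
import onnxruntime as ort

session = ort.InferenceSession("models/AnimeGANv3_Hayao_36.onnx")
for i in session.get_inputs():
    print(i.name, i.shape, i.type)   # dynamic dims show up as names or None instead of ints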

Upgrade and uninstall

There’s no “upgrade” in the traditional sense – AnimeGANv3 isn’t versioned as a package, just as model checkpoints. New style? Download its .onnx from the releases page and point your script at it. Full uninstall: deactivate && rm -rf animegan-env models.

One legal note before you ship anything: per the AnimeGANv3 README (as of mid-2025), the repo is available for non-commercial use only. Commercial use requires emailing the author for an authorization letter. Most YouTubers ignore this. Most also don’t get sued – but if you’re invoicing a client, get the letter first.

FAQ

Is AnimeGANv3 better than AnimeGANv2 for landscapes?

Depends what you mean by “better.” Per the benchmarks on the project’s paper page, v3 scores closer to the anime distribution overall. But run both on a landscape photo yourself and you may prefer v2’s result – it tends to keep the original scene more recognizable. v3 pushes harder toward the style. Neither answer is wrong; it’s a content call.

Can I run this without a GPU?

Yes. onnxruntime defaults to CPU. Expect slow processing and heavy RAM usage – the maintainer says so explicitly. For batch jobs, a cloud GPU hour is faster and cheaper than waiting.

What if I don’t want to install anything locally?

Two options. First: the official Hugging Face Spaces demo (Gradio interface) added September 2022. Second: AnimeGAN.js – runs in a browser tab, same underlying weights, no account needed. Neither handles batch processing or video. But if you just need one photo turned around before a deadline, skip the install entirely and use one of these. You can always set up local inference later when you actually need the throughput.

Next: grab one ONNX model from the releases page, run the short script above on a photo from your camera roll, and decide whether the style fits before you bother downloading the rest of the model zoo.