ComfyUI Custom Nodes for Anime Diffusion
Install, choose, and combine custom nodes to build faster, cleaner anime pipelines in ComfyUI. This hub covers must-have packs, recommended graphs, and fixes.
Updated: Nov 18, 2025
What are ComfyUI custom nodes?
Custom nodes are add-ons that plug into ComfyUI’s graph system to add missing functions (advanced samplers, IP-Adapter, ControlNet preprocessors, ESRGAN upscalers, AnimateDiff, utility math, batch tools). They let you:
- Build cleaner anime line art and cel shading pipelines
- Improve character consistency via LoRA/IP‑Adapter
- Add motion with AnimateDiff nodes
- Speed up or stabilize inference with advanced samplers and schedulers
Install and manage nodes
Recommended: use ComfyUI-Manager to browse, install, update, and pin versions.
Manual install overview:
- Navigate to your ComfyUI/custom_nodes directory
- Clone the node repo into custom_nodes
- Install Python requirements listed by the node pack
- Restart ComfyUI to load nodes
Example (generic):
cd ComfyUI/custom_nodes
git clone <repo-url>               # fetch the node pack
cd <repo-folder>
pip install -r requirements.txt    # install the pack's Python dependencies
Tip: keep a separate virtual environment and pin working commits before updating large graphs.
- Use ComfyUI-Manager for safe updates
- Restart ComfyUI after installing or updating nodes
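To pin a pack to a known-good commit before a risky update (folder and commit hash are placeholders):
cd ComfyUI/custom_nodes/<repo-folder>
git rev-parse HEAD                 # record the commit that currently works
git checkout <known-good-commit>   # roll back if an update breaks your graph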
Must-have node packs for anime workflows
These widely used packs cover most anime tasks. Search GitHub by name, or install them directly through ComfyUI-Manager.
- ComfyUI-Manager: one-click install/update of custom nodes
- WAS Node Suite for ComfyUI: utilities, filters, prompts, image ops
- ComfyUI-Impact-Pack: advanced samplers, conditioning, quality-of-life nodes
- comfyui_controlnet_aux: lineart, canny, depth, normal, and other preprocessors
- ComfyUI_IPAdapter_plus (or similar): image prompt guidance for character/style
- ComfyUI-AnimateDiff-Evolved: animation nodes for motion in anime scenes
- Real-ESRGAN nodes: upscaling, especially with anime-focused models (e.g., RealESRGAN_x4plus_anime_6B)
- Optional: LoRA/embedding helpers included in many packs for easy stacking
- Start with Manager, Impact, WAS, ControlNet AUX, IP-Adapter, AnimateDiff
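For example, ComfyUI-Manager itself installs with a plain clone (URL from its GitHub page); it can then handle the rest of the list:
cd ComfyUI/custom_nodes
git clone https://github.com/ltdrdata/ComfyUI-Manager
# restart ComfyUI; the Manager menu appears in the UI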
Starter: clean anime text-to-image graph
Goal: crisp lines, flat shading, minimal artifacts.
Pipeline outline:
- Load Checkpoint (anime-ready model) → CLIP Text Encode (positive/negative)
- LoRA Loader (optional, 0.5–0.8 weight for characters)
- KSampler (Euler a or DPM++ 2M Karras; 20–28 steps; CFG 4.5–7)
- VAE Decode → Save Image
High-res pass (optional):
- Upscale (Real-ESRGAN anime model, 1.5–2x)
- KSampler refine (lower denoise 0.15–0.35) → VAE Decode → Save Image
Prompting tips:
- Add tags like “clean lineart, flat shading, cel shaded, anime style”
- Use negative prompts for “blur, noise, artifacts, extra fingers, text, watermark”
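Once the graph produces good stills, you can re-run it headless through ComfyUI's HTTP API. A minimal sketch, assuming ComfyUI is serving on its default port 8188, jq is installed, and the graph was exported in API format (enable the dev mode option in settings if your build hides it):
jq -n --slurpfile p workflow_api.json '{prompt: $p[0]}' > payload.json   # wrap the exported graph
curl -s -X POST http://127.0.0.1:8188/prompt -H 'Content-Type: application/json' -d @payload.json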
Lineart-controlled anime with ControlNet
ControlNet helps lock pose/composition based on edges.
- Preprocess: Load Image → ControlNet AUX preprocessor (Lineart or Canny)
- Main path: CLIP Text Encode → KSampler → VAE Decode
- Apply ControlNet: run the positive conditioning and the preprocessed image through an Apply ControlNet node feeding the KSampler; start strength 0.6–0.9
Tips:
- Use Lineart for clean manga/anime outlines; Canny for general edges
- Lower the ControlNet strength for softer adherence
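If the lineart model is missing, it can be fetched from the ControlNet v1.1 release on Hugging Face; the destination path assumes a stock install:
cd ComfyUI/models/controlnet
wget https://huggingface.co/lllyasviel/ControlNet-v1-1/resolve/main/control_v11p_sd15_lineart.pth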
Character consistency via IP-Adapter and LoRA
Combine a character LoRA with IP-Adapter for stronger identity lock:
- IP-Adapter: plug an image reference into the conditioning path; weight 0.5–0.9
- LoRA Loader: start at 0.6–0.8; raise if features drift
- Use seed locking and fixed sampler to stabilize features across shots
For repeating outfits/hair color, add those tokens to the positive prompt and keep CFG ≤ 7.
- Use a clean, front-facing reference for IP-Adapter
- Lock seed for multi-shot consistency
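A minimal fetch sketch for the SD1.5 IP-Adapter weights; the target folder and exact filename vary by node pack, so treat both as assumptions and check the pack's README:
cd ComfyUI/models/ipadapter   # assumed folder; some packs register their own
wget https://huggingface.co/h94/IP-Adapter/resolve/main/models/ip-adapter_sd15.safetensors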
Image-to-image cleanup and upscaling
For manga panels or low-res anime frames:
- Load Image → VAE Encode (or use img2img node) → KSampler
- Denoise strength 0.25–0.5 for cleanup; lower for preserving line art
- Upscale with Real-ESRGAN anime model (1.5–4x) before a light refine pass
Artifacts fix:
- Use WAS/Impact image filters (sharpen lite, mild denoise) before the final save
- Prefer anime-optimized ESRGAN models for line preservation
- Keep denoise low to avoid losing inking
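To sanity-check an upscaler outside the graph, the portable realesrgan-ncnn-vulkan build runs the anime model directly (assumes the release binary is downloaded; flags per its README):
./realesrgan-ncnn-vulkan -i panel_in.png -o panel_out.png -n realesrgan-x4plus-anime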
AnimateDiff: stylized motion for anime loops
Add motion without full video models:
- Load AnimateDiff nodes → connect motion module to KSampler chain
- Use short sequences (8–24 frames) and a fixed seed so loops stay coherent
- Lower CFG (3.5–6) and denoise (0.2–0.35) to reduce flicker
For consistent characters: keep the same LoRA/IP-Adapter refs across frames.
- Smaller frame counts reduce flicker and VRAM spikes
- Refine key frames with a light img2img pass
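Saved frames can be assembled into a loop with ffmpeg; the %05d pattern assumes ComfyUI's default Save Image numbering, so adjust it to your filename prefix:
ffmpeg -framerate 12 -start_number 1 -i ComfyUI_%05d_.png -c:v libx264 -pix_fmt yuv420p loop.mp4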
Performance and VRAM tuning
Settings that help 6–12GB GPUs:
- Use fp16 models/VAEs when available
- Reduce resolution (e.g., 768→640) and upscale later
- Batch size 1; keep only 1–2 ControlNets active at once
- Prefer fast sampler/scheduler pairs like DPM++ 2M with the Karras schedule for a speed/quality balance
- Enable memory-efficient attention (if supported by your build)
- Cache models and avoid frequent checkpoint swaps mid-session
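Low-VRAM behavior can also be set at launch. Flag names vary between ComfyUI builds, so confirm against --help before relying on them:
python main.py --help                   # list the flags your build supports
python main.py --lowvram --force-fp16   # common low-VRAM combination on recent builds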
Troubleshooting common errors
Missing node/class: the pack didn’t load. Reinstall requirements and restart.
Torch/CUDA mismatch: install a Torch build matching your CUDA runtime; verify with a simple base graph and the version-check snippet after this list.
API/version breaks: pin node pack commits; check release notes after ComfyUI updates.
Model path errors: confirm model folder mapping in node settings; avoid spaces in paths.
ControlNet/Adapter no effect: raise guidance weight slowly; verify correct connection into the KSampler conditioning input.
- Pin versions before major updates
- Test with a minimal base graph to isolate issues
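A quick environment check, plus an example reinstall for a CUDA 12.1 runtime (swap cu121 for your CUDA version):
python -c "import torch; print(torch.__version__, torch.version.cuda, torch.cuda.is_available())"
pip install torch --index-url https://download.pytorch.org/whl/cu121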
Versioning, licensing, and safe updates
Best practices:
- Keep a staging environment to test new nodes
- Pin working commits for production graphs
- Read each repository’s license; confirm usage for commercial work
- Export and back up your graphs and custom settings regularly
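A sketch for recording every pack's current commit so a production graph can be restored exactly (assumes a stock folder layout; writes plain-text manifests into the ComfyUI root):
cd ComfyUI/custom_nodes
for d in */; do
  echo "$d $(git -C "$d" rev-parse HEAD 2>/dev/null)"
done > ../node-pins.txt
pip freeze > ../python-lock.txt   # snapshot Python dependencies too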
Graph links
Neighboring nodes this topic references.
Anime Diffusion
Core models and prompting tips to pair with ComfyUI nodes.
ComfyUI Workflows
End-to-end graph examples for text-to-image and img2img.
ControlNet Lineart
Lock anime outlines using edge and line processors.
IP-Adapter
Reference-based style and identity guidance for characters.
AnimateDiff
Add motion to anime scenes with ComfyUI animation nodes.
Topic summary
Condensed context generated from the KG.
ComfyUI custom nodes extend the base graph editor with extra samplers, upscalers, conditioning tools, ControlNet helpers, and animation/multi-ref features. For anime diffusion, they unlock sharper line art, consistent characters, and stable motion with lean VRAM.