
Hunyuan 3D 2.1 ComfyUI Setup Guide — Next-Gen 3D AI Models

Apr 2025 · 10 min read · 3D AI · ComfyUI · Hunyuan · RunPod

What's New in Hunyuan 3D 2.1

The Tencent Hunyuan team has been pushing 3D AI generation forward at a serious pace. Version 2.1 brings several headline improvements over the original 2.0 release:

🔓 Fully Open Source: Model weights, training code, and architecture are all publicly available — no restrictions on use or modification.
🎨 PBR Texturing: Physically Based Rendering replaces the old RGB texture system, producing far more realistic materials and lighting responses.
🔷 10x Geometric Detail: Mesh resolution is dramatically increased — finer features, more accurate shapes, and better edge definition.
Two-Stage Pipeline: Shape generation and texture painting are now separate stages, making it easier to iterate on geometry before committing to textures.
⚠️ VRAM requirements split by stage: Mesh generation runs well on 6GB+ VRAM (tested on an RTX 4050 6GB laptop). Texture/paint generation requires 20GB+ VRAM and has additional CUDA dependency requirements. The local setup below covers mesh generation; RunPod is recommended for full texturing.

Option A: Local Setup (Mesh Generation)

This covers the manual installation path for Windows users. A one-click installer for mesh generation is also available on The Local Lab Patreon.

Prerequisites

1. Install the Custom Nodes

Open a terminal and navigate into your ComfyUI custom_nodes folder. Clone the Hunyuan 3D 2.1 custom node and any dependencies. Links are in the Resources section below.

If you're using the Windows portable version, navigate back to the root ComfyUI directory and run the dependency install commands shown in the video — these install all requirements for both the manager and the Hunyuan custom nodes into the embedded Python environment.
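As a quick sanity check after installing the dependencies, you can ask the interpreter ComfyUI actually runs with whether the packages it needs resolve. The package names below are illustrative assumptions, not taken from the node's actual requirements.txt — substitute the real ones:

```python
import importlib.util

# Report whether each named package is importable in this environment.
# Run this with the same interpreter ComfyUI uses -- on Windows portable,
# that's python_embeded\python.exe. Package names here are examples only.
def check_deps(package_names):
    """Return a dict mapping package name -> whether it can be imported."""
    return {name: importlib.util.find_spec(name) is not None
            for name in package_names}

for name, ok in check_deps(["torch", "trimesh", "json"]).items():
    print(f"{name}: {'OK' if ok else 'MISSING'}")
```

Any package reported MISSING in the embedded environment (even if it imports fine in your system Python) means the install commands targeted the wrong interpreter.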

2. Download the Models from Hugging Face

  1. Shape Checkpoint — go to the Tencent Hunyuan 3D 2.1 page on Hugging Face → Files and Versions → hunyuan3d-dit-v2-1/ folder. Download the FP16 shape checkpoint model.
  2. VAE Model — from the main Hunyuan 3D 2.1 directory on Hugging Face, go into the vae/ folder and download the VAE model file.
  3. Place the Files — drag the shape model into ComfyUI/models/diffusion_models/ and the VAE model into ComfyUI/models/vae/.
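To confirm the files landed where the loader nodes look for them, a few lines of Python can check both paths at once. The filenames below are placeholders — use the names of the files you actually downloaded:

```python
from pathlib import Path

# Check that the shape checkpoint and VAE sit in the folders ComfyUI
# scans. Filenames are placeholders, not the repo's actual file names.
def verify_placement(comfy_root, shape_name, vae_name):
    root = Path(comfy_root)
    return {
        "shape": (root / "models" / "diffusion_models" / shape_name).is_file(),
        "vae": (root / "models" / "vae" / vae_name).is_file(),
    }

print(verify_placement("ComfyUI",
                       "hunyuan3d-dit-v2-1-fp16.safetensors",  # placeholder
                       "hunyuan3d-vae-v2-1.safetensors"))      # placeholder
```

If either entry comes back False, re-check the folder names — a common slip is dropping the shape model into `checkpoints/` instead of `diffusion_models/`.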

3. Load the Workflow and Generate

  1. Download and Load the Workflow — download the workflow JSON (link in Resources below) and load it into ComfyUI. If any nodes glow red (missing), open ComfyUI Manager → Install Missing Nodes, install them, then restart.
  2. Verify Model Selection — in the model loader nodes, confirm the shape checkpoint and VAE are selected correctly in the dropdowns. The texturing section can be bypassed if you're running mesh-only locally.
  3. Upload Your Image and Generate — use the Load Image node to upload your reference image. For best results: single object or character, clearly defined features, plain background. Hit Queue to generate your 3D mesh.
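If nodes still load red after using Install Missing Nodes, it can help to list exactly which node types the workflow references and compare that against what ComfyUI Manager shows as installed. This sketch assumes the standard UI-format workflow export, which stores a top-level `nodes` array with a `type` field per node:

```python
import json

# List the distinct node types a ComfyUI workflow JSON references.
# Assumes the UI-format export ({"nodes": [{"type": ...}, ...]}),
# not the API-format export, which uses a different layout.
def node_types(workflow_path):
    with open(workflow_path) as f:
        wf = json.load(f)
    return sorted({node["type"] for node in wf.get("nodes", [])})
```

Running `node_types("hunyuan3d_workflow.json")` (filename hypothetical) prints every node class the workflow needs, so you can spot the one custom pack that didn't install.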
💡 Input image tips: Image quality directly dictates mesh quality. Use high-resolution images with good lighting and minimal background clutter. A straight-on or 3/4 angle works better than extreme profiles. Remove backgrounds with an AI tool beforehand for cleaner mesh generation.
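The framing advice above can be roughed in programmatically. This Pillow sketch pads a reference image to a square on a plain white background — a cheap stand-in for proper AI background removal, which gives cleaner meshes:

```python
from PIL import Image

# Pad an image to a square canvas on a plain white background, centering
# the subject. A crude approximation of "single object, plain background";
# a dedicated background-removal tool will do better.
def pad_to_square(img, background=(255, 255, 255)):
    w, h = img.size
    side = max(w, h)
    canvas = Image.new("RGB", (side, side), background)
    canvas.paste(img.convert("RGB"), ((side - w) // 2, (side - h) // 2))
    return canvas
```

For example, a 640x480 photo comes back as a 640x640 image with white bands above and below the original content.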

Option B: RunPod (Full Pipeline with Texturing)

For the complete two-stage workflow including PBR texturing — or if you're on an AMD card or have no dedicated GPU — the RunPod template handles everything including the 20GB+ VRAM texturing stage.

  1. Open the RunPod Template — click the RunPod template link from the Resources section below. Create a RunPod account if needed.
  2. Select a GPU — choose a GPU with at least 24GB VRAM — an RTX 4090 gives the best performance for the price. Adjust container storage if needed (100GB default is sufficient).
  3. Deploy and Wait for Setup — click Deploy. The pod runs installation scripts automatically — this takes 10–15 minutes. Watch the logs; when storage hits ~31% capacity, installation is complete.
  4. Connect and Open ComfyUI — click Connect → JupyterLab for file access. Then go back to the Connect menu and click ComfyUI to open the interface. Load the workflow file and run the full shape + texture pipeline.
💾 Don't forget to download your outputs: RunPod storage is temporary — when the pod stops, files are lost. Download your generated 3D assets via JupyterLab before stopping the pod. RunPod charges only while the pod is running, so stop it when you're done to avoid unnecessary costs.
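Before stopping the pod, it's convenient to bundle everything in ComfyUI's output folder into a single zip you can grab in one click from the JupyterLab file browser. The paths below are assumptions — adjust them to your pod's layout:

```python
import shutil
from pathlib import Path

# Zip the ComfyUI output folder so all generated meshes and textures can
# be downloaded in one go via JupyterLab. Paths assume a typical pod
# layout; change them to match yours.
def archive_outputs(output_dir="ComfyUI/output", archive_name="hunyuan_outputs"):
    if not Path(output_dir).is_dir():
        raise FileNotFoundError(f"no such output folder: {output_dir}")
    return shutil.make_archive(archive_name, "zip", output_dir)
```

Run it in a JupyterLab terminal or notebook cell; it returns the path of the created `.zip`, which then appears in the file browser for download.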

📦 Want to skip the setup?

The Local Lab offers pre-configured AI installer packages so you can get running in minutes, not hours.

Get the Installer →