Easy Local AI Face Swapping with Z-Image Turbo in ComfyUI

Jan 2026 · 9 min read · Face Swap · Z-Image Turbo · ComfyUI · SAM 3 · LoRA

This ComfyUI workflow — originally shared by Reddit user Retro Gaza Spurs — uses the Z-Image Turbo model combined with your custom character LoRA and the SAM 3 segmentation model to seamlessly place your character's face onto any target image. It's a clean, consistent face-swapping approach that runs on as little as 8 GB VRAM.

How It Works

  ✂️ SAM 3 Segmentation: automatically isolates the face and hair region in the target image with precision masking.
  🎭 Character LoRA: your trained Z-Image Turbo character LoRA generates the new face in the masked area.
  📝 Joy Caption: auto-generates detailed prompts from the target image for better generation accuracy.
  💾 8 GB VRAM minimum: the BF-16 Z-Image Turbo model plus the FP8 weight dtype setting keeps VRAM usage manageable.

You need a character LoRA. This workflow requires a trained Z-Image Turbo character LoRA. Find pre-made ones on CivitAI, or train your own — check the guide on training Z-Image Turbo LoRAs with AI Toolkit.

Required Files

All files come from the Comfy-Org HuggingFace page (link in video description). Navigate to Files and Versions → split_files:

| File | Location on HuggingFace | ComfyUI Destination |
| --- | --- | --- |
| Z-Image Turbo model (BF-16 or NVFP4) | split_files → diffusion_models | models/diffusion_models/ |
| Qwen3-4B GGUF CLIP model | split_files → text_encoders | models/clip/ |
| Z-Image Turbo VAE | split_files → VAE | models/vae/ |
| Your character LoRA (.safetensors) | CivitAI or self-trained | models/loras/ |

CLIP model note: use the Qwen3-4B GGUF version, not the FP8 version. The GGUF model is linked in the written guide in the video description.
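The table above maps each download to a folder inside the ComfyUI tree. As a quick sketch, the expected layout can be created and checked with a few lines of Python — note that the root path and all filenames here are illustrative placeholders, not the actual model filenames:

```python
from pathlib import Path

# Standard ComfyUI model subfolders; the root and filenames below are
# placeholders -- substitute your install path and the real downloads.
root = Path("ComfyUI_windows_portable/ComfyUI")
placements = {
    "models/diffusion_models": "z_image_turbo_bf16.safetensors",  # diffusion model
    "models/clip": "qwen3-4b.gguf",                               # text encoder
    "models/vae": "z_image_turbo_vae.safetensors",                # VAE
    "models/loras": "my_character_lora.safetensors",              # character LoRA
}
for folder, filename in placements.items():
    target = root / folder
    target.mkdir(parents=True, exist_ok=True)  # create the folder if missing
    (target / filename).touch()                # stand-in for the real file
    print(target / filename)
```

If a model doesn't show up in a ComfyUI dropdown after placement, the usual culprit is a file sitting one level too high or too low in this tree.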

Manual Setup — ComfyUI Installation

  1. Download the ComfyUI Portable ZIP from the ComfyUI releases page. Extract it with 7-Zip.
  2. Navigate into the ComfyUI/custom_nodes folder, click the File Explorer address bar, type cmd, and press Enter to open a command prompt there.
  3. Clone the ComfyUI Manager:
    git clone https://github.com/ltdrdata/ComfyUI-Manager
  4. Navigate back to the main ComfyUI portable directory (the one containing the embedded Python folder) and run the dependency install command from the written guide in the description.
  5. Place your downloaded model files in the correct ComfyUI folders as shown in the table above.
One-click installer available on Patreon — includes the low-VRAM version of this workflow with all downloads handled automatically.

Loading the Workflow

  1. Launch ComfyUI. Download the workflow JSON file (link in video description) and drag it into the ComfyUI interface.
  2. Red nodes will appear — open Manager → Install Missing Nodes and install each one, then restart ComfyUI.
  3. After restart, verify the workflow has no red nodes.

Configuring the Workflow

The workflow is split into two sections:

Top Section — Model Loaders and SAM 3 Configuration

Bottom Section — Image Input and Generation

  1. Upload your target image — the image whose face you want to replace.
  2. In the Model Loader node, select your Z-Image Turbo diffusion model. Set weight dtype to FP8 to reduce VRAM usage.
  3. In the LoRA Loader, select your character LoRA.
  4. Review the Joy Caption node settings — toggle the true/false options for lighting, camera angles, and watermarks as needed. The Joy Caption model will auto-download on first run (use the 4-bit quantized version to save ~11 GB vs. the full precision model).
  5. After Joy Caption generates a prompt, add your character's trigger word (and any missing details) in the "add important extra info here" node.
Joy Caption model size: The standard Joy Caption model is ~15 GB. Use the 4-bit quantized version to save significant storage and VRAM. The link is in the written guide.
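The FP8 weight dtype mentioned in step 2 saves VRAM because it stores one byte per weight instead of BF16's two, roughly halving the model's weight footprint. A back-of-envelope sketch — the 6-billion parameter count is an illustrative assumption, not a confirmed figure for Z-Image Turbo:

```python
def weight_footprint_gib(num_params: float, bytes_per_weight: float) -> float:
    """Approximate weight-storage footprint in GiB (ignores activations)."""
    return num_params * bytes_per_weight / (1024 ** 3)

params = 6e9  # assumed parameter count, for illustration only
bf16_gib = weight_footprint_gib(params, 2.0)  # BF16: 2 bytes per weight
fp8_gib = weight_footprint_gib(params, 1.0)   # FP8: 1 byte per weight
print(f"BF16 ~= {bf16_gib:.1f} GiB, FP8 ~= {fp8_gib:.1f} GiB")
```

Under these assumptions the BF16 weights alone would already exceed an 8 GB card, which is why the FP8 setting matters for the low-VRAM path this workflow targets.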

Running the Generation

  1. Click Run. The first run downloads SAM 3 and Joy Caption automatically — this takes longer than subsequent runs.
  2. Watch the preview nodes during generation — they show the segmentation mask in real time, so you can see exactly which areas are being processed.
  3. Subsequent runs typically take 30 seconds to a few minutes depending on your hardware.

Tips for Best Results

📦 Want to skip the setup?

The Local Lab offers pre-configured AI installer packages so you can be up and running in minutes, not hours.

Get the Installer →