New HI-Dream Text to Image GGUF ComfyUI - Low VRAM Workflow and Installer


New HiDream GGUF Quantized Models Available!

 

Quantized GGUF models are now available for the cutting-edge HiDream image generation model (SOTA quality under an MIT license). As always, for my supporters, I created a Windows installer for ComfyUI with pre-configured workflows (text-to-image and image-to-image) including LoRA support, upscaling nodes, and Florence2 integration. But I've also included the default workflow I found on the Calcuis HF repo below for everyone.

 

Performance: Generated 768x768 images on my RTX 4050 (6GB VRAM) in ~2 minutes each using the Q4_K_S GGUF model.

 

Model Access:
Installer defaults to Q4_K_S (smaller footprint)
Larger GGUF versions:
City96 HF: https://huggingface.co/city96/HiDream-I1-Full-gguf/tree/main
Calcuis HF: https://huggingface.co/calcuis/hidream-gguf/tree/main
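If you'd rather grab a model manually, something along these lines works from a shell. The exact filename here is my assumption, so verify it against the city96 tree listing above; GGUF UNet files go in ComfyUI's models/unet folder:

```shell
# Adjust COMFY_DIR to your ComfyUI install location (assumption: ~/ComfyUI)
COMFY_DIR="${COMFY_DIR:-$HOME/ComfyUI}"
MODEL_DIR="$COMFY_DIR/models/unet"
mkdir -p "$MODEL_DIR"

# Q4_K_S quant from city96's repo -- the exact filename is an assumption,
# check the repo's file listing before downloading.
URL="https://huggingface.co/city96/HiDream-I1-Full-gguf/resolve/main/hidream-i1-full-Q4_K_S.gguf"
wget -c -P "$MODEL_DIR" "$URL" || echo "download failed -- check the filename in the repo"
```

Swap the URL for any of the larger quants in either repo if you have the VRAM headroom.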

 

Required Components:
Text Encoders: https://huggingface.co/Comfy-Org/HiDream-I1_ComfyUI/tree/main/split_files/text_encoders
VAE: https://huggingface.co/HiDream-ai/HiDream-I1-Dev/blob/main/vae/diffusion_pytorch_model.safetensors (Flux VAE also works)
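ComfyUI looks for these files in its standard model folders. A placement sketch (folder names assume a recent ComfyUI layout; older builds used models/clip for text encoders):

```shell
# Assumption: ComfyUI lives at ~/ComfyUI -- adjust to your install.
COMFY_DIR="${COMFY_DIR:-$HOME/ComfyUI}"
# Text encoder files from the Comfy-Org repo go here
mkdir -p "$COMFY_DIR/models/text_encoders"
# The VAE (HiDream's own, or the Flux VAE as noted above) goes here
mkdir -p "$COMFY_DIR/models/vae"
# Then move your downloaded .safetensors files into those two folders.
```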

City96 ComfyUI GGUF custom node - https://github.com/city96/ComfyUI-GGUF.git
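The GGUF loader node installs the usual way for ComfyUI custom nodes: clone it into custom_nodes and install its Python requirements into the environment ComfyUI runs with. A sketch (the COMFY_DIR path is an assumption):

```shell
COMFY_DIR="${COMFY_DIR:-$HOME/ComfyUI}"   # assumption: default install path
mkdir -p "$COMFY_DIR/custom_nodes"        # ensure the folder exists
cd "$COMFY_DIR/custom_nodes" || exit 1
# Clone the node repo linked above
git clone https://github.com/city96/ComfyUI-GGUF.git || echo "clone skipped (offline or already present?)"
# Install its dependencies into ComfyUI's Python environment
pip install -r ComfyUI-GGUF/requirements.txt || echo "pip install failed -- check your ComfyUI venv"
# Restart ComfyUI afterwards so the new GGUF loader nodes appear.
```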

 

HiDream GitHub Repository: https://github.com/HiDream-ai/HiDream-I1

 

Default HiDream Workflow - https://huggingface.co/calcuis/hidream-gguf/blob/main/workflow-hidream.json

 

Get Started:
Patreon/YouTube members: One-click installer + tutorials → https://www.patreon.com/posts/hidream-gguf-to-126889644
Community support: Join Discord → https://discord.gg/5hmB4N4JFc

  • Buy On Patreon

    While I improve the store, you can purchase these items or sign up for a membership on Patreon: https://www.patreon.com/TheLocalLab

Price: $3.00