What if you could describe an image in plain English inside your AI chat window and have it generated right there — without switching apps, without uploading anything to the cloud, and without touching a complicated node graph? That's exactly what happens when you connect Open-WebUI to ComfyUI.
These two tools do very different things, but together they create a powerful local AI workspace where text generation and image generation live in the same interface. This guide walks you through the setup and shows you how to get the most out of the combination.
Understanding the Two Tools
Open-WebUI
A self-hosted web interface for interacting with local LLMs (via Ollama, LM Studio, or OpenAI-compatible APIs). Think of it as your own private ChatGPT — complete control over your data, running entirely on your hardware. It also supports image generation backends, which is the key to this integration.
ComfyUI
A node-based interface for building and running AI image generation workflows. Highly flexible — you can run Flux, SDXL, and dozens of other models with full control over every parameter. It exposes an API that other tools (like Open-WebUI) can call to trigger image generation programmatically.
What You'll Need Before Starting
- Open-WebUI installed and running (via Docker or pip — github.com/open-webui/open-webui)
- ComfyUI installed with at least one working image generation workflow (github.com/comfyanonymous/ComfyUI)
- A Flux model loaded in ComfyUI — Flux Dev gives the best quality/resource balance
- Both services running at the same time on your local machine
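If you're starting from scratch, here's a rough sketch of launching both services. The ComfyUI path (`~/ComfyUI`) is an assumption based on a typical source install; the Docker command follows the pattern from the Open-WebUI README — adjust ports and volume names to your setup.

```shell
# Start ComfyUI from a source checkout (assumed location: ~/ComfyUI).
# It listens on http://127.0.0.1:8188 by default.
cd ~/ComfyUI
python main.py

# In a second terminal, start Open-WebUI via Docker.
# The web UI will then be reachable at http://localhost:3000.
docker run -d -p 3000:8080 \
  -v open-webui:/app/backend/data \
  --name open-webui \
  ghcr.io/open-webui/open-webui:main
```

With both processes up, everything below happens in the Open-WebUI browser tab.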
Why Flux for This Workflow?
You can use any model that ComfyUI supports — SDXL, SD 1.5, Playground, etc. But Flux Dev is the recommended choice for this workflow for a few reasons:
- Superior prompt following: Flux understands complex, detailed prompts far better than older diffusion models. When you're typing prompts conversationally in a chat window, this matters a lot.
- Consistent quality: Flux produces high-quality results with less prompt engineering — you don't need to memorize negative prompts or quality tokens.
- Resource efficiency: Flux Dev runs well on 8–12GB VRAM cards, making it accessible without requiring a top-tier GPU.
Connecting Open-WebUI to ComfyUI
Launch ComfyUI normally. By default it runs on http://127.0.0.1:8188. The API is enabled automatically — you don't need to pass any extra flags. Confirm it's working by visiting that address in your browser.
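You can also verify the API (not just the web page) is responding from a terminal. ComfyUI ships a small JSON endpoint, `/system_stats`, that's handy as a health check:

```shell
# If the API is up, this prints a JSON blob with OS, Python, and GPU/VRAM info.
curl -s http://127.0.0.1:8188/system_stats
```

Any valid JSON response here means Open-WebUI will be able to reach the same endpoint.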
In Open-WebUI, click your profile icon → Admin Panel → Settings → Images. This is where you configure the image generation backend.
From the Image Generation Engine dropdown, select ComfyUI. Enter the base URL: http://127.0.0.1:8188. Click the refresh icon next to the workflow selector — Open-WebUI will pull available workflows from your ComfyUI instance.
Choose your Flux Dev workflow from the dropdown. If you don't have one set up yet, ComfyUI has a default text-to-image workflow you can use as a starting point. Save your settings.
Back in the Open-WebUI chat window, click the image icon (🖼) in the toolbar to enable image generation mode. Now type an image description and send — Open-WebUI passes the prompt to ComfyUI and displays the result inline in the chat.
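Under the hood, Open-WebUI is doing roughly this: injecting your chat text into the workflow's prompt node and posting the result to ComfyUI's `/prompt` endpoint. The sketch below is illustrative, not Open-WebUI's actual code — the node ID (`"6"`) and the single-node stand-in workflow are assumptions; a real exported workflow has many more nodes.

```python
import json
import urllib.request

COMFYUI_URL = "http://127.0.0.1:8188"  # default ComfyUI address


def inject_prompt(workflow: dict, text: str) -> dict:
    """Return a copy of an API-format workflow with `text` written into
    every CLIPTextEncode node's 'text' input (the prompt node in a
    minimal text-to-image workflow)."""
    wf = json.loads(json.dumps(workflow))  # cheap deep copy
    for node in wf.values():
        if node.get("class_type") == "CLIPTextEncode":
            node["inputs"]["text"] = text
    return wf


def queue_prompt(workflow: dict) -> bytes:
    """POST the workflow to ComfyUI's /prompt endpoint and return the raw
    reply (a JSON body containing the queued prompt's ID)."""
    body = json.dumps({"prompt": workflow}).encode("utf-8")
    req = urllib.request.Request(
        f"{COMFYUI_URL}/prompt",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()


if __name__ == "__main__":
    # Minimal stand-in workflow: one prompt node. A real one would also have
    # loader, sampler, and save nodes.
    workflow = {"6": {"class_type": "CLIPTextEncode", "inputs": {"text": ""}}}
    ready = inject_prompt(workflow, "a lighthouse at dusk, golden hour")
    print(json.dumps(ready, indent=2))
    # With ComfyUI running, queue_prompt(ready) would submit it for generation.
```

Open-WebUI handles this prompt-injection mapping for you once the workflow is configured, which is why picking the right workflow in the settings matters.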
Getting the Best Results
A few tips that make a meaningful difference once the integration is live:
- Use your LLM to write the prompt: Ask your chat model to "write a detailed ComfyUI image prompt for [description]" first, then paste that into the image generator. LLMs are excellent at expanding brief ideas into detailed, Flux-optimized prompts.
- Set a default image size: In the Open-WebUI Images settings, set your default resolution. For Flux, 1024×1024 or 1216×832 are good starting points.
- Keep ComfyUI open in a separate tab: You can monitor generation progress and tweak the workflow in ComfyUI while Open-WebUI handles the chat side.
- Save workflows you like: ComfyUI lets you save named workflow presets. Build a solid Flux Dev workflow once, save it, and it'll always be available from Open-WebUI's dropdown.
Troubleshooting Common Issues
- Open-WebUI can't connect to ComfyUI: Make sure ComfyUI is running before you open Open-WebUI. Check the URL matches exactly — including the port number.
- No workflows showing in the dropdown: Click the refresh icon next to the workflow selector. If it's still empty, check that your ComfyUI has at least one saved workflow (not just an unsaved default).
- Images are generating but look wrong: The workflow selected in Open-WebUI may use node names or parameters that don't match your Flux setup. Open the workflow directly in ComfyUI and test it there first to confirm it works correctly on its own.
- Slow generation: This is normal for Flux Dev on consumer GPUs — 30–90 seconds per image depending on your hardware. Flux Schnell is significantly faster if speed matters more than quality.
Watch the full video above for a live demonstration of the setup process and the workflow in action — including how it looks to generate images conversationally inside a chat session.
Resources & Downloads
- ComfyUI GitHub Repo
- Open WebUI GitHub Repo
- Flux ComfyUI GGUF Workflow (without LoRA)
- Flux ComfyUI GGUF Workflow (with LoRA)
- Ollama & Open WebUI One-Click Installer (Patreon)
Related Tutorials
- How To Run Flux Dev & Schnell GGUF Models With LoRAs in ComfyUI
- How to Run Flux NF4 Image Models in ComfyUI (Low VRAM)
- Open WebUI Miniconda Install Tutorial
Want a one-click ComfyUI setup?
Our installer gets ComfyUI running with Flux in under 5 minutes — no command line, no dependency headaches.