
How to Improve Your AI Image Generation Workflow with Open-WebUI and ComfyUI

Sep 7, 2024 · The Local Lab

What if you could describe an image in plain English inside your AI chat window and have it generated right there — without switching apps, without uploading anything to the cloud, and without touching a complicated node graph? That's exactly what happens when you connect Open-WebUI to ComfyUI.

These two tools do very different things, but together they create a powerful local AI workspace where text generation and image generation live in the same interface. This guide walks you through the setup and shows you how to get the most out of the combination.

Understanding the Two Tools

Chat Interface

Open-WebUI

A self-hosted web interface for interacting with local LLMs (via Ollama, LM Studio, or OpenAI-compatible APIs). Think of it as your own private ChatGPT — complete control over your data, running entirely on your hardware. It also supports image generation backends, which is the key to this integration.

Image Engine

ComfyUI

A node-based interface for building and running AI image generation workflows. Highly flexible — you can run Flux, SDXL, and dozens of other models with full control over every parameter. It exposes an API that other tools (like Open-WebUI) can call to trigger image generation programmatically.
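Under the hood, that API is a single HTTP endpoint: you POST a workflow graph as JSON to `/prompt` and ComfyUI queues it for execution. A minimal Python sketch of that call (the empty workflow dict is a placeholder; a real graph is exported from ComfyUI with "Save (API Format)", and the helper name here is ours):

```python
import json
import urllib.error
import urllib.request

COMFYUI_URL = "http://127.0.0.1:8188"  # ComfyUI's default address

def queue_prompt(workflow: dict) -> dict:
    """POST an API-format workflow to ComfyUI's /prompt endpoint."""
    payload = json.dumps({"prompt": workflow}).encode("utf-8")
    req = urllib.request.Request(
        f"{COMFYUI_URL}/prompt",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        # ComfyUI answers with a prompt_id you can later look up via /history
        return json.loads(resp.read())
```

This is exactly what Open-WebUI does on your behalf when you send an image prompt from the chat window.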

The result: You chat with your LLM in Open-WebUI, type an image prompt, and ComfyUI generates it — all without leaving the chat window. Completely local, completely private.

What You'll Need Before Starting

Why Flux for This Workflow?

You can use any model that ComfyUI supports (SDXL, SD 1.5, Playground, etc.), but Flux Dev is the recommended choice for this workflow: it offers strong prompt adherence and high output quality while running entirely locally in ComfyUI.

Connecting Open-WebUI to ComfyUI

1. Make sure ComfyUI is running with its API enabled

Launch ComfyUI normally. By default it runs on http://127.0.0.1:8188. The API is enabled automatically — you don't need to pass any extra flags. Confirm it's working by visiting that address in your browser.
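If you'd rather script that check, a quick Python probe of the same API works too (`/system_stats` is one of ComfyUI's built-in HTTP routes; the helper name is ours):

```python
import json
import urllib.error
import urllib.request

def comfyui_is_up(base_url: str = "http://127.0.0.1:8188") -> bool:
    """Return True if a ComfyUI instance answers on its API port."""
    try:
        with urllib.request.urlopen(f"{base_url}/system_stats", timeout=3) as resp:
            stats = json.loads(resp.read())
            # A healthy instance reports its GPU/CPU devices here
            return "devices" in stats
    except (urllib.error.URLError, OSError, ValueError):
        return False

print(comfyui_is_up())  # False unless ComfyUI is running locally
```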

2. Open Open-WebUI Settings → Images

In Open-WebUI, click your profile icon → Admin Panel → Settings → Images. This is where you configure the image generation backend.

3. Set the image backend to ComfyUI

From the Image Generation Engine dropdown, select ComfyUI. Enter the base URL: http://127.0.0.1:8188. Click the refresh icon next to the workflow selector — Open-WebUI will pull available workflows from your ComfyUI instance.

4. Select your Flux workflow

Choose your Flux Dev workflow from the dropdown. If you don't have one set up yet, ComfyUI has a default text-to-image workflow you can use as a starting point. Save your settings.

5. Enable image generation in the chat

Back in the Open-WebUI chat window, click the image icon (🖼) in the toolbar to enable image generation mode. Now type an image description and send — Open-WebUI passes the prompt to ComfyUI and displays the result inline in the chat.

Getting the Best Results

A few tips that make a meaningful difference once the integration is live:

Privacy note: Because both tools run locally, nothing you generate — no prompts, no images, no chat history — leaves your machine. This matters if you're generating images of proprietary products, personal subjects, or anything you wouldn't want on a third-party server.

Troubleshooting Common Issues

Watch the full video above for a live demonstration of the setup process and the workflow in action — including how it looks to generate images conversationally inside a chat session.

Resources & Downloads

Using LoRAs with the workflow: Connect the GGUF model loader node to the LoRA node, then connect the LoRA node to the KSampler node. Make sure there's always a LoRA loaded when using that workflow. To revert, switch back to the default workflow without the LoRA node.
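In API-format workflow JSON, that wiring looks roughly like the fragment below. The node IDs, filenames, and the GGUF loader's class name (`UnetLoaderGGUF`, from the ComfyUI-GGUF custom nodes) are illustrative assumptions; export your own graph with "Save (API Format)" to see the exact names your setup uses.

```python
# Sketch of the relevant slice of an API-format ComfyUI workflow.
# Each node's inputs reference upstream nodes as [node_id, output_index].
workflow_fragment = {
    "1": {  # GGUF model loader (ComfyUI-GGUF custom node; name assumed)
        "class_type": "UnetLoaderGGUF",
        "inputs": {"unet_name": "flux1-dev-Q8_0.gguf"},
    },
    "2": {  # LoRA node, fed by the model loader's MODEL output
        "class_type": "LoraLoaderModelOnly",
        "inputs": {
            "model": ["1", 0],
            "lora_name": "my_lora.safetensors",  # placeholder filename
            "strength_model": 1.0,
        },
    },
    "3": {  # KSampler takes its model from the LoRA node, not the loader
        "class_type": "KSampler",
        "inputs": {"model": ["2", 0], "seed": 42, "steps": 20},
        # ...remaining KSampler inputs (cfg, conditioning, latent) omitted
    },
}
```

The key detail is node "3": its `model` input points at the LoRA node (`["2", 0]`) rather than the loader, which is what "connect the LoRA node to the KSampler" means in graph terms.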


Want a one-click ComfyUI setup?

Our installer gets ComfyUI running with Flux in under 5 minutes — no command line, no dependency headaches.

Get the Installer