What Is the Artifacts Feature?
Anthropic popularized the Artifacts concept in Claude.ai — instead of dumping HTML, SVG, or interactive code into a text chat, the AI renders it live in a side panel you can see and interact with immediately. Open WebUI brought the same idea to self-hosted local AI.
With Artifacts enabled in Open WebUI, when the model generates HTML pages, SVG graphics, React components, data visualizations, or interactive tools, they render right there in the chat interface. You can make edits and watch them update in real time. No copy-paste into a separate browser tab, no guessing whether the code actually works — you see it instantly.
Live HTML Rendering
Ask the model to build a landing page, calculator, or form — it renders immediately in-chat.
SVG Visualization
Generate charts, diagrams, icons, and illustrations as live vector graphics you can see and refine.
Real-Time Edits
Ask follow-up questions to iterate on the output — changes apply and re-render instantly.
Interactive Tools
Generate functional UI components, games, and mini-apps you can actually click around in.
What Is Open WebUI?
Open WebUI is a powerful, feature-rich, self-hosted web interface for local language models. It wraps Ollama and OpenAI-compatible APIs with a polished browser-based chat experience that rivals commercial AI platforms. Key capabilities include:
- Flexible install — Docker, Kubernetes, or direct pip install
- Ollama + OpenAI API integration — switch between local and cloud models from one interface
- Pipelines plugin — extend Open WebUI with custom Python logic and third-party API integrations
- Local RAG — upload documents and have the model answer questions based on your files
- Web search — real-time search results injected into the conversation context
- Image generation — connect Automatic1111 or ComfyUI for in-chat image generation
- Responsive design — works on desktop, laptop, and mobile
Why Use Claude via Pipelines?
Local models via Ollama are great for privacy and cost — but Claude (especially claude-3-5-sonnet and claude-3-opus) is exceptional at code generation, reasoning, and producing clean, well-structured outputs. The Artifacts feature especially shines with Claude because of how well it generates complete, functional HTML and SVG on first pass.
Open WebUI's Pipelines plugin makes it possible to add Claude as a model option alongside your local Ollama models — you pick which one to use per conversation, all from the same interface. The Anthropic API key is the only external dependency.
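Conceptually, a pipeline is just a Python class that the Pipelines server loads and exposes as a selectable model. A minimal sketch of that interface (method names follow the examples in the open-webui/pipelines repo; the echo logic is purely illustrative, not the real Anthropic pipeline):

```python
from typing import Generator, Iterator, List, Union


class Pipeline:
    """Minimal pipeline sketch: the Pipelines server discovers classes
    like this and lists each one as a model in Open WebUI."""

    def __init__(self):
        # Name shown in Open WebUI's model selector.
        self.name = "Echo Demo"

    async def on_startup(self):
        # Called when the Pipelines server starts (e.g., init an API client).
        pass

    async def on_shutdown(self):
        # Called when the Pipelines server shuts down.
        pass

    def pipe(
        self, user_message: str, model_id: str, messages: List[dict], body: dict
    ) -> Union[str, Generator, Iterator]:
        # A real Anthropic pipeline would forward `messages` to the Claude
        # API here; this stub just echoes the latest user message.
        return f"[{self.name}] you said: {user_message}"
```

A real provider pipeline replaces the body of `pipe` with an API call, but the surrounding shape stays the same.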
Part 1: Setting Up Open WebUI
If you don't have Open WebUI running yet, here's the recommended setup path using Miniconda for a clean Python environment:
Install Miniconda
Download and install Miniconda from the official site (docs.conda.io), choosing the version for your operating system. This gives you an isolated Python environment manager.
Create a Dedicated Environment
Open a terminal where conda is available (on Windows, the Miniconda PowerShell Prompt) and create an environment for Open WebUI:

conda create -n open-webui python=3.11 -y
Activate the Environment
Switch into the new environment:
conda activate open-webui
Install Open WebUI
Install via pip — this pulls the full Open WebUI package along with its dependencies:
pip install open-webui
Launch Open WebUI
Start the server with the command below, then open http://localhost:8080 in your browser. Create your admin account on first launch — everything stays local; no cloud account is required.
open-webui serve
Part 2: Setting Up the Anthropic Claude Pipelines Integration
The Pipelines plugin is what allows Open WebUI to connect to external APIs like Anthropic's Claude. Here's how to get it configured:
Get an Anthropic API Key
Sign up at console.anthropic.com and generate an API key. You'll need credits loaded — Claude API usage is metered, but the cost is low for personal use.
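Before wiring the key into Pipelines, you can sanity-check it with a direct call to Anthropic's Messages API. The model name below is just an example — substitute any Claude model your account can access:

```shell
# Smoke-test the key against Anthropic's Messages API.
# Expects the key in the ANTHROPIC_API_KEY environment variable.
curl https://api.anthropic.com/v1/messages \
  -H "x-api-key: $ANTHROPIC_API_KEY" \
  -H "anthropic-version: 2023-06-01" \
  -H "content-type: application/json" \
  -d '{
    "model": "claude-3-5-sonnet-20240620",
    "max_tokens": 64,
    "messages": [{"role": "user", "content": "Say hello"}]
  }'
```

A JSON response with a `content` field means the key and credits are good; an `authentication_error` means the key is wrong or missing.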
Install and Start the Pipelines Server
The Pipelines server runs separately from Open WebUI. Clone the Open WebUI Pipelines repo, install it, and start it on port 9099. It acts as a middleware layer between Open WebUI and external APIs.
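A typical sequence looks like this (run it inside the same conda environment; the repo URL and start script follow the open-webui/pipelines README at the time of writing):

```shell
# Clone the official Pipelines repo and install its dependencies.
git clone https://github.com/open-webui/pipelines.git
cd pipelines
pip install -r requirements.txt

# Start the Pipelines server (listens on port 9099 by default).
sh start.sh
```

Leave this running in its own terminal alongside the Open WebUI server.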
Connect Pipelines in Admin Settings
In Open WebUI, go to Admin Panel → Settings → Connections and add a connection pointing at the Pipelines server: API URL http://localhost:9099, plus the Pipelines server's API key (the default is 0p3n-w3bu! unless you've changed it). Save, and Open WebUI will now detect your running pipeline integrations.
Add the Anthropic Pipeline + API Key
In Admin Panel → Settings → Pipelines, install the Anthropic pipeline — you can upload the anthropic_manifold_pipeline.py example from the Pipelines repo or install it from its GitHub URL. Then enter your Anthropic API key in the pipeline's configuration (valve) field. Claude models (Sonnet, Opus, Haiku) will now appear in the Open WebUI model selector alongside your local Ollama models.
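Behind the scenes, the pipeline has to translate Open WebUI's OpenAI-style chat history into Anthropic's format, where the system prompt is a top-level field rather than a message. A hedged sketch of that translation (`build_anthropic_payload` is an illustrative helper, not part of the Pipelines API):

```python
def build_anthropic_payload(
    messages: list[dict], model: str, max_tokens: int = 1024
) -> dict:
    """Translate an OpenAI-style message list into an Anthropic
    /v1/messages request body: system prompts move to a top-level
    `system` field; user/assistant turns stay in `messages`."""
    system_parts = [m["content"] for m in messages if m["role"] == "system"]
    chat = [m for m in messages if m["role"] != "system"]
    payload = {"model": model, "max_tokens": max_tokens, "messages": chat}
    if system_parts:
        payload["system"] = "\n".join(system_parts)
    return payload
```

The actual pipeline does more (streaming, error handling), but this is the core of the request it sends on your behalf.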
Enable the Artifacts Feature
Go to Admin Panel → Settings → Interface and toggle on the Artifacts feature. From this point forward, compatible model outputs will automatically render as live artifacts in the chat panel.
Using Artifacts: What to Try First
Once everything is set up, here are some great prompts to test the Artifacts feature with Claude:
- "Build me a responsive landing page for a local AI tool with a hero section, features grid, and CTA button." — Claude generates complete HTML/CSS that renders instantly.
- "Create an SVG bar chart showing the relative VRAM requirements of Llama 3.1 8B, 70B, and 405B." — See a clean data visualization appear in-chat.
- "Make a simple interactive quiz with 3 questions about AI — show the score at the end." — A working quiz renders and you can actually take it.
- "Generate an SVG icon of a robot head for a tech blog." — Clean vector art, no image generation model needed.
📦 Want to skip the setup?
The Local Lab offers pre-configured AI installer packages so you can get running in minutes, not hours.
Get the Installer →