How To Run the LM Studio API With Open WebUI
- locallab
- Jun 27, 2024
- 4 min read
Why Open-WebUI? The Benefits of Going Local
You've probably heard of ChatGPT and its impressive capabilities. But what if you could have a similar experience with complete privacy, offline access, and the freedom to use open-source models? That's the power of Open WebUI. It's like having your own personal AI assistant, tailored to your needs and preferences, always available at your fingertips.
Imagine:
Enhanced Privacy: Your data stays on your machine, giving you full control over your information.
Offline Access: No internet? No problem! Open-WebUI allows you to utilize AI even when you're offline.
Open-Source Freedom: Explore a vast world of open-source language models and customize your AI experience.
Unleashed Potential: With Open-WebUI, you unlock a range of features including text-to-speech, image generation, document Q&A, web searching, and much more.
Intrigued? Let's get started!
If you prefer, I also have a video tutorial available.
Setting the Stage: Installing Miniconda
Before we dive into Open WebUI, we need to prepare our workspace. We'll be using Miniconda, a minimal installer for the conda package and environment manager, to keep our AI tools organized. It's like setting up a well-equipped workshop before starting a new project.
Here's how to install Miniconda:
Download: Head over to the Anaconda website (https://docs.anaconda.com/) and download the Miniconda installer for your operating system (Windows, macOS, or Linux).
Install: Once the download is complete, run the Miniconda executable and follow the on-screen instructions.
Creating a Dedicated Environment: Anaconda Prompt
With Miniconda installed, it's time to create a dedicated environment for Open-WebUI. This helps keep things organized and avoids potential conflicts with other software.
Follow these steps:
Open Anaconda Prompt: Search for "Anaconda Prompt" in your Windows search bar (or the equivalent on your operating system) and open the application.
Create Environment: Type the following command in the Anaconda Prompt and press Enter:
conda create -n open-webui python=3.11 -y
This command creates an environment named "open-webui" specifically for Open-WebUI and installs Python 3.11, the version Open WebUI requires for compatibility.
Activate Environment: Once the environment is created, activate it by typing:
conda activate open-webui
You'll notice a slight change in your command prompt, indicating that you are now working within the "open-webui" environment.
Installing Open WebUI: A Piece of Cake
Now that our environment is ready, installing Open-WebUI is incredibly straightforward.
Just follow these steps:
Type Command: In the activated Anaconda Prompt, type the following command and press Enter:
pip install open-webui
This command fetches Open-WebUI and all its necessary dependencies, installing them automatically within your dedicated environment.
Grab a Coffee: This process might take a few minutes, so feel free to grab a coffee or stretch your legs while Open-WebUI sets up shop.
Starting the Server: Launching Open-WebUI
The moment we've been waiting for! It's time to launch the Open-WebUI server and bring our local AI assistant to life.
Here's how:
Type Command: In the Anaconda Prompt, type the following command and press Enter:
open-webui serve
This initializes the startup process, downloads any remaining necessary models, and then launches the Open WebUI server.
Copy Localhost URL: You'll see a localhost URL displayed in your terminal. Copy this URL.
Paste in Web Browser: Open your web browser and paste the copied localhost URL into the address bar. Press Enter.
Troubleshooting: If you encounter an "Unable to Connect" error, adjust the URL by:
Changing "https" to "http" (the local server doesn't use TLS).
Replacing the "0.0.0.0" address with "127.0.0.1".
For example, if your URL is "https://0.0.0.0:8080", change it to "http://127.0.0.1:8080".
This should resolve the connection issue and bring you to the local Open WebUI webpage.
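The troubleshooting fix above is mechanical enough to script. Here's a small sketch (the function name `fix_localhost_url` is my own, just for illustration) that applies both rewrites:

```python
def fix_localhost_url(url: str) -> str:
    """Rewrite the URL printed by `open-webui serve` into one the
    browser can actually open: plain http, loopback address."""
    # Drop the "s" from "https" so the browser doesn't expect TLS.
    url = url.replace("https://", "http://", 1)
    # 0.0.0.0 means "listen on all interfaces"; to *connect*, use 127.0.0.1.
    url = url.replace("0.0.0.0", "127.0.0.1", 1)
    return url

print(fix_localhost_url("https://0.0.0.0:8080"))  # http://127.0.0.1:8080
```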
Adding Brainpower: Integrating with LM Studio
Open-WebUI provides the interface, but we need to add the brainpower. For this, we'll use LM Studio, a fantastic tool for running open-source language models.
Here's how to integrate LM Studio:
Download and Install: Visit the LM Studio website (https://lmstudio.ai/) and download the installer for your system. Follow the instructions to install it.
Download a Model: Open LM Studio and download your favorite open-source language model. There are many options available, including Meta's Llama 2, Google's Gemma, and more.
Load the Model: Once the model is downloaded, load it up in LM Studio.
Start the API Server: Navigate to the "Server" tab in LM Studio and click "Start Server".
Copy Localhost URL: Take note of the localhost URL provided in the server tab. We'll need this to connect LM Studio to Open-WebUI.
Connect in Open WebUI:
Go back to Open WebUI and click on "Settings".
Navigate to the "Connections" tab.
Paste the LM Studio API URL (typically ending in /v1, e.g. http://localhost:1234/v1) into the "OpenAI API Base URL" field.
In the "API Key" field, type "lm-studio".
Click "Save".
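Behind the scenes, Open WebUI talks to LM Studio over its OpenAI-compatible REST API. If you ever want to hit that endpoint directly yourself, the request looks roughly like this. This is a sketch assuming LM Studio's usual base URL of http://localhost:1234/v1; yours may differ, so use the URL shown in LM Studio's server tab, and note that `build_chat_request` is just an illustrative helper name:

```python
import json

def build_chat_request(base_url: str, model: str, user_message: str):
    """Assemble an OpenAI-style chat completion request.

    Returns (url, headers, body) ready to hand to any HTTP client.
    """
    url = base_url.rstrip("/") + "/chat/completions"
    headers = {
        "Content-Type": "application/json",
        # LM Studio ignores the key's value, but the OpenAI format expects one.
        "Authorization": "Bearer lm-studio",
    }
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
    })
    return url, headers, body

url, headers, body = build_chat_request(
    "http://localhost:1234/v1", "local-model", "Hello!"
)
print(url)  # http://localhost:1234/v1/chat/completions
```

With the LM Studio server running, you could send this with `requests.post(url, headers=headers, data=body)` or the equivalent curl command.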
Let's Chat: Your Private AI Assistant is Ready
Congratulations! You now have a fully functional, locally run AI assistant powered by Open-WebUI and LM Studio.
To start chatting:
Select Your Model: Go back to the main interface of Open WebUI and select your loaded language model from the dropdown menu in the top left corner.
Start Typing: Type your question or request in the chat box and press Enter. Open-WebUI will use your local language model to generate a response.
Note: Sometimes the model might not appear immediately in the model list. As a workaround, try selecting a ChatGPT option, as it should still function with your local model.
Local Document Q&A: Unlocking the Power of RAG
Want to chat with your local documents and get insightful answers? Open-WebUI allows you to unlock the power of Retrieval Augmented Generation (RAG).
Here's how:
Upload Your Documents: Click on the plus icon in the chat interface.
Select and Upload: Choose the documents you want to use for Q&A and upload them.
Ask Your Questions: Start typing your questions related to the uploaded documents. Open-WebUI will utilize the information from your documents to provide comprehensive answers.
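Under the hood, RAG retrieves the document passages most relevant to your question and feeds them to the model alongside your prompt. Open WebUI's real pipeline uses embeddings and a vector store, but the core idea can be sketched with simple word-overlap scoring. Everything below is illustrative, not Open WebUI's actual code:

```python
def retrieve(question: str, chunks: list[str], top_k: int = 1) -> list[str]:
    """Return the chunks that share the most words with the question."""
    q_words = set(question.lower().split())
    scored = sorted(
        chunks,
        key=lambda c: len(q_words & set(c.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_prompt(question: str, chunks: list[str]) -> str:
    """Prepend the retrieved context so the model answers from your docs."""
    context = "\n".join(retrieve(question, chunks))
    return f"Context:\n{context}\n\nQuestion: {question}"

docs = [
    "Open WebUI runs locally and keeps your data private.",
    "Bananas are rich in potassium.",
]
print(retrieve("How does Open WebUI keep data private?", docs)[0])
# Open WebUI runs locally and keeps your data private.
```

Real systems swap the word-overlap score for embedding similarity, which also matches paraphrases rather than exact words, but the retrieve-then-prompt shape stays the same.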
Congratulations! You've Gone Local with AI
That's it! You've successfully set up Open-WebUI and LM Studio, creating your very own private AI assistant, running entirely on your machine. Enjoy the freedom, privacy, and limitless potential of local AI.