What Is Orpheus TTS?
Orpheus is a speech language model released by Canopy AI under the Apache 2.0 license — meaning it's free to use, modify, and deploy for both personal and commercial projects. What sets it apart from other open-source TTS options is how genuinely human it sounds. It doesn't just read text flatly; it conveys emotion, varies intonation naturally, and supports expressive tags like `<laugh>` or `<sigh>` to inject natural expressiveness into the generated voice.

The Setup: Orpheus FastAPI Web UI + LM Studio
To run Orpheus easily, we use the Orpheus FastAPI Web UI — an open-source project that wraps the model in a clean browser-based interface. The Local Lab fork of this project adds built-in LM Studio support out of the box, extends the context window to 8,192 tokens, and uses the GGUF model format for lower resource usage.
You'll need two things running together: LM Studio (which serves the Orpheus model via its local API) and the Orpheus FastAPI server (which provides the web UI and connects to LM Studio).
Step 1 — Set Up LM Studio and Load the Model
- Install LM Studio — download from lmstudio.ai and install for your OS (Windows, Mac, or Linux).
- Download the Orpheus Model — open LM Studio, go to the Discover tab, and search for Orpheus. Download the `orpheus-3b-4k-gguf` model — it's compact and runs smoothly on 4GB+ VRAM.
- Load the Model and Start the API Server — switch to the Developer tab in LM Studio, load the Orpheus model, and confirm the local API server starts on port `1234`.
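Before moving on, you can confirm the server is reachable. LM Studio exposes an OpenAI-compatible local API, so a quick request to its models endpoint should return JSON listing the loaded Orpheus model (assuming the default port of 1234):

```shell
# Query LM Studio's local API for the loaded models.
# A JSON response that includes the orpheus model confirms the server is up.
curl http://127.0.0.1:1234/v1/models
```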
Step 2 — Install the Orpheus FastAPI Web UI (Manual)
We'll use Miniconda to keep the Python environment clean and isolated.
- Install Miniconda — download and install Miniconda from the Anaconda website. Once installed, open the Anaconda Prompt from your Windows search bar.
- Create and Activate a Conda Environment — create a dedicated Python 3.10 environment (required for compatibility):
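Check the repository README for the exact commands, but a typical sequence looks like this (the environment name `orpheus` is just a placeholder):

```shell
# Create an isolated Python 3.10 environment, then switch into it
conda create -n orpheus python=3.10 -y
conda activate orpheus
```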
- Clone the Repository — navigate to your preferred install folder, then clone The Local Lab's fork of the Orpheus FastAPI Web UI:
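As a sketch, the steps are a `cd` into your install folder followed by a `git clone` (the repository path below is illustrative — copy the real URL from The Local Lab's page):

```shell
# Move to your install folder, then clone the fork and enter it
# NOTE: hypothetical URL — use the actual fork URL from The Local Lab
cd C:\AI
git clone https://github.com/TheLocalLab/Orpheus-FastAPI.git
cd Orpheus-FastAPI
```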
- Install PyTorch with CUDA and Dependencies — run the PyTorch install command from the repository README, then install project requirements:
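A representative install sequence, assuming a CUDA 12.1 build of PyTorch (the README's command may specify a different index URL for your driver version):

```shell
# Install PyTorch with CUDA support, then the project's Python dependencies
pip install torch --index-url https://download.pytorch.org/whl/cu121
pip install -r requirements.txt
```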
- Launch the FastAPI Server — start with:
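The launch command in the README is typically a single script invocation along these lines (script name assumed — use whatever the README specifies):

```shell
# Start the FastAPI server; keep LM Studio running in the background
python app.py
```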
Then open your browser and navigate to http://127.0.0.1:1555 to access the Orpheus TTS web UI.
Using the Web UI
Once both LM Studio (with Orpheus loaded) and the FastAPI server are running, the web UI is straightforward:
- Text box — type or paste the text you want converted to speech
- Emotion tags — insert tags like `<laugh>`, `<sigh>`, or `<gasp>` anywhere for expressive delivery
- Voice selector — choose from 8 available voices
- Speed slider — found under Advanced Options, adjusts playback pace
- Generate Speech — click to generate; audio appears with a waveform visualizer
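With both servers running you aren't limited to the browser: forks of the Orpheus FastAPI project typically also expose an OpenAI-style speech endpoint you can script against. The route, payload fields, and voice name below follow that convention and are assumptions about this fork — check its README if the request fails (port 1555 matches the URL above):

```shell
# Request speech for emotion-tagged text and save the audio to disk
# (endpoint route, payload shape, and voice name are assumed, not confirmed)
curl http://127.0.0.1:1555/v1/audio/speech \
  -H "Content-Type: application/json" \
  -d '{"input": "Well <laugh> that actually worked!", "voice": "tara", "response_format": "wav"}' \
  --output output.wav
```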
📦 Want to skip the setup?
The Local Lab offers pre-configured AI installer packages so you can get running in minutes, not hours.
Get the Installer →