Flux models (Schnell and Dev) changed the game for AI image generation when they dropped — producing images with a level of realism and prompt adherence that set a new bar. But what if you want to go further and teach Flux your own style, your own subject, or your own character? That's where LoRA training comes in.
In this guide we're using FluxGym — Cocktail Peanut's streamlined training UI — running on RunPod cloud GPUs. The result: custom LoRA training for under $0.50 a run, no beefy local GPU required.
What Is a LoRA?
LoRA stands for Low-Rank Adaptation. Think of it as a small add-on file that plugs into a base model and shifts its outputs in a specific direction — without retraining the entire model. You can train a LoRA to:
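To make "low-rank" concrete, here's a toy numerical sketch (illustrative only — not Flux's actual implementation). Instead of updating every entry of a big weight matrix, a LoRA learns two small factors whose product is the update; the dimensions and rank below are hypothetical:

```python
import numpy as np

# Toy illustration of low-rank adaptation (not Flux's actual code).
# A full fine-tune would update every entry of a weight matrix W;
# a LoRA instead learns two small factors B and A whose product is the update.
d, k = 1024, 1024          # hypothetical layer dimensions
rank = 8                   # LoRA rank, much smaller than d or k

W = np.zeros((d, k))                 # frozen base weight (stand-in for a Flux layer)
B = np.random.randn(d, rank) * 0.01  # small trainable factors
A = np.random.randn(rank, k) * 0.01

W_adapted = W + B @ A      # effective weight applied at inference time

full_params = d * k                  # what a full fine-tune would train
lora_params = d * rank + rank * k    # what the LoRA actually trains
print(f"full fine-tune: {full_params:,} params")
print(f"LoRA:           {lora_params:,} params ({lora_params / full_params:.1%})")
```

The tiny parameter count is why LoRA files weigh megabytes rather than the gigabytes of a full checkpoint.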
- Replicate a specific artistic style (e.g. a particular illustrator's look)
- Add a specific person, character, or face so the model can generate consistent likenesses
- Teach a product or object the model doesn't know about
- Fine-tune for a specific aesthetic or color palette
LoRA files are small (usually 50–200MB), easy to share, and plug directly into ComfyUI or any other Flux-compatible frontend. Once trained, you just drop the file into your models folder and reference it in your workflow.
Why FluxGym on RunPod?
Training locally requires a high-end GPU — ideally an RTX 4090 or better. Most people don't have one sitting around. RunPod solves this by giving you on-demand access to cloud GPUs by the hour. FluxGym is a pre-configured RunPod template that handles all the environment setup automatically, so you go from zero to training in minutes rather than hours of environment troubleshooting.
What You'll Need
- A RunPod account with a small credit balance ($5 goes a long way)
- 10–20 training images of your subject (more on image prep below)
- A ComfyUI setup to test your LoRA afterward (local or cloud)
Preparing Your Training Images
Image quality matters more than quantity for LoRA training. A focused set of 15–20 well-captioned images will outperform 100 poorly curated ones. Here's what to aim for:
- Resolution: At least 512×512, ideally 1024×1024. Flux was trained on high-res data — match it.
- Variety: Different angles, lighting, backgrounds, and poses if training a subject. Diversity prevents the model from overfitting to one look.
- Clean crops: Your subject should be clearly visible and not obscured. Remove images where the subject is partially cut off or blurry.
- Captions: FluxGym can auto-caption images using Florence 2 — let it run and then review/edit the captions. Good captions = better trigger word control.
- Trigger word: Include a unique trigger word in every caption (e.g. sks person or a made-up word like zxqstyle). This gives you a reliable way to activate the LoRA in prompts without it bleeding into everything you generate.
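You can edit captions one by one in the FluxGym UI, but for a larger dataset it's quicker to script the trigger-word pass. A minimal sketch, assuming the kohya-style layout FluxGym uses (one .txt caption per image); the trigger word and folder name are placeholders:

```python
from pathlib import Path

# Hedged sketch: prepend a trigger word to every caption file that
# doesn't already contain it. "zxqstyle" and "dataset" are examples --
# substitute your own trigger word and dataset folder.
TRIGGER = "zxqstyle"

def add_trigger(dataset_dir: str, trigger: str = TRIGGER) -> int:
    """Prepend `trigger` to each caption missing it; return count changed."""
    changed = 0
    for caption_file in sorted(Path(dataset_dir).glob("*.txt")):
        text = caption_file.read_text(encoding="utf-8").strip()
        if trigger not in text:
            caption_file.write_text(f"{trigger}, {text}\n", encoding="utf-8")
            changed += 1
    return changed

if __name__ == "__main__":
    print(add_trigger("dataset"))  # point at your caption folder
```

Run it once after auto-captioning, then spot-check a few files to confirm the trigger word landed where you expect.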
Step-by-Step: Training with FluxGym on RunPod
Log into RunPod, go to Explore Templates, and search for FluxGym — or use the direct template link below. Select it and choose a GPU — an A40 (48GB VRAM) is a reliable option. Click Deploy and wait ~2 minutes for the pod to spin up.
Once the pod is running, click Connect → HTTP Service to open the FluxGym web interface in your browser. You'll see a clean UI with tabs for training configuration, image upload, and captions.
Drag and drop your prepared images into the upload area. FluxGym will display them in a grid. You can click individual images to review or remove them before training begins.
Click Caption Images to run Florence 2 over your dataset. It generates a text description for each image automatically. Review them — you'll want to add your trigger word to each caption if it's not already there.
For most use cases, the defaults are solid. Key settings to pay attention to: Steps (1000–1500 is a good starting range), Learning Rate (leave at default unless you know what you're doing), and Base Model (choose Flux Schnell for speed, Flux Dev for quality).
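If you want a feel for where the 1000–1500 figure comes from: kohya-style trainers (which FluxGym builds on) count roughly one step per image repeat per epoch at batch size 1. The numbers below are illustrative, not recommendations:

```python
# Rough step-budget sanity check. Assumption: one training step per image
# repeat at batch size 1, as in the kohya-style trainer FluxGym builds on.
num_images = 18   # size of an example dataset
repeats = 10      # times each image is seen per epoch
epochs = 8

total_steps = num_images * repeats * epochs
print(total_steps)  # lands inside the 1000-1500 range suggested above
```

Scaling repeats or epochs up and down is the usual way to hit a target step count for a given dataset size.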
Hit Start Training. FluxGym shows a live progress bar and sample images at checkpoints so you can see how the LoRA is developing. A full run typically completes in 10–20 minutes on an A40.
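The sub-$0.50 claim is easy to check yourself. GPU rates change (see the note further down), so treat the hourly price here as a placeholder:

```python
# Back-of-envelope cost estimate. The hourly rate is a placeholder --
# check RunPod's current pricing before deploying.
rate_per_hour = 0.40    # hypothetical A40 on-demand rate, USD
train_minutes = 20      # upper end of the typical run time above
overhead_minutes = 5    # pod spin-up, upload, download

cost = rate_per_hour * (train_minutes + overhead_minutes) / 60
print(f"~${cost:.2f} per run")
```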
Once training completes, download the .safetensors LoRA file from the output directory, then terminate the pod right away; billing stops the moment it's terminated. Drop the file into your ComfyUI models/loras folder and load it in your workflow.
Testing Your LoRA in ComfyUI
Load Flux Dev or Schnell in ComfyUI, add a LoRA loader node pointing to your new file, set the strength to around 0.8–1.0, and include your trigger word in the prompt. Start with a simple prompt that describes your subject plus the trigger word — you want to isolate whether the LoRA is firing correctly before adding complexity.
If the outputs look too "burnt in" (the LoRA is overpowering the base model), lower the strength. If the style or subject isn't showing up clearly, increase it or consider retraining with more steps.
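If you'd rather not nudge the strength by hand each time, you can script a sweep. This sketch edits a workflow exported from ComfyUI in API (JSON) format, targeting ComfyUI's built-in LoraLoader node; the filenames are placeholders for your own export:

```python
import copy
import json
from pathlib import Path

# Hedged sketch: generate copies of a ComfyUI API-format workflow at
# several LoRA strengths. Assumes the export contains a LoraLoader node;
# the workflow filename is a placeholder.
def make_strength_sweep(workflow_path: str, strengths=(0.6, 0.8, 1.0)):
    base = json.loads(Path(workflow_path).read_text(encoding="utf-8"))
    out_files = []
    for s in strengths:
        wf = copy.deepcopy(base)
        for node in wf.values():
            if node.get("class_type") == "LoraLoader":
                node["inputs"]["strength_model"] = s
                node["inputs"]["strength_clip"] = s
        out = Path(workflow_path).with_name(f"sweep_strength_{s}.json")
        out.write_text(json.dumps(wf, indent=2), encoding="utf-8")
        out_files.append(out)
    return out_files
```

Queue each saved file in ComfyUI (or POST it to the /prompt endpoint) with the same seed and prompt, then compare the results side by side to find the sweet spot.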
What's Changed Since This Guide Was Written
The Flux ecosystem has moved quickly. A few updates worth knowing if you're reading this in 2025:
- Flux Dev is now available through more training pipelines — including local options like SimpleTuner and Kohya, for those with 24GB+ VRAM GPUs
- AI Toolkit has emerged as a popular alternative to FluxGym for more advanced users who want finer control over training hyperparameters
- RunPod pricing and available GPUs change frequently — check current rates before spinning up a pod
Resources & Downloads
- RunPod FluxGym Template (direct link)
- FluxGym Colab GitHub Repo
- Flux ComfyUI GGUF Workflow (without LoRA)
- Flux ComfyUI GGUF Workflow (with LoRA)
Want to run Flux locally?
Check out our one-click ComfyUI installer — gets you up and running with Flux in under 5 minutes.