Want to train a custom Flux LoRA but don't have a high-end GPU? Google Colab gives you free access to cloud GPUs that are more than capable of running the job — and with FluxGym's Gradio interface, the whole process is point-and-click. No command line, no complicated setup, no expensive hardware.
This guide walks you through the complete process from opening the Colab notebook to downloading a finished .safetensors LoRA file ready to drop into ComfyUI.
Colab vs RunPod — Which Should You Use?
Google Colab (Free)
- Free GPU access (typically an NVIDIA T4)
- Session time limits apply
- Can disconnect unexpectedly
- Great for learning and experimenting
- No credit card needed to start
RunPod (Paid, ~$0.50/run)
- Faster, more powerful GPUs
- No session time limits
- More reliable for longer runs
- Better for production training
- Small cost per run
If you're new to LoRA training, start here with Colab. It's the lowest-friction way to get your first successful run and understand the workflow before committing to paid compute.
What You'll Need
- A Google account (free)
- 15–20 training images of your subject
- The FluxGym Colab notebook (link below)
- Patience — free Colab GPUs are slower than dedicated cloud instances
Preparing Your Training Images
Good training data is the single biggest factor in LoRA quality. Spend time here and the training will take care of itself.
- 15–20 images is the sweet spot — more isn't always better; quality and variety matter more than quantity
- Resolution: Aim for at least 512×512. Higher is better — Flux was trained on high-res data.
- Variety: Different lighting, angles, backgrounds, and distances. If training a person, include close-up face shots, mid-body, and full-body across different outfits.
- Clean images: No heavy filters, heavy compression artifacts, or major occlusions. Your subject should be clearly visible.
- Consistent subject: Every image should feature the thing you're teaching — don't mix in images of unrelated subjects.
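The checklist above is easy to automate before you upload anything. Here's a small pre-flight sketch (the folder layout is hypothetical; Pillow ships preinstalled on Colab) that flags any image below the 512 px minimum side length:

```python
from pathlib import Path
from PIL import Image

MIN_SIDE = 512  # minimum shorter-side length recommended for Flux training

def check_dataset(folder):
    """Split images in `folder` into (accepted, flagged) lists by size."""
    accepted, flagged = [], []
    for path in sorted(Path(folder).glob("*")):
        if path.suffix.lower() not in {".jpg", ".jpeg", ".png", ".webp"}:
            continue  # skip captions and other non-image files
        with Image.open(path) as img:
            w, h = img.size
        (accepted if min(w, h) >= MIN_SIDE else flagged).append(path.name)
    return accepted, flagged
```

Run it on your dataset folder and re-shoot or upscale anything that lands in the flagged list rather than training on it as-is.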
Step-by-Step: Training in Google Colab
Go to the fluxgym-Colab GitHub repo and click "Open in Colab." Save a copy to your own Google Drive (File → Save a copy in Drive) so your changes persist between sessions.
Go to Runtime → Change runtime type, set the Hardware Accelerator to T4 GPU, and click Save. This is critical: FluxGym's training stack needs a CUDA GPU, so the CPU and TPU runtimes won't work for this workflow.
Click the play button on each cell in order, starting from the top. These cells install dependencies, clone the FluxGym repo, and download the Flux model weights. This can take 5–10 minutes on first run — let each cell complete before moving to the next.
The final setup cell runs a Python command that starts the FluxGym Gradio server. Once it's ready, Colab will display a public shareable link (e.g. https://xxxxxx.gradio.live). Click it to open the FluxGym UI in a new tab.
In the FluxGym UI, give your LoRA a descriptive name and set a trigger word — a unique term that activates the LoRA in prompts. Use something specific and made-up (e.g. dreamstyle, myphoto, zxqface) so it doesn't conflict with words the base model already knows.
Drag your prepared images into the upload area. FluxGym will display them in a grid for review. Remove any that don't meet the quality bar before proceeding.
Click Caption Images. FluxGym runs Microsoft's Florence-2 vision model over your dataset to generate a text description for each image. Review the captions and manually add your trigger word to each one; this matters for reliable activation during inference.
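If you'd rather script the trigger word than edit every caption by hand, a helper like this sketch works on kohya-style datasets, where each image has a matching .txt caption file (the caption directory and trigger word here are assumptions; check where your FluxGym version writes its dataset):

```python
from pathlib import Path

def add_trigger(caption_dir, trigger):
    """Prepend `trigger` to every .txt caption that doesn't already contain it."""
    updated = 0
    for txt in sorted(Path(caption_dir).glob("*.txt")):
        caption = txt.read_text().strip()
        # Simple containment check; captions are comma-separated phrases
        if trigger not in caption:
            txt.write_text(f"{trigger}, {caption}")
            updated += 1
    return updated
```

It returns the number of files it changed, so a quick sanity check is that the count matches your image count on the first run and is zero on a second run.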
For free Colab with a T4 GPU, select 16GB VRAM in the settings. Keep epochs between 10–20 and repeat values between 1–3 to start. Lower values train faster but may produce weaker results — you can iterate. Leave other settings at default for your first run.
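To get a feel for how epochs and repeats interact: FluxGym sits on top of kohya's sd-scripts, where total training steps come out to roughly images × repeats × epochs, divided by batch size. A quick back-of-the-envelope estimate (the function name is ours, not FluxGym's):

```python
def approx_steps(num_images, repeats, epochs, batch_size=1):
    """Rough optimizer-step estimate for a kohya-style training run."""
    return (num_images * repeats * epochs) // batch_size

# 20 images x 2 repeats x 16 epochs at batch size 1 -> 640 steps
print(approx_steps(20, 2, 16))
```

Step time on a T4 varies with resolution, so treat the step count rather than wall-clock time as the knob you're tuning when you iterate.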
Click Start Training. FluxGym shows a live log and sample images at checkpoints. On a free T4 GPU this typically takes 20–40 minutes depending on your dataset size and epoch count. Keep the browser tab open — Colab sessions can disconnect if idle.
When training completes, find the output .safetensors file in the Colab file browser (left sidebar → Files) under the FluxGym output directory. Right-click and download it. This is your finished LoRA — ready to use in ComfyUI or any Flux-compatible tool.
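If you're unsure which file in the output tree is the final LoRA, a snippet like this, run in a fresh Colab cell, finds the most recently written .safetensors (the output path varies by FluxGym version, so pass whatever directory you see in the file browser):

```python
from pathlib import Path

def latest_lora(output_root):
    """Return the most recently modified .safetensors under output_root, or None."""
    candidates = sorted(
        Path(output_root).rglob("*.safetensors"),
        key=lambda p: p.stat().st_mtime,
    )
    return candidates[-1] if candidates else None
```

Intermediate checkpoints are also .safetensors files, which is why sorting by modification time, not name, is the reliable way to grab the finished one.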
Testing Your LoRA in ComfyUI
Drop the .safetensors file into your ComfyUI models/loras folder. In your workflow, add a Load LoRA node between your model loader and the sampler. Set strength to 0.8–1.0 and include your trigger word in the positive prompt.
Test with a simple prompt first: just your trigger word plus a basic scene description. If the output reflects your training subject, the LoRA is working. If it's too strong or "burned in," lower the strength. If it's not activating, raise the LoRA strength, double-check that the trigger word appears in both the prompt and the training captions, or retrain with a higher epoch count.
When to Upgrade to RunPod
Colab is perfect for learning the workflow and experimenting with small datasets. You'll want to move to RunPod (or another paid GPU provider) when:
- You're training regularly and Colab session limits are slowing you down
- You need faster A40/A100 GPUs for larger datasets or higher-quality output
- You're doing production training where reliability matters more than cost
Watch the full video above for a hands-on walkthrough of every step — including what the UI looks like at each stage and what good vs. poor training output looks like at the checkpoint previews.
Resources & Downloads
- FluxGym Colab GitHub Repo
- RunPod FluxGym Template
- Flux ComfyUI GGUF Workflow (without LoRA)
- Flux ComfyUI GGUF Workflow (with LoRA)
Ready to run Flux locally?
Our one-click ComfyUI installer gets you up and running with Flux in under 5 minutes — no command line needed.