Training
Train face replacement models using your local GPU or cloud instances. Monitor loss, preview progress, and manage training sessions.
Overview
Training is the core process of teaching a neural network to transform one face into another. Recaster supports four model architectures from DeepFaceLab -- SAEHD, AMP, Quick96, and XSeg -- each optimized for different use cases ranging from quick testing to production-quality output.
Training can run on your local GPU (available on the Free tier) or on cloud GPUs via Vast.ai (Studio tier). Both modes use the same training interface, with real-time loss monitoring and preview capabilities.
Training Workflow
The end-to-end face replacement workflow follows these major stages. Training is the longest stage and the one where model quality is determined.
1. Extract Faces
2. Review and Edit Masks
3. Configure Training
4. Train the Model
5. Monitor Progress
6. Merge Results
Training Modes
Recaster offers two training modes to accommodate different hardware situations and workflows:
Local Training
Train on your own GPU. Available to all users on the Free tier. Requires a compatible NVIDIA, AMD, or Apple Silicon GPU with sufficient VRAM.
- No additional cost beyond electricity
- Full control over hardware
- No network latency
- Limited by your GPU's VRAM and speed
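If you are unsure how much VRAM your card has, a quick script can report it before you commit to a run. This sketch assumes PyTorch is installed and an NVIDIA GPU; Recaster detects your hardware on its own, so this is only a convenience check:

```python
# Quick VRAM sanity check before a local training run.
# Assumes PyTorch is installed; AMD and Apple Silicon use different backends.
import torch

if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print(f"{props.name}: {props.total_memory / 1024**3:.1f} GB VRAM")
else:
    print("No CUDA GPU detected.")
```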
Remote Training
Train on cloud GPUs via Vast.ai. Requires a Studio tier license. Access powerful GPUs like the RTX 3090, RTX 4090, and A100 on demand.
- Access to high-end GPUs without buying hardware
- Multiple concurrent training sessions
- Live preview streaming from the remote instance
- Pay-per-use pricing (typically $0.20-0.80/hr)
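As a rough worked example using those rates: a 24-hour session billed at $0.50/hr comes to about $12, while the same run at $0.80/hr is roughly $19.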
Training Interface
The Training widget provides a unified interface for both local and remote training. Key interface elements include:
Configuration Panel
Set the model type, resolution, batch size, architecture dimensions, and learning rate. Configuration options change dynamically based on the selected model type. Hover over any parameter for a tooltip explanation.
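As a rough sketch of what the panel captures, the snippet below mirrors its main parameters. The field names and values are illustrative examples, not Recaster's exact option names:

```python
# Illustrative training configuration -- names and defaults are
# hypothetical examples, not Recaster's exact options.
training_config = {
    "model_type": "SAEHD",   # SAEHD, AMP, Quick96, or XSeg
    "resolution": 256,       # face resolution in pixels
    "batch_size": 8,         # lower this if you run out of VRAM
    "ae_dims": 256,          # autoencoder (architecture) dimensions
    "learning_rate": 5e-5,   # optimizer step size
}
```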
Loss Graph
A real-time chart showing source loss (blue line) and destination loss (yellow line) over iterations. Both lines should trend downward during training. When the lines flatten, the model has converged and further training provides diminishing returns.
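One simple way to judge flattening numerically is to compare the average loss over the most recent window of iterations against the window before it. This is a minimal sketch of the idea, not Recaster's own detection logic:

```python
def has_converged(losses: list[float], window: int = 500,
                  tolerance: float = 0.001) -> bool:
    """Heuristic convergence check: True when recent loss has stopped improving.

    Compares the mean loss of the last `window` iterations with the mean of
    the `window` before it. A sketch only -- thresholds vary by model.
    """
    if len(losses) < 2 * window:
        return False
    recent = sum(losses[-window:]) / window
    previous = sum(losses[-2 * window:-window]) / window
    return previous - recent < tolerance  # improvement has flattened out
```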
Preview Canvas
Shows the current training preview with 9 different views of the face swap in progress. Toggle between views to assess different aspects of the swap quality.
Control Buttons
Start, pause, resume, and stop training. Save the model manually or create a backup at any point. The current iteration count and estimated time are displayed in the status area.
Training History
Recaster automatically tracks all training sessions, including the configuration used, iteration count, loss values, and duration. The training history is stored in your application settings folder and can be accessed from the Training widget.
Training history is auto-saved every 30 seconds and at milestone iterations (every 1,000 iterations). This ensures that progress is preserved even if the application or system crashes unexpectedly.
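The two save triggers described above boil down to a wall-clock check plus a milestone check, roughly like this (a sketch of the logic, not Recaster's implementation):

```python
import time

SAVE_INTERVAL_SECONDS = 30     # wall-clock auto-save interval
MILESTONE_ITERATIONS = 1_000   # milestone auto-save interval

def should_save(iteration: int, last_save_time: float) -> bool:
    """Sketch of the auto-save triggers: every 30 seconds or every 1,000 iterations."""
    time_due = time.time() - last_save_time >= SAVE_INTERVAL_SECONDS
    milestone_due = iteration > 0 and iteration % MILESTONE_ITERATIONS == 0
    return time_due or milestone_due
```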
Resume Previous Sessions
Because session state is saved automatically, you can reopen a past session from the history list and resume training from its last saved point.
Learn More
Explore specific training topics in detail:
Local Training
Train on your own GPU with real-time loss monitoring, auto-save, and pause/resume support.
Model Types
Compare SAEHD, AMP, Quick96, and XSeg models. Understand when to use each architecture.
Preview Window
Navigate 9 preview views, read loss graphs, and recognize convergence signs during training.
Remote Training
Train on Vast.ai cloud GPUs with live preview streaming, multi-session support, and cost tracking. Requires the Studio tier.
Quick Tips
- Start small -- Use Quick96 first to verify your dataset is clean before committing to a long SAEHD training run.
- Clean your dataset -- Remove duplicate, blurry, or misaligned faces before training. Dataset quality matters more than dataset size.
- Monitor loss -- If loss stops decreasing, training has likely converged, and further iterations waste time. A steadily rising loss usually signals a problem (such as an unstable learning rate) rather than progress.
- Save regularly -- While auto-save runs every 30 seconds, create manual backups before experimenting with settings changes.
- Match face counts -- Try to keep the numbers of source and destination faces roughly similar for balanced training; the sketch below shows a quick way to compare them.
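A quick way to compare the two counts is to tally the image files in each dataset folder. The paths below assume a DeepFaceLab-style workspace layout and are examples only:

```python
from pathlib import Path

def count_faces(folder: str) -> int:
    """Count extracted face images in a dataset folder."""
    exts = {".jpg", ".jpeg", ".png"}
    return sum(1 for p in Path(folder).iterdir() if p.suffix.lower() in exts)

# Example paths -- adjust to wherever your extracted faces live.
src = count_faces("workspace/data_src/aligned")
dst = count_faces("workspace/data_dst/aligned")
print(f"source: {src}, destination: {dst}, ratio: {src / max(dst, 1):.2f}")
```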