Project Introduction: OneTrainer
OneTrainer is an all-encompassing platform designed to simplify and enhance the training process for stable diffusion models. It combines various features and tools to offer a seamless experience for users looking to work with different diffusion models.
Key Features of OneTrainer
- Range of Supported Models: OneTrainer supports a variety of models, including FLUX.1, multiple versions of Stable Diffusion (1.5, 2.0, 2.1, 3.0, 3.5), SDXL, Würstchen-v2, Stable Cascade, PixArt-Alpha, PixArt-Sigma, and inpainting models.
- Model Formats: Both diffusers and ckpt model formats are supported, providing flexibility in how models are loaded and saved.
- Training Methods: Users can choose between full fine-tuning, LoRA, and embeddings, depending on their specific training needs.
- Masked Training: Focus training on selected regions of each sample so the rest of the image does not influence the result.
- Automatic Backups: Regular backups ensure that training progress is saved and can be resumed at any point.
- Image Augmentation: Increase dataset diversity with random transformations applied to each image, such as adjustments to rotation, brightness, contrast, and saturation.
- Tensorboard Integration: Monitor and track training progress through a simple Tensorboard integration.
- Multiple Prompts: Train on several different prompts per image sample to explore varied outcomes.
- Noise Scheduler Rescaling: Rescale the noise schedule following the findings of "Common Diffusion Noise Schedules and Sample Steps Are Flawed".
- EMA (Exponential Moving Average): Train your own EMA model, and optionally keep the EMA weights in CPU memory to reduce VRAM usage.
- Aspect Ratio Bucketing: Automatically train on multiple aspect ratios at once; buckets are created without manual configuration.
- Multi Resolution Training: Train on several resolutions at the same time to capture varied dataset dimensions.
- Dataset Tooling: Automatically caption your dataset with BLIP, BLIP2, or WD-1.4, or generate masks for masked training with ClipSeg or Rembg.
- Model Tooling and Conversion: Convert models between formats directly from the user interface.
- Sampling UI: Sample the model during training from within OneTrainer, with no need to switch to another application.
- AlignProp: A reinforcement learning method for fine-tuning diffusion networks, implemented from recent research.
Installation and Setup
To get started with OneTrainer, ensure Python 3.10 is installed. The setup involves cloning the repository from GitHub and running provided scripts for automatic or manual installation. The process is compatible with both Windows and Unix-based systems, with additional steps required for some Linux distributions.
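For orientation, a typical automatic installation follows the pattern below. This is a sketch based on the repository's published setup scripts; the exact script names may differ between releases, so check the repository before running them.

```
# Clone the repository and enter it
git clone https://github.com/Nerogar/OneTrainer.git
cd OneTrainer

# Run the bundled setup script, which creates a virtual environment
# and installs the Python dependencies
install.bat       # Windows
./install.sh      # Linux / macOS
```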
Keeping Up to Date
OneTrainer offers both automatic and manual options for updates, enabling users to seamlessly incorporate the latest features and fixes.
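In practice this usually means either running the bundled update script or pulling and reinstalling by hand. The commands below are a sketch; the script name and requirements file are assumptions to verify against the repository.

```
# Automatic update
update.bat        # Windows
./update.sh       # Linux / macOS

# Manual update: pull the latest code and refresh dependencies
git pull
pip install -r requirements.txt
```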
Usage Guide
Run the start-ui.bat script to launch the user interface. Users who prefer the command line can instead use the provided scripts for functionalities such as training, captioning, and sampling.
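As an illustration of both routes, the UI launchers and one possible command-line invocation are shown below. The training script name and its --config-path argument are assumptions, so consult the scripts directory in the repository for the exact entry points.

```
# Launch the graphical interface
start-ui.bat                 # Windows
./start-ui.sh                # Linux / macOS

# Command-line training from a saved configuration
# (script name and argument are illustrative)
python scripts/train.py --config-path training_config.json
```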
Contributing to OneTrainer
The project welcomes contributions, whether through code, discussions, or issue reporting. A structured contribution guide is available for those looking to enhance OneTrainer’s functionality. Additionally, contributors should install the required dependencies and set up Git hooks for consistent code quality.
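A minimal sketch of that developer setup, assuming a requirements-dev.txt file and pre-commit-managed hooks (both are assumptions; the contribution guide is authoritative):

```
# Install development dependencies (file name assumed)
pip install -r requirements-dev.txt

# Install the Git hooks used for code-quality checks (assuming pre-commit is used)
pre-commit install
```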
Related Projects
OneTrainer draws inspiration from and is part of a community that includes tools like MGDS, StableTuner, and Visions of Chaos. These projects offer complementary functions and share the aim of advancing machine learning capabilities.
In conclusion, OneTrainer stands as a powerful and user-friendly option for engaging with stable diffusion model development, offering extensive features that cater to both novice and experienced users in the field.