# clip-guided-diffusion
This project uses CLIP to guide diffusion models for text-to-image generation. It offers control over prompt complexity and output image size, runs on both CPU and GPU, and supports blending with an initial image and adjusting the number of diffusion timesteps. Both a CLI and a Python API are provided, installation is straightforward, and wandb integration is available for logging outputs.