# Text-to-Image Generation

## deep-daze
This command-line tool generates images from text prompts by using OpenAI's CLIP to steer a SIREN network. It offers customizable settings and options for both simple one-line invocations and advanced programmatic use, making it an approachable way to explore what these networks can do. It runs on both Nvidia and AMD GPUs.
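As a rough sketch of the programmatic interface described in the project's README (assuming the `deep-daze` package is installed; parameter values here are illustrative):

```python
# pip install deep-daze
from deep_daze import Imagine

# The Imagine class wraps the CLIP + SIREN optimization loop in a single object.
imagine = Imagine(
    text="a castle floating above the clouds",  # prompt to optimize toward
    num_layers=24,   # SIREN depth: higher quality at the cost of GPU memory
    save_every=100,  # write an intermediate image every N steps
)
imagine()  # runs the optimization and saves generated images to disk
```

The package also exposes an equivalent command-line entry point, so the same generation can be kicked off from a shell without writing any Python.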
## HCP-Diffusion
HCP-Diffusion is a versatile toolbox for text-to-image generation with Stable Diffusion models, built for efficient GPU usage. It integrates Accelerate and Colossal-AI for distributed and memory-efficient training, and supports methods such as DreamArtist++ for improved speed and quality. Layer-wise tuning, LoRA, and multi-dataset training round out its training and inference workflows.
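The layer-wise tuning idea can be pictured with a small, hypothetical sketch. The names and structure below are illustrative only, not HCP-Diffusion's actual configuration format: the point is that different groups of UNet layers receive their own LoRA rank and learning rate.

```python
import fnmatch

# Hypothetical layer-wise LoRA configuration -- illustrative, not the real API.
# Each pattern of UNet layer names maps to its own LoRA rank and learning rate.
layer_wise_lora = {
    "down_blocks.*.attn": {"rank": 8,  "alpha": 8,  "lr": 1e-4},
    "mid_block.attn":     {"rank": 16, "alpha": 16, "lr": 1e-4},
    "up_blocks.*.attn":   {"rank": 8,  "alpha": 8,  "lr": 5e-5},
}

def lora_settings_for(layer_name):
    """Return the LoRA settings whose pattern matches this layer, if any."""
    for pattern, settings in layer_wise_lora.items():
        if fnmatch.fnmatch(layer_name, pattern):
            return settings
    return None  # unmatched layers stay frozen

print(lora_settings_for("mid_block.attn"))      # {'rank': 16, 'alpha': 16, ...}
print(lora_settings_for("down_blocks.0.attn"))  # {'rank': 8, ...}
```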
## rich-text-to-image
This project investigates rich-text formatting as a finer-grained control for text-to-image generation: attributes such as font size, color, and style are mapped to token weighting, precise color placement, and local styles. Recent updates add new model integrations, local style support, and accurate color generation. The method combines region-based diffusion with cross-attention maps extracted from the plain-text prompt, takes a JSON rich-text document as input, and ships with several user-interface deployment options.
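The JSON input can be pictured with a short example. The field names below follow a common rich-text-editor delta format and are illustrative; consult the project for its exact schema. Each text span carries formatting attributes that the method maps onto generation controls.

```python
import json

# Hypothetical rich-text prompt: formatting attributes steer generation.
rich_prompt = {
    "ops": [
        {"insert": "a serene garden with "},
        {"insert": "red roses",
         "attributes": {"color": "#ff0000"}},               # precise color control
        {"insert": " beside a pond, painted in "},
        {"insert": "impressionist style",
         "attributes": {"font": "mirza", "size": "20px"}},  # local style + token weight
    ]
}
print(json.dumps(rich_prompt, indent=2))
```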
## fastcomposer
FastComposer uses diffusion models for personalized, multi-subject text-to-image generation without per-subject fine-tuning. Subject embeddings extracted by an image encoder augment the text conditioning, so generation is driven by both reference images and textual instructions. Cross-attention localization counters identity blending between subjects, and delayed subject conditioning preserves identity without sacrificing prompt fidelity, enabling images of multiple individuals in varying styles. By skipping fine-tuning it achieves up to a 2500x speedup over tuning-based methods and requires no extra storage for new subjects.
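Delayed subject conditioning can be sketched conceptually. This is a simplified, hypothetical illustration of the scheduling idea, not FastComposer's implementation: the first fraction of denoising steps uses only the plain text embedding so the image layout can form, and subject-augmented embeddings are injected afterwards.

```python
def conditioning_schedule(num_steps: int, alpha: float = 0.3):
    """Yield which conditioning to use at each denoising step (conceptual).

    For the first alpha * num_steps steps only the plain text embedding is
    used, letting the overall layout form; subject-augmented embeddings come
    in afterwards to add identity without blending subjects together.
    """
    switch_at = int(alpha * num_steps)
    for i in range(num_steps):
        yield "text_only" if i < switch_at else "text_plus_subject"

# Example: 50 denoising steps, switching after 30% of them (alpha is illustrative).
schedule = list(conditioning_schedule(50, alpha=0.3))
print(schedule[:3], "...", schedule[-3:])
```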
## threestudio
This framework applies leading 2D text-to-image models to 3D content creation from text prompts and images, producing high-quality renders while integrating with existing pipelines. Its modular design, broad method coverage, and frequent updates make it well suited to developers working in 3D generation, animation, and modeling.
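Frameworks of this kind commonly drive 3D optimization with score distillation: renders of the 3D scene are noised, a frozen 2D diffusion model predicts that noise under the text prompt, and the residual is backpropagated into the 3D parameters. Below is a minimal, hypothetical PyTorch sketch of that general idea, not threestudio's actual API.

```python
import torch

def sds_loss(rendered, text_cond, diffusion_model, alphas_cumprod, w=1.0):
    """Score distillation sampling loss (conceptual sketch).

    rendered: image rendered from the current 3D scene, requires_grad=True
    diffusion_model(noisy, t, cond) -> predicted noise (frozen 2D prior)
    """
    t = torch.randint(20, 980, (1,))              # random diffusion timestep
    a = alphas_cumprod[t].view(-1, 1, 1, 1)
    noise = torch.randn_like(rendered)
    noisy = a.sqrt() * rendered + (1 - a).sqrt() * noise
    with torch.no_grad():
        pred = diffusion_model(noisy, t, text_cond)
    grad = w * (pred - noise)                     # SDS gradient direction
    # Surrogate loss whose gradient w.r.t. `rendered` equals `grad`.
    return (grad.detach() * rendered).sum()
```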
## MS-Diffusion
MS-Diffusion tackles multi-subject personalization in text-to-image generation with a grounding resampler and subject-aware cross-attention. Layout guidance coordinates how subjects interact within an image, and the approach works zero-shot, with no per-subject fine-tuning. The repository provides inference tools and the MS-Bench benchmark for evaluation across many scenarios, targeting detailed and cohesive multi-subject outputs.
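The layout-guided, multi-subject input can be pictured with a hypothetical example; the names below are illustrative, not the repository's actual interface. Each reference subject is paired with a phrase from the prompt and a bounding box.

```python
# Hypothetical input structure -- illustrative names, not MS-Diffusion's real API.
prompt = "a corgi sitting next to a blue teapot on a wooden table"
subjects = [
    {"image": "dog.png",    "phrase": "a corgi",       "box": [0.05, 0.30, 0.45, 0.90]},
    {"image": "teapot.png", "phrase": "a blue teapot", "box": [0.55, 0.40, 0.95, 0.85]},
]
# Boxes are normalized [x1, y1, x2, y2]. Conceptually, the grounding resampler
# extracts subject tokens from each reference image, and cross-attention is
# restricted to the corresponding box so the two identities do not blend.
print(f"{len(subjects)} subjects grounded in: {prompt!r}")
```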
## PickScore
The Pick-a-Pic project releases open datasets and a model for studying user preferences over text-to-image outputs. The v2 dataset contains over a million preference examples, alongside the original v1 and the PickScore scoring model. The repository includes the web application, installation instructions, and guides for inference, training, evaluation, and dataset download; a demo is available on Hugging Face Spaces.
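A minimal scoring sketch via Hugging Face `transformers`, following the usage pattern in the project's README (checkpoint names are the published ones; verify against the repository before relying on them):

```python
import torch
from PIL import Image
from transformers import AutoProcessor, AutoModel

device = "cuda" if torch.cuda.is_available() else "cpu"
processor = AutoProcessor.from_pretrained("laion/CLIP-ViT-H-14-laion2B-s32B-b79K")
model = AutoModel.from_pretrained("yuvalkirstain/PickScore_v1").eval().to(device)

def pickscore(prompt, images):
    """Return one preference score per image (higher = preferred)."""
    image_inputs = processor(images=images, return_tensors="pt").to(device)
    text_inputs = processor(text=prompt, padding=True, truncation=True,
                            max_length=77, return_tensors="pt").to(device)
    with torch.no_grad():
        img = model.get_image_features(**image_inputs)
        img = img / img.norm(dim=-1, keepdim=True)
        txt = model.get_text_features(**text_inputs)
        txt = txt / txt.norm(dim=-1, keepdim=True)
        return model.logit_scale.exp() * (txt @ img.T)[0]

scores = pickscore("a photo of a red bicycle",
                   [Image.open("a.png"), Image.open("b.png")])
```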
## TaleCrafter
The project presents an interactive story visualization system that handles multiple characters. Key features include consistent character identity, alignment between text and visuals, and customizable layouts. Built on large language and text-to-image (T2I) models, it chains four components: story-to-prompt conversion, text-to-layout generation, controllable text-to-image creation, and image-to-video animation. Experiments and user studies back its adaptable, interactive approach to visual storytelling.
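The four-stage pipeline can be sketched as plain orchestration. The stage functions below are stubs standing in for the project's actual modules, so the snippet runs end to end while showing only the data flow.

```python
# Hypothetical orchestration of the four components; stubs, not TaleCrafter's code.
def story_to_prompts(story):            # 1. LLM turns the story into shot prompts
    return [s.strip() for s in story.split(".") if s.strip()]

def text_to_layout(prompt):             # 2. prompt -> bounding-box layout (stub)
    return [{"phrase": prompt.split()[0], "box": [0.1, 0.1, 0.9, 0.9]}]

def controllable_t2i(prompt, layout, refs):  # 3. identity-preserving T2I (stub)
    return f"<image: {prompt} | {len(layout)} boxes | refs={list(refs)}>"

def image_to_video(image):              # 4. animate the still image (stub)
    return f"<clip of {image}>"

def visualize_story(story, character_refs):
    clips = []
    for prompt in story_to_prompts(story):
        layout = text_to_layout(prompt)
        image = controllable_t2i(prompt, layout, character_refs)
        clips.append(image_to_video(image))
    return clips

print(visualize_story("A fox finds a lantern. The fox lights the forest.",
                      {"fox": "fox_ref.png"}))
```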
## HunyuanDiT
Hunyuan-DiT is a multi-resolution diffusion transformer with fine-grained understanding of both Chinese and English, supporting multi-turn, iterative image generation. Built around a bilingual text encoder, it ranks among the strongest open-source models for Chinese text-to-image generation. Docker and Hugging Face integration, pre-trained checkpoints, and inference scripts make it practical for research and development in text-to-image synthesis.
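A minimal inference sketch through the Diffusers integration, assuming a recent `diffusers` release that ships `HunyuanDiTPipeline` (verify the checkpoint name upstream):

```python
import torch
from diffusers import HunyuanDiTPipeline

# Load the Diffusers-format checkpoint published by the Hunyuan team.
pipe = HunyuanDiTPipeline.from_pretrained(
    "Tencent-Hunyuan/HunyuanDiT-Diffusers", torch_dtype=torch.float16
).to("cuda")

# The bilingual encoder accepts Chinese prompts directly.
image = pipe(prompt="一只戴着墨镜的柴犬，赛博朋克风格").images[0]
image.save("shiba.png")
```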