# ComfyUI
## ComfyUI_IPAdapter_plus
IPAdapter Plus for ComfyUI provides sophisticated models for efficient image-to-image style and subject transfer. It features precise style transfer capabilities, compatibility with Kolors FaceIDv2, and optimized memory usage for long animations. Its experimental ClipVision Enhancer improves high-resolution visuals. The project remains open-source and sponsorship-funded, with comprehensive video guides and installation instructions available on GitHub to fully utilize its advanced image conditioning functionalities in creative applications.
## ComfyUI-PhotoMaker-ZHO
The ComfyUI adaptation of PhotoMaker provides features such as Lora model integration, flexible image input options, and improved processing speed for efficient photo creation. It includes both online and local model loading options, comprehensive style mixing with a choice of ten styles, and adjustable parameters for better output customization. Based on SDXL models, the project emphasizes enhanced speed, refined workflows, and introduces new style features ideal for producing cinematic, digital art, or Disney-inspired photos.
## comfyui-mixlab-nodes
The project provides an extensive collection of UI nodes that enhance the usability of recent ComfyUI releases, featuring API support, video generation integration, and a JS-SDK for frontend use. It supports workflow-to-app transformation and real-time design features such as ScreenShareNode and dynamic prompts. Recent updates include capabilities for video generation via fal.ai, the introduction of SimulateDevDesignDiscussions, and new nodes for image-to-text, LLM enhancements, speech recognition, 3D processing, and style adjustments. This integration supports effective node management and promotes efficient application and workflow development.
## comflowy
Discover the potential of AI generative tools with Comflowy, a community focused on enhancing ComfyUI and Stable Diffusion. Comflowy offers thorough tutorials, active discussions, and a rich database of workflows and models to support developers and users. By lowering barriers to entry, this community aims to make ComfyUI more accessible, setting the stage for its wider adoption in AI graphics. Engage with Comflowy to collaborate with like-minded individuals in advancing AI innovations.
## comfyui-deploy
The platform facilitates the deployment and management of generative workflows through serverless GPUs, offering solutions for version control, error handling, and security. It enables management and execution of workflows across diverse environments, supported by technologies such as shadcn/ui and Next.js. Engage with the community for updates and explore resources for optimal implementation.
## ComfyUI-Moore-AnimateAnyone
ComfyUI-Moore-AnimateAnyone brings the Moore-AnimateAnyone project into ComfyUI, providing a flexible way to add realistic character animation to digital projects. Setup is straightforward: clone the repository into ComfyUI's custom_nodes directory and install the requirements with pip. With comprehensive workflow examples available, this technology enhances character motion realism efficiently across diverse applications.
## rgthree-comfy
rgthree-comfy introduces a versatile set of nodes for ComfyUI aimed at making workflows smoother and more efficient. Features such as seed control, rerouting, image comparison, and context switching facilitate streamlined processes. The project also integrates elements like the Power Lora Loader and Fast Actions Button to assist in semi-automated tasks. Customizable configurations allow for a tailored user experience, focusing on practical techniques like muting and context switching to enhance execution and minimize redundant processing.
## ComfyUI-Workflows-ZHO
Discover a wide array of detailed ComfyUI workflows and extensions, featuring practical tools such as Stable Cascade, LivePortrait Animals, and FLUX.1 DEV & SCHNELL. This collection provides valuable enhancements across multiple categories like 3D modeling and intelligent assistants, designed for users at all experience levels. Access comprehensive guides and resources for effortless integration and effective use, ensuring you enhance ComfyUI's capabilities efficiently.
## clarity-upscaler
Discover Clarity AI, an open-source solution for enhancing images with capabilities to upscale resolutions up to 13k, sharpen images, and format outputs to jpg, png, and webp. It facilitates multi-step and pattern upscaling, with seamless integration into platforms like ComfyUI and A1111 webUI. Comprehensive tutorials guide users in employing API nodes, Cog predictions, and techniques for anime upscaling. Clarity AI is regularly updated with features like pattern upscaling and speed optimizations, delivering adaptable solutions for superior image enhancement.
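The multi-step upscaling mentioned above can be pictured as a chain of fixed-factor passes that stops once the target resolution is reached. The sketch below is illustrative only, not Clarity AI's actual implementation; the function name and the 2x default factor are assumptions.

```python
def plan_upscale_steps(src: int, target: int, factor: float = 2.0) -> list[int]:
    """Plan a chain of fixed-factor upscale passes from src width to target width.

    Each pass multiplies the width by `factor`; the final pass is clamped
    so the result never overshoots the target.
    """
    widths = []
    w = src
    while w < target:
        w = min(int(w * factor), target)
        widths.append(w)
    return widths

# e.g. reaching a ~13k output from a 1024 px source in 2x passes
print(plan_upscale_steps(1024, 13312))  # → [2048, 4096, 8192, 13312]
```

Running the model once per pass, rather than jumping straight to the final resolution, keeps each diffusion step within a size the model handles well.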
## ComfyUI_UltimateSDUpscale
Improve image upscaling projects by integrating ComfyUI with custom nodes from the Ultimate Stable Diffusion Upscale script. This setup provides options like tiled sampling and customizable samplers, allowing precise control over image dimensions and tile sizes through inputs such as 'upscale_by' and 'force_uniform_tiles'. Ideal for professionals who need fine-grained control over image processing.
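The idea behind tiled sampling is to split the upscaled image into a grid and sample each tile separately. A minimal sketch of computing such a grid, with a uniform option similar in spirit to 'force_uniform_tiles' (the function and its behavior here are illustrative assumptions, not the node's actual code):

```python
import math

def tile_grid(width: int, height: int, tile: int = 512, uniform: bool = True):
    """Compute tile rectangles (x0, y0, x1, y1) covering an image.

    With uniform=True, tiles are resized so every tile in the grid is
    equal (akin in spirit to a 'force_uniform_tiles' option); otherwise
    the last row/column may be smaller than `tile`.
    """
    cols = math.ceil(width / tile)
    rows = math.ceil(height / tile)
    tiles = []
    for r in range(rows):
        for c in range(cols):
            if uniform:
                x0 = round(c * width / cols)
                x1 = round((c + 1) * width / cols)
                y0 = round(r * height / rows)
                y1 = round((r + 1) * height / rows)
            else:
                x0, y0 = c * tile, r * tile
                x1, y1 = min(x0 + tile, width), min(y0 + tile, height)
            tiles.append((x0, y0, x1, y1))
    return tiles
```

Uniform tiles keep every sampling pass the same size, which avoids thin edge tiles that can produce visible seams.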
## ComfyUI_Comfyroll_CustomNodes
Explore an extensive library of custom nodes crafted to augment the capabilities of ComfyUI. Featuring nodes for essential operations, aspect ratio adjustment, graphics production, animation planning, and utility tasks, this project emphasizes creativity and efficiency. Installation options include direct download or use of the ComfyUI Manager. The comprehensive wiki offers insights and examples for node utilization, ensuring smooth workflow integration. Regular updates are available through patch notes, fostering a collaborative community of ComfyUI users.
## ComfyUI-3D-Pack
This project provides a comprehensive set of nodes for efficient 3D asset generation in ComfyUI, using algorithms like 3DGS, NeRF, and InstantMesh. It handles inputs including Mesh and UV Texture, and supports major operating systems and GPU setups. The ComfyUI-Manager facilitates easy installation of pre-built components, with options for automated or semi-automated builds. A diverse range of models enables conversion of single images into detailed 3D meshes with texture. Features include tools like StableFast3D and CharacterGen, and advanced functions such as 3D Gaussian Splatting and Instant NGP. Supported by a development community and workflow guides, it serves as a versatile tool for today's 3D creators.
## ComfyUI_LayerStyle
ComfyUI_LayerStyle enhances ComfyUI by introducing Photoshop-like layer and mask features, streamlining your workflow and reducing software transitions. The project offers comprehensive installation guides and supports multiple models, continuously updated for optimal efficiency. Leverage these nodes to seamlessly manage layers and enhance your ComfyUI capabilities with ease.
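At the heart of a Photoshop-style "normal" blend with a layer mask is ordinary alpha compositing. A minimal per-pixel sketch of the math (illustrative only; LayerStyle's nodes operate on whole tensors, not single pixels):

```python
def composite(fg, bg, alpha):
    """Alpha-composite one RGB pixel over another.

    fg, bg: (r, g, b) tuples in 0-255; alpha: layer opacity in 0.0-1.0
    (e.g. read from a mask).  This is the standard 'normal' blend mode:
    out = fg * alpha + bg * (1 - alpha), per channel.
    """
    return tuple(round(f * alpha + b * (1 - alpha)) for f, b in zip(fg, bg))

print(composite((255, 0, 0), (0, 0, 255), 0.5))  # half-opaque red over blue
```

Other blend modes (multiply, screen, overlay) replace the linear mix with a different per-channel formula but follow the same masked-composite structure.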
## ComfyUI-ToonCrafter
ComfyUI-ToonCrafter enables the use of ToonCrafter for generative keyframe animation in ComfyUI, producing results in roughly 26 seconds on an RTX 4090. It also supports rendering animations in Blender without a network connection. Installation involves setting up the ComfyUI node and downloading the model weights; an fp16 variant is available to reduce VRAM requirements and improve performance. Its integration with Blender provides a convenient solution for creating dynamic animations offline.
## ComfyUI-BRIA_AI-RMBG
Discover the capabilities of BRIA Background Removal v1.5, which supports batch processing and mask output for images and videos. Developed as a non-commercial open-source model by BRIA AI, it integrates with ComfyUI, offering a reliable tool for background removal. Installation is flexible, through ComfyUI Manager or manual options, and recent updates enable video background removal and mask output, enhancing its application range.
## comfyui-reactor-node
The ReActor Node provides a simple and efficient methodology for face swaps within ComfyUI, derived from the ReActor SD-WebUI Face Swap Extension. It features compatibility with GPEN restoration models and incorporates the ReActorFaceBoost Node for enhanced facial output. The latest updates introduce performance enhancements and the ability to manage face models using the ReActorBuildFaceModel Node. Suitable for users focused on sophisticated face swapping, including face restoration and model control options. Discover new creative avenues with the ReActor Node, ensuring precise likeness and accurate swap outcomes.
## cog-face-to-many
Discover an open-source tool that transforms faces into diverse forms such as 3D models, pixel art, and more, built on style LoRAs by artificialguybr. Use platforms like Replicate or ComfyUI and integrate nodes such as ComfyUI ControlNet Aux. Follow a detailed guide for setup and use the interface on your own machine or through a web UI.
## comfyui-inpaint-nodes
Explore inpainting capabilities with versatile nodes and models like Fooocus, LaMa, and MAT in ComfyUI. This repository offers tools for smooth image modification, aiding in pre-processing and conditioning. Utilize features such as mask expansion, blurring, and filling techniques to enhance imagery. Simple installation ensures easy integration, increasing the efficiency of inpainting tasks with practical applications.
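Mask expansion, one of the pre-processing features mentioned above, grows a binary mask outward so the inpainted fill fully covers the edges of the target region. A pure-Python sketch of a simple square-kernel dilation (illustrative; the repository's nodes work on image tensors, and the function name is an assumption):

```python
def expand_mask(mask, grow=1):
    """Grow a binary mask (list of lists of 0/1) by `grow` pixels.

    A pixel becomes 1 if any pixel within a `grow`-sized square
    neighbourhood is 1 - a simple dilation, as used before inpainting
    to make sure the fill reaches past the masked region's border.
    """
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            for dy in range(-grow, grow + 1):
                for dx in range(-grow, grow + 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w and mask[ny][nx]:
                        out[y][x] = 1
    return out
```

Blurring the expanded mask afterwards softens the transition between generated and original pixels.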
## comfyui_LLM_party
The project facilitates the development and integration of LLM workflows into existing image systems, providing a wide array of tools for multi-tool usage and management solutions tailored to various industries. It supports ecosystems such as QQ, Feishu, and Discord, featuring API and local model integrations, dynamic model loading, and extensive compatibility, making it suitable for diverse applications in education, research, and media.
## stable-diffusion-webui-chinese
This project delivers a Simplified Chinese localization for the Stable Diffusion Web UI, including translations for various extensions such as ControlNet and openpose-editor as of March 2024. Installation is available via the WebUI extensions tab or through direct template copying, with regular updates to improve accessibility for Chinese-speaking AI art enthusiasts.
## stable-diffusion-webui-docker
This project enables seamless implementation of Stable Diffusion on personal devices, utilizing various UIs like AUTOMATIC1111 and ComfyUI for text-to-image and image-to-image tasks. It offers easy setup guidance and robust support through comprehensive guides and a FAQ section. With multiple UI choices, users can delve into a wide range of features for diverse image generation requirements. The project promotes a community-driven approach with clear content-sharing guidelines aimed at safe and ethical usage.
## ComfyUI
ComfyUI provides a cutting-edge, intuitive interface for crafting sophisticated diffusion workflows using a node-based system that eliminates the need for coding. Supporting a wide array of models including SD1.x, SD2.x, SDXL, and more, it efficiently manages resources with GPU usage as low as 1GB VRAM and supports CPU execution. Features include offline operation, asynchronous queuing, and model loading, making it ideal for versatile and modular diffusion processes. ComfyUI ensures seamless cross-platform compatibility and provides numerous workflow examples for scalable applications.
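Beyond the graphical editor, a running ComfyUI instance accepts workflows over HTTP: export a workflow in API format from the UI, then POST it to the `/prompt` endpoint (port 8188 by default). A minimal standard-library sketch; the helper names are our own, and the endpoint/port assume a default local install:

```python
import json
import urllib.request

def build_payload(workflow: dict) -> bytes:
    """Wrap an API-format workflow dict in the JSON body the /prompt
    endpoint expects."""
    return json.dumps({"prompt": workflow}).encode("utf-8")

def queue_prompt(workflow: dict, host: str = "127.0.0.1:8188") -> dict:
    """POST a workflow to a locally running ComfyUI instance and return
    the server's JSON response (which includes the queued prompt id)."""
    req = urllib.request.Request(
        f"http://{host}/prompt",
        data=build_payload(workflow),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

This is the same mechanism tools like comfyui-deploy and the Photoshop plugin use to drive ComfyUI programmatically.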
## cog-consistent-character
Discover how to effectively create images of characters in various poses with the cog-consistent-character tool. This guide covers local and Replicate usage, emphasizing ComfyUI for streamlined operations. Follow detailed instructions to set up on a GPU machine, employing custom scripts and nodes for integration. Execute local installations and operate a web UI from a Cog container. Connect to the server via the GPU machine's IP to explore advanced character creation features.
## comfyui_segment_anything
The project reimagines `sd-webui-segment-anything` within the ComfyUI framework, integrating essential object segmentation features smoothly and keeping outputs consistent with the original tool. Python dependencies need to be installed for optimized use. Built-in model management for resources such as `bert-base-uncased` and GroundingDINO makes it easier to download what is needed. Community contributions are encouraged to bolster the project's development and capability expansion. The project advances image analysis with strong pre-trained models.
## fast-stable-diffusion
Discover advanced image generation concepts with tools adapted for Paperspace. Featuring AUTOMATIC1111 Webui, ComfyUI, and DreamBooth, the project aims to refine stable diffusion processes. With seamless integration and enhanced functionalities, it presents significant benefits for AI image processing developers and researchers, offering insightful resources irrespective of the user’s familiarity with these tools.
## ComfyUI-YoloWorld-EfficientSAM
This project presents an unofficial implementation of YOLO-World and EfficientSAM models tailored for ComfyUI, emphasizing practical object detection and segmentation. Version 2.0 enhances functionality with mask separation and extraction, compatible with images and videos. The project facilitates model loading for YOLO-World and EfficientSAM on both CUDA and CPU platforms. It offers features like confidence and IoU threshold setting, and customizable mask outputs. Contributions such as the Yoloworld ESAM Detector Provider add value, with user-friendly installation and thorough workflows, supporting detailed detection tasks.
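The confidence and IoU thresholds mentioned above control which detections survive filtering. A self-contained sketch of the underlying logic, using greedy non-maximum suppression (illustrative only; the node's actual filtering lives inside the YOLO-World pipeline, and these function names are assumptions):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x0, y0, x1, y1)."""
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix1 - ix0) * max(0, iy1 - iy0)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def filter_detections(dets, conf_thresh=0.3, iou_thresh=0.5):
    """Keep detections above the confidence threshold, then drop boxes
    that overlap an already-kept box beyond the IoU threshold
    (greedy non-maximum suppression).  dets: list of (box, score)."""
    kept = []
    for box, score in sorted(dets, key=lambda d: -d[1]):
        if score < conf_thresh:
            continue
        if all(iou(box, k) <= iou_thresh for k, _ in kept):
            kept.append((box, score))
    return kept
```

Raising the IoU threshold keeps more overlapping boxes; raising the confidence threshold trades recall for precision.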
## stable-diffusion-prompt-reader
This independent tool allows for the extraction, editing, and management of prompts from images produced by Stable Diffusion, avoiding web interface complications. Supporting macOS, Windows, and Linux, it integrates with tools like A1111's webUI and Easy Diffusion. Users can opt for an intuitive drag-and-drop GUI or advanced command-line interface. Prompts can be copied, removed, or exported, and images can be converted into various formats. Multiple display modes and generation tool detection broaden its application across diverse workflows.
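A1111-style tools embed generation settings as a "parameters" text block in the image metadata: the prompt, an optional "Negative prompt:" line, and a final settings line starting with "Steps:". A simplified sketch of splitting that string (illustrative only, not this tool's actual parser; it handles just the common single-block layout):

```python
def parse_parameters(text: str) -> dict:
    """Split an A1111-style 'parameters' string (as embedded in PNG
    metadata) into prompt, negative prompt, and the settings line.

    Simplified: assumes the settings ('Steps: ...') sit on the last line.
    """
    lines = text.strip().split("\n")
    settings = lines.pop() if lines and lines[-1].startswith("Steps:") else ""
    neg = ""
    for i, line in enumerate(lines):
        if line.startswith("Negative prompt:"):
            neg = "\n".join([line[len("Negative prompt:"):].strip()] + lines[i + 1:])
            lines = lines[:i]
            break
    return {"prompt": "\n".join(lines).strip(),
            "negative": neg.strip(),
            "settings": settings}
```

Real readers also have to detect which tool produced the image, since ComfyUI, Easy Diffusion, and others store metadata in different shapes.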
## ComfyUI_InstantID
Seamless integration of InstantID with ComfyUI enhances identity modeling through native support. This extension bypasses diffusers by incorporating features such as noise injection and face keypoints directly. Key updates improve workflows and resolve critical bugs. Open-source backing encourages contributions for ongoing development, with compatibility for SDXL and advanced styling via IPAdapter.
## Comfy-Photoshop-SD
The guide offers a detailed process for linking ComfyUI with the Auto-Photoshop-StableDiffusion plugin to enhance Stable Diffusion functions in Photoshop. Through ComfyUI-Manager, users can set up custom workflows, transform them into APIs, and access novel features like Img2Img and Controlnet Outpainting. This installation method streamlines image generation and editing by laying out clear steps from extension download to employing advanced features in Photoshop.
## SeargeSDXL
This extension brings advanced image processing capabilities to ComfyUI by incorporating SDXL 1.0 with base and refiner checkpoints. It integrates features like FreeU v2, Controlnet, and Multi-LoRA into a unified workflow. The installation is simplified with a user-friendly script, and comprehensive documentation is provided. Performance improvements include a 20% increase in processing speed and enhanced image quality, making it a valuable resource for generating high-resolution images with detailed customization.
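In a base+refiner workflow, the sampling steps are split between the two checkpoints: the base model denoises most of the way, and the refiner finishes. A sketch of that split (the 0.8 default fraction and function name are illustrative assumptions, not SeargeSDXL's actual values):

```python
def split_steps(total_steps: int, base_fraction: float = 0.8):
    """Split a sampling run between SDXL base and refiner checkpoints.

    The base model handles the first portion of the steps and the
    refiner takes over for the remainder - the usual base/refiner
    handoff.  base_fraction=0.8 is illustrative only.
    """
    base_steps = round(total_steps * base_fraction)
    return base_steps, total_steps - base_steps

print(split_steps(25))  # → (20, 5)
```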
## ComfyUI_Custom_Nodes_AlekPet
Expand ComfyUI's capabilities with custom nodes offering advanced features, such as text translation via Google and Deep Translators, image enhancement with color adjustments, and control nodes. The project integrates smoothly into existing setups and provides straightforward installation options, either through download or Git cloning. These nodes cater to various needs, including text translation, color correction, and code execution, offering versatile solutions for ComfyUI users.
Feedback Email: [email protected]