# ComfyUI-AdvancedLivePortrait

## Update
As of August 21, 2024, ComfyUI-AdvancedLivePortrait can create a video without requiring a video file, and can track facial expressions from a source video. The sample workflows have been updated to reflect these features.
## Introduction
The ComfyUI-AdvancedLivePortrait project offers a fast editing workflow with a real-time preview of changes. It supports a wide array of creative possibilities:
- Facial Expression Editing: Users can modify facial expressions in still photos.
- Integration with Videos: The tool allows inserting these edited facial expressions into video clips, providing a dynamic and customized video experience.
- Creation of Animations: Users can generate animations by stringing together multiple facial expressions, making it ideal for dynamic content creators.
- Facial Expression Extraction: It is possible to extract facial expressions from sample photos and apply them creatively elsewhere.
For a visual overview, there is a demonstration available through the project's GitHub repository.
## Installation
ComfyUI-AdvancedLivePortrait can be installed automatically through ComfyUI-Manager, keeping setup straightforward.
## Usage
Several workflows and sample data ship with the project in '\custom_nodes\ComfyUI-AdvancedLivePortrait\sample'. These resources serve as starting points for exploring the tool's functionality.
One of the key features is the ability to add expressions to video content, as detailed in the 'workflow2_advanced.json' file. This workflow shows how commands drive facial expression integration, using the following 'Motion index' notation:
- Command format: `[Motion index] = [Changing frame length] : [Length of frames waiting for next motion]`
- Motion index 0 represents the original, unaltered source image; higher indices correspond to the expressions applied in sequence.
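The command notation above can be sketched as a small parser. This is an illustrative sketch only: the exact delimiters and field order used by 'workflow2_advanced.json' may differ, and the names `MotionCommand`, `parse_motion_command`, and `total_frames` are hypothetical helpers, not part of the node pack.

```python
from dataclasses import dataclass

@dataclass
class MotionCommand:
    motion_index: int   # 0 = the original, unaltered source image
    change_frames: int  # frames spent transitioning into this expression
    wait_frames: int    # frames held before the next motion begins

def parse_motion_command(line: str) -> MotionCommand:
    """Parse a command of the assumed form '<index> = <change> : <wait>'."""
    index_part, timing_part = line.split("=")
    change_part, wait_part = timing_part.split(":")
    return MotionCommand(int(index_part), int(change_part), int(wait_part))

def total_frames(commands: list[MotionCommand]) -> int:
    # Each motion contributes its transition frames plus its hold frames.
    return sum(c.change_frames + c.wait_frames for c in commands)
```

For example, `parse_motion_command("1 = 10 : 5")` would describe motion 1 transitioning over 10 frames and holding for 5 more before the next motion starts.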
Link a driving video to 'src_images' to add these custom facial expressions to the video.
## Data Management
Expressions you create or modify can be saved with the 'Save Exp Data' node and reloaded with the 'Load Exp Data' node. Saved data is stored under '\ComfyUI\output\exp_data' for easy retrieval and reuse.
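The save/reload cycle can be sketched as follows. Note the assumptions: the real nodes define their own serialization format, so the JSON layout, the dict-of-floats content, and the helper names `save_exp_data`/`load_exp_data` here are illustrative only; the directory simply mirrors the documented output location.

```python
import json
from pathlib import Path

# Mirrors the documented '\ComfyUI\output\exp_data' location (relative here).
EXP_DATA_DIR = Path("ComfyUI/output/exp_data")

def save_exp_data(name: str, exp_values: dict) -> Path:
    """Write expression parameters to '<name>.json' under the exp_data folder.

    The dict-of-floats layout is an assumption; the actual 'Save Exp Data'
    node controls its own on-disk format.
    """
    EXP_DATA_DIR.mkdir(parents=True, exist_ok=True)
    path = EXP_DATA_DIR / f"{name}.json"
    path.write_text(json.dumps(exp_values, indent=2))
    return path

def load_exp_data(name: str) -> dict:
    """Reload previously saved expression parameters by name."""
    return json.loads((EXP_DATA_DIR / f"{name}.json").read_text())
```

Saving an expression once and reusing it across workflows is the intent: `save_exp_data("smile", {...})` followed later by `load_exp_data("smile")`.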
## Acknowledgments
The project is built on the foundational work of the original Live Portrait author. It also incorporates a model converted by kijai; see the linked GitHub repository for details.
This comprehensive approach to facial expression editing and integration in videos presents a powerful tool for creators looking to animate and enhance visual storytelling with ease and precision.