
EMO

Generate Expressive Portrait Videos Using Audio2Video Diffusion Model

Product Description

The EMO project introduces a method for generating expressive portrait videos with an Audio2Video diffusion model, working reliably even under weak conditions. The approach maps audio input directly to compelling portrait video output, advancing audio-driven animation in computer vision. The work was presented at ECCV 2024.
Project Details