MOFA-Video
MOFA-Video animates static images by generating dense motion fields from sparse control signals (sparse-to-dense motion generation) and adapting them to a video diffusion model through flow-based adaptation. A single image can be animated with diverse controls, including user-drawn trajectories and keypoint sequences. Developed by Tencent AI Lab and the University of Tokyo, the work was presented at ECCV 2024. The project releases both training and inference code, along with guides and demos to make it easy to get started turning static images into dynamic motion.
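To make the sparse-to-dense idea concrete, here is a toy sketch of densifying a handful of user-drawn trajectory displacements into a per-pixel flow field via Gaussian-weighted splatting. This is not MOFA-Video's actual method, which learns this step with a trained motion generation network inside the diffusion pipeline; the function and parameter names below are hypothetical and purely illustrative.

```python
import numpy as np

def sparse_to_dense_flow(points, displacements, h, w, sigma=20.0):
    """Illustrative stand-in for sparse-to-dense motion generation:
    spread sparse trajectory displacements into a dense flow field
    using Gaussian weights. MOFA-Video learns this mapping instead.

    points:        (N, 2) array of (x, y) pixel coordinates
    displacements: (N, 2) array of (dx, dy) motion vectors
    """
    ys, xs = np.mgrid[0:h, 0:w]                       # dense pixel grid
    flow = np.zeros((h, w, 2), dtype=np.float32)
    weight = np.full((h, w), 1e-8, dtype=np.float32)  # avoid divide-by-zero
    for (x, y), (dx, dy) in zip(points, displacements):
        g = np.exp(-((xs - x) ** 2 + (ys - y) ** 2) / (2 * sigma ** 2))
        flow[..., 0] += g * dx
        flow[..., 1] += g * dy
        weight += g
    return flow / weight[..., None]                   # normalized dense flow

# Example: two opposing drag points on a 256x256 image
pts = np.array([[64, 128], [192, 128]], dtype=np.float32)
dsp = np.array([[10, 0], [-10, 0]], dtype=np.float32)
dense = sparse_to_dense_flow(pts, dsp, 256, 256)
print(dense.shape)  # (256, 256, 2)
```

In the actual pipeline, a dense field like this would condition the video diffusion model so that generated frames follow the user's sparse controls.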