Awesome-Video-Diffusion-Models
This curated survey covers video diffusion models, outlining essential tools, methods, and benchmarks for text-to-video (T2V) generation and editing. It highlights recent advances in training strategies, model architectures, and evaluation criteria, and covers techniques such as pose-guided and sound-guided video generation. It also catalogs key open-source tools and datasets, along with the frameworks and evaluation norms that support the development and assessment of state-of-the-art video generative models. Aimed at researchers and practitioners, this resource supports work on video understanding and generation via diffusion methodologies.