# Video Generation

## ShareGPT4Video
ShareGPT4Video provides a large video-text dataset of 40K captions generated by GPT-4V, together with video captioning models such as ShareGPT4Video-8B and ShareCaptioner-Video that can be used to improve text-to-video models. The project publishes demos, the paper, documentation, the dataset, and source code, and the work was accepted at NeurIPS 2024.
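
As a rough orientation, the captions can in principle be pulled from the Hugging Face Hub with the `datasets` library; the dataset id, split, and field names below are assumptions to verify against the project page.

```python
# Minimal sketch of browsing the released captions with the Hugging Face
# `datasets` library. The dataset id, split name, and field layout are
# assumptions; check the project page for the actual file structure.
from datasets import load_dataset

ds = load_dataset("ShareGPT4Video/ShareGPT4Video", split="train")  # assumed id
sample = ds[0]
print(sample.keys())        # expect fields pairing a video id with its caption
print(str(sample)[:300])    # peek at one caption record
```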

## Text-To-Video-Finetuning
Fine-tune video diffusion models with LoRA. The repository provides example configurations for training, data preprocessing, and automatic captioning, and integrates with the ModelScope text-to-video model and Torch 2.0 for memory-efficient training. Community-trained checkpoints such as Zeroscope and Potat1 are supported for higher-quality video generation.
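
The base model being finetuned is the ModelScope text-to-video diffusion model; a minimal inference sketch with Hugging Face diffusers (not this repository's own scripts) looks roughly like this:

```python
# Minimal sketch: sampling from the ModelScope text-to-video diffusion model
# via Hugging Face diffusers. This illustrates the base model the repository
# finetunes; the repo's own training/inference scripts may differ.
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import export_to_video

pipe = DiffusionPipeline.from_pretrained(
    "damo-vilab/text-to-video-ms-1.7b",  # ModelScope T2V weights on the Hub
    torch_dtype=torch.float16,
)
pipe.enable_model_cpu_offload()  # keeps VRAM usage modest

frames = pipe("a corgi running on the beach", num_frames=16).frames[0]
export_to_video(frames, "corgi.mp4")
```

LoRA finetuning then roughly amounts to attaching low-rank adapters to a checkpoint like this one and training them on the preprocessed, captioned clips.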

## TATS
TATS pairs a Time-Agnostic VQGAN with a Transformer to generate long videos: trained on short clips, it can sample far more frames than it saw during training, and it also supports generation conditioned on text or audio. The project additionally reports discrepancies between FVD scores and human evaluation, and includes setup and usage instructions for several datasets.
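
The long-video mechanism is, in essence, autoregressive sampling of discrete VQGAN tokens with a sliding context window; the sketch below illustrates that idea with placeholder modules rather than the actual TATS code.

```python
# Schematic sketch of sliding-window autoregressive token generation, the idea
# that lets TATS sample far more frames than the clips it was trained on.
# `transformer` and `vqgan` are placeholders, not the actual TATS modules.
import torch

def generate_long_video(transformer, vqgan, tokens_per_frame, n_frames, window_frames):
    tokens = []  # flat list of discrete codebook indices
    for _ in range(n_frames * tokens_per_frame):
        # Condition only on the most recent `window_frames` worth of tokens,
        # so generation length is decoupled from the training clip length.
        history = tokens[-window_frames * tokens_per_frame:] or [0]  # [0] = start-token placeholder
        context = torch.tensor(history, dtype=torch.long).unsqueeze(0)
        logits = transformer(context)                      # (1, T, vocab_size)
        probs = torch.softmax(logits[0, -1], dim=-1)
        tokens.append(torch.multinomial(probs, num_samples=1).item())
    codes = torch.tensor(tokens).view(n_frames, tokens_per_frame)
    return vqgan.decode(codes)                             # placeholder decode call
```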

## Automatic-Youtube-Reddit-Text-To-Speech-Video-Generator-and-Uploader
This project automates Reddit text-to-speech video creation with three cooperating programs that balance speed and quality. A manual review step checks content suitability and picks thumbnails, and YouTube API integration handles uploads, pairing familiar TTS voices with background music. It is aimed at creators riding the Reddit TTS trend.
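
For context, the overall pipeline such a tool automates can be approximated with common libraries (gTTS for speech, moviepy for composition); the sketch below is a simplification, not the repository's three-program workflow.

```python
# Highly simplified sketch of the Reddit-post-to-TTS-clip idea using common
# libraries (gTTS, moviepy 1.x); the repository's own pipeline, voices, and
# editing steps differ.
from gtts import gTTS
from moviepy.editor import AudioFileClip, ColorClip, TextClip, CompositeVideoClip

post_title = "What is the most useless talent you have?"  # placeholder post text

# 1) Synthesize narration for the post text.
gTTS(post_title).save("narration.mp3")
audio = AudioFileClip("narration.mp3")

# 2) Compose a simple background + caption video matching the narration length.
background = ColorClip(size=(1080, 1920), color=(20, 20, 20), duration=audio.duration)
caption = (TextClip(post_title, fontsize=48, color="white", size=(900, None), method="caption")
           .set_duration(audio.duration)
           .set_position("center"))
video = CompositeVideoClip([background, caption]).set_audio(audio)

# 3) Render the clip; uploading would then go through the YouTube Data API.
video.write_videofile("reddit_tts_clip.mp4", fps=24)
```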

## SoraReview
This review offers an in-depth exploration of Sora, OpenAI's text-to-video generative model. It covers the model's underlying technologies and its applications in industries including film, education, and healthcare. It also addresses challenges such as video safety and unbiased content generation, and examines potential future developments in AI video production, including how such systems might support human-AI collaboration and improve productivity and creativity in video generation.

## Awesome-Video-Diffusion-Models
This curated survey of video diffusion models collects the main tools, methods, and benchmarks for text-to-video (T2V) generation and editing. It tracks recent work on training, model design, and evaluation, including techniques such as pose-guided and sound-guided video generation, and lists key open-source tools, datasets, frameworks, and evaluation protocols used to build and assess state-of-the-art video generative models. It is a useful reference for researchers and practitioners working on video understanding and enhancement with diffusion methods.

## dolphin
Dolphin is a video platform utilizing large language models for tasks including video understanding, processing, and generation. It offers features like video Q&A, trimming, subtitle editing, and text-to-video conversion. The platform allows the integration of additional video and language models, offering flexibility for developers and researchers. Developed by Beihang University and Nanyang Technological University, Dolphin is continuously enhanced by community efforts.

## MuseV
MuseV is a framework for generating high-fidelity virtual human videos of unlimited length using Visual Conditioned Parallel Denoising. Compatible with Stable Diffusion, it supports applications from image-to-video to video-to-video. Companion projects MuseTalk (lip synchronization) and MusePose (pose-driven video generation) complement it, and development is community-driven.

## LaVie
LaVie is a text-to-video generation framework built on cascaded latent diffusion models and is part of the Vchitect system. It chains a base T2V model, a video interpolation stage, and a video super-resolution stage, with pre-trained checkpoints (the LaVie base model and its Stable Diffusion backbone) available on OpenXLab and Hugging Face Spaces. Different sampling methods and guidance scales can be selected at inference time, and step-by-step installation and inference tutorials are provided. LaVie is available for academic research and commercial use, and welcomes community contributions to video creation technology.
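
Conceptually, the cascade runs three latent diffusion stages in sequence; the sketch below shows that control flow with placeholder stage objects, not LaVie's actual Python API (see its inference tutorials for the real entry points).

```python
# Schematic sketch of a cascaded text-to-video pipeline in the spirit of
# LaVie: base T2V -> frame interpolation -> video super-resolution.
# The stage objects and their call signatures are illustrative placeholders.

def cascaded_t2v(prompt, base_t2v, interpolator, upscaler,
                 guidance_scale=7.5, num_inference_steps=50):
    # Stage 1: low-resolution, low-frame-rate keyframes from the base model,
    # sampled with classifier-free guidance.
    keyframes = base_t2v(prompt,
                         guidance_scale=guidance_scale,
                         num_inference_steps=num_inference_steps)

    # Stage 2: temporal interpolation inserts intermediate frames for
    # smoother motion.
    dense_frames = interpolator(keyframes)

    # Stage 3: spatial super-resolution upscales each frame while keeping
    # temporal consistency.
    return upscaler(dense_frames, prompt=prompt)
```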

## glm-free-api
glm-free-api exposes a ChatGPT-compatible API with fast streaming output, multi-turn dialogue, and AI features such as video and image generation, web search, and image analysis. It needs little configuration and can be deployed on platforms such as Docker, Render, and Vercel.
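
Because the interface is ChatGPT-compatible, standard OpenAI clients can point at a deployed instance; the base URL, model name, and token handling below are assumptions to confirm against the project's README.

```python
# Minimal sketch: calling a locally deployed glm-free-api instance through its
# OpenAI-compatible chat completions endpoint. The port, model name, and token
# handling are assumptions; confirm them against the project's README.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",   # assumed local deployment address
    api_key="YOUR_TOKEN",                  # assumed: a service token in place of an API key
)

response = client.chat.completions.create(
    model="glm-4",                          # assumed model identifier
    messages=[{"role": "user", "content": "Summarize what a latent diffusion model is."}],
    stream=True,                            # the service advertises streaming output
)

for chunk in response:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
```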

## dreamoving-project
DreaMoving is a diffusion-based video generation framework for producing customizable human videos with controllable movements and expressions. Demos are available on ModelScope and Hugging Face. Aimed at researchers and enthusiasts, it explores controllable, realistic human video synthesis for a range of content-creation applications.

## Awesome-Diffusion-Transformers
This compilation surveys diffusion transformers across text, speech, and video generation. It highlights work such as text-driven motion generation and scalable image synthesis, with emphasis on methodologies like transformer-based denoising, high-resolution image synthesis, and efficient training techniques. Featuring papers such as MotionDiffuse and scalable diffusion models, it gives researchers and practitioners an overview of recent innovations in diffusion transformers, with links to resources and recent research.