FollowYourPose
We explore a method for generating pose-controllable, text-editable character videos using easily accessible datasets and pre-trained text-to-image models. Training proceeds in two stages: a convolutional pose encoder first learns pose control from image-pose pairs, and temporal self-attention modules are then tuned on pose-free videos to keep frames coherent. Because the pre-trained model is left largely intact, its text-editing capabilities carry over to video generation. Our code and models are available for exploring pose-guided digital human creation.
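To illustrate the temporal component, here is a minimal NumPy sketch of temporal self-attention: each spatial location attends across the time axis so that a pixel's feature becomes a weighted mix of that pixel's features over all frames. The function name, shapes, and projection matrices are illustrative assumptions, not the project's actual implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def temporal_self_attention(frames, Wq, Wk, Wv):
    """Attend across time independently at each spatial location.

    frames: (T, H, W, C) video features; Wq/Wk/Wv: (C, C) projections.
    Returns features of the same shape, where each pixel is a
    softmax-weighted mix of that pixel's features across all T frames.
    """
    T, H, W, C = frames.shape
    # Fold spatial positions into the batch axis: (H*W, T, C).
    x = frames.transpose(1, 2, 0, 3).reshape(H * W, T, C)
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    # (H*W, T, T) attention over the T frames at each location.
    attn = softmax(q @ k.transpose(0, 2, 1) / np.sqrt(C), axis=-1)
    out = attn @ v  # (H*W, T, C)
    return out.reshape(H, W, T, C).transpose(2, 0, 1, 3)
```

Mixing information only along the time axis is what lets such modules be bolted onto a frozen image model: spatial layers keep producing per-frame content, while the temporal layers smooth it across frames.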