
Follow Your Pose: Pose-Guided Text-to-Video Generation using Pose-Free Videos

About

Generating text-editable and pose-controllable character videos is in high demand for creating diverse digital humans. However, this task has been restricted by the absence of a comprehensive dataset of paired video-pose captions and of generative prior models for videos. In this work, we design a novel two-stage training scheme that utilizes easily obtained datasets (i.e., image-pose pairs and pose-free videos) and a pre-trained text-to-image (T2I) model to produce pose-controllable character videos. Specifically, in the first stage, only keypoint-image pairs are used, for controllable text-to-image generation: we learn a zero-initialized convolutional encoder to encode the pose information. In the second stage, we finetune the above network on a pose-free video dataset to learn motion, adding learnable temporal self-attention and reformed cross-frame self-attention blocks. Powered by these new designs, our method generates continuously pose-controllable character videos while keeping the editing and concept-composition abilities of the pre-trained T2I model. The code and models will be made publicly available.
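The zero-initialized pose encoder mentioned in the first stage can be sketched as follows. This is a minimal, hypothetical illustration (layer sizes, channel counts, and names are assumptions, not the paper's actual architecture): the key idea is that the final projection layer starts at zero, so the encoder injects nothing into the pretrained T2I model at the beginning of training and learns to add pose guidance gradually.

```python
import torch
import torch.nn as nn

class ZeroInitPoseEncoder(nn.Module):
    """Hypothetical sketch of a stage-1 pose encoder: a small conv stack
    whose final projection is zero-initialized, so its output is exactly
    zero before training and the pretrained T2I model is undisturbed."""

    def __init__(self, in_channels: int = 3, out_channels: int = 320):
        super().__init__()
        # Encode the rendered keypoint/pose map into a feature grid.
        self.body = nn.Sequential(
            nn.Conv2d(in_channels, 64, kernel_size=3, padding=1),
            nn.SiLU(),
            nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1),
            nn.SiLU(),
        )
        # Zero-init the projection: the encoder contributes nothing at step 0.
        self.proj = nn.Conv2d(128, out_channels, kernel_size=1)
        nn.init.zeros_(self.proj.weight)
        nn.init.zeros_(self.proj.bias)

    def forward(self, pose_map: torch.Tensor) -> torch.Tensor:
        # Output is added to intermediate T2I features (residual-style).
        return self.proj(self.body(pose_map))
```

In practice such an encoder's output would be summed into intermediate features of the frozen T2I backbone; the zero initialization guarantees the first training step reproduces the backbone's original behavior exactly.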

Yue Ma, Yingqing He, Xiaodong Cun, Xintao Wang, Siran Chen, Ying Shan, Xiu Li, Qifeng Chen• 2023

Related benchmarks

Task                               | Dataset                             | Metric            | Result | Rank
Character Animation                | DualDynamics                        | FVD               | 298.3  | 8
Video Editing                      | 20 in-the-wild cases                | CLIP score        | 26.55  | 8
Video Motion Editing               | User Study (20 video cases)         | M-A Score         | 96.3   | 7
Human Motion Generation            | LLM-generated Prompts (50 prompts)  | Aesthetic Quality | 48.8   | 5
Pose-conditioned Video Generation  | UVCBench                            | Aesthetic Quality | 50.36  | 5
