Follow Your Pose: Pose-Guided Text-to-Video Generation using Pose-Free Videos
About
Generating text-editable and pose-controllable character videos is in urgent demand for creating diverse digital humans. However, this task has been held back by the absence of a comprehensive dataset with paired video-pose captions and of generative prior models for videos. In this work, we design a novel two-stage training scheme that exploits easily obtained datasets (i.e., image-pose pairs and pose-free videos) together with a pre-trained text-to-image (T2I) model to produce pose-controllable character videos. Specifically, in the first stage, only keypoint-image pairs are used for controllable text-to-image generation: we learn a zero-initialized convolutional encoder to encode the pose information. In the second stage, we fine-tune the above network to model motion on a pose-free video dataset by adding learnable temporal self-attention and reformed cross-frame self-attention blocks. Powered by these new designs, our method generates continuously pose-controllable character videos while preserving the editing and concept-composition abilities of the pre-trained T2I model. The code and models will be made publicly available.
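The zero-initialized convolutional pose encoder from stage one can be sketched as follows. This is a minimal illustration, assuming a PyTorch implementation; the module name, layer widths, and activation choices are hypothetical, not the paper's exact architecture. The key idea is that the final convolution starts at all zeros, so the encoder initially injects nothing into the frozen T2I backbone and the pre-trained prior is preserved at the start of training.

```python
import torch
import torch.nn as nn

class ZeroInitPoseEncoder(nn.Module):
    """Hypothetical sketch of a zero-initialized convolutional pose encoder.

    Maps a keypoint (pose) map to residual features intended to be added to
    the features of a pre-trained T2I backbone. The zero-initialized final
    conv makes the encoder a no-op at initialization.
    """

    def __init__(self, in_channels: int = 3, feat_channels: int = 320):
        super().__init__()
        # Small conv stack to embed the rendered keypoint map (widths are
        # illustrative assumptions, not taken from the paper).
        self.body = nn.Sequential(
            nn.Conv2d(in_channels, 64, kernel_size=3, padding=1),
            nn.SiLU(),
            nn.Conv2d(64, feat_channels, kernel_size=3, padding=1),
            nn.SiLU(),
        )
        # Final 1x1 projection, zero-initialized: output is exactly zero
        # before any training step.
        self.zero_conv = nn.Conv2d(feat_channels, feat_channels, kernel_size=1)
        nn.init.zeros_(self.zero_conv.weight)
        nn.init.zeros_(self.zero_conv.bias)

    def forward(self, pose_map: torch.Tensor) -> torch.Tensor:
        return self.zero_conv(self.body(pose_map))

encoder = ZeroInitPoseEncoder()
features = encoder(torch.randn(1, 3, 64, 64))
```

Because the last layer is zeroed, `features` is an all-zero tensor at initialization; gradients still flow through `zero_conv`'s weights, so the encoder gradually learns to inject pose information without disturbing the T2I model's prior at the outset.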
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Character Animation | DualDynamics | FVD | 298.3 | 8 |
| Video Editing | 20 in-the-wild cases | CLIP score | 26.55 | 8 |
| Video Motion Editing | User Study (20 video cases) | M-A Score | 96.3 | 7 |
| Human Motion Generation | LLM-generated Prompts (50 prompts) | Aesthetic Quality | 48.8 | 5 |
| Pose-conditioned Video Generation | UVCBench | Aesthetic Quality | 50.36 | 5 |