
Taming Diffusion Probabilistic Models for Character Control

About

We present a novel character control framework that effectively utilizes motion diffusion probabilistic models to generate high-quality and diverse character animations, responding in real-time to a variety of dynamic user-supplied control signals. At the heart of our method lies a transformer-based Conditional Autoregressive Motion Diffusion Model (CAMDM), which takes as input the character's historical motion and can generate a range of diverse potential future motions conditioned on high-level, coarse user control. To meet the demands for diversity, controllability, and computational efficiency required by a real-time controller, we incorporate several key algorithmic designs: separate condition tokenization, classifier-free guidance on past motion, and heuristic future trajectory extension, all designed to address the challenges of taming motion diffusion probabilistic models for character control. As a result, our work represents the first model that enables real-time generation of high-quality, diverse character animations under interactive user control, and it supports animating the character in multiple styles with a single unified model. We evaluate our method on a diverse set of locomotion skills, demonstrating its merits over existing character controllers. Project page and source code: https://aiganimation.github.io/CAMDM/
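The control loop described above, an autoregressive rollout in which each denoised future-motion window becomes the past-motion context for the next, combined with classifier-free guidance applied to the past-motion condition, can be sketched as follows. This is a toy illustration, not the paper's implementation: the denoiser stand-in, the update rule, and all names (`sample_window`, `rollout`, `guidance`) are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def denoiser(x_t, t, past, control):
    # Hypothetical stand-in for the transformer denoiser: any function of the
    # noisy window, timestep, past-motion context, and user control signal.
    bias = 0.0 if past is None else past.mean()
    return 0.1 * x_t + bias + 0.01 * control

def sample_window(past, control, steps=8, guidance=2.0, dim=4, frames=16):
    """Denoise one future-motion window, applying classifier-free guidance
    to the past-motion condition (past is dropped for the unconditional pass)."""
    x = rng.standard_normal((frames, dim))
    for t in reversed(range(steps)):
        eps_cond = denoiser(x, t, past, control)
        eps_uncond = denoiser(x, t, None, control)  # past-motion condition dropped
        eps = eps_uncond + guidance * (eps_cond - eps_uncond)
        x = x - eps / steps  # toy update rule, not the real DDPM posterior step
    return x

def rollout(n_windows, control, dim=4, frames=16):
    """Autoregressive control loop: each generated window becomes the
    past-motion context conditioning the next window."""
    past = np.zeros((frames, dim))
    clips = []
    for _ in range(n_windows):
        past = sample_window(past, control, dim=dim, frames=frames)
        clips.append(past)
    return np.concatenate(clips)

motion = rollout(3, control=1.0)
print(motion.shape)  # (48, 4)
```

In practice the guidance weight trades diversity against adherence to the past-motion context; the sketch exposes it as the `guidance` parameter.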

Rui Chen, Mingyi Shi, Shaoli Huang, Ping Tan, Taku Komura, Xuelin Chen• 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Text-conditioned human interaction generation | InterHuman (test) | R-Precision (Top 1) | 33.5 | 12 |
| Text-conditioned human interaction generation | InterX (test) | R-Precision (Top 1) | 31.2 | 10 |
| Motion generation | 100STYLE | FID | 0.91 | 5 |
