
StableMoFusion: Towards Robust and Efficient Diffusion-based Motion Generation Framework

About

Thanks to the powerful generative capacity of diffusion models, recent years have witnessed rapid progress in human motion generation. Existing diffusion-based methods employ disparate network architectures and training strategies, and the effect of each component's design remains unclear. In addition, the iterative denoising process incurs considerable computational overhead, which is prohibitive for real-time scenarios such as virtual characters and humanoid robots. For this reason, we first conduct a comprehensive investigation into network architectures, training strategies, and inference processes. Based on this in-depth analysis, we tailor each component for efficient, high-quality human motion generation. Despite its promising performance, the tailored model still suffers from foot skating, a ubiquitous issue in diffusion-based solutions. To eliminate foot skating, we identify foot-ground contact and correct foot motions along the denoising process. By organically combining these well-designed components, we present StableMoFusion, a robust and efficient framework for human motion generation. Extensive experimental results show that StableMoFusion performs favorably against current state-of-the-art methods. Project page: https://h-y1heng.github.io/StableMoFusion-page/
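
The abstract mentions detecting foot-ground contact and correcting foot motions to remove foot skating. The sketch below illustrates the general idea only: contact frames are flagged with simple height and velocity thresholds, and the foot is pinned to its position at the start of each contact segment. The function names, thresholds, and the pinning strategy are illustrative assumptions, not the paper's actual procedure, which is applied along the denoising process as described in the publication.

```python
import numpy as np

def detect_foot_contact(foot_pos, height_thresh=0.05, vel_thresh=0.01):
    """Flag frames where a foot joint is low and nearly static as 'in contact'.

    foot_pos: (T, 3) world-space positions of one foot joint over T frames (y-up).
    The thresholds are illustrative placeholders, not values from the paper.
    """
    vel = np.linalg.norm(np.diff(foot_pos, axis=0), axis=-1)  # per-frame speed, (T-1,)
    vel = np.concatenate([vel, vel[-1:]])                     # pad to length T
    low = foot_pos[:, 1] < height_thresh                      # foot close to the ground
    still = vel < vel_thresh                                  # foot barely moving
    return low & still                                        # (T,) boolean contact mask

def remove_foot_skating(foot_pos, contact):
    """Pin the foot to its position at the start of each contact segment."""
    fixed = foot_pos.copy()
    t, T = 0, len(contact)
    while t < T:
        if contact[t]:
            start = t
            while t < T and contact[t]:
                t += 1
            fixed[start:t] = foot_pos[start]  # hold the contact-start position
        else:
            t += 1
    return fixed
```

In practice such a correction would be fed back into the remaining denoising steps rather than applied once as a post-process; see the paper for the exact formulation.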

Yiheng Huang, Hui Yang, Chuanchen Luo, Yuxi Wang, Shibiao Xu, Zhaoxiang Zhang, Man Zhang, Junran Peng • 2024

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Text-to-motion generation | HumanML3D (test) | FID: 0.098 | 331 |
| Text-to-motion mapping | KIT-ML (test) | R Precision (Top 3): 0.782 | 275 |
| Text-to-motion generation | KIT-ML (test) | FID: 0.258 | 115 |
| Text-driven motion generation | HumanML3D (test) | R-Precision@1: 51 | 36 |
| Text-driven motion generation | Motion-X (test) | R Precision (Top 1): 0.474 | 11 |
| Text-to-motion generation | SnapMoGen (test) | R-Precision (Top 1): 0.679 | 8 |
