
MoGenTS: Motion Generation based on Spatial-Temporal Joint Modeling

About

Motion generation from discrete quantization offers many advantages over continuous regression, but at the cost of inevitable approximation errors. Previous methods usually quantize the entire body pose into one code, which not only makes it difficult to encode all joints within one vector but also loses the spatial relationships between joints. In contrast, in this work we quantize each individual joint into one vector, which i) simplifies the quantization process, as the complexity associated with a single joint is markedly lower than that of the entire pose; ii) maintains a spatial-temporal structure that preserves both the spatial relationships among joints and the temporal movement patterns; and iii) yields a 2D token map, which enables the application of various 2D operations widely used on 2D images. Grounded in this 2D motion quantization, we build a spatial-temporal modeling framework in which a 2D joint VQVAE, a temporal-spatial 2D masking technique, and spatial-temporal 2D attention are proposed to exploit the spatial-temporal signals among the 2D tokens. Extensive experiments demonstrate that our method significantly outperforms previous methods across different datasets, with a 26.6% decrease in FID on HumanML3D and a 29.9% decrease on KIT-ML. Project page: https://aigc3d.github.io/mogents.
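The core idea — quantizing each joint independently so that a motion clip becomes a 2D grid of discrete tokens over (time, joints) — can be sketched as nearest-neighbor lookup against a learned codebook. The following is a minimal illustration, not the paper's implementation: the feature layout, dimensions, and function name `quantize_joints` are assumptions for the example.

```python
import numpy as np

def quantize_joints(motion, codebook):
    """Quantize each joint's feature vector to its nearest codebook entry.

    motion:   (T, J, D) array - T frames, J joints, D-dim features per joint
              (hypothetical layout; the paper's actual encoder features differ).
    codebook: (K, D) array of K learned code vectors.

    Returns a (T, J) integer token map: a 2D grid over time and joints,
    on which image-style 2D operations (masking, attention) can be applied.
    """
    T, J, D = motion.shape
    flat = motion.reshape(-1, D)                              # (T*J, D)
    # Squared Euclidean distance from every joint vector to every code: (T*J, K)
    d2 = ((flat[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    tokens = d2.argmin(axis=1).reshape(T, J)                  # nearest code per joint
    return tokens

# Toy usage: 8 frames, 22 joints, 4-dim joint features, 16 codes.
rng = np.random.default_rng(0)
tok = quantize_joints(rng.normal(size=(8, 22, 4)), rng.normal(size=(16, 4)))
print(tok.shape)  # (8, 22)
```

Because the result is a 2D index map rather than a per-frame 1D code sequence, both the spatial axis (joints) and the temporal axis (frames) survive quantization, which is what enables the 2D masking and 2D attention described above.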

Weihao Yuan, Weichao Shen, Yisheng He, Yuan Dong, Xiaodong Gu, Zilong Dong, Liefeng Bo, Qixing Huang • 2024

Related benchmarks

Task                        Dataset                               Result                      Rank
Text-to-motion generation   HumanML3D (test)                      FID 0.017                   331
Text-to-motion mapping      HumanML3D (test)                      FID 0.033                   243
Text-to-motion generation   KIT-ML (test)                         FID 0.143                   115
Text-to-motion generation   HumanML3D 19 (test)                   FID 0.033                   37
Text-to-motion generation   HumanML3D MARDM-67 evaluator (test)   FID 0.06                    16
Text-to-motion generation   KIT-ML 46 (test)                      R-Precision Top 1: 44.5     9
