
MotionGPT3: Human Motion as a Second Modality

About

With the rapid progress of large language models (LLMs), multimodal frameworks that unify understanding and generation have shown promise, yet they face increasing complexity as the number of modalities and tasks grows. We observe that motion quantization introduces approximation errors that cap motion quality, and that unifying discrete text and continuous motion within a single-stream backbone amplifies cross-modal interference. Motivated by recent multi-branch Transformer designs that separate signals from different modalities, we propose MotionGPT3, a bimodal motion-language model for both understanding and generation. MotionGPT3 encodes raw motion into a continuous latent space using a variational autoencoder (VAE), thereby avoiding quantization-induced artifacts, while leveraging the semantic prior of pretrained language models. A dual-stream Transformer with shared attention preserves modality-specific routes while enabling controlled, bidirectional information flow, which reduces interference, stabilizes optimization, and empirically accelerates convergence without degrading fidelity. For multimodal joint training, a generate-then-align three-stage schedule further improves stability and limits cross-task interference. Experiments show that MotionGPT3 achieves 2x faster convergence in training loss and up to 4x faster convergence in validation, while maintaining state-of-the-art performance on standard motion understanding and motion generation benchmarks.
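The core architectural idea, as the abstract describes it, is a dual-stream Transformer in which each modality keeps its own parameter route while attention is shared across both streams. The following is a minimal NumPy sketch of that pattern, not the paper's implementation: function names, weight names, and dimensions are illustrative assumptions. Text tokens and continuous motion latents get separate Q/K/V projections (modality-specific routes), but a single attention operation runs over the concatenated sequence (shared attention, allowing bidirectional information flow), and the output is split back per stream.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def shared_attention(text, motion, params, d):
    """One dual-stream attention step (illustrative sketch).

    Each modality has its own Q/K/V projection weights, but the
    attention itself is computed jointly over both streams.
    """
    # Modality-specific projection routes.
    qt, kt, vt = text @ params["Wq_t"], text @ params["Wk_t"], text @ params["Wv_t"]
    qm, km, vm = motion @ params["Wq_m"], motion @ params["Wk_m"], motion @ params["Wv_m"]
    # Shared attention over the concatenated sequence of both modalities.
    Q = np.concatenate([qt, qm], axis=0)
    K = np.concatenate([kt, km], axis=0)
    V = np.concatenate([vt, vm], axis=0)
    out = softmax(Q @ K.T / np.sqrt(d)) @ V
    # Split the joint output back into per-modality streams.
    n_text = text.shape[0]
    return out[:n_text], out[n_text:]

rng = np.random.default_rng(0)
d = 16
params = {k: rng.standard_normal((d, d)) * 0.1
          for k in ["Wq_t", "Wk_t", "Wv_t", "Wq_m", "Wk_m", "Wv_m"]}
text = rng.standard_normal((5, d))    # 5 discrete-text token embeddings
motion = rng.standard_normal((8, d))  # 8 continuous motion latents (e.g. from a VAE)
t_out, m_out = shared_attention(text, motion, params, d)
print(t_out.shape, m_out.shape)  # (5, 16) (8, 16)
```

Because the projection weights stay separate per modality, gradients for text and motion flow through disjoint parameter routes even though the two streams attend to each other, which is the mechanism the abstract credits with reducing cross-modal interference.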

Bingfan Zhu, Biao Jiang, Sunyi Wang, Shixiang Tang, Tao Chen, Linjie Luo, Youyi Zheng, Xin Chen • 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Text-to-motion generation | HumanML3D (test) | FID | 0.021 | 331 |
| Text-to-motion mapping | HumanML3D (test) | FID | 0.208 | 243 |
| Human motion prediction | Human3.6M (test) | MPJPE | 42.3 | 85 |
| Text-to-motion synthesis | HumanML3D | R-Precision (Top 1) | 55.3 | 43 |
| Text-driven motion generation | HumanML3D (test) | R-Precision@1 | 54.3 | 36 |
| Text-to-motion | KIT-ML | R@3 | 80.3 | 33 |
| Motion-to-text | HumanML3D (test) | BLEU@4 | 19.41 | 32 |
| Trajectory-based motion generation | AnyContext (test) | R@1 | 0.262 | 10 |
| Speed-based motion generation | AnyContext (test) | R@1 | 27.9 | 10 |
| Style-based motion generation | AnyContext (test) | R@1 | 0.183 | 10 |

Showing 10 of 12 rows.
