
SnapMoGen: Human Motion Generation from Expressive Texts

About

Text-to-motion generation has experienced remarkable progress in recent years. However, current approaches remain limited to synthesizing motion from short or general text prompts, primarily due to dataset constraints. This limitation undermines fine-grained controllability and generalization to unseen prompts. In this paper, we introduce SnapMoGen, a new text-motion dataset featuring high-quality motion capture data paired with accurate, expressive textual annotations. The dataset comprises 20K motion clips totaling 44 hours, accompanied by 122K detailed textual descriptions averaging 48 words per description (vs. 12 words in HumanML3D). Importantly, these motion clips preserve their original temporal continuity, as they were segmented from long sequences, facilitating research in long-term motion generation and blending. We also improve upon previous generative masked modeling approaches. Our model, MoMask++, transforms motion into multi-scale token sequences that better exploit the token capacity, and learns to generate all tokens using a single generative masked transformer. MoMask++ achieves state-of-the-art performance on both the HumanML3D and SnapMoGen benchmarks. Additionally, we demonstrate the ability to process casual user prompts by employing an LLM to reformat inputs to align with the expressivity and narration style of SnapMoGen. Project webpage: https://snap-research.github.io/SnapMoGen/
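To make the "generative masked transformer" idea concrete, here is a minimal sketch of confidence-based iterative decoding in the MaskGIT style that such models build on: the token sequence starts fully masked, and at each step the highest-confidence predictions are committed while the rest are re-masked on a cosine schedule. The `dummy_predictor`, vocabulary size, and step count are illustrative stand-ins, not the actual MoMask++ architecture or hyperparameters.

```python
import math
import random

MASK = -1  # sentinel id for a masked token position (illustrative)

def iterative_masked_decode(seq_len, predict_fn, num_steps=8, seed=0):
    """Confidence-based iterative decoding sketch: begin fully masked,
    then at each step commit the most confident predictions and keep
    the remainder masked, following a cosine unmasking schedule."""
    rng = random.Random(seed)
    tokens = [MASK] * seq_len
    for step in range(num_steps):
        # Fraction of positions that should remain masked after this step.
        frac_masked = math.cos(math.pi / 2 * (step + 1) / num_steps)
        keep_masked = int(frac_masked * seq_len)
        preds = predict_fn(tokens, rng)
        # Only currently masked positions are candidates for commitment.
        cand = [(conf, i, tok) for i, (tok, conf) in enumerate(preds)
                if tokens[i] == MASK]
        cand.sort(reverse=True)  # highest confidence first
        n_commit = len(cand) - keep_masked
        for _, i, tok in cand[:max(n_commit, 0)]:
            tokens[i] = tok
    return tokens

def dummy_predictor(tokens, rng, vocab=512):
    """Stand-in for a masked transformer: random tokens and confidences."""
    return [(rng.randrange(vocab), rng.random()) for _ in tokens]
```

In the actual model, `predict_fn` would be the trained transformer conditioned on the text embedding, applied over the multi-scale token sequence rather than a single flat sequence.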

Chuan Guo, Inwoo Hwang, Jian Wang, Bing Zhou• 2025

Related benchmarks

Task                      | Dataset                              | Result                   | Rank
Text-to-motion generation | HumanML3D (test)                     | FID 0.02                 | 331
Text-to-motion generation | HumanML3D MARDM-67 evaluator (test)  | FID 0.108                | 16
Text-to-motion generation | SnapMoGen (test)                     | R-Precision Top 1: 0.802 | 8
