OmniMoGen: Unifying Human Motion Generation via Learning from Interleaved Text-Motion Instructions
About
Large language models (LLMs) have unified diverse linguistic tasks within a single framework, yet such unification remains unexplored in human motion generation. Existing methods are confined to isolated tasks, limiting flexibility for free-form, omni-objective generation. To address this, we propose OmniMoGen, a unified framework that enables versatile motion generation through interleaved text-motion instructions. Built upon a concise RVQ-VAE and transformer architecture, OmniMoGen supports end-to-end instruction-driven motion generation. We construct X2Mo, a large-scale dataset of over 137K interleaved text-motion instructions, and introduce AnyContext, a benchmark for evaluating interleaved motion generation. Experiments show that OmniMoGen achieves state-of-the-art performance on text-to-motion generation, motion editing, and AnyContext, and exhibits emergent capabilities such as compositional editing, self-reflective generation, and knowledge-informed generation. These results mark a step toward the next generation of intelligent motion generation. Project Page: https://OmniMoGen.github.io/.
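The abstract mentions a concise RVQ-VAE motion tokenizer feeding a transformer. The repository's actual implementation is not reproduced here; the sketch below is only a minimal PyTorch illustration of the residual vector-quantization idea such a tokenizer relies on: each stage quantizes the residual left by the previous stage, so every motion frame becomes a short stack of discrete code indices that a transformer can consume alongside text tokens. All names and hyperparameters (`ResidualVQ`, `num_stages`, `codebook_size`, `dim`) are illustrative assumptions, not OmniMoGen's API.

```python
import torch
import torch.nn as nn

class ResidualVQ(nn.Module):
    """Toy residual vector quantizer (hypothetical, for illustration only).
    Each stage quantizes the residual left over by the previous stage, so a
    motion frame ends up as a short stack of code indices (one per stage)."""

    def __init__(self, num_stages: int = 6, codebook_size: int = 512, dim: int = 128):
        super().__init__()
        self.codebooks = nn.ModuleList(
            nn.Embedding(codebook_size, dim) for _ in range(num_stages)
        )

    def forward(self, z: torch.Tensor):
        # z: (batch, frames, dim) latent motion features from the encoder.
        residual, quantized = z, torch.zeros_like(z)
        indices = []
        for codebook in self.codebooks:
            flat = residual.reshape(-1, residual.size(-1))   # (batch*frames, dim)
            dist = torch.cdist(flat, codebook.weight)        # L2 distance to every code
            idx = dist.argmin(dim=-1).view(z.shape[:-1])     # (batch, frames)
            code = codebook(idx)                             # nearest codebook vectors
            quantized = quantized + code
            residual = residual - code
            indices.append(idx)
        # Straight-through estimator: gradients reach the encoder as if
        # quantization were the identity.
        quantized = z + (quantized - z).detach()
        return quantized, torch.stack(indices, dim=-1)       # codes: (batch, frames, num_stages)

if __name__ == "__main__":
    vq = ResidualVQ()
    z = torch.randn(2, 64, 128)          # 2 clips, 64 frames of latent features
    recon, codes = vq(z)
    print(recon.shape, codes.shape)      # (2, 64, 128), (2, 64, 6)
```

In this kind of design, the per-frame code stacks (rather than raw poses) are what get interleaved with text tokens in the instruction sequence, which is what makes end-to-end instruction-driven generation tractable for a transformer.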
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Text-driven Motion Generation | HumanML3D (test) | R-Precision@1 | 55 | 54 |
| Speed-based Motion Generation | AnyContext (test) | R@1 | 44.3 | 10 |
| Style-based Motion Generation | AnyContext (test) | R@1 | 0.429 | 10 |
| Trajectory-based Motion Generation | AnyContext (test) | R@1 | 0.347 | 10 |
| Edited-to-Source Retrieval | MotionFix (test) | R@1 | 87.87 | 7 |
| Edited-to-Target Retrieval | MotionFix (test) | R@1 | 71.59 | 7 |