OmniMoGen: Unifying Human Motion Generation via Learning from Interleaved Text-Motion Instructions

About

Large language models (LLMs) have unified diverse linguistic tasks within a single framework, yet such unification remains unexplored in human motion generation. Existing methods are confined to isolated tasks, limiting flexibility for free-form and omni-objective generation. To address this, we propose OmniMoGen, a unified framework that enables versatile motion generation through interleaved text-motion instructions. Built upon a concise RVQ-VAE and transformer architecture, OmniMoGen supports end-to-end instruction-driven motion generation. We construct X2Mo, a large-scale dataset of over 137K interleaved text-motion instructions, and introduce AnyContext, a benchmark for evaluating interleaved motion generation. Experiments show that OmniMoGen achieves state-of-the-art performance on text-to-motion, motion editing, and AnyContext, exhibiting emergent capabilities such as compositional editing, self-reflective generation, and knowledge-informed generation. These results mark a step toward more intelligent motion generation. Project Page: https://OmniMoGen.github.io/.
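The abstract describes OmniMoGen as built on an RVQ-VAE motion tokenizer paired with a transformer. As a rough illustration of the residual vector quantization step (the "RVQ" part), here is a minimal PyTorch sketch; the tensor shapes, codebook sizes, and function names are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of residual vector quantization (RVQ), the tokenization
# idea behind an RVQ-VAE motion tokenizer. All names and sizes below are
# illustrative assumptions, not the authors' code.
import torch

def rvq_encode(z, codebooks):
    """Quantize latents z (T, D) with a stack of codebooks (each K, D).

    Each stage quantizes the residual left over by the previous stage,
    so early codes capture gross motion and later codes add detail.
    Returns per-stage token indices and the reconstructed latents.
    """
    residual = z
    quantized = torch.zeros_like(z)
    codes = []
    for codebook in codebooks:                   # each codebook: (K, D)
        dists = torch.cdist(residual, codebook)  # (T, K) pairwise distances
        idx = dists.argmin(dim=-1)               # (T,) nearest-entry ids
        chosen = codebook[idx]                   # (T, D) selected vectors
        codes.append(idx)
        quantized = quantized + chosen
        residual = residual - chosen             # pass residual to next stage
    return codes, quantized

if __name__ == "__main__":
    T, D, K, n_stages = 16, 64, 512, 4           # assumed sizes
    z = torch.randn(T, D)                        # stand-in encoder output
    books = [torch.randn(K, D) for _ in range(n_stages)]
    codes, z_hat = rvq_encode(z, books)
    print(len(codes), z_hat.shape)               # 4 torch.Size([16, 64])
```

In such a setup, the per-stage token ids are what a transformer would consume and predict, which is what makes interleaved text-motion instruction sequences possible.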

Wendong Bu, Kaihang Pan, Yuze Lin, Jiacheng Li, Kai Shen, Wenqiao Zhang, Juncheng Li, Jun Xiao, Siliang Tang • 2025

Related benchmarks

Task                                 Dataset             Result              Rank
Text-driven Motion Generation       HumanML3D (test)    R-Precision@1: 55   54
Speed-based motion generation       AnyContext (test)   R@1: 44.3           10
Style-based motion generation       AnyContext (test)   R@1: 0.429          10
Trajectory-based motion generation  AnyContext (test)   R@1: 0.347          10
Edited-to-Source Retrieval          MotionFix (test)    R@1: 87.87          7
Edited-to-Target Retrieval          MotionFix (test)    R@1: 71.59          7
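The Result column reports retrieval metrics such as R@1 (recall at rank 1): the fraction of queries whose ground-truth match is ranked first. Scores are shown as reported on each leaderboard, so scales differ across rows. Below is a minimal sketch of how such a score can be computed, assuming cosine similarity over paired query/gallery embeddings; the benchmarks' exact embedding spaces and similarity functions are not specified on this page.

```python
# Hedged sketch of an R@1 (recall at rank 1) computation; the cosine
# similarity and paired-embedding setup are assumptions for illustration.
import numpy as np

def recall_at_1(query_emb, gallery_emb):
    """Fraction of queries whose true match (same row index in the
    gallery) is the single nearest gallery item under cosine similarity."""
    q = query_emb / np.linalg.norm(query_emb, axis=1, keepdims=True)
    g = gallery_emb / np.linalg.norm(gallery_emb, axis=1, keepdims=True)
    sims = q @ g.T                       # (N, N) cosine similarities
    top1 = sims.argmax(axis=1)           # nearest gallery index per query
    return (top1 == np.arange(len(q))).mean()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    emb = rng.normal(size=(100, 32))
    noisy = emb + 0.1 * rng.normal(size=emb.shape)  # paired embeddings
    print(f"R@1 = {recall_at_1(emb, noisy):.3f}")
```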
