AToM: Aligning Text-to-Motion Model at Event-Level with GPT-4Vision Reward
About
Recently, text-to-motion models have opened new possibilities for creating realistic human motion with greater efficiency and flexibility. However, aligning motion generation with event-level textual descriptions presents unique challenges due to the complex relationship between textual prompts and desired motion outcomes. To address this, we introduce AToM, a framework that enhances the alignment between generated motion and text prompts by leveraging reward signals from GPT-4Vision. AToM comprises three main stages. First, we construct MotionPrefer, a dataset that pairs three types of event-level textual prompts with generated motions, covering motion integrity, temporal relationships, and frequency. Second, we design an annotation paradigm that uses GPT-4Vision to score motions in detail, including visual data formatting, task-specific instructions, and scoring rules for each sub-task. Finally, we fine-tune an existing text-to-motion model with reinforcement learning guided by this paradigm. Experimental results demonstrate that AToM significantly improves the event-level alignment quality of text-to-motion generation.
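
As a rough illustration of the GPT-4Vision scoring stage (not the repository's actual code), the sketch below sends rendered keyframes of a generated motion to a GPT-4Vision-capable model and asks for an event-level alignment score. The model name, prompt wording, 0-5 scale, and helper names such as `score_motion` and `encode_image` are illustrative assumptions, not the paper's exact protocol.

```python
# Hedged sketch: query a GPT-4Vision-capable model for an event-level
# alignment score between a text prompt and rendered motion keyframes.
# Assumes OPENAI_API_KEY is set and keyframes exist as PNG files on disk.
import base64
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def encode_image(path: str) -> str:
    """Base64-encode one rendered keyframe so it can be sent inline."""
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode("utf-8")


def score_motion(prompt: str, frame_paths: list[str]) -> float:
    """Ask the vision model how well the keyframes match the text prompt."""
    content = [
        {
            "type": "text",
            "text": (
                "The images are ordered keyframes of a generated human motion. "
                f"Text prompt: '{prompt}'. "
                "Rate how well the motion matches the prompt at the event level "
                "(integrity, temporal order, frequency) on a 0-5 scale. "
                "Reply with a single number."
            ),
        },
    ] + [
        {
            "type": "image_url",
            "image_url": {"url": f"data:image/png;base64,{encode_image(p)}"},
        }
        for p in frame_paths
    ]
    response = client.chat.completions.create(
        model="gpt-4o",  # any GPT-4Vision-capable chat model
        messages=[{"role": "user", "content": content}],
        max_tokens=5,
    )
    return float(response.choices[0].message.content.strip())
```

Scores collected this way could then serve as per-sample rewards when fine-tuning the text-to-motion generator with reinforcement learning, as described above.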
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Text-to-motion generation | HumanML3D (test) | FID 0.4 | 331 |
| Text-to-motion mapping | HumanML3D (test) | FID 0.613 | 243 |
| Text-to-motion | HumanML3D General (test) | MM Dist 3.943 | 3 |
| Text-to-motion | HumanML3D Frequency (test) | MM Dist 5.259 | 2 |