Fine-grained Motion Retrieval via Joint-Angle Motion Images and Token-Patch Late Interaction
About
Text-motion retrieval aims to learn a semantically aligned latent space between natural language descriptions and 3D human motion skeleton sequences, enabling bidirectional search across the two modalities. Most existing methods use a dual-encoder framework that compresses motion and text into global embeddings, discarding fine-grained local correspondences, and thus reducing accuracy. Additionally, these global-embedding methods offer limited interpretability of the retrieval results. To overcome these limitations, we propose an interpretable, joint-angle-based motion representation that maps joint-level local features into a structured pseudo-image, compatible with pre-trained Vision Transformers. For text-to-motion retrieval, we employ MaxSim, a token-wise late interaction mechanism, and enhance it with Masked Language Modeling regularization to foster robust, interpretable text-motion alignment. Extensive experiments on HumanML3D and KIT-ML show that our method outperforms state-of-the-art text-motion retrieval approaches while offering interpretable fine-grained correspondences between text and motion. The code is available in the supplementary material.
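The token-wise late interaction mentioned above can be sketched as a ColBERT-style MaxSim score: each text token is matched against its most similar motion patch, and the per-token maxima are summed. This is a minimal illustrative sketch assuming L2-normalized embeddings; the function name and toy data are not from the paper.

```python
import numpy as np

def maxsim_score(text_tokens: np.ndarray, motion_patches: np.ndarray) -> float:
    """MaxSim late interaction (illustrative sketch).

    text_tokens:    (T, d) L2-normalized text token embeddings
    motion_patches: (P, d) L2-normalized motion patch embeddings
    Returns the sum over text tokens of each token's maximum cosine
    similarity against all motion patches.
    """
    sim = text_tokens @ motion_patches.T      # (T, P) cosine similarities
    return float(sim.max(axis=1).sum())       # best patch per token, then sum

def l2norm(x: np.ndarray) -> np.ndarray:
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

# Toy example: 2 text tokens, 3 motion patches, embedding dim 4
rng = np.random.default_rng(0)
text = l2norm(rng.standard_normal((2, 4)))
motion = l2norm(rng.standard_normal((3, 4)))
score = maxsim_score(text, motion)
```

Because the per-token maxima are kept separate before summing, the score can be decomposed to show which motion patch each text token matched, which is the source of the interpretability the method claims.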
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Motion-to-text retrieval | KIT-ML | R@1 | 16.02 | 25 |
| Text-to-motion retrieval | KIT-ML | R@1 | 16.27 | 25 |
| Text-to-motion retrieval | HumanML3D | Recall@3 | 26.22 | 14 |
| Motion-to-text retrieval | HumanML3D | R@1 | 13.76 | 11 |