
Fine-grained Motion Retrieval via Joint-Angle Motion Images and Token-Patch Late Interaction

About

Text-motion retrieval aims to learn a semantically aligned latent space between natural language descriptions and 3D human motion skeleton sequences, enabling bidirectional search across the two modalities. Most existing methods use a dual-encoder framework that compresses motion and text into global embeddings, discarding fine-grained local correspondences, and thus reducing accuracy. Additionally, these global-embedding methods offer limited interpretability of the retrieval results. To overcome these limitations, we propose an interpretable, joint-angle-based motion representation that maps joint-level local features into a structured pseudo-image, compatible with pre-trained Vision Transformers. For text-to-motion retrieval, we employ MaxSim, a token-wise late interaction mechanism, and enhance it with Masked Language Modeling regularization to foster robust, interpretable text-motion alignment. Extensive experiments on HumanML3D and KIT-ML show that our method outperforms state-of-the-art text-motion retrieval approaches while offering interpretable fine-grained correspondences between text and motion. The code is available in the supplementary material.
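The two core ideas in the abstract can be sketched briefly. The snippet below is a minimal illustration, not the paper's implementation: the function names, the (frames, joints, 3-angles) pseudo-image layout, and the normalization scheme are all assumptions; only the MaxSim scoring rule (per-token maximum cosine similarity, summed over tokens) follows the standard late-interaction formulation the abstract refers to.

```python
import numpy as np

def joint_angle_pseudo_image(angles):
    """Arrange joint angles into an image-like array (illustrative layout).

    angles: (T, J, 3) array of per-frame, per-joint angles over T frames and
    J joints. Returns a (T, J, 3) array min-max scaled to [0, 1], which a
    pre-trained Vision Transformer can consume like an RGB pseudo-image.
    """
    lo, hi = angles.min(), angles.max()
    return (angles - lo) / (hi - lo + 1e-8)

def maxsim_score(text_tokens, motion_patches):
    """MaxSim token-patch late interaction (ColBERT-style).

    text_tokens: (num_tokens, d) text token embeddings.
    motion_patches: (num_patches, d) motion patch embeddings.
    Each text token is matched to its most similar motion patch by cosine
    similarity; the per-token maxima are summed into one retrieval score.
    """
    t = text_tokens / np.linalg.norm(text_tokens, axis=1, keepdims=True)
    m = motion_patches / np.linalg.norm(motion_patches, axis=1, keepdims=True)
    sim = t @ m.T                 # (num_tokens, num_patches) similarity map
    return sim.max(axis=1).sum()  # late interaction: max per token, then sum
```

Because each text token keeps its own argmax over motion patches, the similarity map `sim` directly shows which patch (i.e., which joints and frames) each word aligned to; this token-level map is what makes late interaction more interpretable than a single global-embedding dot product.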

Yao Zhang, Zhuchenyang Liu, Yanlan He, Thomas Ploetz, Yu Xiao · 2026

Related benchmarks

Task                      Dataset    Metric    Result  Rank
Motion-to-text retrieval  KIT-ML     R@1       16.02   25
Text-to-motion retrieval  KIT-ML     R@1       16.27   25
Text-to-motion retrieval  HumanML3D  Recall@3  26.22   14
Motion-to-text retrieval  HumanML3D  R@1       13.76   11
