EchoMask: Speech-Queried Attention-based Mask Modeling for Holistic Co-Speech Motion Generation
About
Masked modeling frameworks have shown promise in co-speech motion generation. However, they struggle to identify semantically significant frames for effective motion masking. In this work, we propose a speech-queried attention-based mask modeling framework for co-speech motion generation. Our key insight is to leverage motion-aligned speech features to guide the masked motion modeling process, selectively masking rhythm-related and semantically expressive motion frames. Specifically, we first propose a motion-audio alignment module (MAM) to construct a latent motion-audio joint space. Both low-level and high-level speech features are projected into this space, yielding motion-aligned speech representations via learnable speech queries. A speech-queried attention mechanism (SQA) then computes frame-level attention scores through interactions between motion keys and speech queries, directing selective masking toward motion frames with high attention scores. Finally, the motion-aligned speech features are also injected into the generation network to facilitate co-speech motion generation. Qualitative and quantitative evaluations confirm that our method outperforms existing state-of-the-art approaches, producing high-quality co-speech motion.
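The SQA masking step can be pictured with a minimal PyTorch sketch. Everything below is an illustrative assumption rather than the paper's released code: the module name `SpeechQueriedMasking`, the dot-product scoring, the `mask_ratio` value, and the premise that the speech features arriving from MAM are already frame-aligned with the motion sequence. The sketch only shows the core idea: project motion frames to keys and speech features to queries, score each frame by query-key similarity, and mask the top-scoring frames.

```python
import torch
import torch.nn as nn


class SpeechQueriedMasking(nn.Module):
    """Illustrative sketch of speech-queried attention (SQA) masking.

    Frame-level scores come from interactions between motion keys and
    motion-aligned speech queries; the highest-scoring frames are the
    ones selected for masking. Names and hyperparameters are assumptions.
    """

    def __init__(self, motion_dim: int, speech_dim: int,
                 attn_dim: int = 256, mask_ratio: float = 0.4):
        super().__init__()
        self.key_proj = nn.Linear(motion_dim, attn_dim)    # motion frames -> keys
        self.query_proj = nn.Linear(speech_dim, attn_dim)  # speech features -> queries
        self.mask_ratio = mask_ratio                       # assumed masking ratio

    def forward(self, motion: torch.Tensor, speech: torch.Tensor) -> torch.Tensor:
        """motion: (B, T, motion_dim); speech: (B, T, speech_dim), frame-aligned.

        Returns a boolean mask of shape (B, T); True marks frames to mask.
        """
        k = self.key_proj(motion)   # (B, T, attn_dim)
        q = self.query_proj(speech) # (B, T, attn_dim)
        # Frame-level attention score: scaled similarity between each
        # frame's speech query and its motion key.
        scores = (q * k).sum(-1) / k.shape[-1] ** 0.5  # (B, T)
        scores = scores.softmax(dim=-1)
        # Selectively mask the frames with the highest speech-motion scores.
        num_mask = max(1, int(self.mask_ratio * motion.shape[1]))
        top_idx = scores.topk(num_mask, dim=-1).indices  # (B, num_mask)
        mask = torch.zeros_like(scores, dtype=torch.bool)
        mask.scatter_(1, top_idx, True)
        return mask


# Usage sketch with hypothetical feature dimensions:
masker = SpeechQueriedMasking(motion_dim=128, speech_dim=768)
motion = torch.randn(2, 120, 128)  # 2 clips, 120 motion frames
speech = torch.randn(2, 120, 768)  # frame-aligned speech features from MAM
mask = masker(motion, speech)      # (2, 120) boolean, ~40% of frames masked
```

In contrast to random masking, this kind of score-driven selection concentrates the reconstruction task on the rhythm-related and semantically expressive frames the abstract describes.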
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Gesture Generation | BEAT2 | FGD | 4.623 | 17 |
| Non-facial Gesture Generation | BEAT2 | FGD | 4.623 | 6 |
| Holistic Motion Generation | BEAT2 | FGD | 4.623 | 5 |
| Facial Generation | BEAT2 | MSE | 6.761 | 3 |