
Spatiotemporal-Untrammelled Mixture of Experts for Multi-Person Motion Prediction

About

Comprehensively and flexibly capturing the complex spatiotemporal dependencies of human motion is critical for multi-person motion prediction. Existing methods grapple with two primary limitations: i) inflexible spatiotemporal representation due to reliance on positional encodings for capturing spatiotemporal information, and ii) high computational costs stemming from the quadratic time complexity of conventional attention mechanisms. To overcome these limitations, we propose the Spatiotemporal-Untrammelled Mixture of Experts (ST-MoE), which flexibly explores complex spatiotemporal dependencies in human motion and significantly reduces computational cost. To adaptively mine complex spatiotemporal patterns from human motion, our model incorporates four distinct types of spatiotemporal experts, each specializing in capturing different spatial or temporal dependencies. To reduce the potential computational overhead of integrating multiple experts, we introduce bidirectional spatiotemporal Mamba blocks as experts, which share bidirectional temporal and spatial Mamba modules in distinct combinations to achieve model efficiency and parameter economy. Extensive experiments on four multi-person benchmark datasets demonstrate that our approach not only outperforms state-of-the-art methods in accuracy but also reduces model parameters by 41.38% and achieves a 3.6x speedup in training. The code is available at https://github.com/alanyz106/ST-MoE.
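The abstract describes routing motion features through four spatiotemporal experts and combining their outputs. Below is a minimal, illustrative sketch of that general mixture-of-experts routing pattern in NumPy; the expert functions, gating projection, and tensor shapes here are placeholder assumptions (the paper's actual experts are bidirectional Mamba blocks, not the simple axis-mixing maps used here).

```python
import numpy as np

# Hypothetical sketch: token-wise mixture-of-experts over motion features.
# Shapes and expert definitions are illustrative assumptions, not the
# paper's implementation (its experts are bidirectional Mamba blocks).
rng = np.random.default_rng(0)

P, T, J, D = 3, 10, 13, 16  # persons, frames, joints, feature dim
x = rng.standard_normal((P, T, J, D))

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

# Four placeholder "experts", each mixing along a different axis of the
# motion tensor to stand in for distinct spatial/temporal dependencies.
def temporal_expert(x):   # mix over frames (axis 1)
    return x + x.mean(axis=1, keepdims=True)

def spatial_expert(x):    # mix over joints (axis 2)
    return x + x.mean(axis=2, keepdims=True)

def person_expert(x):     # mix across persons (axis 0)
    return x + x.mean(axis=0, keepdims=True)

def identity_expert(x):   # pass-through
    return x

experts = [temporal_expert, spatial_expert, person_expert, identity_expert]

# Gating: a learned projection in practice; random weights here.
W_gate = rng.standard_normal((D, len(experts)))
gates = softmax(x @ W_gate)  # (P, T, J, 4), one weight per expert per token

# Output is the gate-weighted sum of expert outputs.
out = sum(gates[..., i:i + 1] * f(x) for i, f in enumerate(experts))
print(out.shape)  # (3, 10, 13, 16)
```

The gate weights sum to 1 per token, so each token softly selects which dependency type (temporal, spatial, cross-person) dominates its representation.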

Zheng Yin, Chengjian Li, Xiangbo Shu, Meiqi Cao, Rui Yan, Jinhui Tang • 2025

Related benchmarks

Task                            Dataset                        Metric      Result  Rank
Multi-person motion prediction  CMU-Mocap/UMPM (3 persons)     JPE (0.2s)  31      8
Multi-person motion prediction  Mix2 (10 persons)              JPE (0.2s)  34      7
Multi-person motion prediction  Mix1 (6 persons)               JPE (0.2s)  34      7
3D Human Motion Prediction      Chi3D                          JPE (0.2s)  44      4
