
MoveGPT: Scaling Mobility Foundation Models with Spatially-Aware Mixture of Experts

About

The success of foundation models in language has inspired a new wave of general-purpose models for human mobility. However, existing approaches struggle to scale effectively due to two fundamental limitations: a failure to use meaningful basic units to represent movement, and an inability to capture the vast diversity of patterns found in large-scale data. In this work, we develop MoveGPT, a large-scale foundation model specifically architected to overcome these barriers. MoveGPT is built upon two key innovations: (1) a unified location encoder that maps geographically disjoint locations into a shared semantic space, enabling pre-training on a global scale; and (2) a Spatially-Aware Mixture-of-Experts Transformer that develops specialized experts to efficiently capture diverse mobility patterns. Pre-trained on billion-scale datasets, MoveGPT establishes a new state-of-the-art across a wide range of downstream tasks, achieving performance gains of up to 35% on average. It also demonstrates strong generalization capabilities to unseen cities. Crucially, our work provides empirical evidence of scaling ability in human mobility, validating a clear path toward building increasingly capable foundation models in this domain.
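The Spatially-Aware Mixture-of-Experts idea described above can be illustrated with a minimal sketch: a router scores experts from the token embedding concatenated with a spatial embedding of the location, so different experts can specialize in different regions and mobility patterns. All names, shapes, and the top-k gating scheme here are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

d_model, d_spatial, n_experts, top_k = 16, 8, 4, 2

# Router weights see both the token and its spatial embedding (assumption:
# this is one plausible way to make expert routing "spatially aware").
W_gate = rng.standard_normal((d_model + d_spatial, n_experts)) * 0.1
# Each expert is a simple linear map for illustration.
experts = [rng.standard_normal((d_model, d_model)) * 0.1 for _ in range(n_experts)]

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def spatial_moe(token_emb, loc_emb):
    """Route a token through its top-k experts, gated on token + location."""
    scores = np.concatenate([token_emb, loc_emb]) @ W_gate
    probs = softmax(scores)
    top = np.argsort(probs)[-top_k:]           # indices of the top-k experts
    weights = probs[top] / probs[top].sum()    # renormalize over selected experts
    return sum(w * (experts[i] @ token_emb) for i, w in zip(top, weights))

token = rng.standard_normal(d_model)      # hypothetical location-token embedding
location = rng.standard_normal(d_spatial) # hypothetical spatial embedding
out = spatial_moe(token, location)
print(out.shape)  # (16,)
```

Because the gate input includes the spatial embedding, two tokens with identical content but different locations can be routed to different experts, which is the mechanism the abstract credits for capturing diverse mobility patterns efficiently.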

Chonghua Han, Yuan Yuan, Jingtao Ding, Jie Feng, Fanjin Meng, Yong Li • 2025

Related benchmarks

Task                      Dataset     Metric  Result  Rank
Next Location Prediction  Atlanta     HR@1    24.5    8
Next Location Prediction  Chicago     HR@1    26.9    8
Next Location Prediction  Seattle     HR@1    30.9    8
Next Location Prediction  Washington  HR@1    26.5    8
