
Efficient Temporal Tokenization for Mobility Prediction with Large Language Models

About

We introduce RHYTHM (Reasoning with Hierarchical Temporal Tokenization for Human Mobility), a framework that leverages large language models (LLMs) as spatio-temporal predictors and trajectory reasoners. RHYTHM partitions trajectories into daily segments encoded as discrete tokens with hierarchical attention, capturing both daily and weekly dependencies while substantially reducing sequence length. Token representations are enriched with pre-computed prompt embeddings from a frozen LLM, improving the model's ability to capture interdependencies without significant computational overhead. By freezing the LLM backbone, RHYTHM also achieves substantial training efficiency. Evaluation on three real-world datasets demonstrates a 2.4% improvement in overall accuracy, a 5.0% gain on weekend prediction, and a 24.6% reduction in training time compared to state-of-the-art methods.
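The core idea above, partitioning a long hourly trajectory into daily segments, encoding each day as a single token, and attending across days to capture weekly structure, can be sketched in a few lines. This is an illustrative toy, not the authors' implementation: the function names, the mean-pooling intra-day encoder, and the single-head attention are all assumptions standing in for RHYTHM's learned components.

```python
import numpy as np

# Hypothetical sketch of RHYTHM-style hierarchical temporal tokenization.
# All names and the mean-pooling encoder are illustrative assumptions.

def segment_days(traj, hours_per_day=24):
    """Partition a (T, d) hourly trajectory into (num_days, hours_per_day, d)
    daily segments; T must be a whole number of days."""
    T, d = traj.shape
    assert T % hours_per_day == 0, "trajectory must cover whole days"
    return traj.reshape(T // hours_per_day, hours_per_day, d)

def pool_day_tokens(segments):
    """Encode each daily segment as one token embedding by mean-pooling
    within the day (a stand-in for a learned intra-day encoder)."""
    return segments.mean(axis=1)  # (num_days, d)

def weekly_attention(day_tokens):
    """Single-head self-attention across day tokens, modeling
    dependencies between days of the week."""
    d = day_tokens.shape[1]
    scores = day_tokens @ day_tokens.T / np.sqrt(d)
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ day_tokens  # (num_days, d)

rng = np.random.default_rng(0)
week = rng.normal(size=(7 * 24, 16))       # one week of hourly embeddings
tokens = pool_day_tokens(segment_days(week))  # 168 steps -> 7 day tokens
context = weekly_attention(tokens)
print(tokens.shape, context.shape)
```

Note how the sequence the attention layer sees shrinks from 168 hourly steps to 7 day tokens, which is the mechanism behind the reduced sequence length (and hence training cost) claimed in the abstract.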

Haoyu He, Haozheng Luo, Yan Chen, Qi R. Wang • 2025

Related benchmarks

Task | Dataset | Result | Rank
---- | ------- | ------ | ----
Event Prediction | StackOverflow | RMSE 0.578 | 42
Event sequence modeling | US Earthquake | Accuracy 64.4 | 13
Event sequence modeling | Amazon Review | Accuracy (%) 70 | 13
Event sequence modeling | Chicago Crime | Accuracy 27.1 | 13
Event sequence modeling | NYC Taxi | Accuracy 91.9 | 13
