
Enhancing Large Language Models for Mobility Analytics with Semantic Location Tokenization

About

The widespread adoption of location-based services has led to the generation of vast amounts of mobility data, providing significant opportunities to model user movement dynamics within urban environments. Recent advancements have focused on adapting Large Language Models (LLMs) for mobility analytics. However, existing methods face two primary limitations: inadequate semantic representation of locations (i.e., discrete IDs) and insufficient modeling of mobility signals within LLMs (i.e., single templated instruction fine-tuning). To address these issues, we propose QT-Mob, a novel framework that significantly enhances LLMs for mobility analytics. QT-Mob introduces a location tokenization module that learns compact, semantically rich tokens to represent locations, preserving contextual information while ensuring compatibility with LLMs. Furthermore, QT-Mob incorporates a series of complementary fine-tuning objectives that align the learned tokens with the internal representations in LLMs, improving the model's comprehension of sequential movement patterns and location semantics. The proposed QT-Mob framework not only enhances LLMs' ability to interpret mobility data but also provides a more generalizable approach for various mobility analytics tasks. Experiments on three real-world datasets demonstrate QT-Mob's superior performance in both next-location prediction and mobility recovery tasks, outperforming existing deep learning and LLM-based methods.
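The abstract does not spell out how the location tokenization module works internally. A common way to realize "compact, semantically rich tokens" for LLM consumption is residual quantization: a continuous location embedding is mapped to a short sequence of discrete codes, one per codebook level, and those codes become new vocabulary tokens. The sketch below is an illustrative assumption, not the paper's actual implementation; all names, codebook sizes, and dimensions are made up for the example.

```python
import numpy as np

def residual_quantize(embedding, codebooks):
    """Map a continuous location embedding to a short sequence of discrete
    codes via residual quantization. Each codebook level encodes the
    residual left over from the previous level, so a handful of codes can
    represent a location compactly while preserving semantic structure."""
    tokens = []
    residual = np.asarray(embedding, dtype=float).copy()
    for codebook in codebooks:                    # codebook: (num_codes, dim)
        dists = np.linalg.norm(codebook - residual, axis=1)
        idx = int(np.argmin(dists))               # nearest code at this level
        tokens.append(idx)
        residual = residual - codebook[idx]       # pass residual onward
    return tokens                                 # e.g. [12, 3] -> "<loc_12_3>"

# Toy example: 2 quantization levels, 4 codes each, 3-dim embeddings.
rng = np.random.default_rng(0)
codebooks = [rng.normal(size=(4, 3)) for _ in range(2)]
loc_embedding = rng.normal(size=3)
print(residual_quantize(loc_embedding, codebooks))
```

In a full pipeline, each discrete code sequence would be rendered as special tokens added to the LLM vocabulary, which is what makes the learned location representation "compatible with LLMs" as the abstract describes.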

Yile Chen, Yicheng Tao, Yue Jiang, Shuai Liu, Han Yu, Gao Cong• 2025

Related benchmarks

Task                     | Dataset    | Metric | Result | Rank
Next Location Prediction | Chicago    | HR@1   | 30.6   | 8
Next Location Prediction | Seattle    | HR@1   | 31.5   | 8
Next Location Prediction | Washington | HR@1   | 28.6   | 8
Next Location Prediction | Atlanta    | HR@1   | 24     | 8
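The HR@1 figures in the table are hit rates at rank 1: the percentage of test cases where the model's single top prediction is the true next location. A minimal sketch of the metric (the function name and toy data are illustrative, not from the benchmark code):

```python
def hit_rate_at_k(predictions, targets, k=1):
    """HR@k: percentage of test cases whose true next location appears
    among the model's top-k ranked predictions."""
    hits = sum(target in ranked[:k] for ranked, target in zip(predictions, targets))
    return 100.0 * hits / len(targets)

# Toy example: ranked predictions vs. true next locations.
preds = [["cafe", "gym"], ["park", "cafe"], ["home", "work"]]
truth = ["cafe", "cafe", "work"]
print(hit_rate_at_k(preds, truth, k=1))  # only the first case is a top-1 hit
```

Widening k relaxes the criterion: with k=2 above, the second and third cases also count as hits.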
