Temporal Tokenization Strategies for Event Sequence Modeling with Large Language Models

About

Representing continuous time is a critical and under-explored challenge in modeling temporal event sequences with large language models (LLMs). Various strategies, such as byte-level representations and calendar tokens, have been proposed, but the optimal approach remains unclear, especially given the diverse statistical distributions of real-world event data, which range from smooth log-normal to discrete, spiky patterns. This paper presents the first empirical study of temporal tokenization for event sequences, comparing five distinct encoding strategies: naive numeric strings, high-precision byte-level representations, human-semantic calendar tokens, classic uniform binning, and adaptive residual scalar quantization. We evaluate these strategies by fine-tuning LLMs on real-world datasets that exemplify these diverse distributions. Our analysis reveals that no single strategy is universally superior; instead, prediction performance depends heavily on aligning the tokenizer with the data's statistical properties, with log-based strategies excelling on skewed distributions and human-centric formats proving robust for mixed modalities.
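To make the compared strategies concrete, below is a minimal illustrative sketch of three of them in Python. This is not the authors' code: the token formats (`<time_k>`, `<month_m>`, etc.), the bin counts, and the time ranges are assumptions chosen purely for the example.

```python
# Illustrative sketch (not the paper's implementation): three ways to map
# continuous time onto a discrete token vocabulary for an LLM.
import math
from datetime import datetime, timezone


def uniform_bin_token(dt: float, max_dt: float = 3600.0, num_bins: int = 64) -> str:
    """Classic uniform binning: equal-width bins over [0, max_dt] seconds.

    All parameter values here are illustrative assumptions.
    """
    idx = min(int(dt / max_dt * num_bins), num_bins - 1)
    return f"<time_{idx}>"


def log_bin_token(dt: float, min_dt: float = 1e-3, max_dt: float = 3600.0,
                  num_bins: int = 64) -> str:
    """Log-spaced binning: finer resolution near zero, coarser in the tail,
    which suits heavy-tailed (e.g., log-normal) inter-event time data."""
    dt = min(max(dt, min_dt), max_dt)  # clamp into the representable range
    frac = math.log(dt / min_dt) / math.log(max_dt / min_dt)
    idx = min(int(frac * num_bins), num_bins - 1)
    return f"<time_{idx}>"


def calendar_tokens(ts: float) -> list[str]:
    """Human-semantic calendar tokens: render an absolute timestamp as
    discrete month / day / weekday / hour tokens."""
    t = datetime.fromtimestamp(ts, tz=timezone.utc)
    return [f"<month_{t.month}>", f"<day_{t.day}>",
            f"<weekday_{t.strftime('%a')}>", f"<hour_{t.hour}>"]


if __name__ == "__main__":
    # Inter-event gaps (seconds) spanning several orders of magnitude.
    for dt in [0.05, 2.0, 90.0, 1800.0]:
        print(f"dt={dt:>8.2f}s  uniform={uniform_bin_token(dt):<10}  "
              f"log={log_bin_token(dt)}")
    print(calendar_tokens(1735689600.0))  # 2025-01-01 00:00 UTC
```

Even this toy version shows the trade-off the abstract's findings point to: on a skewed distribution, uniform bins spend most of the vocabulary on the sparse tail, while log-spaced bins concentrate resolution where short inter-event times cluster.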

Zefang Liu, Nam H. Nguyen, Yinzhu Quan, Shi-Xiong Zhang • 2025

Related benchmarks

Task                     Dataset        Metric        Result  Rank
Event Prediction         StackOverflow  RMSE          0.474   42
Event sequence modeling  Chicago Crime  Accuracy (%)  27.2    13
Event sequence modeling  US Earthquake  Accuracy (%)  64.2    13
Event sequence modeling  NYC Taxi       Accuracy (%)  92      13
Event sequence modeling  Amazon Review  Accuracy (%)  69.7    13
