
Predicting the Order of Upcoming Tokens Improves Language Modeling

About

Multi-token prediction (MTP) has been proposed as an auxiliary objective to improve next-token prediction (NTP) in language model training, but it shows inconsistent improvements, underperforming on standard NLP benchmarks. We found MTP's exact future token prediction to be too difficult as an auxiliary loss. Instead, we propose token order prediction (TOP), which trains models to order upcoming tokens by their proximity using a learning-to-rank loss. TOP requires only a single additional unembedding layer, compared to MTP's multiple transformer layers. We pretrain models of 340M, 1.8B, and 7B parameters using the NTP, MTP, DeepSeek MTP (DS-MTP), and TOP objectives. Results on nine standard NLP benchmarks show that TOP overall outperforms NTP, MTP, and DS-MTP, even at scale. TOP models with continued training on math and code also perform better on four relevant benchmarks. On the synthetic star graph task, TOP enables pathfinding on graphs where NTP, MTP, and DS-MTP fail. Our code is available at https://github.com/zaydzuhri/token-order-prediction

Zayd M. K. Zuhri, Erland Hilman Fuadi, Alham Fikri Aji• 2025
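
The TOP objective described in the abstract, ranking upcoming tokens by their proximity with a single extra unembedding head, can be sketched roughly as follows. This is a minimal illustration only: the ListNet-style listwise loss, the proximity score of `window - distance`, and the masking of absent tokens are assumed choices for exposition, not the authors' exact formulation (see the linked repository for the real implementation).

```python
import torch
import torch.nn.functional as F


def top_targets(input_ids: torch.Tensor, window: int, vocab_size: int) -> torch.Tensor:
    """For each position t, score every vocab token by how soon it appears in the
    next `window` tokens: the immediate next token scores `window`, a token
    `window` steps ahead scores 1, and tokens absent from the window get a large
    negative score so they receive ~zero target probability under softmax."""
    B, T = input_ids.shape
    targets = torch.full((B, T, vocab_size), -1e9)
    for b in range(B):
        for t in range(T):
            future = input_ids[b, t + 1 : t + 1 + window].tolist()
            for dist, tok in enumerate(future):
                # If a token repeats in the window, keep its closest occurrence.
                targets[b, t, tok] = max(targets[b, t, tok].item(), float(window - dist))
    return targets


def top_loss(head_logits: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
    """ListNet-style listwise ranking loss: cross-entropy between the softmax of
    the proximity scores and the log-softmax of the ranking head's logits.
    `head_logits` would come from a single extra unembedding layer applied to
    the backbone's hidden states, alongside the usual NTP head."""
    target_dist = F.softmax(targets, dim=-1)
    return -(target_dist * F.log_softmax(head_logits, dim=-1)).sum(dim=-1).mean()


# Toy usage: batch of 1, sequence [1, 2, 3, 2], vocab of 5, window of 2.
ids = torch.tensor([[1, 2, 3, 2]])
tg = top_targets(ids, window=2, vocab_size=5)
loss = top_loss(torch.randn(1, 4, 5), tg)
```

In training, this loss would be added to the standard NTP cross-entropy as an auxiliary term; the ranking head is dropped at inference time, so the deployed model is a plain next-token predictor.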

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Commonsense Reasoning | HellaSwag | Accuracy 68.73 | 1460 |
| Commonsense Reasoning | PIQA | Accuracy 76.39 | 647 |
| Mathematical Reasoning | MATH | Accuracy 20.4 | 643 |
| Mathematical Reasoning | GSM8K | Accuracy 55.57 | 358 |
| Question Answering | SciQ | Accuracy 91.6 | 226 |
| Question Answering | TriviaQA | EM 30.9 | 116 |
| Commonsense Reasoning | SocialIQA | Accuracy 43.91 | 97 |
| Language Modeling | LAMBADA (test) | Accuracy 57.03 | 71 |
| Question Answering | ARC Challenge | Normalized Accuracy 46.42 | 48 |
| Multiple-choice Question Answering | MMLU continuation (test) | Accuracy 39.65 | 12 |

Showing 10 of 13 rows
