Predicting the Order of Upcoming Tokens Improves Language Modeling
About
Multi-token prediction (MTP) has been proposed as an auxiliary objective to improve next-token prediction (NTP) in language model training, but it yields inconsistent improvements and underperforms on standard NLP benchmarks. We find that MTP's exact prediction of future tokens is too difficult to serve as an effective auxiliary loss. Instead, we propose token order prediction (TOP), which trains models to order upcoming tokens by their proximity using a learning-to-rank loss. TOP requires only a single additional unembedding layer, compared to MTP's multiple transformer layers. We pretrain models of 340M, 1.8B, and 7B parameters using the NTP, MTP, DeepSeek MTP (DS-MTP), and TOP objectives. Results on nine standard NLP benchmarks show that TOP overall outperforms NTP, MTP, and DS-MTP, even at scale. TOP models also perform better on four relevant benchmarks after continued training on math and code. On the synthetic star graph task, TOP enables pathfinding on graphs where NTP, MTP, and DS-MTP fail. Our code is available at https://github.com/zaydzuhri/token-order-prediction
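To make the objective concrete, here is a minimal NumPy sketch of a TOP-style setup: a target that ranks each vocabulary token by how soon it next appears within a lookahead window, trained against the extra unembedding head's scores with a ListNet-style listwise softmax cross-entropy. The target construction and the specific ranking loss here are illustrative assumptions; see the repository for the paper's actual implementation.

```python
import numpy as np

def top_targets(token_ids, vocab_size, window):
    """Hypothetical TOP target: for each position t, give every vocab token
    a relevance score by how soon it next occurs within the next `window`
    tokens (nearer occurrence -> higher score; absent -> 0)."""
    T = len(token_ids)
    targets = np.zeros((T, vocab_size))
    for t in range(T):
        for offset in range(1, window + 1):
            if t + offset < T:
                tok = token_ids[t + offset]
                # keep the score of the nearest occurrence of this token
                targets[t, tok] = max(targets[t, tok], window + 1 - offset)
    return targets

def listnet_loss(scores, targets):
    """ListNet-style learning-to-rank loss (one possible choice):
    cross-entropy between the softmax of the target relevances and
    the softmax of the head's predicted scores, averaged over positions."""
    def softmax(x):
        e = np.exp(x - x.max(axis=-1, keepdims=True))
        return e / e.sum(axis=-1, keepdims=True)
    p_true = softmax(targets)
    log_p_pred = np.log(softmax(scores) + 1e-12)
    return -(p_true * log_p_pred).sum(axis=-1).mean()

# Toy usage: 4 tokens, vocab of 3, lookahead window of 2.
token_ids = [0, 1, 2, 1]
targets = top_targets(token_ids, vocab_size=3, window=2)
scores = np.random.default_rng(0).normal(size=targets.shape)  # stand-in for head outputs
loss = listnet_loss(scores, targets)
```

In training, this loss would be added to the standard NTP cross-entropy; only the extra unembedding layer producing `scores` is new relative to an NTP model.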
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Commonsense Reasoning | HellaSwag | Accuracy | 68.73 | 1460 |
| Commonsense Reasoning | PIQA | Accuracy | 76.39 | 647 |
| Mathematical Reasoning | MATH | Accuracy | 20.4 | 643 |
| Mathematical Reasoning | GSM8K | Accuracy | 55.57 | 358 |
| Question Answering | SciQ | Accuracy | 91.6 | 226 |
| Question Answering | TriviaQA | EM | 30.9 | 116 |
| Commonsense Reasoning | SocialIQA | Accuracy | 43.91 | 97 |
| Language Modeling | LAMBADA (test) | Accuracy | 57.03 | 71 |
| Question Answering | ARC Challenge | Normalized Accuracy | 46.42 | 48 |
| Multiple-choice Question Answering | MMLU continuation (test) | Accuracy | 39.65 | 12 |