
Next Concept Prediction in Discrete Latent Space Leads to Stronger Language Models

About

We propose Next Concept Prediction (NCP), a generative pretraining paradigm built on top of Next Token Prediction (NTP). NCP predicts discrete concepts that span multiple tokens, forming a more challenging pretraining objective. Our model, ConceptLM, quantizes hidden states with Vector Quantization to construct a concept vocabulary; it uses both the NCP and NTP objectives to drive parameter updates, and generates a concept that guides the generation of the following tokens. We train ConceptLM from scratch at scales from 70M to 1.5B parameters on up to 300B tokens of training data, using Pythia and GPT-2 backbones. Results on 13 benchmarks show that NCP yields consistent gains over traditional token-level models. Furthermore, continual pretraining experiments on an 8B-parameter Llama model indicate that NCP can further improve an already NTP-trained model. Our analysis suggests that NCP produces stronger language models by posing a harder pretraining task, offering a promising path toward better language modeling.
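
To make the recipe concrete, here is a minimal PyTorch sketch of how an NCP-style objective could be wired up alongside NTP. It is an illustration under stated assumptions, not the authors' released implementation: the tiny backbone, codebook size, `concept_head`, loss weighting, and VQ loss terms are all assumptions, and the sketch omits how a generated concept conditions subsequent token decoding at inference.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ConceptLMSketch(nn.Module):
    """Toy illustration of the NCP idea: quantize hidden states into a
    discrete concept vocabulary (VQ-VAE style) and train with both a
    next-token (NTP) and a next-concept (NCP) cross-entropy loss."""

    def __init__(self, vocab_size=50257, n_concepts=4096, d_model=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        # Stand-in causal backbone; the paper uses Pythia / GPT-2 backbones.
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=2)
        self.codebook = nn.Embedding(n_concepts, d_model)    # concept vocabulary
        self.lm_head = nn.Linear(d_model, vocab_size)        # NTP head
        self.concept_head = nn.Linear(d_model, n_concepts)   # NCP head (assumed)

    def quantize(self, h):
        # Assign each hidden state to its nearest codebook entry.
        flat = h.flatten(0, 1)                               # (B*T, D)
        ids = torch.cdist(flat, self.codebook.weight).argmin(-1)
        return ids.view(h.shape[:2])                         # (B, T) concept ids

    def forward(self, tokens):
        causal = nn.Transformer.generate_square_subsequent_mask(tokens.size(1))
        h = self.backbone(self.embed(tokens), mask=causal.to(tokens.device))
        concept_ids = self.quantize(h)
        # NTP loss: position t predicts token t+1.
        ntp = F.cross_entropy(self.lm_head(h[:, :-1]).flatten(0, 1),
                              tokens[:, 1:].flatten())
        # NCP loss: position t also predicts the concept at t+1, a harder,
        # multi-token target than the single next token.
        ncp = F.cross_entropy(self.concept_head(h[:, :-1]).flatten(0, 1),
                              concept_ids[:, 1:].flatten())
        # Standard VQ terms: commitment pulls hidden states toward their
        # codes, and the second term trains the codebook itself.
        q = self.codebook(concept_ids)
        commit = F.mse_loss(h, q.detach()) + F.mse_loss(q, h.detach())
        return ntp + ncp + 0.25 * commit  # 0.25 weight is an assumption


loss = ConceptLMSketch()(torch.randint(0, 50257, (2, 64)))
loss.backward()
```

The point the sketch tries to capture is that the NCP target at each position is a discrete id from a learned concept vocabulary spanning multiple tokens, so the same hidden state must carry more predictive information than the single next token requires.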

Yuliang Liu, Yunchong Song, Yixuan Wang, Kewen Ge, Alex Lamb, Qipeng Guo, Kai Chen, Bowen Zhou, Zhouhan Lin • 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Question Answering | ARC Challenge | Accuracy | 21.9 | 749 |
| Question Answering | ARC Easy | Normalized Accuracy | 49.9 | 385 |
| Language Modeling | WikiText-103 | Perplexity | 31.53 | 146 |
| Language Modeling | Lambada OpenAI | Accuracy | 53 | 61 |
| Language Modeling | OpenWebText | Perplexity | 14.4 | 50 |
| General Language Understanding | Standard Downstream Tasks Suite (SciQ, PIQA, WinoGrande, ARC-E, ARC-C, HellaSwag, LogiQA, BoolQ, LAMBADA, MMLU) | Average Accuracy | 48.3 | 32 |
| Language Modeling | The Pile | Perplexity | 8.7 | 25 |
| Language Modeling | Lambada (OpenAI split) | Perplexity | 20.01 | 13 |
| Language Modeling | Lambada Standard | Perplexity | 23.2 | 7 |
