
PonderLM: Pretraining Language Models to Ponder in Continuous Space

About

Humans ponder before articulating complex sentence elements, enabling deeper cognitive processing through focused effort. In this work, we introduce this pondering process into language models by repeatedly invoking the forward pass within a single token generation step. During pondering, instead of emitting an actual token sampled from the prediction distribution, the model yields a weighted sum of all token embeddings according to the predicted token distribution. This generated embedding is then fed back as input for another forward pass. We show that the model can learn to ponder in this way through self-supervised learning, without any human annotations. Experiments across three widely used open-source architectures (GPT-2, Pythia, and LLaMA) and extensive downstream task evaluations demonstrate the effectiveness and generality of our method. On 9 downstream benchmarks, our pondering-enhanced Pythia models significantly outperform the official Pythia models. Notably, PonderPythia-2.8B surpasses Pythia-6.9B and rivals Pythia-12B, while PonderPythia-1B matches TinyLlama-1.1B, a model trained on 10 times more data. The code is available at https://github.com/LUMIA-Group/PonderingLM.
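The core mechanism can be sketched in a few lines: project the hidden state to vocabulary logits, take the softmax, and feed back the probability-weighted average of the embedding table rather than a sampled token's embedding. The following NumPy sketch is illustrative only, not the authors' implementation; all names (`ponder_step`, `W_out`, etc.) and the random toy weights are assumptions for demonstration.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over the last axis.
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def ponder_step(hidden, embedding_matrix, output_proj):
    """One pondering iteration: instead of sampling a discrete token,
    form the expected embedding under the predicted token distribution
    and feed it back as the next input embedding."""
    logits = hidden @ output_proj        # (d_model,) -> (vocab,)
    probs = softmax(logits)              # predicted token distribution
    pondered = probs @ embedding_matrix  # (vocab,) x (vocab, d_model) -> (d_model,)
    return pondered, probs

# Toy demonstration with random weights (illustrative only).
rng = np.random.default_rng(0)
vocab, d_model = 8, 4
E = rng.normal(size=(vocab, d_model))      # token embedding table
W_out = rng.normal(size=(d_model, vocab))  # output projection
h = rng.normal(size=d_model)               # hidden state from one forward pass

emb, p = ponder_step(h, E, W_out)
print(emb.shape, float(p.sum()))
```

In the full method this step is repeated within a single generation step, with `pondered` serving as the input embedding for the next forward pass before the final token is actually produced.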

Boyi Zeng, Shixiang Song, Siyuan Huang, Yixuan Wang, He Li, Ziwei He, Xinbing Wang, Zhiyu Li, Zhouhan Lin• 2025

Related benchmarks

Task                        | Dataset                  | Result                | Rank
----------------------------|--------------------------|-----------------------|-----
Commonsense Reasoning       | WinoGrande               | Accuracy 65.3         | 1085
Question Answering          | ARC-E                    | Accuracy 70.6         | 416
Question Answering          | PIQA                     | Accuracy 76.7         | 374
Question Answering          | SciQ                     | --                    | 283
Sentence Completion         | HellaSwag                | Accuracy 49.0         | 276
Language Modeling           | Lambada OpenAI           | Accuracy 68.9         | 127
Reading Comprehension       | RACE                     | Accuracy 39.0         | 70
Question Answering          | ARC-C                    | Accuracy 35.8         | 46
Language Modeling           | Lambada Standard         | Accuracy 60.8         | 36
Mean Performance Evaluation | Downstream Tasks Summary | Average Accuracy 61.5 | 36

(Showing 10 of 13 rows.)
