
PonderLM: Pretraining Language Models to Ponder in Continuous Space

About

Humans ponder before articulating complex sentence elements, enabling deeper cognitive processing through focused effort. In this work, we introduce this pondering process into language models by repeatedly invoking the forward process within a single token generation step. During pondering, instead of generating an actual token sampled from the prediction distribution, the model ponders by yielding a weighted sum of all token embeddings according to the predicted token distribution. The generated embedding is then fed back as input for another forward pass. We show that the model can learn to ponder in this way through self-supervised learning, without any human annotations. Experiments across three widely used open-source architectures (GPT-2, Pythia, and LLaMA) and extensive downstream task evaluations demonstrate the effectiveness and generality of our method. On 9 downstream benchmarks, our pondering-enhanced Pythia models significantly outperform the official Pythia models. Notably, PonderPythia-2.8B surpasses Pythia-6.9B and rivals Pythia-12B, while PonderPythia-1B matches TinyLlama-1.1B, a model trained on 10 times more data. The code is available at https://github.com/LUMIA-Group/PonderingLM.
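The core pondering step described above can be sketched in a few lines: rather than sampling a discrete token between forward passes, the model feeds back the probability-weighted mixture of all token embeddings. The sketch below is a minimal illustration, not the authors' implementation; `forward_fn` is a hypothetical stand-in for the transformer forward pass, and the random linear "model" exists only to make the example runnable.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the vocabulary dimension.
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

def ponder_step(hidden, embedding_table, forward_fn, num_ponder_steps=2):
    """One token-generation step with pondering (illustrative sketch).

    `forward_fn` maps an input embedding to next-token logits; its
    signature here is an assumption, not the paper's API.
    """
    x = hidden
    for _ in range(num_ponder_steps):
        logits = forward_fn(x)          # predict a token distribution
        probs = softmax(logits)         # shape: (vocab_size,)
        # Pondering: instead of sampling a token, feed back the expected
        # embedding under the predicted distribution.
        x = probs @ embedding_table     # shape: (embed_dim,)
    # The final forward pass yields the distribution used to emit a token.
    return softmax(forward_fn(x))

# Toy demo with a random linear "model" (illustration only).
rng = np.random.default_rng(0)
vocab, dim = 8, 4
emb = rng.normal(size=(vocab, dim))
W = rng.normal(size=(dim, vocab))
dist = ponder_step(rng.normal(size=dim), emb, lambda x: x @ W)
print(dist.shape)
```

Because the fed-back embedding is a differentiable function of the predicted distribution, the whole loop can be trained end-to-end with the ordinary next-token objective, which is why no human annotations are needed.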

Boyi Zeng, Shixiang Song, Siyuan Huang, Yixuan Wang, He Li, Ziwei He, Xinbing Wang, Zhiyu Li, Zhouhan Lin • 2025

Related benchmarks

Task | Dataset | Result | Rank
Language Modeling | Lambada OpenAI | Accuracy: 65.2 | 61
Downstream NLP Evaluation | Downstream NLP Tasks (Lambada, ARC, WinoGrande, PIQA, HellaSwag, SciQ, RACE), LM Evaluation Harness (test) | Lambada OpenAI: 68.9 | 36
General Language Understanding | General Downstream Tasks Aggregate | Average Accuracy: 56.5 | 8
