
Training Large Language Models to Reason in a Continuous Latent Space

About

Large language models (LLMs) are typically constrained to reason in the language space, where they express the reasoning process through a chain-of-thought (CoT) to solve complex problems. However, the language space may not always be optimal for reasoning. Most word tokens primarily ensure textual coherence and are not essential for reasoning, while some critical tokens require complex planning and pose challenges to LLMs. To explore the potential of reasoning beyond language, we introduce a new paradigm called Coconut (Chain of Continuous Thought). Coconut utilizes the last hidden state of the LLM as a representation of the reasoning state, termed "continuous thought." Instead of decoding this state into words, we feed it back to the model as the next input embedding directly in the continuous space. This latent reasoning paradigm enables an advanced reasoning pattern, where continuous thoughts can encode multiple alternative next steps, allowing the model to perform a breadth-first search (BFS) rather than committing prematurely to a single deterministic path as in CoT. Coconut outperforms CoT on logical reasoning tasks that require substantial search during planning and achieves a better trade-off between accuracy and efficiency.
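The core mechanism can be illustrated with a toy sketch. The stand-in "model" below (a single `tanh` layer) replaces a real transformer, and all names (`forward`, `hidden_dim`, etc.) are illustrative; the point is the contrast between the CoT step, which collapses the hidden state into one token before re-embedding, and the Coconut step, which feeds the full continuous hidden state back as the next input embedding.

```python
import numpy as np

rng = np.random.default_rng(0)
hidden_dim, vocab_size = 8, 16
W = rng.standard_normal((hidden_dim, hidden_dim)) * 0.1  # stand-in for the LM's weights
embed = rng.standard_normal((vocab_size, hidden_dim))    # token embedding table

def forward(x):
    """Stand-in for one transformer step: input embedding -> last hidden state."""
    return np.tanh(x @ W)

def cot_step(h):
    """Language-space CoT: decode the hidden state to a discrete token,
    then re-embed that token as the next input (information bottleneck)."""
    logits = h @ embed.T
    token = int(np.argmax(logits))
    return embed[token]

def coconut_step(h):
    """Coconut: feed the hidden state back as the next input embedding
    directly in continuous space, skipping the decode/re-embed step."""
    return h

# Run a few latent reasoning steps from an initial input embedding.
x = embed[3]
for _ in range(4):
    h = forward(x)
    x = coconut_step(h)  # continuous thought fed back as the next input
```

Because no argmax is taken, the continuous thought can keep mass on several candidate next steps at once, which is what enables the BFS-like search behavior described above.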

Shibo Hao, Sainbayar Sukhbaatar, DiJia Su, Xian Li, Zhiting Hu, Jason Weston, Yuandong Tian • 2024

Related benchmarks

| Task                   | Dataset          | Metric      | Result | Rank |
|------------------------|------------------|-------------|--------|------|
| Mathematical Reasoning | GSM8K            | Accuracy    | 15.27  | 983  |
| Code Generation        | HumanEval        | --          | --     | 850  |
| Mathematical Reasoning | MATH             | --          | --     | 535  |
| Code Generation        | HumanEval (test) | Pass@1      | 69.39  | 444  |
| Mathematical Reasoning | SVAMP            | Accuracy    | 40.7   | 368  |
| Mathematical Reasoning | GSM8K            | Accuracy    | 36.6   | 351  |
| Code Generation        | MBPP (test)      | Pass@1      | 54.8   | 276  |
| Mathematical Reasoning | SVAMP (test)     | Accuracy    | 44     | 233  |
| Mathematical Reasoning | GSM8K            | Speed Up (x)| 3.14   | 177  |
| Mathematical Reasoning | MATH             | Accuracy    | 13.8   | 162  |

Showing 10 of 44 rows.
