SoftCoT: Soft Chain-of-Thought for Efficient Reasoning with LLMs
About
Chain-of-Thought (CoT) reasoning enables Large Language Models (LLMs) to solve complex reasoning tasks by generating intermediate reasoning steps. However, most existing approaches focus on hard token decoding, which constrains reasoning to the discrete vocabulary space and may not always be optimal. While recent efforts explore continuous-space reasoning, they often require full-model fine-tuning and suffer from catastrophic forgetting, limiting their applicability to state-of-the-art LLMs that already perform well in zero-shot settings with proper instructions. To address this challenge, we propose a novel approach to continuous-space reasoning that does not require modifying the LLM. Specifically, we employ a lightweight, frozen assistant model to speculatively generate instance-specific soft thought tokens as the initial chain of thought, which are then mapped into the LLM's representation space via a trainable projection module. Experimental results on five reasoning benchmarks demonstrate that our method enhances LLM reasoning performance through supervised, parameter-efficient fine-tuning. Source code is available at https://github.com/xuyige/SoftCoT.
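The core mechanism described above — a small assistant model emitting continuous "soft thought" tokens that a trainable projection maps into the frozen LLM's embedding space — can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the module names (`SoftThoughtProjector`, `build_llm_inputs`), the use of a single linear layer, and the simple prepend-to-prompt layout are all assumptions for exposition; see the repository linked above for the actual code.

```python
import torch
import torch.nn as nn


class SoftThoughtProjector(nn.Module):
    """Trainable projection from the assistant model's hidden-state space
    into the (frozen) LLM's embedding space.

    A single linear layer is an assumption here; the actual projection
    module may be deeper.
    """

    def __init__(self, assistant_dim: int, llm_dim: int):
        super().__init__()
        self.proj = nn.Linear(assistant_dim, llm_dim)

    def forward(self, soft_thoughts: torch.Tensor) -> torch.Tensor:
        # soft_thoughts: (batch, num_soft_tokens, assistant_dim)
        # returns:       (batch, num_soft_tokens, llm_dim)
        return self.proj(soft_thoughts)


def build_llm_inputs(
    prompt_embeds: torch.Tensor,      # (batch, prompt_len, llm_dim)
    soft_thoughts: torch.Tensor,      # (batch, num_soft_tokens, assistant_dim)
    projector: SoftThoughtProjector,
) -> torch.Tensor:
    """Prepend projected soft thought tokens to the prompt embeddings,
    producing an `inputs_embeds`-style tensor a decoder LLM can consume
    in place of token IDs. Only the projector carries trainable parameters;
    both the assistant model and the LLM stay frozen."""
    projected = projector(soft_thoughts)               # (batch, k, llm_dim)
    return torch.cat([projected, prompt_embeds], dim=1)
```

In this sketch, `soft_thoughts` would come from the assistant model's last hidden states over a few designated positions, and the concatenated tensor would be fed to the LLM via its `inputs_embeds` argument; only the projector's parameters receive gradients, which is what makes the fine-tuning parameter-efficient.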
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Code Generation | HumanEval (test) | Pass@1 | 71.83 | 444 |
| Code Generation | MBPP (test) | Pass@1 | 56.04 | 276 |
| Mathematical Reasoning | SVAMP (test) | Accuracy | 40 | 233 |
| Mathematical Reasoning | AQUA | Accuracy | 80.63 | 132 |
| Commonsense Reasoning | StrategyQA | Accuracy | 71.18 | 125 |
| Commonsense Reasoning | StrategyQA (test) | Accuracy | 60.61 | 81 |
| Arithmetic Reasoning | MultiArith (test) | Accuracy | 74.4 | 67 |
| Mathematical Reasoning | ASDiv Aug (test) | Accuracy | 88.9 | 25 |
| Mathematical Reasoning | GSM8K-NL (test) | Accuracy | 36.8 | 19 |
| Mathematical Reasoning | ASDiv-Aug | Accuracy | 92.14 | 15 |