
Turbo Connection: Reasoning as Information Flow from Higher to Lower Layers

About

Complex problems, whether in math, logic, or planning, are solved by humans through a sequence of steps where the result of one step informs the next. In this work, we adopt the perspective that the reasoning power of Transformers is fundamentally limited by a fixed maximum number of steps along any latent path of computation. To address this, we introduce Turbo Connection (TurboConn), a novel architecture that overcomes the fixed-depth constraint by routing multiple residual connections from the higher-layer hidden states of each token $t$ to the lower layers of token $t+1$. Fine-tuning pre-trained LLMs with our method not only yields accuracy gains of 0.9% to over 10% on benchmarks like GSM8K, Parity, and multi-step arithmetic, but also demonstrates that the density of these backward connections is critical: our dense interaction significantly outperforms "sparse" alternatives that pass only a single hidden state or vector. Notably, TurboConn can be integrated into pre-trained LLMs to overcome task-specific plateaus: while a fine-tuned Qwen-3-1.7B achieves only 53.78% on Parity, adding our architectural modification enables the model to reach 100% accuracy, without retraining the full model from scratch or resorting to sophisticated curriculum learning. Our results provide strong empirical evidence that the depth of the computational path is a key factor in reasoning ability, and they offer a new mechanism for enhancing LLMs without significantly affecting generation latency.
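The core mechanism described above can be illustrated with a toy sketch: at each decoding step, the hidden states from the top layers of token $t$ are projected and added into the inputs of the bottom layers of token $t+1$, so computation can continue across tokens rather than restarting from depth zero. This is a minimal numpy illustration, not the paper's implementation; the number of feedback layers `K`, the per-pair projection matrices, and the toy `tanh` "layer" are all assumptions made for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

D = 8   # hidden size (toy value)
L = 4   # number of layers (toy value)
K = 2   # top-K layers feed back into the bottom-K layers (assumption)

# Toy stand-in for a Transformer layer: a fixed random linear map + tanh.
Ws = [rng.standard_normal((D, D)) / np.sqrt(D) for _ in range(L)]

# Dense backward connections: one learned projection per
# (higher layer of token t, lower layer of token t+1) pair (assumption).
P = {(hi, lo): rng.standard_normal((D, D)) / np.sqrt(D)
     for hi in range(L - K, L) for lo in range(K)}

def layer(x, W):
    return np.tanh(x @ W)

def forward(tokens):
    """Process tokens sequentially, routing the top-K hidden states of
    token t into the bottom-K layers of token t+1 (TurboConn-style)."""
    prev_high = None          # higher-layer states saved from token t
    outputs = []
    for x in tokens:
        h = x
        states = []
        for l in range(L):
            if prev_high is not None and l < K:
                # Dense interaction: every saved high-layer state of the
                # previous token contributes to this lower layer.
                for j, hs in enumerate(prev_high):
                    h = h + hs @ P[(L - K + j, l)]
            h = layer(h, Ws[l])
            states.append(h)
        prev_high = states[L - K:]   # keep top-K states for the next token
        outputs.append(h)
    return outputs

outs = forward([rng.standard_normal(D) for _ in range(3)])
```

Because the routed states of token $t$ are already computed before token $t+1$ begins, this adds only a few extra matrix products per layer to the sequential decode, consistent with the claim that generation latency is largely unaffected.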

Mohan Tang, Sidi Lu • 2026

Related benchmarks

Task                        | Dataset                       | Accuracy | Rank
----------------------------|-------------------------------|----------|-----
Arithmetic Reasoning        | MultiArith (test)             | 51.86    | 67
Reasoning                   | Multi-Step Arithmetic         | 45.81    | 28
Grade School Math Reasoning | GSM8K No CoT augmented (test) | 0.2482   | 6
Parity Determination        | Parity (test)                 | 100      | 6
