
Breaking the Pre-Sampling Barrier: Activation-Informed Difficulty-Aware Self-Consistency

About

Self-Consistency (SC) is an effective decoding strategy that improves the reasoning performance of Large Language Models (LLMs) by generating multiple chain-of-thought reasoning paths and selecting the final answer via majority voting. However, it incurs substantial inference costs because it requires a large number of samples. To mitigate this issue, Difficulty-Adaptive Self-Consistency (DSC) was proposed to reduce unnecessary token usage on easy problems by adjusting the number of samples according to problem difficulty. However, DSC requires additional model calls and pre-sampling to estimate difficulty, and this process must be repeated for each new dataset, leading to significant computational overhead. In this work, we propose Activation-Informed Difficulty-Aware Self-Consistency (ACTSC) to address these limitations. ACTSC leverages internal difficulty signals reflected in the feed-forward network neuron activations to construct a lightweight difficulty estimation probe, without any additional token generation or model calls. The probe dynamically adjusts the number of samples for SC and can be applied to new datasets without requiring pre-sampling for difficulty estimation. To validate its effectiveness, we conduct experiments on five benchmarks. Experimental results show that ACTSC effectively reduces inference costs while maintaining accuracy relative to existing methods.
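The pipeline described above can be sketched in three steps: a lightweight probe over pooled FFN activations predicts a difficulty score, the score is mapped to a per-question sample budget, and standard self-consistency majority voting runs within that budget. The sketch below is a minimal illustration under assumed details; the function names, the linear-sigmoid probe form, the pooling choice, and the budget mapping are hypothetical stand-ins, not the paper's actual implementation.

```python
# Hypothetical sketch of an activation-informed, difficulty-aware
# self-consistency loop (all names and formulas are illustrative).
from collections import Counter
import math


def probe_difficulty(activations, weights, bias):
    """Linear probe over mean-pooled FFN activations -> difficulty in (0, 1).

    `activations` is a list of per-token activation vectors; the probe
    mean-pools over tokens, then applies a sigmoid-squashed linear layer.
    """
    pooled = [sum(col) / len(col) for col in zip(*activations)]
    z = sum(w * a for w, a in zip(weights, pooled)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid


def sample_budget(difficulty, n_min=3, n_max=40):
    """Map predicted difficulty to a sample count: easy questions get
    few samples, hard ones approach the full SC budget."""
    return n_min + round(difficulty * (n_max - n_min))


def self_consistency(generate, n):
    """Draw n chain-of-thought answers and return the majority vote."""
    answers = [generate() for _ in range(n)]
    return Counter(answers).most_common(1)[0][0]
```

Because the probe reads activations that the model already computes during its first forward pass, estimating difficulty this way adds no extra token generation or model calls, which is the source of the cost savings claimed above.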

Taewoong Yoon, Geunyeong Jeong, Geon Park, Sihyeong Yeom, Harksoo Kim • 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Reasoning | GPQA Diamond | Accuracy | 38.46 | 88 |
| Mathematical Reasoning | AIME 2025 | Sample Count | 30.13 | 15 |
| Mathematical Reasoning | AIME 2024 | Sample Count | 28.17 | 15 |
| Reasoning | MMLU-Pro | Avg Samples | 6.6 | 5 |
