
Correctness-Optimized Residual Activation Lens (CORAL): Transferrable and Calibration-Aware Inference-Time Steering

About

Large language models (LLMs) exhibit persistent miscalibration, especially after instruction tuning and preference alignment. Modified training objectives can improve calibration, but retraining is expensive. Inference-time steering offers a lightweight alternative, yet most existing methods optimize proxies for correctness rather than correctness itself. We introduce CORAL (Correctness-Optimized Residual Activation Lens), a regularized inference-time steering method that captures distributed correctness signals from a model's internal activations using weight-decay MLP probes. We evaluate CORAL across three 7B-parameter models and find that it consistently improves accuracy by 10% and expected calibration error (ECE) by 50% on average. We additionally demonstrate that these gains transfer without retraining to the complete published test sets of four held-out benchmarks (ARC-Challenge, HellaSwag, Math-MC, OpenBookQA), averaging 14% accuracy improvements and 49% ECE improvements. Our results support the hypothesis that distributed information in model internals can be extracted using regularized probes when individual neurons are insufficient. CORAL thus provides a compute-efficient, transferable, and calibration-aware approach to improving MCQA performance during inference.
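The two core ingredients named in the abstract can be sketched in a few lines: a small MLP probe with L2 weight decay that maps activation vectors to correctness probabilities, and the ECE metric used to measure calibration. This is a minimal, illustrative sketch only; the class name `MLPProbe`, the architecture, and all hyperparameters below are assumptions, not the paper's actual implementation.

```python
import numpy as np

def expected_calibration_error(conf, correct, n_bins=10):
    """ECE: bin predictions by confidence, then take the bin-size-weighted
    average of |accuracy - mean confidence| over the bins."""
    conf, correct = np.asarray(conf, float), np.asarray(correct, float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (conf > lo) & (conf <= hi)
        if lo == 0.0:
            mask |= conf == 0.0  # include exact zeros in the first bin
        if mask.any():
            ece += mask.mean() * abs(correct[mask].mean() - conf[mask].mean())
    return ece

class MLPProbe:
    """One-hidden-layer probe trained with plain gradient descent and L2
    weight decay to predict correctness from activation vectors.
    Hypothetical interface; the paper's probe details may differ."""
    def __init__(self, d_in, d_hidden=32, weight_decay=1e-2, lr=0.1, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0.0, 0.1, (d_in, d_hidden))
        self.b1 = np.zeros(d_hidden)
        self.w2 = rng.normal(0.0, 0.1, d_hidden)
        self.b2 = 0.0
        self.wd, self.lr = weight_decay, lr

    def forward(self, X):
        """Return (correctness probability, hidden activations)."""
        h = np.tanh(X @ self.W1 + self.b1)
        return 1.0 / (1.0 + np.exp(-(h @ self.w2 + self.b2))), h

    def fit(self, X, y, steps=200):
        for _ in range(steps):
            p, h = self.forward(X)
            g = (p - y) / len(y)                      # dBCE/dlogit
            gh = np.outer(g, self.w2) * (1.0 - h**2)  # backprop through tanh
            # gradient steps with L2 weight decay on both weight matrices
            self.w2 -= self.lr * (h.T @ g + self.wd * self.w2)
            self.b2 -= self.lr * g.sum()
            self.W1 -= self.lr * (X.T @ gh + self.wd * self.W1)
            self.b1 -= self.lr * gh.sum(0)
        return self
```

The weight-decay term is what regularizes the probe so it can pick up distributed signals without overfitting to individual activation dimensions; the probe's output probability can then be compared against its empirical accuracy via `expected_calibration_error`.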

Miranda Muqing Miao, Young-Min Cho, Lyle Ungar • 2026

Related benchmarks

Task | Dataset | Result | Rank
Reading Comprehension | RACE | Accuracy 74.93 | 34
Commonsense Reasoning | HellaSwag (published test) | Accuracy 82.35 | 15
Mathematical Reasoning | Math-MC (test) | Accuracy 58.04 | 15
Question Answering | OpenBookQA (published test) | Accuracy 65.4 | 15
