
Hidden Failures in Robustness: Why Supervised Uncertainty Quantification Needs Better Evaluation

About

Recent work has shown that the hidden states of large language models contain signals useful for uncertainty estimation and hallucination detection, motivating a growing interest in efficient probe-based approaches. Yet it remains unclear how robust existing methods are, and which probe designs provide uncertainty estimates that are reliable under distribution shift. We present a systematic study of supervised uncertainty probes across models, tasks, and OOD settings, training over 2,000 probes while varying the representation layer, feature type, and token aggregation strategy. Our evaluation highlights poor robustness in current methods, particularly for long-form generation. We also find that probe robustness is driven less by architecture and more by the probe inputs. Middle-layer representations generalise more reliably than final-layer hidden states, and aggregating across response tokens is consistently more robust than relying on single-token features. These differences are largely invisible in-distribution but become pronounced under distribution shift. Informed by our evaluation, we explore a simple hybrid back-off strategy for improving robustness, arguing that better evaluation is a prerequisite for building more robust probes.
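The probe setup described above can be illustrated with a minimal sketch: a linear probe trained on mean-pooled hidden states from a single (e.g. middle) layer to predict whether a response is incorrect. Everything below uses synthetic stand-in arrays in place of real LLM hidden states, and a least-squares linear probe as a stand-in for whatever probe architecture a given paper uses; it is an illustration of the general recipe, not the authors' exact implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_pool(hidden_states):
    # Aggregate across response tokens: (num_tokens, hidden_dim) -> (hidden_dim,).
    # Token aggregation like this is one of the design axes varied in the study.
    return hidden_states.mean(axis=0)

# Synthetic stand-in for middle-layer hidden states of 200 responses (64-dim),
# each response having a variable number of tokens.
features = np.stack([
    mean_pool(rng.normal(size=(rng.integers(5, 30), 64)))
    for _ in range(200)
])
labels = rng.integers(0, 2, size=200).astype(float)  # 1 = incorrect / hallucinated

# Linear probe fit by least squares (a simple illustrative choice of probe).
X = np.hstack([features, np.ones((200, 1))])  # append a bias column
w, *_ = np.linalg.lstsq(X, labels, rcond=None)
scores = X @ w  # higher score = higher predicted chance of error
```

Swapping the layer the features come from, the pooling function, or the probe class changes only a few lines here, which is what makes large sweeps over probe designs (2,000+ probes) cheap relative to retraining or re-prompting the underlying model.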

Joe Stacey, Hadas Orgad, Kentaro Inui, Benjamin Heinzerling, Nafise Sadat Moosavi • 2026

Related benchmarks

Task                  | Dataset                                               | Result  | Rank
----------------------|-------------------------------------------------------|---------|-----
Short-form generation | Short-form generation ID                              | PRR 67  | 38
Long-form generation  | Long-form generation ID                               | PRR 0.2 | 38
Long-form generation  | Long-form generation datasets LOO (near OOD)          | --      | 24
Long-form generation  | Long-form generation datasets 1D-SameTask (OOD)       | --      | 24
Long-form generation  | DiffTask (OOD)                                        | --      | 24
Long-form generation  | 1D-DiffTask (most OOD)                                | --      | 24
Short-form generation | Short-form generation datasets LOO (near OOD)         | --      | 24
Short-form generation | Short-form generation datasets 1D-SameTask (OOD)      | --      | 24
Short-form generation | DiffTask (OOD)                                        | --      | 24
Short-form generation | Short-form generation datasets 1D-DiffTask (most OOD) | --      | 24

(Showing 10 of 18 rows.)
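The PRR metric in the table is not defined on this page; in the uncertainty-estimation literature it usually denotes the Prediction Rejection Ratio, which measures how much faster error falls when rejecting the most-uncertain examples first, relative to an oracle that rejects the actual errors first. Under that assumption, a minimal sketch:

```python
import numpy as np

def rejection_curve(errors, uncertainty):
    # Mean error on the retained set as increasingly uncertain examples are
    # rejected. Entry k is the retained error after rejecting the k most
    # uncertain examples (k = 0 .. n-1).
    order = np.argsort(-np.asarray(uncertainty, float))  # most uncertain first
    sorted_err = np.asarray(errors, float)[order]
    n = len(sorted_err)
    tail_sums = np.cumsum(sorted_err[::-1])[::-1]  # sum of errors from index k on
    retained = np.arange(n, 0, -1)
    return tail_sums / retained

def prr(errors, uncertainty):
    # Area between the rejection curve and the flat random-rejection baseline,
    # normalised by the same area for an oracle that knows the true errors.
    base = np.mean(errors)
    ar_model = np.mean(base - rejection_curve(errors, uncertainty))
    ar_oracle = np.mean(base - rejection_curve(errors, errors))
    return ar_model / ar_oracle
```

A perfectly informative uncertainty score gives PRR = 1, an uninformative one gives PRR near 0, and an anti-correlated one gives a negative value, which is why the near-zero long-form ID score of 0.2 in the table signals weak uncertainty estimates even before any distribution shift.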
