Disentangling Ambiguity from Instability in Large Language Models: A Clinical Text-to-SQL Case Study
About
Deploying large language models for clinical Text-to-SQL requires distinguishing two qualitatively different causes of output diversity: (i) input ambiguity, which should trigger clarification, and (ii) model instability, which should trigger human review. We propose CLUES, a framework that models Text-to-SQL as a two-stage process (interpretations → answers) and decomposes semantic uncertainty into an ambiguity score and an instability score. The instability score is computed via the Schur complement of a bipartite semantic graph matrix. Across AmbigQA/SituatedQA (gold interpretations) and a clinical Text-to-SQL benchmark (known interpretations), CLUES improves failure prediction over state-of-the-art Kernel Language Entropy. In deployment settings, it remains competitive while providing a diagnostic decomposition unavailable from a single score. The resulting uncertainty regimes map to targeted interventions: query refinement for ambiguity, model improvement for instability. The high-ambiguity/high-instability regime contains 51% of errors while covering only 25% of queries, enabling efficient triage.
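To make the Schur-complement idea concrete, here is a minimal sketch, not the paper's exact method: we assume a joint positive-definite kernel matrix over sampled interpretations and answers with a block structure (interpretation block, cross block, answer block), take the Schur complement of the interpretation block to condition the answer similarities on the interpretations, and summarize the result with a von Neumann-style entropy as an instability score. All function and variable names here are illustrative.

```python
import numpy as np

def schur_instability(K: np.ndarray, n_interp: int) -> float:
    """Illustrative instability score (assumed formulation, not CLUES itself).

    K        : joint PSD kernel over interpretations + answers, where the
               first n_interp rows/columns index interpretations.
    n_interp : number of interpretation nodes in the bipartite graph.
    """
    A = K[:n_interp, :n_interp]   # interpretation-interpretation block
    B = K[:n_interp, n_interp:]   # interpretation-answer (bipartite) block
    D = K[n_interp:, n_interp:]   # answer-answer block
    # Schur complement of A in K:  K/A = D - B^T A^{-1} B
    S = D - B.T @ np.linalg.solve(A, B)
    # Von Neumann-style entropy of the normalized eigenvalue spectrum,
    # used here as a scalar instability score.
    w = np.clip(np.linalg.eigvalsh(S), 1e-12, None)
    p = w / w.sum()
    return float(-(p * np.log(p)).sum())
```

Intuitively, the Schur complement removes the answer variation that is already explained by differing interpretations (ambiguity), so the remaining spectrum reflects variation the model produces even for a fixed reading of the query (instability).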
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Clinical Text-to-SQL | EpiAskKB 2025 (test) | -- | 21 |
| Open-domain QA | AmbigQA (Nq=300) | -- | 6 |
| Open-domain QA | SituatedQA (Nq=300) | -- | 6 |