Draft-Conditioned Constrained Decoding for Structured Generation in LLMs

About

Large language models (LLMs) are increasingly used to generate executable outputs, JSON objects, and API calls, where a single syntax error can make the output unusable. Constrained decoding enforces validity token-by-token via masking and renormalization, but it can distort generation when the model assigns low probability mass to valid continuations, pushing decoding toward locally valid yet semantically incorrect trajectories. We propose \emph{Draft-Conditioned Constrained Decoding (DCCD)}, a simple two-step, training-free inference procedure that decouples semantic planning from structural enforcement: an unconstrained draft is generated first, and constrained decoding is then applied, conditioned on this draft, to guarantee validity. We analyze DCCD through a KL-projection view, showing that draft conditioning increases feasible mass and reduces the cumulative "projection tax" induced by hard constraints, and we augment the procedure with an optional best-of-$K$ draft-selection step. Across structured reasoning benchmarks, DCCD improves strict structured accuracy by up to +24 percentage points over standard constrained decoding (e.g., 15.2\% to 39.0\% on GSM8K with a 1B model), and enables smaller model pairs to match or exceed much larger constrained baselines, yielding substantial gains in parameter efficiency.
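To make the two-step procedure concrete, here is a minimal sketch in PyTorch/Hugging Face style. The draft-conditioning prompt template and the `valid_next_tokens` callback (which would come from a JSON grammar or similar validity checker) are illustrative assumptions, not the paper's implementation:

```python
import torch

def dccd_generate(model, tokenizer, prompt, valid_next_tokens, max_new_tokens=256):
    """Sketch of Draft-Conditioned Constrained Decoding (DCCD).

    Step 1: sample an unconstrained draft (semantic planning).
    Step 2: re-decode under hard structural constraints, conditioned on the
    draft. `valid_next_tokens(text)` is an assumed callback returning the
    token ids that keep the structured output valid (e.g., from a grammar).
    """
    # Step 1: unconstrained draft.
    enc = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        draft_ids = model.generate(**enc, max_new_tokens=max_new_tokens,
                                   do_sample=True)
    draft = tokenizer.decode(draft_ids[0, enc.input_ids.shape[1]:],
                             skip_special_tokens=True)

    # Step 2: constrained decoding, conditioned on the draft via the prompt.
    # (The template below is an illustrative choice, not the paper's.)
    cond = f"{prompt}\nDraft:\n{draft}\nFinal structured answer:\n"
    ids = tokenizer(cond, return_tensors="pt").input_ids
    out_ids = []
    with torch.no_grad():
        for _ in range(max_new_tokens):
            logits = model(ids).logits[0, -1]          # next-token logits
            allowed = valid_next_tokens(tokenizer.decode(ids[0]))
            mask = torch.full_like(logits, float("-inf"))
            mask[allowed] = 0.0                        # hard mask off the valid set
            probs = torch.softmax(logits + mask, -1)   # renormalize over valid tokens
            nxt = torch.multinomial(probs, 1)
            out_ids.append(nxt.item())
            ids = torch.cat([ids, nxt.view(1, 1)], dim=-1)
            if nxt.item() == tokenizer.eos_token_id:
                break
    return tokenizer.decode(out_ids, skip_special_tokens=True)
```

Best-of-$K$ selection would plug in before Step 2: sample $K$ drafts and keep one by some criterion. One natural criterion, consistent with the KL view, is the draft whose constrained decode retains the highest cumulative feasible mass, though the abstract does not state which criterion the paper uses.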
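The "projection tax" has a clean closed form under masking-and-renormalization; the following short derivation uses notation I am introducing from the abstract's setup, so the paper's exact definitions may differ:

```latex
% p_t: the model's next-token distribution at step t; V_t: the set of
% structurally valid tokens. Masking + renormalization is the KL projection
% of p_t onto distributions supported on V_t:
q_t(y) \;=\; \frac{p_t(y)\,\mathbf{1}[y \in V_t]}{p_t(V_t)},
\qquad p_t(V_t) \;=\; \sum_{y' \in V_t} p_t(y').
% The KL cost of this projection collapses to the negative log feasible mass:
\mathrm{KL}\!\left(q_t \,\middle\|\, p_t\right)
  \;=\; \sum_{y \in V_t} q_t(y)\,\log\frac{q_t(y)}{p_t(y)}
  \;=\; -\log p_t(V_t),
% so the cumulative projection tax over T constrained steps is
\sum_{t=1}^{T} -\log p_t(V_t),
% and any conditioning (e.g., on a draft) that raises p_t(V_t) reduces it.
```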

Avinash Reddy, Thayne T. Walker, James S. Ide, Amrit Singh Bedi • 2026

Related benchmarks

Task                            | Dataset      | Metric           | Result | Rank
Math Reasoning                  | GSM8K        | Accuracy         | 95.15  | 187
Mathematical Reasoning          | GSM-Symbolic | GSM-Sym Accuracy | 53     | 73
Mathematical Reasoning          | MATH500      | Accuracy         | 58.6   | 57
First-order logic formalization | FOLIO        | Accuracy         | 31.53  | 24
