Decoding as Optimisation on the Probability Simplex: From Top-K to Top-P (Nucleus) to Best-of-K Samplers

About

Decoding sits between a language model and everything we do with it, yet it is still treated as a heuristic knob-tuning exercise. We argue decoding should be understood as a principled optimisation layer: at each token, we solve a regularised problem over the probability simplex that trades off model score against structural preferences and constraints. This single template recovers greedy decoding, Softmax sampling, Top-K, Top-P, and Sparsemax-style sparsity as special cases, and explains their common structure through optimality conditions. More importantly, the framework makes it easy to invent new decoders without folklore. We demonstrate this by designing Best-of-K (BoK), a KL-anchored coverage objective aimed at multi-sample pipelines (self-consistency, reranking, verifier selection). BoK targets the probability of covering good alternatives within a fixed K-sample budget and improves empirical performance: for example, it improves accuracy by +18.6% for Qwen2.5-Math-7B on MATH500 at high sampling temperatures.

Xiaotong Ji, Rasul Tutunov, Matthieu Zimmer, Haitham Bou-Ammar • 2026
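
To make the abstract's shared template concrete, here is a minimal NumPy sketch (an illustration, not the paper's code) of how greedy decoding, Softmax sampling, Top-K, and Top-P all solve one problem: maximise ⟨p, s⟩ over the probability simplex with an entropy regulariser at temperature τ, plus a support constraint for the truncated variants. The function names and parameterisation are assumptions for illustration; the paper's exact formulation and the BoK coverage objective are not reproduced here.

```python
import numpy as np

def softmax(logits: np.ndarray, tau: float = 1.0) -> np.ndarray:
    """Entropy-regularised solution: argmax_{p in simplex} <p, s> + tau * H(p)."""
    z = (logits - logits.max()) / tau
    e = np.exp(z)
    return e / e.sum()

def greedy(logits: np.ndarray) -> np.ndarray:
    """tau -> 0 limit of the same objective: all mass on the argmax token."""
    p = np.zeros_like(logits, dtype=float)
    p[np.argmax(logits)] = 1.0
    return p

def top_k(logits: np.ndarray, k: int, tau: float = 1.0) -> np.ndarray:
    """Entropy-regularised objective plus the support constraint |supp(p)| <= k."""
    p = np.zeros_like(logits, dtype=float)
    idx = np.argpartition(logits, -k)[-k:]   # k highest-scoring tokens
    p[idx] = softmax(logits[idx], tau)       # renormalise over that support
    return p

def top_p(logits: np.ndarray, nucleus: float = 0.9, tau: float = 1.0) -> np.ndarray:
    """Nucleus sampling: a data-dependent support constraint. Keep the smallest
    set of tokens whose softmax mass reaches `nucleus`, then renormalise."""
    probs = softmax(logits, tau)
    order = np.argsort(-probs)                                  # descending probability
    cutoff = np.searchsorted(np.cumsum(probs[order]), nucleus) + 1
    keep = order[:cutoff]
    p = np.zeros_like(probs)
    p[keep] = probs[keep] / probs[keep].sum()
    return p

# Example: the four decoders applied to the same toy logits.
s = np.array([2.0, 1.0, 0.2, -0.5])
for decode in (greedy, softmax, lambda x: top_k(x, k=2), lambda x: top_p(x, nucleus=0.9)):
    print(np.round(decode(s), 3))
```

Each decoder differs only in its regulariser and feasible set, which is the sense in which the paper treats them as special cases of one optimisation layer.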

Related benchmarks

Task                         Dataset    Metric         Result   Rank
Code Generation              HumanEval  --             --       850
Question Answering           GPQA       Accuracy       36.36    258
Code Generation              HumanEval  Accuracy (%)   56.01    77
Science Question Answering   GPQA       Accuracy       32.32    28
