Turning Up the Heat: Min-p Sampling for Creative and Coherent LLM Outputs
About
Large Language Models (LLMs) generate text by sampling the next token from a probability distribution over the vocabulary at each decoding step. Popular sampling methods like top-p (nucleus) sampling often struggle to balance quality and diversity, especially at higher temperatures, which lead to incoherent or repetitive outputs. We propose min-p sampling, a dynamic truncation method that adjusts the sampling threshold based on the model's confidence, using the top token's probability as a scaling factor. Our experiments on benchmarks including GPQA, GSM8K, and AlpacaEval Creative Writing show that min-p sampling improves both the quality and diversity of generated text across model families (Mistral and Llama 3) and model sizes (1B to 123B parameters), especially at higher temperatures. Human evaluations further show a clear preference for min-p sampling in both text quality and creativity. Min-p sampling has been adopted by popular open-source LLM frameworks, including Hugging Face Transformers, vLLM, and many others, highlighting its considerable impact on improving text generation quality.
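The core idea above can be sketched in a few lines of NumPy. This is a minimal illustration, not the reference implementation: the function name `min_p_sample` and the default base threshold `p_base=0.1` are assumptions for this sketch. The dynamic cutoff is the base threshold scaled by the probability of the most likely token, so the filter is strict when the model is confident and permissive when it is not.

```python
import numpy as np

def min_p_sample(logits, p_base=0.1, temperature=1.0, rng=None):
    """Sketch of min-p sampling: keep only tokens whose probability is at
    least p_base times the top token's probability, then sample."""
    rng = rng or np.random.default_rng()
    # Temperature scaling, then softmax (shifted for numerical stability).
    scaled = logits / temperature
    scaled = scaled - scaled.max()
    probs = np.exp(scaled) / np.exp(scaled).sum()
    # Dynamic threshold: the base cutoff scaled by the model's confidence.
    threshold = p_base * probs.max()
    # Zero out tokens below the threshold and renormalize over the rest.
    filtered = np.where(probs >= threshold, probs, 0.0)
    filtered = filtered / filtered.sum()
    return rng.choice(len(probs), p=filtered)
```

With peaked logits such as `[5.0, 4.0, 0.0]`, the third token's probability (about 0.005) falls below the threshold (about 0.073 at `p_base=0.1`), so samples only ever come from the first two tokens; at a higher temperature the distribution flattens, the top probability shrinks, and more tokens survive the cutoff.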
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Mathematical Reasoning | GSM8K (test) | Accuracy | 81.96 | 900 |
| Question Answering | GPQA | Accuracy | 31.92 | 258 |
| Open-ended Generation | Creative Writing Evaluation Prompts | Average Judge Score | 8.12 | 108 |
| Scientific Reasoning | GPQA Main | Accuracy | 29.02 | 67 |
| Mathematical Reasoning | MATH 500 | Exact Match | 60.6 | 60 |
| Mathematical Reasoning | GSM8K (test) | Exact Match Accuracy | 95.6 | 60 |
| Mathematical Reasoning | GSM8K | Exact Match Accuracy | 93.18 | 60 |
| Science Question Answering | GPQA main (test) | Exact Match Accuracy | 39.73 | 60 |
| Mathematics Problem Solving | MATH500 (test) | Exact Match Accuracy | 59.8 | 60 |
| Mathematical Reasoning | AQUA | Exact Match | 77.56 | 60 |