
Learning How Hard to Think: Input-Adaptive Allocation of LM Computation

About

Computationally intensive decoding procedures--including search, reranking, and self-critique--can improve the quality of language model (LM) outputs in problems spanning code generation, numerical reasoning, and dialog. Existing work typically applies the same decoding procedure for every input to an LM. But not all inputs require the same amount of computation to process. Can we allocate decoding computation adaptively, using more resources to answer questions whose answers will be harder to compute? We present an approach that predicts the distribution of rewards given an input and computation budget, then allocates additional computation to inputs for which it is predicted to be most useful. We apply this approach in two decoding procedures: first, an adaptive best-of-k procedure that dynamically selects the number of samples to generate as input to a reranker; second, a routing procedure that dynamically responds to a query using a decoding procedure that is expensive but accurate, or one that is cheaper but less capable. Across a suite of programming, mathematics, and dialog tasks, we show that accurate computation-allocation procedures can be learned, and reduce computation by up to 50% at no cost to response quality, or improve quality by up to 10% at a fixed computational budget.
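The adaptive best-of-k idea can be illustrated with a small sketch. The snippet below is an assumption-laden toy, not the paper's implementation: it supposes we already have a learned predictor that gives each query a per-sample success probability `p`, models expected best-of-k reward as `1 - (1 - p)^k` (an idealized reranker that picks a success whenever one exists), and then greedily spends a fixed total sample budget where the predicted marginal gain is largest.

```python
import heapq

def expected_best_of_k(p, k):
    # Expected reward of best-of-k sampling when each sample
    # independently succeeds with probability p and the reranker
    # selects a success whenever one exists (idealized assumption).
    return 1.0 - (1.0 - p) ** k

def allocate_samples(success_probs, total_budget):
    """Greedily distribute `total_budget` samples across queries,
    each step granting one more sample to the query with the largest
    predicted marginal gain in expected reward.

    `success_probs` is a hypothetical output of a learned difficulty
    predictor. Returns per-query sample counts summing to the budget.
    """
    ks = [0] * len(success_probs)
    # Max-heap keyed on negative marginal gain of the *next* sample.
    heap = []
    for i, p in enumerate(success_probs):
        gain = expected_best_of_k(p, 1) - expected_best_of_k(p, 0)
        heapq.heappush(heap, (-gain, i))
    for _ in range(total_budget):
        _, i = heapq.heappop(heap)
        ks[i] += 1
        p = success_probs[i]
        gain = expected_best_of_k(p, ks[i] + 1) - expected_best_of_k(p, ks[i])
        heapq.heappush(heap, (-gain, i))
    return ks

# An easy query (p = 0.9) saturates quickly, so the greedy policy
# shifts most of the budget to the hard query (p = 0.1).
ks = allocate_samples([0.9, 0.1], total_budget=10)
```

Because the marginal gain `(1 - p)^k * p` decays geometrically, easy queries stop earning samples after a few draws, which is exactly the mechanism that lets adaptive allocation cut computation without hurting quality.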

Mehul Damani, Idan Shenfeld, Andi Peng, Andreea Bobu, Jacob Andreas • 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Code Generation | HumanEval | Pass@1 | 91.22 | 850 |
| Arithmetic Reasoning | MultiArith | Accuracy | 95.4 | 181 |
| Multi-hop Question Answering | HotpotQA | Avg@8 Accuracy | 88.29 | 32 |
| Multiple-choice Question Answering | AQUA | Accuracy | 81.36 | 31 |
| Code Generation | DS-1000 | Pass@1 | 50.08 | 28 |
| Medical Question Answering | DDXPlus | Accuracy | 76.58 | 28 |
| Knowledge Reasoning | MMLU | Accuracy | 84.2 | 19 |
