Amortizing intractable inference in large language models
About
Autoregressive large language models (LLMs) compress knowledge from their training data through next-token conditional distributions. This limits tractable querying of this knowledge to start-to-end autoregressive sampling. However, many tasks of interest -- including sequence continuation, infilling, and other forms of constrained generation -- involve sampling from intractable posterior distributions. We address this limitation by using amortized Bayesian inference to sample from these intractable posteriors. Such amortization is algorithmically achieved by fine-tuning LLMs via diversity-seeking reinforcement learning algorithms: generative flow networks (GFlowNets). We empirically demonstrate that this distribution-matching paradigm of LLM fine-tuning can serve as an effective alternative to maximum-likelihood training and reward-maximizing policy optimization. As an important application, we interpret chain-of-thought reasoning as a latent variable modeling problem and demonstrate that our approach enables data-efficient adaptation of LLMs to tasks that require multi-step rationalization and tool use.
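The distribution-matching objective can be made concrete with a trajectory-balance (TB) style loss, one of the GFlowNet training objectives this line of work builds on. The sketch below is illustrative rather than the authors' released code: the toy vocabulary, the `TinyPolicy` network, and the `log_reward` function are all assumptions made to keep the example self-contained. The key idea it demonstrates is that the policy is trained to sample sequences with probability proportional to an unnormalized reward, rather than to maximize that reward.

```python
# Minimal sketch of GFlowNet-style fine-tuning with a trajectory-balance (TB)
# loss. An autoregressive policy is trained so that the probability of
# generating a complete sequence x is proportional to an unnormalized reward
# R(x), i.e. it learns to sample from an otherwise intractable posterior.
# The vocabulary, policy network, and reward below are toy stand-ins.
import torch
import torch.nn as nn

VOCAB_SIZE, SEQ_LEN, HIDDEN = 8, 5, 64

class TinyPolicy(nn.Module):
    """Autoregressive policy p_theta(x_t | x_{<t}) over a toy vocabulary."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB_SIZE + 1, HIDDEN)  # +1 for a BOS token
        self.rnn = nn.GRU(HIDDEN, HIDDEN, batch_first=True)
        self.head = nn.Linear(HIDDEN, VOCAB_SIZE)

    def forward(self, tokens):
        h, _ = self.rnn(self.embed(tokens))
        return self.head(h)  # next-token logits at each position

def log_reward(x):
    """Hypothetical unnormalized log-reward log R(x). For posterior sampling
    in an LLM this would be e.g. log p_LM(x) + log p(evidence | x)."""
    return -((x.float() - 3.0) ** 2).sum(dim=-1)

policy = TinyPolicy()
log_Z = nn.Parameter(torch.zeros(1))  # learnable log-partition estimate
opt = torch.optim.Adam([*policy.parameters(), log_Z], lr=1e-3)

BOS = VOCAB_SIZE
for step in range(1000):
    # Sample a batch of trajectories (sequences) from the current policy.
    tokens = torch.full((32, 1), BOS, dtype=torch.long)
    log_pf = torch.zeros(32)
    for _ in range(SEQ_LEN):
        logits = policy(tokens)[:, -1]
        dist = torch.distributions.Categorical(logits=logits)
        action = dist.sample()
        log_pf = log_pf + dist.log_prob(action)
        tokens = torch.cat([tokens, action.unsqueeze(1)], dim=1)

    x = tokens[:, 1:]  # strip BOS
    # Trajectory balance: drive (log Z + log P_F(x) - log R(x))^2 to zero.
    # At the optimum, P_F(x) = R(x) / Z, so the policy matches the target
    # distribution instead of collapsing onto the single highest-reward x.
    loss = ((log_Z + log_pf - log_reward(x)) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The contrast with reward-maximizing policy optimization is visible in the loss: a PPO-style objective would push all probability mass onto the argmax of R, whereas the TB loss is minimized only when the sampling distribution is proportional to R, which is what makes this fine-tuning paradigm diversity-seeking.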
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Open-ended generation | WikiText-103 (test) | MAUVE | 0.2961 | 26 |
| Open-ended Text Generation | Law-MT Out of Domain (test) | MAUVE | 28.62 | 16 |
| Story infilling | ROCStories (test) | BLEU-4 | 0.019 | 7 |