
Offline RL by Reward-Weighted Fine-Tuning for Conversation Optimization

About

Offline reinforcement learning (RL) is a variant of RL where the policy is learned from a previously collected dataset of trajectories and rewards. In our work, we propose a practical approach to offline RL with large language models (LLMs). We recast the problem as reward-weighted fine-tuning, which can be solved using techniques similar to supervised fine-tuning (SFT). To showcase the value of our approach, we apply it to learning short-horizon question-answering policies of a fixed length, where the agent reasons about potential answers or asks clarifying questions. Our work stands in stark contrast to state-of-the-art methods in this domain, based on SFT and direct preference optimization, which have additional hyper-parameters and do not directly optimize for rewards. We compare against them empirically and report major gains in both optimized rewards and language quality.
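To make the recasting concrete, here is a minimal sketch of a reward-weighted fine-tuning objective. This is an illustrative assumption about the general technique, not the authors' exact implementation: each logged conversation's supervised negative log-likelihood is scaled by its offline reward, so high-reward trajectories dominate the update while the optimization machinery stays the same as in SFT. The dictionary keys `token_logprobs` and `reward` are hypothetical names for this sketch.

```python
import math

def reward_weighted_nll(trajectories):
    """Reward-weighted fine-tuning loss (sketch): a reward-scaled average of
    per-trajectory supervised NLLs over a batch of logged conversations."""
    total, weight = 0.0, 0.0
    for traj in trajectories:
        # traj["token_logprobs"]: log-probs of the target response tokens
        # under the current policy; traj["reward"]: scalar offline reward.
        nll = -sum(traj["token_logprobs"]) / len(traj["token_logprobs"])
        total += traj["reward"] * nll
        weight += traj["reward"]
    # Normalize by total reward mass so the loss scale is batch-size invariant.
    return total / weight if weight > 0 else 0.0

# Toy batch: a high-reward conversation with confident target tokens and a
# low-reward one with unlikely targets; the former dominates the loss.
batch = [
    {"token_logprobs": [math.log(0.9), math.log(0.8)], "reward": 1.0},
    {"token_logprobs": [math.log(0.2), math.log(0.1)], "reward": 0.1},
]
loss = reward_weighted_nll(batch)
```

In a real training loop this scalar would be computed per mini-batch from the policy's token log-probabilities and backpropagated, exactly as in SFT but with the reward acting as a per-example weight.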

Subhojyoti Mukherjee, Viet Dac Lai, Raghavendra Addanki, Ryan Rossi, Seunghyun Yoon, Trung Bui, Anup Rao, Jayakumar Subramanian, Branislav Kveton • 2025

Related benchmarks

Task                             | Dataset           | Metric           | Result | Rank
Science Question Answering       | ScienceQA (test)  | Average Accuracy | 95.02  | 208
Conversational SQL               | CoSQL             | Accuracy         | 65.83  | 14
Scientific Question Answering    | SciQA             | Accuracy         | 92.48  | 13
Reasoning Question Answering     | ARC               | Accuracy         | 79.93  | 7
Science Question Answering       | OpenBookQA        | Accuracy         | 68.14  | 7
Mathematical Dialogue Evaluation | MathDial (test)   | Accuracy         | 9.67   | 7
Clarifying Questions             | SciQA (test)      | Accuracy         | 26     | 6
Clarifying Questions             | OpenBookQA (test) | Accuracy         | 28     | 6
