
Direct Multi-Turn Preference Optimization for Language Agents

About

Adapting Large Language Models (LLMs) to agent tasks is critical for developing language agents. Direct Preference Optimization (DPO) is a promising technique for this adaptation: it directly optimizes the Reinforcement Learning (RL) objective while alleviating compounding errors. However, applying DPO to multi-turn tasks is challenging because the partition function no longer cancels. Overcoming this obstacle involves making the partition function independent of the current state and addressing the length disparity between preferred and dis-preferred trajectories. To this end, we replace the policy constraint in the RL objective with a state-action occupancy measure constraint and add length normalization to the Bradley-Terry model, yielding a novel loss function for multi-turn agent tasks, named DMPO, with theoretical justification. Extensive experiments on three multi-turn agent task datasets confirm the effectiveness and superiority of the DMPO loss. The code is available at https://github.com/swt-user/DMPO.
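To make the two ingredients named in the abstract concrete, the following is a minimal sketch of a length-normalized, DPO-style trajectory loss: each trajectory's sum of per-turn policy-vs-reference log-ratios is divided by its turn count before entering the Bradley-Terry logistic term. The function name, argument layout, and exact normalization are illustrative assumptions, not the paper's implementation (see the linked repository for the actual DMPO loss).

```python
import math

def length_normalized_pref_loss(logp_pref, ref_logp_pref,
                                logp_disp, ref_logp_disp, beta=0.1):
    """Sketch of a length-normalized preference loss over multi-turn trajectories.

    logp_* / ref_logp_*: per-turn log-probabilities of the preferred and
    dis-preferred trajectories under the policy and the reference model.
    All names here are hypothetical; only the structure (length-normalized
    log-ratio margins fed to a Bradley-Terry logistic loss) is illustrated.
    """
    # Length-normalized sum of log-ratios for each trajectory, so that a
    # longer trajectory does not win simply by accumulating more terms.
    margin_w = sum(p - r for p, r in zip(logp_pref, ref_logp_pref)) / len(logp_pref)
    margin_l = sum(p - r for p, r in zip(logp_disp, ref_logp_disp)) / len(logp_disp)
    # Bradley-Terry logistic loss on the margin difference: -log sigmoid(beta * diff)
    return -math.log(1.0 / (1.0 + math.exp(-beta * (margin_w - margin_l))))
```

When both margins are equal the loss reduces to log 2, and it shrinks as the preferred trajectory's normalized margin grows relative to the dis-preferred one.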

Wentao Shi, Mengqi Yuan, Junkang Wu, Qifan Wang, Fuli Feng • 2024

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Social Dialogue | SOTOPIA Self-Chat | GOAL 8.34 | 28 |
| Social Dialogue | SOTOPIA Interaction with GPT-4o | Goal Score 8 | 28 |
| Social Dialogue | SOTOPIA Overall (AVG) | AVG Score 5.43 | 11 |
| Social Dialogue | SOTOPIA Interaction with GPT-4o-mini | GOAL Score 7.41 | 11 |
