
Offline Reinforcement Learning for Mixture-of-Expert Dialogue Management

About

Reinforcement learning (RL) has shown great promise for developing dialogue management (DM) agents that are non-myopic, conduct rich conversations, and maximize overall user satisfaction. Despite recent developments in RL and language models (LMs), using RL to power conversational chatbots remains challenging, in part because RL requires online exploration to learn effectively, whereas collecting novel human-bot interactions can be expensive and unsafe. This issue is exacerbated by the combinatorial action spaces facing these algorithms, as most LM agents generate responses at the word level. We develop a variety of RL algorithms, specialized to dialogue planning, that leverage recent Mixture-of-Expert Language Models (MoE-LMs) -- models that capture diverse semantics, generate utterances reflecting different intents, and are amenable to multi-turn DM. By exploiting MoE-LM structure, our methods significantly reduce the size of the action space and improve the efficacy of RL-based DM. We evaluate our methods in open-domain dialogue to demonstrate their effectiveness with respect to the diversity of intent in generated utterances and overall DM performance.
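To make the action-space reduction concrete, here is a minimal, hypothetical sketch (not the authors' code) of the general idea: each MoE-LM expert proposes one candidate utterance, and an RL policy selects among those few candidates via a learned Q-function, rather than generating a response word by word. All function names and the number of experts below are illustrative assumptions.

```python
# Hypothetical sketch: utterance-level action selection over MoE-LM experts.
# A real system would use trained expert LMs and a trained critic; here both
# are stand-ins so the control flow is runnable.
import random

NUM_EXPERTS = 4  # assumed number of MoE-LM experts, each with its own intent


def expert_candidates(dialogue_history):
    """Stand-in for the MoE-LM: each expert emits one candidate utterance."""
    last_turn = dialogue_history[-1]
    return [f"expert-{i} reply to: {last_turn}" for i in range(NUM_EXPERTS)]


def q_value(dialogue_history, candidate):
    """Stand-in for a learned Q-function scoring (state, candidate) pairs."""
    return random.random()  # a trained critic would go here


def select_utterance(dialogue_history):
    # The RL action space is now NUM_EXPERTS candidate utterances,
    # not the full word-level vocabulary at every decoding step.
    candidates = expert_candidates(dialogue_history)
    return max(candidates, key=lambda c: q_value(dialogue_history, c))


if __name__ == "__main__":
    print(select_utterance(["hi there"]))
```

The key design point is that the combinatorial word-level action space collapses to a small discrete set of expert-generated candidates, which is what makes value-based RL tractable for dialogue planning in this setting.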

Dhawal Gupta, Yinlam Chow, Aza Tulepbergenov, Mohammad Ghavamzadeh, Craig Boutilier • 2023

Related benchmarks

Task | Dataset | Result | Rank
Dialogue Management | Reddit Casual (test) | Mean Return: 4.65 | 18
Dialogue Management | Cornell (test) | Mean Return: 3.62 | 18
Offline Reinforcement Learning for Dialogue Management | Reddit Casual | Return: 4.65 | 8
Offline Reinforcement Learning for Dialogue Management | Cornell Movie | Return: 3.62 | 8
Dialogue Management | Reddit Casual | Average Fluency: 88 | 6
