Probing the Robustness of Trained Metrics for Conversational Dialogue Systems
About
This paper introduces an adversarial method for stress-testing trained metrics that evaluate conversational dialogue systems. The method leverages Reinforcement Learning to find response strategies that elicit optimal scores from the trained metrics. We apply our method to recently proposed trained metrics and find that all of them are susceptible to giving high scores to responses generated by relatively simple and obviously flawed strategies on which our method converges. For instance, simply copying parts of the conversation context to form a response yields scores that are competitive with, or even higher than, those of responses written by humans.
Jan Deriu, Don Tuggener, Pius von Däniken, Mark Cieliebak • 2022
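The sketch below illustrates the kind of stress test the abstract describes, under stated assumptions: a simple "copy the context" strategy produces candidate responses, and a stand-in trained metric scores them, its score acting as the reward an RL search would maximize. The names `copy_context_response` and `trained_metric` are hypothetical, not the paper's actual implementation.

```python
import random

def copy_context_response(context, max_tokens=15):
    """Adversarial baseline: 'respond' by copying a span of a context turn."""
    tokens = random.choice(context).split()
    start = random.randrange(len(tokens))
    return " ".join(tokens[start:start + max_tokens])

def trained_metric(context, response):
    """Placeholder for a learned dialogue metric that maps
    (context, response) to a quality score; a real metric would
    be a trained model, not a random number."""
    return random.random()

# The metric's score is the reward signal an RL policy would maximize;
# here a trivial search just keeps the highest-scoring copied span.
context = [
    "Hi, how was your weekend?",
    "Great, I went hiking in the mountains with my dog.",
]
best = max((copy_context_response(context) for _ in range(10)),
           key=lambda r: trained_metric(context, r))
print("Highest-scoring copied response:", best)
```

Because the metric itself supplies the reward, any blind spot it has, such as favoring lexical overlap with the context, is exactly what such a search converges on.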
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Dialogue Policy Evaluation | PersonaChat (test) | -- | 10 |
| Dialogue Policy Evaluation | DailyDialog (test) | -- | 8 |
| Dialogue Policy Evaluation | Empathetic Dialogues (test) | -- | 8 |
| Dialogue Evaluation | Empathetic Dialogues | -- | 5 |
| Dialogue Evaluation | DailyDialog | -- | 4 |
| Dialogue Evaluation | PersonaChat | -- | 4 |