
Probing the Robustness of Trained Metrics for Conversational Dialogue Systems

About

This paper introduces an adversarial method to stress-test trained metrics for evaluating conversational dialogue systems. The method leverages Reinforcement Learning to find response strategies that elicit optimal scores from the trained metrics. We apply our method to test recently proposed trained metrics. We find that they are all susceptible to giving high scores to responses generated by relatively simple and obviously flawed strategies that our method converges on. For instance, simply copying parts of the conversation context to form a response yields competitive scores or even outperforms responses written by humans.
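The copy-context weakness described above can be sketched in a few lines. The snippet below is a hypothetical illustration, not the paper's actual setup: `trained_metric` is a toy stand-in that rewards lexical overlap with the context (real trained metrics are neural models), and `copy_context_strategy` mimics the flawed strategy of copying a span of the last turn.

```python
def trained_metric(context, response):
    # Hypothetical stand-in for a trained dialogue metric: it scores a
    # response by its lexical overlap with the context. Real trained
    # metrics are neural models, but the paper shows they share a
    # similar exploitable preference for context overlap.
    ctx_tokens = set(" ".join(context).lower().split())
    resp_tokens = response.lower().split()
    if not resp_tokens:
        return 0.0
    return sum(t in ctx_tokens for t in resp_tokens) / len(resp_tokens)

def copy_context_strategy(context, n_words=8):
    # Adversarial "response": copy the last n_words of the final turn.
    words = context[-1].split()
    return " ".join(words[max(0, len(words) - n_words):])

context = [
    "Hi, how was your weekend?",
    "It was great, I went hiking in the mountains with my dog.",
]
human_response = "That sounds lovely, which trail did you take?"
copied_response = copy_context_strategy(context)

# The copied response scores higher than the human-written one,
# mirroring the failure mode the paper reports.
print(trained_metric(context, copied_response))
print(trained_metric(context, human_response))
```

In the paper, the searching over such strategies is done automatically with Reinforcement Learning rather than hand-coded heuristics like this one.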

Jan Deriu, Don Tuggener, Pius von Däniken, Mark Cieliebak • 2022

Related benchmarks

Task                        Dataset                       Result  Rank
Dialogue Policy Evaluation  PersonaChat (test)            -       10
Dialogue Policy Evaluation  DailyDialog (test)            -       8
Dialogue Policy Evaluation  Empathetic Dialogues (test)   -       8
Dialogue Evaluation         Empathetic Dialogues          -       5
Dialogue Evaluation         DailyDialog                   -       4
Dialogue Evaluation         PersonaChat                   -       4

Other info

Code
