
Overview of Dialog System Evaluation Track: Dimensionality, Language, Culture and Safety at DSTC 12

About

The rapid advancement of Large Language Models (LLMs) has intensified the need for robust dialogue system evaluation, yet comprehensive assessment remains challenging. Traditional metrics often prove insufficient, and safety considerations are frequently narrowly defined or culturally biased. The DSTC12 Track 1, "Dialog System Evaluation: Dimensionality, Language, Culture and Safety," is part of the ongoing effort to address these critical gaps. The track comprised two subtasks: (1) Dialogue-level, Multi-dimensional Automatic Evaluation Metrics, and (2) Multilingual and Multicultural Safety Detection. For Task 1, which focused on 10 dialogue dimensions, a Llama-3-8B baseline achieved the highest average Spearman's correlation (0.1681), indicating substantial room for improvement. In Task 2, while participating teams significantly outperformed a Llama-Guard-3-1B baseline on the multilingual safety subset (top ROC-AUC 0.9648), the baseline proved superior on the cultural subset (0.5126 ROC-AUC), highlighting critical needs in culturally aware safety detection. This paper describes the datasets and baselines provided to participants, as well as the submission evaluation results for each of the two subtasks.
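The two evaluation statistics used above can be stated concretely. As a hedged sketch (not the track's official scoring code), the following implements Spearman's correlation (rank correlation between metric scores and human ratings, as in Task 1) and ROC-AUC (ranking quality of safety scores against binary labels, as in Task 2) from scratch; the data values are illustrative, not from the track.

```python
# Minimal sketch of the two metrics named in the abstract.
# All inputs below are toy data, not DSTC12 results.

def rankdata(xs):
    """1-based average ranks; tied values share the mean rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean of the 1-based positions i..j
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman's rho = Pearson correlation of the rank vectors."""
    rx, ry = rankdata(x), rankdata(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

def roc_auc(labels, scores):
    """ROC-AUC via the rank (Mann-Whitney U) formulation."""
    r = rankdata(scores)
    pos = [r[i] for i, y in enumerate(labels) if y == 1]
    n_pos, n_neg = len(pos), len(labels) - len(pos)
    return (sum(pos) - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

# Hypothetical metric scores vs. human ratings / safety labels.
human = [1, 2, 3, 4, 5]
metric = [1.1, 2.0, 2.9, 4.2, 4.8]
print(spearman(human, metric))            # 1.0 (perfectly monotone)

labels = [0, 0, 1, 1]
scores = [0.1, 0.4, 0.35, 0.8]
print(roc_auc(labels, scores))            # 0.75
```

A rho of 0.1681 (the Task 1 baseline) thus means the metric's ranking of dialogues only weakly agrees with human rankings, while the near-chance 0.5126 ROC-AUC on the cultural subset means the baseline's safety scores barely separate safe from unsafe examples.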

John Mendonça, Lining Zhang, Rahul Mallidi, Alon Lavie, Isabel Trancoso, Luis Fernando D'Haro, João Sedoc • 2025

Related benchmarks

Task                          Dataset                   Result          Rank
Dialogue response evaluation  DSTC 12 (test)            Empathy: 0.06   5
Theme Distribution            Insurance Out-of-domain   Acc: 41.5       4
Theme Distribution            Finance Out-of-domain     Acc: 24.6       4
Theme Distribution            BANKING                   Acc: 0.368      4
Theme Label Quality           BANKING                   ROUGE-1: 11.1   4
Theme Label Quality           Finance Out-of-domain     ROUGE-1: 5      4
Theme Label Quality           Insurance Out-of-domain   ROUGE-1: 12.3   4
