
Leveraging Entailment Judgements in Cross-Lingual Summarisation

About

Synthetically created Cross-Lingual Summarisation (CLS) datasets are prone to include document-summary pairs where the reference summary is unfaithful to the corresponding document because it contains content not supported by the document (i.e., hallucinated content). This low data quality misleads model learning and obscures evaluation results. Automatic ways to assess hallucinations and improve training have been proposed for monolingual summarisation, predominantly in English. For CLS, we propose to use off-the-shelf cross-lingual Natural Language Inference (X-NLI) to evaluate the faithfulness of reference and model-generated summaries. We then study training approaches that are aware of faithfulness issues in the training data and propose an approach that uses unlikelihood loss to teach a model about unfaithful summary sequences. Our results show that it is possible to train CLS models that yield more faithful summaries while maintaining comparable or better informativeness.
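The two ideas in the abstract can be sketched in a few lines. The following is a hypothetical illustration, not the authors' implementation: `faithfulness_score` assumes an X-NLI model has already produced per-sentence entailment probabilities (document as premise, each summary sentence as hypothesis) and aggregates them, and `mixed_loss` combines standard negative log-likelihood on faithful target tokens with an unlikelihood term, -log(1 - p), on tokens flagged as unfaithful. The function names, the 0.5 threshold, and the token-level flagging scheme are all assumptions for illustration.

```python
import math


def softmax(xs):
    """Numerically stable softmax over a list of logits."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]


def faithfulness_score(entail_probs, threshold=0.5):
    """Aggregate per-sentence X-NLI entailment probabilities into a
    summary-level faithfulness score: the fraction of summary sentences
    judged entailed by the document. (Aggregation scheme is an assumption.)"""
    entailed = [p >= threshold for p in entail_probs]
    return sum(entailed) / len(entailed)


def mixed_loss(logits, targets, unfaithful):
    """Per-token training loss: standard NLL on faithful target tokens,
    unlikelihood -log(1 - p) on tokens flagged as unfaithful, which pushes
    the model's probability of those tokens down instead of up."""
    total = 0.0
    for step_logits, t, bad in zip(logits, targets, unfaithful):
        p = softmax(step_logits)[t]
        if bad:
            total += -math.log(max(1.0 - p, 1e-9))  # penalise likely unfaithful tokens
        else:
            total += -math.log(max(p, 1e-9))        # usual maximum-likelihood term
    return total
```

In this sketch, a summary whose flagged tokens the model still assigns high probability incurs a large unlikelihood penalty, while faithful tokens are trained as usual.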

Huajian Zhang, Laura Perez-Beltrachini · 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Cross-lingual Summarization | XWikis fr-en original (test) | ROUGE-L | 31.55 | 5 |
| Cross-lingual Summarization | XWikis fr-en filtered high faithfulness (test) | ROUGE-L | 33.49 | 5 |
| Cross-lingual Summarization | XWikis de-en original (test) | ROUGE-L | 32.31 | 5 |
| Cross-lingual Summarization | XWikis de-en filtered high faithfulness (test) | ROUGE-L | 34.63 | 5 |
| Cross-lingual Summarization | XWikis zh-en filtered high faithfulness (test) | ROUGE-L | 33.87 | 5 |
| Cross-lingual Summarization | XWikis cs-en filtered high faithfulness (test) | ROUGE-L | 34.89 | 5 |
| Cross-lingual Summarization | Voxeurop fr-en (test) | ROUGE-L | 20.95 | 5 |
| Cross-lingual Summarization | Voxeurop de-en (test) | ROUGE-L | 21.42 | 5 |
| Cross-lingual Summarization | Voxeurop cs-en (test) | ROUGE-L | 21.60 | 5 |
| Summarization | XWikis en-en (test) | ROUGE-L | 31.28 | 5 |

Showing 10 of 14 rows.

Other info

Code
