
Paying More Attention to Source Context: Mitigating Unfaithful Translations from Large Language Model

About

Large language models (LLMs) have showcased impressive multilingual machine translation ability. However, unlike encoder-decoder style models, decoder-only LLMs lack an explicit alignment between source and target contexts. Analyzing contribution scores during the generation process reveals that LLMs can be biased towards previously generated tokens over the corresponding source tokens, leading to unfaithful translations. To address this issue, we propose encouraging LLMs to pay more attention to the source context from both the source and target perspectives in zero-shot prompting: 1) adjusting source context attention weights; 2) suppressing the influence of irrelevant target prefixes; and 3) avoiding over-reliance on the target prefix during instruction tuning. Experimental results on both human-collected unfaithfulness test sets (focusing on LLM-generated unfaithful translations) and general test sets verify the effectiveness of our methods across multiple language pairs. Further human evaluation shows our method's efficacy in reducing hallucinatory translations and facilitating faithful translation generation.
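To illustrate idea (1), adjusting source context attention weights, here is a minimal sketch of boosting the attention mass assigned to source-context positions before renormalizing. The function name `reweight_source_attention` and the boost factor `alpha` are hypothetical illustrations, not the paper's actual formulation or values.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D array of logits."""
    e = np.exp(x - np.max(x))
    return e / e.sum()

def reweight_source_attention(logits, src_len, alpha=1.5):
    """Sketch of upweighting source-context attention (hypothetical).

    logits:  1-D attention logits over [source tokens | target prefix].
    src_len: number of source-context positions at the front.
    alpha:   assumed boost factor (> 1) applied to source weights,
             followed by renormalization so the result sums to 1.
    """
    weights = softmax(logits)
    weights[:src_len] *= alpha          # upweight source positions
    return weights / weights.sum()      # renormalize to a distribution

# Toy example: 3 source tokens followed by 2 target-prefix tokens.
logits = np.array([1.0, 0.5, 0.2, 2.0, 1.5])
plain = softmax(logits)
boosted = reweight_source_attention(logits, src_len=3)
```

After reweighting, the total attention mass on the source positions is strictly larger than under the plain softmax, which is the intended effect of nudging the model back toward the source context.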

Hongbin Zhang, Kehai Chen, Xuefeng Bai, Yang Xiang, Min Zhang • 2024

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Machine Translation | Flores-101 (test) | - | 24 |
| Machine Translation (Zh-En) | WMT 22 (test) | BLEU 25.3 | 23 |
| Machine Translation | human-collected unfaithful translation De ⇒ En (test) | BLEU 30.8 | 10 |
| Machine Translation | human-collected unfaithful translation En ⇒ De (test) | BLEU 20.9 | 10 |
| Machine Translation | human-collected unfaithful translation En ⇒ Zh (test) | BLEU 19.1 | 10 |
| Machine Translation | human-collected unfaithful translation Zh ⇒ En (test) | BLEU 16.6 | 10 |
| Machine Translation (En-De) | Flores-101 (test) | BLEU 29.5 | 5 |
| Machine Translation (En-De) | WMT 22 (test) | BLEU 35.3 | 5 |

Other info

Code
