TasTe: Teaching Large Language Models to Translate through Self-Reflection

About

Large language models (LLMs) have exhibited remarkable performance on various natural language processing tasks. Techniques like instruction tuning have effectively enhanced the proficiency of LLMs in the downstream task of machine translation. However, existing approaches fail to yield translation outputs that match the quality of supervised neural machine translation (NMT) systems. One plausible explanation for this discrepancy is that the straightforward prompts employed in these methodologies are unable to fully exploit the acquired instruction-following capabilities. To this end, we propose the TasTe framework, which stands for translating through self-reflection. The self-reflection process comprises two stages of inference. In the first stage, the LLM is instructed to generate a preliminary translation and simultaneously conduct a self-assessment of it. In the second stage, the LLM is tasked with refining the preliminary translation according to the evaluation result. Evaluation results in four language directions on the WMT22 benchmark demonstrate the effectiveness of our approach over existing methods. Our work presents a promising approach to unleashing the potential of LLMs and enhancing their machine translation capabilities. The code and datasets are open-sourced at https://github.com/YutongWang1216/ReflectionLLMMT.
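
Below is a minimal sketch of what the two-stage self-reflection inference could look like in practice. The `llm` helper, the prompt wording, and the quality labels are illustrative placeholders, not the paper's actual templates; the official implementation is in the linked repository.

```python
# Hypothetical sketch of TasTe-style two-stage self-reflection inference.
# `llm` stands in for any instruction-following model call; the prompts
# below are assumptions for illustration, not the paper's exact templates.

def llm(prompt: str) -> str:
    """Placeholder for an instruction-following LLM call."""
    raise NotImplementedError("plug in your model API here")


def taste_translate(source: str, src_lang: str = "German",
                    tgt_lang: str = "English") -> str:
    # Stage 1: draft translation plus a self-assessment of its quality.
    stage1_prompt = (
        f"Translate the following {src_lang} sentence into {tgt_lang}, "
        f"then rate your own translation (good / medium / bad) and "
        f"briefly note any problems.\n\n{src_lang}: {source}"
    )
    draft_and_assessment = llm(stage1_prompt)

    # Stage 2: refine the preliminary translation according to the
    # self-assessment produced in stage 1.
    stage2_prompt = (
        f"Below is a draft {tgt_lang} translation of a {src_lang} sentence, "
        f"together with a quality self-assessment. Output an improved final "
        f"translation that fixes the noted problems.\n\n"
        f"{src_lang}: {source}\n\n"
        f"Draft and self-assessment:\n{draft_and_assessment}\n\n"
        f"Final {tgt_lang} translation:"
    )
    return llm(stage2_prompt)
```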

Yutong Wang, Jiali Zeng, Xuebo Liu, Fandong Meng, Jie Zhou, Min Zhang • 2024

Related benchmarks

Task                | Dataset               | Metric | Result | Rank
--------------------|-----------------------|--------|--------|-----
Machine Translation | WMT 2022 De-En (test) | COMET  | 84.11  | 29
Machine Translation | WMT 2022 En-De (test) | COMET  | 83.8   | 25
Machine Translation | WMT 2022 Zh-En (test) | BLEU   | 24.87  | 23
Machine Translation | WMT 2022 En-Zh (test) | COMET  | 85.9   | 18
Machine Translation | WMT 2022 (test)       | COMET  | 82.92  | 18

Other info

Code: https://github.com/YutongWang1216/ReflectionLLMMT
