Improving LLMs for Machine Translation Using Synthetic Preference Data
About
Large language models have emerged as effective machine translation systems. In this paper, we explore how a general instruction-tuned large language model can be improved for machine translation using relatively few, easily produced data resources. Using Slovene as a use case, we improve the GaMS-9B-Instruct model with Direct Preference Optimization (DPO) training on a programmatically curated and enhanced subset of a public dataset. Because DPO requires pairs of quality-ranked instances, we generated the training dataset by translating English Wikipedia articles with two LLMs, GaMS-9B-Instruct and EuroLLM-9B-Instruct, and ranked the resulting translations using heuristics coupled with automatic evaluation metrics such as COMET. The evaluation shows that our fine-tuned model outperforms both models involved in the dataset generation. Compared to the baseline models, the fine-tuned model achieved COMET score gains of around 0.04 and 0.02, respectively, on translating Wikipedia articles. It also more consistently avoids language and formatting errors.
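The preference-pair construction described above can be sketched as follows. This is a minimal illustration, not the authors' actual pipeline: `score_fn` stands in for the combination of COMET and the heuristic checks mentioned in the abstract, and the function and field names (`build_preference_pairs`, `prompt`/`chosen`/`rejected`) are assumptions, chosen to match the pair format DPO trainers commonly expect.

```python
def build_preference_pairs(sources, translations_a, translations_b, score_fn):
    """Rank two candidate translations per source and emit DPO-style pairs.

    score_fn(source, translation) -> float is a stand-in for a quality
    metric such as COMET combined with heuristic filters; higher is better.
    """
    pairs = []
    for src, a, b in zip(sources, translations_a, translations_b):
        score_a, score_b = score_fn(src, a), score_fn(src, b)
        # The higher-scoring translation becomes "chosen", the other "rejected".
        chosen, rejected = (a, b) if score_a >= score_b else (b, a)
        pairs.append({"prompt": src, "chosen": chosen, "rejected": rejected})
    return pairs

# Toy usage with a dummy scorer (longer translation wins) just to show the shape:
demo = build_preference_pairs(
    ["Hello, world."],
    ["Pozdravljen, svet."],
    ["Zdravo."],
    lambda src, hyp: len(hyp),
)
```

In the paper's setting, the two candidate lists would come from GaMS-9B-Instruct and EuroLLM-9B-Instruct, and the resulting pairs would feed directly into DPO training.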
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| English to Slovene translation | CC News | COMET score | 70.2903 | 8 |
| English to Slovene translation | Wikipedia | COMET score | 74.2583 | 8 |
| English to Slovene translation | English-to-Slovene (Overall) | Overall COMET score | 0.708 | 8 |
| English to Slovene translation | Nemotron-Chat | COMET score | 0.6795 | 8 |