
CLewR: Curriculum Learning with Restarts for Machine Translation Preference Learning

About

Large language models (LLMs) have demonstrated competitive performance in zero-shot multilingual machine translation (MT). Some follow-up works further improved MT performance via preference optimization, but they leave a key aspect largely underexplored: the order in which data samples are presented during training. We address this topic by integrating curriculum learning into various state-of-the-art preference optimization algorithms to boost MT performance. We introduce a novel curriculum learning strategy with restarts (CLewR), which repeats the easy-to-hard curriculum multiple times during training to effectively mitigate the catastrophic forgetting of easy examples. We demonstrate consistent gains across several model families (Gemma2, Qwen2.5, Llama3.1) and preference optimization techniques. We publicly release our code at https://github.com/alexandra-dragomir/CLewR.
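The core scheduling idea in the abstract can be sketched in a few lines: sort training samples from easy to hard according to some difficulty score, then restart the curriculum by repeating that easy-to-hard pass several times so easy examples keep reappearing. This is an illustrative sketch under assumed names (`clewr_schedule`, `difficulty`, `num_restarts`), not the authors' actual implementation; see the linked repository for the real code.

```python
def clewr_schedule(difficulty, num_restarts=3):
    """Return a training order of sample indices for curriculum
    learning with restarts (illustrative sketch).

    difficulty   -- list of per-sample difficulty scores (lower = easier)
    num_restarts -- how many times the easy-to-hard pass is repeated
    """
    # one easy-to-hard pass: indices sorted by ascending difficulty
    order = sorted(range(len(difficulty)), key=difficulty.__getitem__)
    # restarts: reiterate the same curriculum so easy examples
    # are revisited, mitigating their catastrophic forgetting
    return order * num_restarts
```

For example, with difficulty scores `[0.9, 0.1, 0.5]` and two restarts, the schedule visits the easiest sample first in each pass: `[1, 2, 0, 1, 2, 0]`.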

Alexandra Dragomir, Florin Brad, Radu Tudor Ionescu • 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Machine Translation | Flores-200 Romance group en->xx (test) | BLEU | 37.45 | 46 |
| Machine Translation | Flores-200 Romance group xx->en (test) | BLEU | 41.45 | 46 |
