Dual-objective Language Models: Training Efficiency Without Overfitting

About

This paper combines autoregressive and masked-diffusion training objectives without any architectural modifications, resulting in flexible language models that outperform single-objective models. Autoregressive modeling has been a popular approach, partly because of its training efficiency; however, this efficiency comes at the cost of greater sensitivity to overfitting. Masked-diffusion models, on the other hand, are less efficient to train but more resilient to overfitting. In this work, we demonstrate that dual-objective training achieves the best of both worlds. To find the optimal balance between the two objectives, we train and evaluate 50 language models under varying levels of data repetition. We show that combining both objectives is optimal under all evaluated settings and that the optimal balance is similar whether targeting autoregressive or masked-diffusion downstream performance.

David Samuel, Lucas Georges Gabriel Charpentier • 2025
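As a rough illustration of the idea described in the abstract, the sketch below mixes a standard next-token (autoregressive) loss with a masked-diffusion loss on a single, unmodified model, under a weighting factor. The model interface (`model(tokens, causal=...)`), the per-sequence masking-rate sampling, and the weight `lambda_ar` are illustrative assumptions, not the paper's reference implementation; the paper determines the balance empirically by training 50 models.

```python
# Hypothetical sketch of a dual-objective training step: one model, no
# architectural changes, trained on a weighted mix of an autoregressive
# (next-token) loss and a masked-diffusion (masked-token) loss.
# All names, the model interface, and the weighting are assumptions.
import torch
import torch.nn.functional as F


def autoregressive_loss(model, tokens):
    """Standard next-token prediction with a causal attention mask."""
    logits = model(tokens[:, :-1], causal=True)          # (B, T-1, V)
    return F.cross_entropy(
        logits.reshape(-1, logits.size(-1)),
        tokens[:, 1:].reshape(-1),
    )


def masked_diffusion_loss(model, tokens, mask_id):
    """Masked-diffusion objective: mask a randomly sampled fraction of
    tokens and predict the originals with bidirectional attention."""
    batch, seq_len = tokens.shape
    # Sample a masking rate per sequence, as in discrete-diffusion training.
    rate = torch.rand(batch, 1, device=tokens.device)
    mask = torch.rand(batch, seq_len, device=tokens.device) < rate
    corrupted = torch.where(mask, torch.full_like(tokens, mask_id), tokens)
    logits = model(corrupted, causal=False)               # (B, T, V)
    loss = F.cross_entropy(
        logits.reshape(-1, logits.size(-1)),
        tokens.reshape(-1),
        reduction="none",
    ).reshape(batch, seq_len)
    # Only masked positions contribute to the loss.
    return (loss * mask).sum() / mask.sum().clamp(min=1)


def dual_objective_loss(model, tokens, mask_id, lambda_ar=0.5):
    """Convex combination of the two objectives; lambda_ar stands in for
    the balance the paper sweeps over (0.5 is just a placeholder)."""
    l_ar = autoregressive_loss(model, tokens)
    l_mdm = masked_diffusion_loss(model, tokens, mask_id)
    return lambda_ar * l_ar + (1.0 - lambda_ar) * l_mdm
```

Sampling the masking rate per sequence is one common choice for masked-diffusion training; the point relevant to the abstract is only that both losses share one set of model weights with no architectural changes.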

Related benchmarks

Task                                | Dataset                | Metric                   | Result | Rank
Commonsense Reasoning               | HellaSwag              | Accuracy                 | 31.1   | 1891
Question Answering                  | ARC Easy               | Normalized Accuracy      | 28.6   | 389
Question Answering                  | OpenBookQA             | Normalized Accuracy      | 17.6   | 102
Multitask Knowledge                 | MMLU                   | Accuracy                 | 4.9    | 53
Commonsense Reasoning               | HSWAG                  | Normalized PLL Score     | 27.8   | 26
Question Answering                  | ARC Challenge          | Normalized Accuracy      | 5.7    | 17
Linguistic Probing                  | BLiMP                  | Performance              | 63.7   | 10
Physical Reasoning                  | PIQA                   | Normalized Performance   | 40.9   | 6
Social Reasoning                    | SIQA                   | Performance (%)          | 14.6   | 6
Aggregate Zero-shot NLU Performance | 9-Task Suite Aggregate | Avg Normalized PLL Score | 25.3   | 4

Showing 10 of 14 rows.
