
Entropy-Guided Token Dropout: Training Autoregressive Language Models with Limited Domain Data

About

As access to high-quality, domain-specific data grows increasingly scarce, multi-epoch training has become a practical strategy for adapting large language models (LLMs). However, autoregressive models often suffer from performance degradation under repeated data exposure, where overfitting leads to a marked decline in model capability. Through empirical analysis, we trace this degradation to an imbalance in learning dynamics: predictable, low-entropy tokens are learned quickly and come to dominate optimization, while the model's ability to generalize on high-entropy tokens deteriorates with continued training. To address this, we introduce EntroDrop, an entropy-guided token dropout method that functions as structured data regularization. EntroDrop selectively masks low-entropy tokens during training and employs a curriculum schedule to adjust regularization strength in alignment with training progress. Experiments across model scales from 0.6B to 8B parameters show that EntroDrop consistently outperforms standard regularization baselines and maintains robust performance throughout extended multi-epoch training. These findings underscore the importance of aligning regularization with token-level learning dynamics when training on limited data. Our approach offers a promising pathway toward more effective adaptation of LLMs in data-constrained domains.
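The core mechanism the abstract describes — scoring each token by the entropy of the model's predictive distribution, dropping the most predictable (lowest-entropy) tokens from the loss, and ramping the dropout strength on a curriculum — can be sketched as follows. This is an illustrative NumPy sketch under our own assumptions, not the paper's implementation: the threshold rule (drop a fixed lowest-entropy fraction), the linear curriculum, and the function names are all hypothetical.

```python
import numpy as np

def token_entropy(probs):
    """Shannon entropy (nats) of each token's predictive distribution."""
    p = np.clip(probs, 1e-12, 1.0)
    return -(p * np.log(p)).sum(axis=-1)

def entropy_dropout_mask(probs, drop_frac):
    """Zero out the lowest-entropy fraction of tokens from the loss.

    Returns a 0/1 mask over the sequence: 1 = keep the token's loss term,
    0 = drop it. `drop_frac` is the fraction of tokens to drop
    (an assumed thresholding rule, for illustration only).
    """
    ent = token_entropy(probs)                 # shape: (seq_len,)
    n_drop = int(round(drop_frac * len(ent)))
    keep = np.ones(len(ent), dtype=np.float32)
    if n_drop > 0:
        # indices of the n_drop most predictable (lowest-entropy) tokens
        drop_idx = np.argsort(ent)[:n_drop]
        keep[drop_idx] = 0.0
    return keep

def curriculum_drop_frac(step, total_steps, max_frac=0.3):
    """Hypothetical linear ramp: regularize harder as training progresses."""
    return max_frac * min(1.0, step / max(1, total_steps))
```

In a training loop, the mask would weight the per-token negative log-likelihoods, e.g. `loss = (keep * token_nll).sum() / keep.sum()`, so that easy, already-learned tokens stop dominating the gradient while high-entropy tokens continue to drive optimization.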

Jiapeng Wang, Yiwen Hu, Yanzipeng Gao, Haoyu Wang, Shuo Wang, Hongyu Lu, Jiaxin Mao, Wayne Xin Zhao, Junyi Li, Xiao Zhang • 2025

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Code Generation | HumanEval | -- | 850 |
| Language Understanding | MMLU | Accuracy 49 | 756 |
| Physical Commonsense Reasoning | PIQA | Accuracy 68.1 | 329 |
| Instruction Following | IFEval | Accuracy (0-100) 45.1 | 292 |
| Science Question Answering | ARC-C | Accuracy 58 | 127 |
| Code Generation | MBPP | Accuracy 28.8 | 120 |
| Commonsense Reasoning | Hella | Accuracy 44.5 | 12 |
| Code Generation | LiveCodeBench v1 (test) | Accuracy 30.5 | 9 |
