SED-SFT: Selectively Encouraging Diversity in Supervised Fine-Tuning
About
Supervised Fine-Tuning (SFT) followed by Reinforcement Learning (RL) has emerged as the standard post-training paradigm for large language models (LLMs). However, the conventional SFT process, driven by Cross-Entropy (CE) loss, often induces mode collapse, where models over-concentrate on a narrow set of response patterns. This lack of distributional diversity severely restricts the exploration efficiency required for subsequent RL. While recent studies have attempted to improve SFT by replacing the CE loss, aiming to preserve diversity or refine the update policy, they fail to adequately balance diversity against accuracy, yielding suboptimal performance after RL. To address the mode collapse problem, we propose SED-SFT, which adaptively encourages diversity based on each token's exploration space. This framework introduces a selective entropy regularization term, gated by a selective masking mechanism, into the optimization objective. Extensive experiments across eight mathematical benchmarks demonstrate that SED-SFT significantly enhances generation diversity with a negligible increase in computational overhead compared with CE loss, yielding average improvements of 2.06 and 1.20 points in subsequent RL performance over standard CE-based baselines on Llama-3.2-3B-Instruct and Qwen2.5-Math-7B-Instruct, respectively. The code is publicly available at https://github.com/pppa2019/SED-SFT
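The objective described above (CE loss combined with a selectively masked entropy bonus) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the exact masking criterion is an assumption here, taken to be a per-token entropy threshold that approximates "tokens with a large exploration space", and the names `sed_sft_loss`, `alpha`, and `entropy_threshold` are hypothetical.

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over the last axis."""
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def sed_sft_loss(logits, targets, alpha=0.1, entropy_threshold=1.0):
    """Cross-entropy with a selectively applied entropy bonus (sketch).

    logits:  (T, V) per-token vocabulary logits
    targets: (T,)   gold token ids
    alpha:   weight of the diversity (entropy) term -- assumed name
    entropy_threshold: tokens whose predictive entropy exceeds this
        are treated as high-exploration and receive the bonus
        (an assumed proxy for the paper's selective mask)
    """
    probs = softmax(logits)                                  # (T, V)
    idx = np.arange(len(targets))
    ce = -np.log(probs[idx, targets] + 1e-12)                # per-token CE
    entropy = -(probs * np.log(probs + 1e-12)).sum(axis=-1)  # per-token H
    mask = (entropy > entropy_threshold).astype(float)       # selective mask
    # Encourage diversity only on masked (high-entropy) tokens;
    # low-entropy tokens are trained with plain CE, preserving accuracy.
    return (ce - alpha * mask * entropy).mean()
```

Setting `alpha=0` recovers standard CE loss, so the regularizer adds only an elementwise entropy computation and a comparison per token, consistent with the negligible-overhead claim.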
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Mathematical Reasoning | CollegeMATH | Accuracy | 48.8 | 161 |
| Mathematical Reasoning | MATH 500 | Pass@1 | 86.6 | 153 |
| Mathematical Reasoning | OlympiadBench | Pass Rate | 50.2 | 36 |
| Mathematical Reasoning | AIME25 | Pass@8 | 18.8 | 29 |
| Mathematical Reasoning | AIME 24 | Pass Rate (Avg@8) | 20 | 20 |
| Mathematical Reasoning | GAOKAO-en | Pass Rate | 73 | 20 |
| Mathematical Reasoning | AMC23 | Pass Rate (Avg@8) | 67.2 | 20 |
| Mathematical Reasoning | GSM8K | Pass Rate | 95.2 | 20 |