
SED-SFT: Selectively Encouraging Diversity in Supervised Fine-Tuning

About

Supervised Fine-Tuning (SFT) followed by Reinforcement Learning (RL) has emerged as the standard post-training paradigm for large language models (LLMs). However, the conventional SFT process, driven by Cross-Entropy (CE) loss, often induces mode collapse, where models over-concentrate on specific response patterns. This lack of distributional diversity severely restricts the exploration efficiency required for subsequent RL. While recent studies have attempted to improve SFT by replacing the CE loss, aiming to preserve diversity or refine the update policy, they fail to adequately balance diversity and accuracy, thereby yielding suboptimal performance after RL. To address the mode collapse problem, we propose SED-SFT, which adaptively encourages diversity based on each token's exploration space. The framework introduces a selective entropy regularization term, gated by a selective masking mechanism, into the optimization objective. Extensive experiments across eight mathematical benchmarks demonstrate that SED-SFT significantly enhances generation diversity with negligible additional computational overhead compared with the CE loss, yielding average improvements of 2.06 and 1.20 points in subsequent RL performance over standard CE-based baselines on Llama-3.2-3B-Instruct and Qwen2.5-Math-7B-Instruct, respectively. The code is publicly available at https://github.com/pppa2019/SED-SFT
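The objective described above (cross-entropy plus a selectively masked entropy bonus) can be sketched as follows. This is an illustrative reconstruction, not the authors' exact formulation: the function name `sed_sft_loss`, the coefficient `entropy_coef`, and the nucleus-size heuristic used to approximate the "token exploration space" are all assumptions made for this sketch.

```python
import torch
import torch.nn.functional as F

def sed_sft_loss(logits, targets, entropy_coef=0.01, top_p=0.9):
    """Sketch of a selective-entropy SFT objective (illustrative only).

    Cross-entropy plus an entropy bonus that is applied only at token
    positions whose predictive distribution spans a large exploration
    space, approximated here by the nucleus size (smallest set of
    tokens covering top_p probability mass).
    """
    # logits: (batch, seq, vocab); targets: (batch, seq)
    ce = F.cross_entropy(
        logits.reshape(-1, logits.size(-1)),
        targets.reshape(-1),
        reduction="none",
    ).reshape(targets.shape)

    log_probs = F.log_softmax(logits, dim=-1)
    probs = log_probs.exp()
    # Token-level entropy of the model's predictive distribution.
    entropy = -(probs * log_probs).sum(dim=-1)  # (batch, seq)

    # Selective mask: encourage diversity only where more than one
    # token falls inside the top_p nucleus, i.e. where the position
    # has a non-trivial exploration space.
    sorted_probs, _ = probs.sort(dim=-1, descending=True)
    cum = sorted_probs.cumsum(dim=-1)
    nucleus_size = (cum < top_p).sum(dim=-1) + 1
    mask = (nucleus_size > 1).float()

    # Subtracting the masked entropy term rewards diversity at the
    # selected positions while leaving confident positions untouched.
    loss = ce - entropy_coef * mask * entropy
    return loss.mean()
```

In practice such a loss would replace the plain CE term in an SFT training loop; the masking threshold and `entropy_coef` would be tuned per model.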

Yijie Chen, Yijin Liu, Fandong Meng • 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Mathematical Reasoning | CollegeMATH | Accuracy | 48.8 | 161 |
| Mathematical Reasoning | MATH 500 | Pass@1 | 86.6 | 153 |
| Mathematical Reasoning | OlympiadBench | Pass Rate | 50.2 | 36 |
| Mathematical Reasoning | AIME25 | Pass@8 | 18.8 | 29 |
| Mathematical Reasoning | AIME 24 | Pass Rate (Avg@8) | 20 | 20 |
| Mathematical Reasoning | GAOKAO-en | Pass Rate | 73 | 20 |
| Mathematical Reasoning | AMC23 | Pass Rate (Avg@8) | 67.2 | 20 |
| Mathematical Reasoning | GSM8K | Pass Rate | 95.2 | 20 |
