ProFit: Leveraging High-Value Signals in SFT via Probability-Guided Token Selection
About
Supervised fine-tuning (SFT) is a fundamental post-training strategy for aligning Large Language Models (LLMs) with human intent. However, traditional SFT often ignores the one-to-many nature of language by forcing alignment with a single reference answer, causing the model to overfit to non-core expressions. Although our empirical analysis suggests that introducing multiple reference answers can mitigate this issue, the prohibitive data and computational costs necessitate a strategic shift: prioritizing the mitigation of single-reference overfitting over the costly pursuit of answer diversity. To achieve this, we reveal an intrinsic connection between token probability and semantic importance: high-probability tokens carry the core logical framework, while low-probability tokens are mostly replaceable expressions. Based on this insight, we propose ProFit, which selectively masks low-probability tokens to prevent surface-level overfitting. Extensive experiments confirm that ProFit consistently outperforms traditional SFT baselines on general reasoning and mathematical benchmarks.
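The masking idea described above can be sketched as a filtered token-level loss. The snippet below is a minimal illustration, not the paper's implementation: the probability threshold `keep_threshold`, the per-token log-probability input, and the averaging scheme are all assumptions made for clarity.

```python
import math

def profit_masked_nll(token_logps, keep_threshold=0.3):
    """Hedged sketch of probability-guided token selection.

    token_logps: log-probabilities the model assigns to each reference
    token. Tokens whose probability falls below `keep_threshold` are
    masked out of the loss, following the abstract's premise that
    low-probability tokens are mostly replaceable surface expressions
    while high-probability tokens carry the core logic.
    The threshold value here is illustrative, not from the paper.
    """
    kept = [lp for lp in token_logps if math.exp(lp) >= keep_threshold]
    if not kept:
        return 0.0, 0
    # Standard negative log-likelihood, averaged over the kept tokens only.
    loss = -sum(kept) / len(kept)
    return loss, len(kept)

# Example: the middle token (p = 0.05) is masked; loss covers two tokens.
logps = [math.log(0.9), math.log(0.05), math.log(0.6)]
loss, n_kept = profit_masked_nll(logps)
```

In a real training loop the mask would be applied to the per-token cross-entropy tensor before reduction, leaving gradients only on the high-probability (core) tokens.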
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Mathematical Reasoning | GSM8K | Accuracy | 89.62 | 358 |
| Instruction Following | IFEval | Accuracy (0-100) | 58.02 | 292 |
| Mathematical Reasoning | MATH 500 | Accuracy | 82.85 | 119 |
| Scientific Question Answering | GPQA Diamond | Accuracy | 46.53 | 64 |
| Multi-task performance evaluation | GPQA-Diamond, GSM8K, MATH-500, AIME'24, and IFEval (aggregate) | Avg Score | 58.72 | 25 |