
SFT-GRPO Data Overlap as a Post-Training Hyperparameter for Autoformalization

About

Supervised fine-tuning (SFT) followed by Group Relative Policy Optimization (GRPO) is a common post-training recipe. We conduct a controlled ablation over SFT-GRPO data overlap, evaluating Qwen3-8B (thinking disabled) post-trained for Lean 4 autoformalization under six conditions that differ solely in training recipe: a base model, SFT-only, GRPO-only, and three SFT+GRPO configurations in which 0%, 30%, or 100% of the GRPO prompts coincide with the SFT corpus. Keeping the SFT and GRPO data disjoint consistently outperforms full overlap at zero additional compute cost. Evaluating on Gaokao-Formal and PutnamBench under both compile pass@k and semantic pass@k (assessed by an LLM judge), we find that lower overlap is monotonically associated with higher compilation and semantic accuracy. At 0% overlap, GRPO yields a 10.4-percentage-point semantic gain over SFT alone on Gaokao-Formal, while at 100% overlap both metrics remain flat, rendering the GRPO stage effectively redundant. We further show that dual-metric evaluation reveals compile-semantic gaps exceeding 30 percentage points for the highest-compiling models, a disparity invisible under compile-only benchmarking. To our knowledge, this is the first controlled investigation of SFT-GRPO data overlap as a post-training hyperparameter, demonstrating how model behavior varies with the degree of data sharing between training stages.
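The three SFT+GRPO conditions differ only in what fraction of GRPO prompts is drawn from the SFT corpus. A minimal sketch of how such a prompt set could be assembled (the function and variable names here are hypothetical illustrations, not the authors' code):

```python
import random

def build_grpo_prompts(sft_prompts, held_out_prompts, overlap, n, seed=0):
    """Hypothetical sketch: assemble a GRPO prompt set of size n in which
    a fraction `overlap` of prompts comes from the SFT corpus and the
    remainder from prompts the SFT stage never saw."""
    rng = random.Random(seed)  # fixed seed for a reproducible split
    n_shared = round(n * overlap)
    shared = rng.sample(sft_prompts, n_shared)        # seen during SFT
    fresh = rng.sample(held_out_prompts, n - n_shared)  # disjoint from SFT
    return shared + fresh

# The paper's 30% overlap condition, on toy data
sft = [f"sft_{i}" for i in range(100)]
held = [f"new_{i}" for i in range(100)]
prompts = build_grpo_prompts(sft, held, overlap=0.3, n=10)
```

At `overlap=0.0` this reduces to the fully disjoint condition the paper finds strongest; at `overlap=1.0`, GRPO re-trains on exactly the SFT prompts.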

Xiaole Su, Kasey Zhang, Andy Lyu • 2026

Related benchmarks

Task | Dataset | Result | Rank
Autoformalization | Gaokao-Formal | Mean Score 74.2 | 8
Mathematical Formalization | Gaokao-Formal (495 problems) | Correctness@1 77.6 | 8
Mathematical Formalization | PutnamBench (672 problems) | C@1 47.9 | 8
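The Correctness@1 and C@1 entries are pass@1 rates. Assuming the standard unbiased pass@k estimator (the page does not state which estimator the leaderboard uses), the metric can be sketched as:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k: the probability that at least one of k samples,
    drawn without replacement from n generations of which c pass,
    succeeds. Equals 1 - C(n-c, k) / C(n, k)."""
    if n - c < k:
        return 1.0  # fewer failures than draws: some draw must pass
    return 1.0 - comb(n - c, k) / comb(n, k)

# Example: 16 samples per problem, 4 pass the check, estimate pass@1
print(pass_at_k(16, 4, 1))  # 0.25
```

For k=1 this is simply the fraction of passing samples, c/n; larger k rewards models that succeed at least occasionally across many attempts.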
