
Efficient PRM Training Data Synthesis via Formal Verification

About

Process Reward Models (PRMs) have emerged as a promising approach for improving LLM reasoning capabilities by providing process supervision over reasoning traces. However, existing approaches for constructing PRM training data remain costly and noisy, as they typically rely on human annotation or sampling-based labeling methods that require repeated LLM calls. In this work, we propose FoVer, a framework that synthesizes PRM training data from formal reasoning tasks by annotating step-level error labels using formal verification tools such as Z3 and Isabelle. By leveraging formal verification, FoVer enables efficient and accurate PRM data construction without requiring human annotation or additional LLM calls. Using FoVer, we create PRM training data from formal logic and theorem proving tasks. Experiments on 12 reasoning benchmarks show that fine-tuning on our training data improves PRMs not only on math and logic reasoning tasks, which are informal variants of the training tasks, but also on NLI and BBH benchmarks, which differ substantially from the tasks used to construct the training data. These results demonstrate the practical effectiveness of FoVer, showing that PRM training data created using formal verification improves PRMs on informal reasoning tasks written in natural language. The datasets, models, and code are provided at https://github.com/psunlpgroup/FoVer.
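The core labeling idea in the abstract, checking each reasoning step with a formal tool and marking it correct only if it is formally valid, can be illustrated with a small sketch. FoVer delegates this check to solvers such as Z3 and Isabelle; as a dependency-free stand-in, the sketch below decides propositional entailment by brute-force truth tables and emits a step-level label. All names (`entails`, `label_step`) and the lambda-based formula encoding are illustrative, not from the paper's code.

```python
from itertools import product

# Toy stand-in for the formal check FoVer delegates to Z3: a reasoning step
# is labeled "correct" iff its conclusion is entailed by its premises.
# For propositional formulas, entailment can be decided by enumerating all
# truth assignments; Z3 performs the same job (and far more) via SMT solving.
# Formulas are encoded as Python functions over an assignment dict.

def entails(variables, premises, conclusion):
    """True iff every assignment satisfying all premises satisfies the conclusion."""
    for values in product([False, True], repeat=len(variables)):
        env = dict(zip(variables, values))
        if all(p(env) for p in premises) and not conclusion(env):
            return False  # counterexample: premises hold but conclusion fails
    return True

def label_step(variables, premises, conclusion):
    """Step-level label: 'correct' if the step is formally valid, else 'error'."""
    return "correct" if entails(variables, premises, conclusion) else "error"

# Valid step (modus ponens): from p and p -> q, infer q.
print(label_step(["p", "q"],
                 [lambda e: e["p"], lambda e: (not e["p"]) or e["q"]],
                 lambda e: e["q"]))   # correct

# Invalid step (affirming the consequent): from p -> q and q, infer p.
print(label_step(["p", "q"],
                 [lambda e: (not e["p"]) or e["q"], lambda e: e["q"]],
                 lambda e: e["p"]))   # error
```

Because the verdict comes from exhaustive logical checking rather than sampling or human judgment, each label is exact and costs no additional LLM calls, which is the efficiency argument the abstract makes.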

Ryo Kamoi, Yusen Zhang, Nan Zhang, Sarkar Snigdha Sarathi Das, Ranran Haoran Zhang, Wenpeng Yin, Rui Zhang • 2025

Related benchmarks

| Task | Dataset | Result | Rank |
|---|---|---|---|
| Reasoning and Classification | BBH (Big-Bench Hard) (unseen) | BBH Temporal Sequences: 97.2 | 17 |
| Process Reward Model Assessment | PROCESSBENCH | GSM8K Accuracy: 86.6 | 15 |
| Natural Language Inference | NLI: ANLI and HANS (unseen) | ANLI Score: 30.8 | 9 |
| Multiple-choice Question Answering | MMLU Pro NoMath (unseen) | MMLU Pro (NoMath) Score: 60.4 | 9 |
| Mathematical Reasoning | Math: GSM8K, AQuA, AIME | GSM8K Accuracy: 92.4 | 4 |
| Informal Logic Reasoning | Informal Logic: FOLIO, LogicNLI | FOLIO Score: 63.5 | 4 |
