
Genius: A Generalizable and Purely Unsupervised Self-Training Framework For Advanced Reasoning

About

Advancing the reasoning abilities of LLMs has attracted wide interest. However, current post-training techniques rely heavily on supervisory signals, such as outcome supervision or auxiliary reward models, which suffer from poor scalability and high annotation costs. This motivates us to enhance LLM reasoning without external supervision. We introduce Genius, a generalizable and purely unsupervised self-training framework. Without external auxiliary signals, Genius seeks the optimal response sequence in a stepwise manner and optimizes the LLM accordingly. To explore potential steps and exploit the optimal ones, Genius introduces a stepwise foresight re-sampling strategy that samples candidate steps and estimates their value by simulating future outcomes. We further recognize that the unsupervised setting inevitably induces intrinsic noise and uncertainty. To provide robust optimization, we propose an advantage-calibrated optimization (ACO) loss function that mitigates estimation inconsistencies. Combining these techniques, Genius offers an advanced initial step toward self-improving LLM reasoning on general queries without supervision, revolutionizing reasoning scaling laws given the vast availability of general queries. The code will be released at https://github.com/xufangzhi/Genius.

Fangzhi Xu, Hang Yan, Chang Ma, Haiteng Zhao, Qiushi Sun, Kanzhi Cheng, Junxian He, Jun Liu, Zhiyong Wu • 2025

Related benchmarks

| Task | Dataset | Result | Rank |
|---|---|---|---|
| Multi-task Language Understanding | MMLU | Accuracy: 72.21 | 842 |
| Mathematical Reasoning | GSM8K (test) | Accuracy: 78.32 | 797 |
| Mathematical Reasoning | MATH (test) | Overall Accuracy: 34.64 | 433 |
| Instruction Following | AlpacaEval | Win Rate: 26.96 | 125 |
| Multi-task Language Understanding | MMLU-Pro | Accuracy: 49.19 | 99 |
| Logical Reasoning | LogiQA (test) | Accuracy: 41.63 | 92 |
| Logical Reasoning | ReClor (test) | Accuracy: 58.8 | 87 |
| Code Generation | LiveCodeBench | Average Score: 21.25 | 68 |
| Science Reasoning | GPQA (test) | Accuracy: 34.82 | 41 |
| General Instruction Following | Arena Hard | Score: 0.5 | 35 |

Showing 10 of 15 rows.
