
Code-A1: Adversarial Evolving of Code LLM and Test LLM via Reinforcement Learning

About

Reinforcement learning for code generation relies on verifiable rewards from unit test pass rates. Yet high-quality test suites are scarce, existing datasets offer limited coverage, and static rewards fail to adapt as models improve. Recent self-play methods unify code and test generation in a single model, but face an inherent dilemma: white-box access leads to self-collusion, where the model produces trivial tests for easy rewards, yet black-box restriction yields generic tests that miss implementation-specific bugs. We introduce Code-A1, an adversarial co-evolution framework that jointly optimizes a Code LLM and a Test LLM with opposing objectives. The Code LLM is rewarded for passing more tests, while the Test LLM is rewarded for exposing more defects. This architectural separation eliminates the risk of self-collusion and safely enables white-box test generation, where the Test LLM can inspect candidate code to craft targeted adversarial tests. We further introduce a Mistake Book mechanism for experience replay and a composite reward balancing test validity with adversarial difficulty. Experiments on Qwen2.5-Coder models demonstrate that Code-A1 achieves code generation performance matching or exceeding models trained on human-annotated tests, while significantly improving test generation capability.

Aozhe Wang, Yuchen Yan, Nan Zhou, Zhengxi Lu, Weiming Lu, Jun Xiao, Yueting Zhuang, Yongliang Shen • 2026

Related benchmarks

Task            | Dataset                                      | Metric                      | Result | Rank
Code Generation | HumanEval+                                   | Average Pass Rate @32       | 85.21  | 12
Code Generation | MBPP+                                        | Average Success Rate @32    | 74.5   | 12
Code Generation | BigCodeBench                                 | avg@32                      | 52.46  | 12
Code Generation | HumanEval+, MBPP+, and BigCodeBench Aggregate | Average Score              | 70.72  | 12
Test Generation | UnLeakedTestBench                            | Pass@1                      | 36.79  | 12

Other info

GitHub
