Code-A1: Adversarial Evolving of Code LLM and Test LLM via Reinforcement Learning
About
Reinforcement learning for code generation relies on verifiable rewards from unit test pass rates. Yet high-quality test suites are scarce, existing datasets offer limited coverage, and static rewards fail to adapt as models improve. Recent self-play methods unify code and test generation in a single model, but face an inherent dilemma: white-box access leads to self-collusion, where the model produces trivial tests for easy rewards, yet black-box restriction yields generic tests that miss implementation-specific bugs. We introduce Code-A1, an adversarial co-evolution framework that jointly optimizes a Code LLM and a Test LLM with opposing objectives. The Code LLM is rewarded for passing more tests, while the Test LLM is rewarded for exposing more defects. This architectural separation eliminates self-collusion and safely enables white-box test generation, in which the Test LLM can inspect candidate code to craft targeted adversarial tests. We further introduce a Mistake Book mechanism for experience replay and a composite reward balancing test validity with adversarial difficulty. Experiments on Qwen2.5-Coder models demonstrate that Code-A1 matches or exceeds the code generation performance of models trained on human-annotated tests, while significantly improving test generation capability.
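The opposing objectives can be sketched as follows. This is a minimal illustration, not the paper's actual formulation: the function name `adversarial_rewards`, the `validity` parameter, and the specific way validity is combined with difficulty are all illustrative assumptions.

```python
def adversarial_rewards(num_passed: int, num_tests: int,
                        validity: float) -> tuple[float, float]:
    """Illustrative opposing rewards for the Code LLM and Test LLM.

    num_passed / num_tests: how many of the Test LLM's tests the
    Code LLM's candidate solution passes.
    validity: fraction of the generated tests that a reference-correct
    solution would pass (guards against invalid "gotcha" tests).
    """
    pass_rate = num_passed / num_tests if num_tests else 0.0
    # Code LLM: rewarded for passing more of the adversarial tests.
    code_reward = pass_rate
    # Test LLM: rewarded for exposing defects, but only insofar as its
    # tests are valid -- a hypothetical composite of validity and
    # adversarial difficulty (the exact weighting in Code-A1 may differ).
    test_reward = validity * (1.0 - pass_rate)
    return code_reward, test_reward

code_r, test_r = adversarial_rewards(num_passed=6, num_tests=8, validity=1.0)
# A fully valid test suite that fails 2 of 8 tests rewards the Test LLM
# in proportion to the failures it exposed.
```

Scaling the Test LLM's reward by validity is one simple way to keep the adversary honest: a test suite that a correct reference solution would fail earns nothing, no matter how many candidate solutions it breaks.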
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Code Generation | HumanEval+ | Average Pass Rate (@32) | 85.21 | 12 |
| Code Generation | MBPP+ | Average Success Rate (@32) | 74.5 | 12 |
| Code Generation | BigCodeBench | avg@32 | 52.46 | 12 |
| Code Generation | HumanEval+, MBPP+, and BigCodeBench Aggregate | Average Score | 70.72 | 12 |
| Test Generation | UnLeakedTestBench | Pass@1 | 36.79 | 12 |