CorrectionPlanner: Self-Correction Planner with Reinforcement Learning in Autonomous Driving
About
Autonomous driving requires safe planning, yet most learning-based planners lack an explicit self-correction mechanism: once an unsafe action is proposed, there is no way to revise it. We therefore propose CorrectionPlanner, an autoregressive planner with self-correction that casts planning as motion-token generation within a propose-evaluate-correct loop. At each planning step, the policy proposes an action (a motion token), and a learned collision critic predicts whether executing it would cause a collision within a short horizon. If the critic predicts a collision, we retain the sequence of rejected unsafe motion tokens as a self-correction trace, generate the next motion token conditioned on that trace, and repeat until a safe motion token is found or a stopping criterion is met. This self-correction trace, consisting of all unsafe motion tokens, records the planner's correction process in motion-token space, analogous to a reasoning trace in language models. We train the planner with imitation learning followed by model-based reinforcement learning, using rollouts from a pretrained world model that realistically captures other agents' reactive behaviors. In closed-loop evaluation, CorrectionPlanner reduces collision rate by over 20% on Waymax and achieves state-of-the-art planning scores on nuPlan.
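The propose-evaluate-correct loop described above can be sketched as follows. This is a minimal toy illustration, not the paper's implementation: the token vocabulary, `policy`, and `collision_critic` below are hypothetical stand-ins for the learned autoregressive policy and collision critic.

```python
# Toy sketch of CorrectionPlanner's propose-evaluate-correct loop.
# All names and models here are illustrative assumptions, not the paper's code.
import random

VOCAB = list(range(8))   # hypothetical motion-token vocabulary
UNSAFE = {0, 1}          # tokens the toy critic flags as colliding

def policy(history, trace):
    """Propose the next motion token, conditioned on the driving history
    and the self-correction trace of previously rejected tokens."""
    candidates = [t for t in VOCAB if t not in trace]
    return random.choice(candidates)

def collision_critic(token):
    """Predict whether executing this token collides within a short horizon."""
    return token in UNSAFE

def plan_step(history, max_attempts=5):
    """Propose, evaluate, and correct: keep unsafe proposals as a
    self-correction trace and re-propose conditioned on it."""
    trace = []                      # self-correction trace of unsafe tokens
    token = None
    for _ in range(max_attempts):
        token = policy(history, trace)
        if not collision_critic(token):
            return token, trace     # critic accepts: safe token found
        trace.append(token)         # retain unsafe token, condition on it
    return token, trace             # stopping criterion: attempt budget spent

random.seed(0)
token, trace = plan_step(history=[])
print(token, trace)
```

Because each re-proposal is conditioned on the growing trace of rejected tokens, the planner avoids repeating an unsafe proposal; at deployment the accepted token is decoded back into a motion for the ego vehicle.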
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Planning | nuPlan Hard (test) | Planner Score | 77.29 | 14 |
| Planning | nuPlan (test-random) | Planner Score | 91.14 | 14 |
| Autonomous Driving Planning | Waymax Reactive | Collisions | 1.68 | 5 |
| Autonomous Driving Planning | Waymax Non-Reactive | Collision Rate | 2.43 | 5 |
| Autonomous Driving Planning | nuPlan NR 14 (val) | Planner Score | 91.22 | 2 |