
DriveDPO: Policy Learning via Safety DPO For End-to-End Autonomous Driving

About

End-to-end autonomous driving has progressed substantially by predicting future trajectories directly from raw perception inputs, bypassing traditional modular pipelines. However, mainstream methods trained via imitation learning suffer from a critical safety limitation: they fail to distinguish trajectories that appear human-like from those that are actually unsafe. Some recent approaches attempt to address this by regressing multiple rule-driven scores, but they decouple supervision from policy optimization, resulting in suboptimal performance. To tackle these challenges, we propose DriveDPO, a Safety Direct Preference Optimization Policy Learning framework. First, we distill a unified policy distribution from human imitation similarity and rule-based safety scores for direct policy optimization. Second, we introduce an iterative Direct Preference Optimization stage formulated as trajectory-level preference alignment. Extensive experiments on the NAVSIM benchmark demonstrate that DriveDPO achieves a new state-of-the-art PDMS of 90.0. Qualitative results across diverse challenging scenarios further highlight DriveDPO's ability to produce safer and more reliable driving behaviors.
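The trajectory-level preference alignment described above follows the standard Direct Preference Optimization objective: given a preferred (safer) and a dispreferred trajectory, the policy is pushed to increase its likelihood margin on the preferred one relative to a frozen reference policy. The sketch below is a minimal, generic DPO loss over trajectory log-probabilities; the function and argument names are illustrative assumptions, not the paper's implementation.

```python
import math

def trajectory_dpo_loss(logp_safe: float, logp_unsafe: float,
                        ref_logp_safe: float, ref_logp_unsafe: float,
                        beta: float = 0.1) -> float:
    """Generic trajectory-level DPO loss (illustrative sketch).

    logp_*      : log-probability of each trajectory under the current policy
    ref_logp_*  : log-probability under a frozen reference policy
    beta        : temperature controlling deviation from the reference

    Loss = -log sigmoid(beta * [(logp_safe - ref_logp_safe)
                                - (logp_unsafe - ref_logp_unsafe)])
    """
    margin = beta * ((logp_safe - ref_logp_safe)
                     - (logp_unsafe - ref_logp_unsafe))
    # -log(sigmoid(margin)); small when the policy prefers the safe trajectory
    return -math.log(1.0 / (1.0 + math.exp(-margin)))
```

When the policy assigns no extra preference to the safer trajectory (zero margin), the loss is log 2; as the policy's relative likelihood of the safe trajectory grows, the loss decreases toward zero.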

Shuyao Shang, Yuntao Chen, Yuqi Wang, Yingyan Li, Zhaoxiang Zhang• 2025

Related benchmarks

Task                            Dataset                   Result                     Rank
Closed-loop Planning            Bench2Drive               Driving Score: 62.02       137
Autonomous Driving Planning     NAVSIM (navtest)          NC: 98.5                   68
Autonomous Driving Planning     NAVSIM v1 (test)          NC: 98.5                   59
Closed-loop Autonomous Driving  Bench2Drive               Driving Score (DS): 62.02  49
End-to-end Planning             NAVSIM v1                 NC: 0.985                  32
End-to-end Autonomous Driving   Bench2Drive               Driving Score: 62.02       31
Autonomous Driving              Bench2Drive base (train)  Driving Score: 62.02       19
