
High-speed Autonomous Racing using Trajectory-aided Deep Reinforcement Learning

About

The classical method of autonomous racing uses real-time localisation to follow a precalculated optimal trajectory. In contrast, end-to-end deep reinforcement learning (DRL) can train agents to race using only raw LiDAR scans. While classical methods prioritise optimisation for high-performance racing, DRL approaches have focused on low-performance contexts with little consideration of the speed profile. This work addresses the problem of using end-to-end DRL agents for high-speed autonomous racing. We present trajectory-aided learning (TAL), which trains DRL agents for high-performance racing by incorporating the optimal trajectory (racing line) into the learning formulation. Our method is evaluated using the TD3 algorithm on four maps in the open-source F1Tenth simulator. The results demonstrate that our method achieves a significantly higher lap completion rate at high speeds compared to the baseline. This is because TAL trains the agent to select a feasible speed profile, slowing down in the corners while roughly tracking the optimal trajectory.
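The core idea of incorporating the racing line into the learning formulation could be realised as a reward that compares the agent's current speed and heading to the optimal trajectory's reference values at the nearest point. The sketch below is only an illustrative assumption, not the paper's exact reward; the function name, weights, and normalisation constants are hypothetical.

```python
import numpy as np

def trajectory_aided_reward(agent_speed, agent_heading,
                            ref_speed, ref_heading,
                            max_speed=8.0):
    """Hypothetical trajectory-aided reward sketch.

    Rewards the agent for matching the racing line's speed profile
    and heading at the nearest reference point, so it learns to slow
    down in corners where the reference speed is low. The exact
    formulation used by TAL is given in the paper, not here.
    """
    # Normalised speed error: penalises deviating from the racing
    # line's reference speed at this point on the track.
    speed_err = abs(agent_speed - ref_speed) / max_speed
    # Normalised heading error: penalises pointing away from the
    # racing line's direction (wrapped to [0, pi]).
    heading_diff = abs(agent_heading - ref_heading) % (2 * np.pi)
    heading_err = min(heading_diff, 2 * np.pi - heading_diff) / np.pi
    # Maximum reward of 1.0 when the agent matches the reference.
    return 1.0 - speed_err - heading_err
```

A shaped reward like this gives the agent a dense learning signal about the feasible speed profile at every timestep, rather than only a sparse signal from crashes or lap completion.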

Benjamin David Evans, Herman Arnold Engelbrecht, Hendrik Willem Jordaan • 2023

Related benchmarks

Task: Autonomous Racing
Dataset: Berlin Tempelhof Airport Street Circuit (test)
Result: Average Lap Speed 38.67 m/s
Rank: 4
