
CAR-Net: Clairvoyant Attentive Recurrent Network

About

We present an interpretable framework for path prediction that leverages dependencies between agents' behaviors and their spatial navigation environment. We exploit two sources of information: the past motion trajectory of the agent of interest and a wide top-view image of the navigation scene. We propose a Clairvoyant Attentive Recurrent Network (CAR-Net) that learns where to look in a large image of the scene when solving the path prediction task. Our method can attend to any area, or combination of areas, within the raw image (e.g., road intersections) when predicting the trajectory of the agent. This allows us to visualize fine-grained semantic elements of navigation scenes that influence the prediction of trajectories. To study the impact of space on agents' trajectories, we build a new dataset made of top-view images of hundreds of scenes (Formula One racing tracks) where agents' behaviors are heavily influenced by known areas in the images (e.g., upcoming turns). CAR-Net successfully attends to these salient regions. Additionally, CAR-Net reaches state-of-the-art accuracy on the standard trajectory forecasting benchmark, Stanford Drone Dataset (SDD). Finally, we show CAR-Net's ability to generalize to unseen scenes.
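The core idea above, a recurrent decoder that computes soft attention over scene features at every prediction step, can be sketched roughly as follows. This is a minimal illustration in PyTorch, not the authors' implementation: the class name, layer sizes, and the flattened-grid representation of the scene features are all assumptions made for the sake of a self-contained example.

```python
import torch
import torch.nn as nn

class SoftAttentionPredictor(nn.Module):
    """Illustrative sketch (hypothetical, not CAR-Net's actual code): an LSTM
    encodes the past trajectory; a decoder then, at each future step, computes
    soft attention weights over a grid of scene features (e.g., CNN activations
    of the top-view image) and predicts the next displacement from the attended
    context and the current position."""

    def __init__(self, feat_dim=64, hidden_dim=32):
        super().__init__()
        self.encoder = nn.LSTM(2, hidden_dim, batch_first=True)   # past (x, y)
        self.attn_score = nn.Linear(feat_dim + hidden_dim, 1)     # per-cell score
        self.decoder = nn.LSTMCell(feat_dim + 2, hidden_dim)
        self.out = nn.Linear(hidden_dim, 2)                       # next (dx, dy)

    def forward(self, past, scene_feats, horizon=12):
        # past: (B, T_obs, 2); scene_feats: (B, N, feat_dim) flattened grid
        _, (h, c) = self.encoder(past)
        h, c = h[0], c[0]
        pos = past[:, -1]                       # last observed position
        preds = []
        for _ in range(horizon):
            # score each grid cell against the current decoder state
            q = h.unsqueeze(1).expand(-1, scene_feats.size(1), -1)
            alpha = torch.softmax(
                self.attn_score(torch.cat([scene_feats, q], -1)).squeeze(-1), -1)
            ctx = (alpha.unsqueeze(-1) * scene_feats).sum(1)      # (B, feat_dim)
            h, c = self.decoder(torch.cat([ctx, pos], -1), (h, c))
            pos = pos + self.out(h)             # integrate predicted displacement
            preds.append(pos)
        return torch.stack(preds, 1)            # (B, horizon, 2)

model = SoftAttentionPredictor()
future = model(torch.randn(4, 8, 2), torch.randn(4, 49, 64))
print(future.shape)  # torch.Size([4, 12, 2])
```

Because the attention weights `alpha` are an explicit distribution over image regions, they can be visualized directly, which is what gives this family of models its interpretability.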

Amir Sadeghian, Ferdinand Legros, Maxime Voisin, Ricky Vesel, Alexandre Alahi, Silvio Savarese • 2017

Related benchmarks

| Task | Dataset | Result | Rank |
|---|---|---|---|
| Future Trajectory Prediction | SDD (Stanford Drone Dataset) (test) | ADE: 25.72 | 51 |
| Trajectory Forecasting | Stanford Drone Dataset | Average Displacement Error (ADE): 25.72 | 35 |
| Trajectory Prediction | Stanford Drone (test) | -- | 19 |
| Pedestrian Trajectory Prediction | Stanford Drone Dataset | ADE: 25.7 | 17 |
| Trajectory Prediction | Stanford Drone Dataset (SDD) v1.0 (test) | ADE: 25.72 | 11 |
| Vehicle Trajectory Prediction | nuScenes | Error @ 1s: 0.38 | 8 |
| Vehicle Trajectory Prediction | nuScenes v1.0 (test) | FDE (1s): 0.38 | 7 |
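The metrics in the table are the two standard trajectory-forecasting errors: Average Displacement Error (ADE), the mean Euclidean distance between predicted and ground-truth positions over all predicted timesteps, and Final Displacement Error (FDE), the distance at the last predicted timestep. A minimal sketch of both (the function names and the toy trajectories are illustrative, not from the paper):

```python
import numpy as np

def ade(pred, gt):
    """Average Displacement Error: mean L2 distance over all timesteps."""
    return float(np.mean(np.linalg.norm(pred - gt, axis=-1)))

def fde(pred, gt):
    """Final Displacement Error: L2 distance at the final timestep."""
    return float(np.linalg.norm(pred[-1] - gt[-1]))

# Toy example: predicted vs. ground-truth 2-D trajectories of shape (T, 2)
pred = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
gt   = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]])
print(ade(pred, gt))  # (0 + 1 + 2) / 3 = 1.0
print(fde(pred, gt))  # 2.0
```

Note that "Error @ 1s" and "FDE (1s)" on nuScenes evaluate displacement at a fixed one-second horizon, so the same 0.38 figure appears under both labels.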
