
End-to-end Learning of Driving Models from Large-scale Video Datasets

About

Robust perception-action models should be learned from training data with diverse visual appearances and realistic behaviors, yet current approaches to deep visuomotor policy learning have generally been limited to in-situ models learned from a single vehicle or a simulation environment. We advocate learning a generic vehicle motion model from large-scale crowd-sourced video data, and develop an end-to-end trainable architecture for learning to predict a distribution over future vehicle egomotion from instantaneous monocular camera observations and previous vehicle state. Our model incorporates a novel FCN-LSTM architecture, which can be learned from large-scale crowd-sourced vehicle action data, and leverages available scene segmentation side tasks to improve performance under a privileged learning paradigm.
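The core idea above — per-frame convolutional features fed through a recurrent state to predict a distribution over future egomotion — can be sketched in a few lines. This is a minimal illustrative mock-up, not the paper's actual FCN-LSTM: the "FCN" here is replaced by simple average pooling, the LSTM cell is a hand-rolled single layer, and all names (`fcn_features`, `predict_motion`, the action count) are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max(-1, keepdims=True))
    return e / e.sum(-1, keepdims=True)

def fcn_features(frame, grid=4):
    """Stand-in for the FCN encoder: average-pool the frame to a grid x grid map."""
    h, w = frame.shape[:2]
    crop = frame[:h // grid * grid, :w // grid * grid]
    pooled = crop.reshape(grid, h // grid, grid, w // grid, -1).mean(axis=(1, 3))
    return pooled.ravel()  # flattened spatial feature vector

class LSTMCell:
    """Single hand-rolled LSTM cell carrying the previous vehicle/scene state."""
    def __init__(self, n_in, n_hidden):
        s = 1.0 / np.sqrt(n_in + n_hidden)
        self.W = rng.uniform(-s, s, (4 * n_hidden, n_in + n_hidden))
        self.b = np.zeros(4 * n_hidden)
        self.nh = n_hidden

    def step(self, x, h, c):
        z = self.W @ np.concatenate([x, h]) + self.b
        i, f, o, g = np.split(z, 4)          # input, forget, output, candidate gates
        sig = lambda v: 1 / (1 + np.exp(-v))
        c = sig(f) * c + sig(i) * np.tanh(g)
        h = sig(o) * np.tanh(c)
        return h, c

def predict_motion(frames, cell, W_out):
    """Run per-frame features through the LSTM; return a distribution over actions."""
    h = c = np.zeros(cell.nh)
    for frame in frames:
        h, c = cell.step(fcn_features(frame), h, c)
    return softmax(W_out @ h)

n_actions, hidden = 4, 32          # e.g. straight / stop / turn-left / turn-right
feat_dim = 4 * 4 * 3               # 4x4 pooled grid, 3 channels
cell = LSTMCell(feat_dim, hidden)
W_out = rng.normal(0, 0.1, (n_actions, hidden))
clip = rng.random((5, 64, 64, 3))  # five monocular RGB frames
probs = predict_motion(clip, cell, W_out)
print(probs.shape)                 # (4,) — a valid probability distribution
```

Training would replace the random weights with gradients from a cross-entropy (or continuous-motion) loss, and the segmentation side task described above would share the convolutional encoder.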

Huazhe Xu, Yang Gao, Fisher Yu, Trevor Darrell • 2016

Related benchmarks

| Task                      | Dataset             | Metric   | Result | Rank |
|---------------------------|---------------------|----------|--------|------|
| Object Tracking           | OTB-50 2015         | AUC      | 53     | 15   |
| Object Tracking           | OTB 2013 (full)     | AUC      | 61.1   | 11   |
| Object Tracking           | OTB-100 2015 (full) | AUC      | 56.8   | 10   |
| Steering angle prediction | BDD100K             | Accuracy | 82.03  | 5    |
