
Orthographic Feature Transform for Monocular 3D Object Detection

About

3D object detection from monocular images has proven to be an enormously challenging task, with the performance of leading systems not yet achieving even 10% of that of LiDAR-based counterparts. One explanation for this performance gap is that existing systems are entirely at the mercy of the perspective image-based representation, in which the appearance and scale of objects vary drastically with depth and meaningful distances are difficult to infer. In this work we argue that the ability to reason about the world in 3D is an essential element of the 3D object detection task. To this end, we introduce the orthographic feature transform, which enables us to escape the image domain by mapping image-based features into an orthographic 3D space. This allows us to reason holistically about the spatial configuration of the scene in a domain where scale is consistent and distances between objects are meaningful. We apply this transformation as part of an end-to-end deep learning architecture and achieve state-of-the-art performance on the KITTI 3D object benchmark. We will release full source code and pretrained models upon acceptance of this manuscript for publication.

Thomas Roddick, Alex Kendall, Roberto Cipolla • 2018
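The core idea of the abstract can be sketched in a few lines: each cell of an orthographic ground-plane grid is projected into the image with the camera intrinsics, and the image feature at that location is copied into the cell. This is a simplified, hypothetical sketch (the paper pools features over each cell's full image-plane footprint; point sampling at a fixed height `y` is an assumption here), not the authors' implementation.

```python
import numpy as np

def orthographic_feature_transform(feat, K, x_range, z_range, y, cell=1.0):
    """Map image-plane features onto an orthographic ground-plane grid.

    feat: (C, H, W) image feature map
    K:    (3, 3) camera intrinsic matrix
    x_range, z_range: (min, max) extent of the grid in camera coordinates
    y:    assumed height of the sampling plane below the camera
    cell: grid resolution in metres
    """
    C, H, W = feat.shape
    xs = np.arange(x_range[0], x_range[1], cell)
    zs = np.arange(z_range[0], z_range[1], cell)
    bev = np.zeros((C, len(zs), len(xs)), dtype=feat.dtype)
    for i, z in enumerate(zs):
        for j, x in enumerate(xs):
            # Project the 3D cell centre (x, y, z) into the image plane.
            p = K @ np.array([x, y, z])
            u, v = p[0] / p[2], p[1] / p[2]
            ui, vi = int(round(u)), int(round(v))
            # Copy the feature at the projected pixel, if it lies in-frame.
            if 0 <= ui < W and 0 <= vi < H:
                bev[:, i, j] = feat[:, vi, ui]
    return bev
```

The resulting `bev` tensor lives in a space where scale is depth-independent, so a downstream network can reason about object extents and inter-object distances directly, which is the property the abstract argues for.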

Related benchmarks

Task                             | Dataset                  | Result                | Rank
3D Object Detection              | nuScenes (val)           | -                     | 941
3D Object Detection              | nuScenes (test)          | mAP: 12.6             | 829
3D Object Detection              | KITTI (test)             | AP3D (Easy): 2.5      | 83
3D Object Detection              | KITTI Pedestrian (test)  | AP3D (Easy): 63       | 63
3D Object Detection              | KITTI (test)             | -                     | 60
3D Object Detection              | KITTI (val)              | -                     | 57
Bird's-eye-view object detection | KITTI (test)             | APBEV@0.7 (Easy): 9.5 | 53
3D Object Detection              | KITTI Cyclist (test)     | AP3D (Easy): 36       | 49
3D Object Detection              | KITTI Cars (val)         | AP3D (Easy): 4.07     | 48
Bird's-eye-view detection        | KITTI (test)             | APBEV (Easy): 0.095   | 41
(Top 10 of 30 benchmark rows shown.)
