
Single-Eye View: Monocular Real-time Perception Package for Autonomous Driving

About

Amid the rapid advancement of camera-based autonomous driving technology, effectiveness is often prioritized while computational efficiency receives limited attention. To address this issue, this paper introduces LRHPerception, a real-time monocular perception package for autonomous driving that interprets the surrounding environment from single-view camera video. The proposed system combines the computational efficiency of end-to-end learning with the rich representational detail of local mapping methodologies. It integrates object tracking and trajectory prediction, road segmentation, and depth estimation into a unified framework, processing monocular image data into a five-channel tensor consisting of RGB, road segmentation, and pixel-level depth estimation, augmented with object detections and predicted trajectories. Experimental results demonstrate strong performance: the system achieves real-time processing at 29 FPS on a single GPU, a 555% speedup over the fastest mapping-based approach.

Haixi Zhang, Aiyinsi Zuo, Zirui Li, Chunshu Wu, Tong Geng, Zhiyao Duan• 2026
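The abstract describes the system's output as a five-channel tensor (3 RGB channels, 1 road-segmentation mask, 1 pixel-level depth map). A minimal sketch of how such a tensor might be assembled is shown below; the function name, shapes, and dtypes are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def build_perception_tensor(rgb, road_mask, depth):
    """Stack RGB (H, W, 3), a road mask (H, W), and a depth map (H, W)
    into a single five-channel (H, W, 5) tensor.

    Hypothetical helper illustrating the representation described in
    the abstract; the actual package may use a different layout.
    """
    assert rgb.shape[:2] == road_mask.shape == depth.shape
    return np.concatenate(
        [
            rgb.astype(np.float32),               # channels 0-2: RGB
            road_mask[..., None].astype(np.float32),  # channel 3: road mask
            depth[..., None].astype(np.float32),      # channel 4: depth (m)
        ],
        axis=-1,
    )

# Toy example on a 2x2 frame.
h, w = 2, 2
rgb = np.zeros((h, w, 3))
mask = np.ones((h, w))        # every pixel labelled "road"
depth = np.full((h, w), 5.0)  # constant 5 m depth
tensor = build_perception_tensor(rgb, mask, depth)
print(tensor.shape)  # (2, 2, 5)
```

Keeping segmentation and depth as extra channels alongside RGB lets downstream modules (e.g. the trajectory predictor) consume one dense tensor per frame instead of separate maps.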

Related benchmarks

Task                    Dataset        Metric        Result   Rank
Depth Estimation        KITTI          -             -        106
Multi-Object Tracking   MOT17          IDF1          81.2     104
Semantic Segmentation   Cityscapes     mIoU          88.9     82
Local Mapping           KITTI (test)   FPS           28.8     6
Trajectory Prediction   JAAD           MSE (0.5s)    43       5
Trajectory Prediction   PIE            MSE (0.5s)    19       5
