# Single-Eye View: Monocular Real-time Perception Package for Autonomous Driving

## About
Amidst the rapid advancement of camera-based autonomous driving technology, effectiveness is often prioritized while computational efficiency receives limited attention. To address this issue, this paper introduces LRHPerception, a real-time monocular perception package for autonomous driving that interprets the surrounding environment from single-view camera video. The proposed system combines the computational efficiency of end-to-end learning with the rich representational detail of local mapping methodologies. Integrating object tracking and prediction, road segmentation, and depth estimation into a unified framework, LRHPerception processes monocular image data into a five-channel tensor consisting of RGB, road segmentation, and pixel-level depth estimation, augmented with object detection and trajectory prediction. Experimental results demonstrate strong performance: real-time processing at 29 FPS on a single GPU, a 555% speedup over the fastest mapping-based approach.
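The five-channel representation described above can be sketched as a simple channel stack. The function name, array shapes, and normalization below are illustrative assumptions, not the package's actual API:

```python
import numpy as np

def build_five_channel_tensor(rgb, road_mask, depth):
    """Stack RGB, road segmentation, and depth into a single (H, W, 5) tensor.

    rgb:       (H, W, 3) uint8 image, scaled here to [0, 1]
    road_mask: (H, W) binary road-segmentation mask
    depth:     (H, W) per-pixel depth estimate
    (Shapes and scaling are hypothetical; LRHPerception's internals may differ.)
    """
    rgb = rgb.astype(np.float32) / 255.0
    road = road_mask.astype(np.float32)[..., None]   # add channel axis
    d = depth.astype(np.float32)[..., None]          # add channel axis
    return np.concatenate([rgb, road, d], axis=-1)   # (H, W, 5)

# Toy example with a 4x6 frame
h, w = 4, 6
tensor = build_five_channel_tensor(
    np.zeros((h, w, 3), dtype=np.uint8),  # dummy RGB frame
    np.ones((h, w), dtype=np.uint8),      # dummy road mask
    np.full((h, w), 10.0),                # dummy depth map
)
print(tensor.shape)  # (4, 6, 5)
```

Object detections and predicted trajectories are then overlaid on top of this dense tensor, as the abstract describes, rather than occupying additional channels.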
## Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Depth Estimation | KITTI | -- | 106 |
| Multi-Object Tracking | MOT17 | IDF1: 81.2 | 104 |
| Semantic Segmentation | Cityscapes | mIoU: 88.9 | 82 |
| Local Mapping | KITTI (test) | FPS: 28.8 | 6 |
| Trajectory Prediction | JAAD | MSE (0.5s): 43 | 5 |
| Trajectory Prediction | PIE | MSE (0.5s): 19 | 5 |