RBF Weighted Hyper-Involution for RGB-D Object Detection
About
A vast majority of augmented reality devices come equipped with depth and color cameras. Despite their advantages, extracting photometric and depth features simultaneously in real time remains challenging because of the inherent differences between depth and color images. Furthermore, standard convolution operations are insufficient for extracting information directly from raw depth images, leading to inefficient intermediate representations. To address these issues, we propose a real-time two-stream RGB-D object detection model. Our model introduces two new components: a dynamic radial basis function (RBF) weighted depth-based hyper-involution, whose kernels adjust dynamically to spatial interaction patterns in the raw depth map, and an up-sampling based trainable fusion layer that combines extracted depth and color image features without obstructing information transfer between them. Experimental results show that the proposed approach achieves the strongest performance among existing RGB-D 2D object detection methods on NYU Depth V2, while remaining competitive on the SUN RGB-D benchmark.
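To make the core idea concrete, the following is a minimal NumPy sketch of a depth-conditioned, RBF-weighted per-pixel kernel. It is an illustrative simplification, not the paper's actual hyper-involution: here the kernel at each location simply down-weights neighbors whose raw depth differs from the center pixel, via a Gaussian RBF. The function name, the fixed Gaussian form, and the `sigma` parameter are all assumptions for illustration.

```python
import numpy as np

def rbf_weighted_involution(feat, depth, k=3, sigma=1.0):
    """Illustrative sketch (not the paper's exact operator).

    feat:  (C, H, W) feature map
    depth: (H, W) raw depth map
    At each pixel, neighbors inside a k x k window are weighted by
    exp(-(d_center - d_neighbor)^2 / (2 * sigma^2)), so neighbors at a
    similar depth contribute more. Like involution, the kernel is shared
    across channels and generated per spatial location.
    """
    C, H, W = feat.shape
    pad = k // 2
    fpad = np.pad(feat, ((0, 0), (pad, pad), (pad, pad)), mode="edge")
    dpad = np.pad(depth, pad, mode="edge")
    out = np.zeros_like(feat, dtype=float)
    for i in range(H):
        for j in range(W):
            dpatch = dpad[i:i + k, j:j + k]
            # RBF weights from depth differences, normalized to sum to 1
            w = np.exp(-((depth[i, j] - dpatch) ** 2) / (2 * sigma ** 2))
            w /= w.sum()
            out[:, i, j] = (fpad[:, i:i + k, j:j + k] * w).sum(axis=(1, 2))
    return out
```

Because the weights are normalized, a constant feature map passes through unchanged, and on a constant depth map the operator reduces to a plain box (mean) filter; the real model additionally learns how depth interactions shape the kernel rather than fixing a Gaussian.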
Related benchmarks
| Task | Dataset | Metric | Value | Rank |
|---|---|---|---|---|
| Object Detection | SUN RGB-D (test) | mAP | 53.3 | 25 |
| Object Detection | NYU Depth V2 (test) | mAP | 55.4 | 17 |
| Object Detection | SUN RGB-D | GFLOPS | 26.72 | 12 |
| Object Detection | Automatically synthesized RGB-D dataset | mAP | 58.9 | 2 |
| Object Detection | Outdoor RGB-D dataset | mAP | 80.2 | 2 |