
EagerMOT: 3D Multi-Object Tracking via Sensor Fusion

About

Multi-object tracking (MOT) enables mobile robots to perform well-informed motion planning and navigation by localizing surrounding objects in 3D space and time. Existing methods rely on depth sensors (e.g., LiDAR) to detect and track targets in 3D space, but only up to a limited sensing range due to the sparsity of the signal. On the other hand, cameras provide a dense and rich visual signal that helps to localize even distant objects, but only in the image domain. In this paper, we propose EagerMOT, a simple tracking formulation that eagerly integrates all available object observations from both sensor modalities to obtain a well-informed interpretation of the scene dynamics. Using images, we can identify distant incoming objects, while depth estimates allow for precise trajectory localization as soon as objects are within the depth-sensing range. With EagerMOT, we achieve state-of-the-art results across several MOT tasks on the KITTI and NuScenes datasets. Our code is available at https://github.com/aleksandrkim61/EagerMOT.
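The core idea of fusing both modalities can be illustrated with a minimal sketch: associate tracks with 3D detections first, then let the still-unmatched tracks fall back to 2D detections so that distant objects visible only in the image are not lost. This is an illustrative simplification, not the paper's implementation; all function names are hypothetical, and 1-D intervals stand in for 3D/2D bounding boxes.

```python
def iou_1d(a, b):
    """IoU of two 1-D intervals (lo, hi) -- a stand-in for 3D-box / 2D-box IoU."""
    inter = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
    union = (a[1] - a[0]) + (b[1] - b[0]) - inter
    return inter / union if union > 0 else 0.0

def greedy_match(tracks, detections, min_iou):
    """Greedily pair tracks with detections in order of descending IoU."""
    pairs = sorted(
        ((iou_1d(t, d), ti, di)
         for ti, t in enumerate(tracks)
         for di, d in enumerate(detections)),
        reverse=True,
    )
    matched_t, matched_d, matches = set(), set(), []
    for score, ti, di in pairs:
        if score < min_iou:
            break  # remaining candidates are all below the gate
        if ti not in matched_t and di not in matched_d:
            matched_t.add(ti)
            matched_d.add(di)
            matches.append((ti, di))
    return matches, matched_t, matched_d

def fuse_two_stage(tracks_3d, dets_3d, tracks_2d, dets_2d, min_iou=0.3):
    """Stage 1: match tracks to 3D detections (precise localization).
    Stage 2: match the leftover tracks to 2D detections (longer range)."""
    matches_3d, used_tracks, _ = greedy_match(tracks_3d, dets_3d, min_iou)
    remaining = [ti for ti in range(len(tracks_2d)) if ti not in used_tracks]
    # Only tracks without a 3D match compete for the 2D detections.
    matches_2d, _, _ = greedy_match(
        [tracks_2d[ti] for ti in remaining], dets_2d, min_iou
    )
    matches_2d = [(remaining[ti], di) for ti, di in matches_2d]
    return matches_3d, matches_2d
```

For example, a nearby track that overlaps a 3D detection is resolved in stage 1, while a distant track with no LiDAR return can still be continued by an image-domain match in stage 2.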

Aleksandr Kim, Aljoša Ošep, Laura Leal-Taixé • 2021

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| 3D Multi-Object Tracking | nuScenes (test) | ID Switches | 1,160 | 130 |
| 3D Multi-Object Tracking | nuScenes (val) | AMOTA | 71.2 | 115 |
| 2D Multi-Object Tracking | KITTI car (test) | MOTA | 87.82 | 65 |
| Multi-target tracking | KITTI Pedestrian (test) | MOTA | 49.82 | 33 |
| 3D Multi-Object Tracking | KITTI car (val) | sAMOTA | 96.93 | 26 |
| Multi-Object Tracking | KITTI leaderboard (test) | HOTA | 74.39 | 25 |
| 3D Multi-Object Tracking | nuScenes | AMOTA | 0.68 | 4 |
| Multi-Object Tracking and Segmentation | KITTI MOTS car (test) | HOTA | 74.66 | 4 |
| Multi-Object Tracking and Segmentation | KITTI MOTS pedestrian (test) | HOTA | 57.65 | 4 |
| 3D Multi-Object Tracking | KITTI Pedestrian (val) | sAMOTA | 92.92 | 3 |
