
Referring Multi-Object Tracking

About

Existing referring understanding tasks tend to involve the detection of a single text-referred object. In this paper, we propose a new and general referring understanding task, termed referring multi-object tracking (RMOT). Its core idea is to employ a language expression as a semantic cue to guide multi-object tracking predictions. To the best of our knowledge, this is the first work to predict an arbitrary number of referred objects in videos. To push RMOT forward, we construct a benchmark with scalable expressions based on KITTI, named Refer-KITTI. Specifically, it provides 18 videos with 818 expressions, and each expression in a video is annotated with an average of 10.7 objects. Further, we develop a transformer-based architecture, TransRMOT, that tackles the new task in an online manner, achieving strong detection performance and outperforming its counterparts. The dataset and code will be available at https://github.com/wudongming97/RMOT.
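The distinctive point of RMOT is that a single expression may refer to zero, one, or many tracked objects. As a minimal sketch of that idea (not the authors' TransRMOT code; all names and embeddings here are hypothetical), one can gate a set of tracked-object embeddings against a language-expression embedding and return every matching track:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def referred_tracks(expr_emb, track_embs, threshold=0.5):
    """Return ids of tracks whose embedding matches the expression.

    Unlike single-object referring tasks, any number of tracks
    (zero, one, or many) may be returned.
    """
    return [tid for tid, emb in track_embs.items()
            if cosine(expr_emb, emb) >= threshold]

# Toy example: expression embedding matches tracks 1 and 3, not 2.
expr = (1.0, 0.0)
tracks = {1: (0.9, 0.1), 2: (0.0, 1.0), 3: (0.8, 0.2)}
print(referred_tracks(expr, tracks))  # → [1, 3]
```

In the actual architecture the matching is learned end-to-end (e.g. via cross-attention between language and visual features) rather than a fixed cosine threshold; this sketch only illustrates the variable-cardinality output that defines the task.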

Dongming Wu, Wencheng Han, Tiancai Wang, Xingping Dong, Xiangyu Zhang, Jianbing Shen• 2023

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Referring Multi-Object Tracking | Refer-KITTI V2 (test) | HOTA | 31 | 11 |
| Referring Multi-Object Tracking | Refer-KITTI (test) | HOTA | 46.56 | 11 |
| Referring Multi-Object Tracking | Refer-KITTI | IDSW | 6.13 | 8 |
| Referring Multi-Object Tracking | Refer-KITTI (test) | HOTA | 35.54 | 7 |
| Referring Multi-Object Tracking | LaMOT | HOTA | 27.74 | 5 |
| Referring Multi-Object Tracking | MeViS v2 | HOTA* | 18.6 | 4 |
| RGBD Referring Multi-Object Tracking | DRSet (test) | HOTA | 98 | 4 |
| Referring Multi-Object Tracking | ORSet (test) | HOTA | 2.41 | 3 |
| Referring Multi-Object Tracking | Refer-Dance | HOTA | 9.58 | 3 |
