
Memory Enhanced Global-Local Aggregation for Video Object Detection

About

How do humans recognize an object in a piece of video? Due to the deteriorated quality of a single frame, it may be hard for people to identify an occluded object in that frame using information from one image alone. We argue that there are two important cues for humans to recognize objects in videos: global semantic information and local localization information. Recently, many methods have adopted self-attention mechanisms to enhance the features in the key frame with either global semantic information or local localization information. In this paper, we introduce the memory enhanced global-local aggregation (MEGA) network, which is among the first attempts to take full consideration of both global and local information. Furthermore, empowered by a novel and carefully designed Long Range Memory (LRM) module, our proposed MEGA enables the key frame to access much more content than any previous method. Enhanced by these two sources of information, our method achieves state-of-the-art performance on the ImageNet VID dataset. Code is available at https://github.com/Scalsol/mega.pytorch.
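The core ideas in the abstract — attention-based aggregation of support-frame features into the key frame, plus a bounded memory that extends the temporal range the key frame can access — can be sketched as follows. This is a minimal illustrative sketch in NumPy, not the official MEGA implementation; all function and class names here (`attention_aggregate`, `LongRangeMemory`) are hypothetical.

```python
import numpy as np

def attention_aggregate(key_feats, support_feats):
    """Enhance key-frame features by attending over support features.

    key_feats:     (Nk, d) box/pixel features from the key frame
    support_feats: (Ns, d) features pooled from global/local support frames
    """
    d = key_feats.shape[1]
    # Scaled dot-product attention scores between key and support features.
    scores = key_feats @ support_feats.T / np.sqrt(d)            # (Nk, Ns)
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)                # softmax
    # Aggregated context is a weighted sum of support features,
    # added back to the key features as a residual update.
    return key_feats + weights @ support_feats

class LongRangeMemory:
    """A bounded FIFO cache of previously computed support features.

    Caching features from frames already processed (instead of
    recomputing them) is what lets the key frame "see" much further
    back in the video at modest cost.
    """
    def __init__(self, capacity):
        self.capacity = capacity   # max number of cached frame batches
        self.store = []

    def update(self, feats):
        self.store.append(feats)
        if len(self.store) > self.capacity:
            self.store.pop(0)      # evict the oldest entry

    def features(self):
        return np.concatenate(self.store, axis=0) if self.store else None
```

In use, features of each processed frame would be pushed into the memory, and `attention_aggregate` would attend over the concatenation of current support features and the memory's contents.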

Yihong Chen, Yue Cao, Han Hu, Liwei Wang • 2020

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Video Object Detection | ImageNet VID (val) | mAP (%) | 85.4 | 341 |
| Video Object Detection | ImageNet VID v1.0 (val) | AP50 | 85.4 | 41 |
| Lesion Detection | CVA-BUS high-quality labels re-annotated version | Pr@80 | 93.9 | 16 |
| Polyp Localization | CVC-VideoClinicDB (test) | Precision | 91.8 | 13 |
| Breast Lesion Detection | BLUVD-186 (test) | AP | 32.3 | 12 |
| Video Polyp Detection | CVC-VideoClinic | Precision | 91.6 | 12 |
| Polyp Detection | CVC-VideoClinicDB (test) | Precision | 91.6 | 11 |
| Video Polyp Detection | SUN Database | Precision | 80.4 | 10 |
| Video Polyp Detection | LDPolypVideo | Precision | 69.2 | 10 |
| Polyp Detection | ASU-Mayo Clinic Colonoscopy Video (test) | Precision | 96.8 | 9 |

(10 of 13 rows shown)

Other info

Code: https://github.com/Scalsol/mega.pytorch