
Action Unit Memory Network for Weakly Supervised Temporal Action Localization

About

Weakly supervised temporal action localization aims to detect and localize actions in untrimmed videos with only video-level labels during training. However, without frame-level annotations, it is challenging to achieve complete localization and to suppress background interference. In this paper, we present an Action Unit Memory Network (AUMN) for weakly supervised temporal action localization, which mitigates both challenges by learning an action unit memory bank. In the proposed AUMN, two attention modules are designed to update the memory bank adaptively and to learn action-unit-specific classifiers. Furthermore, three effective mechanisms (diversity, homogeneity, and sparsity) are designed to guide the updating of the memory network. To the best of our knowledge, this is the first work to explicitly model action units with a memory network. Extensive experimental results on two standard benchmarks (THUMOS14 and ActivityNet) demonstrate that our AUMN performs favorably against state-of-the-art methods. Specifically, the average mAP over IoU thresholds from 0.1 to 0.5 on the THUMOS14 dataset is significantly improved from 47.0% to 52.1%.
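The paper's exact architecture is not reproduced on this page, but the central idea of reading from a learned action-unit memory bank with attention can be sketched roughly as follows. All names, dimensions, and the scaled dot-product attention form are illustrative assumptions, not the authors' formulation:

```python
import numpy as np

def softmax(z, axis=-1):
    # numerically stable softmax
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def memory_read(features, memory):
    """Attend over an action-unit memory bank (hypothetical sketch).

    features: (T, D) segment-level video features
    memory:   (K, D) learned action-unit templates
    Returns the (T, K) attention map and the (T, D) read-out,
    a per-segment mixture of action-unit templates.
    """
    d = memory.shape[1]
    attn = softmax(features @ memory.T / np.sqrt(d))  # (T, K)
    read = attn @ memory                              # (T, D)
    return attn, read

# toy example: 8 segments, 4 action units, 16-d features
rng = np.random.default_rng(0)
X = rng.standard_normal((8, 16))
M = rng.standard_normal((4, 16))
attn, read = memory_read(X, M)
```

In this reading, the sparsity mechanism mentioned in the abstract would encourage each attention row to concentrate on a few action units, while diversity would keep the K templates from collapsing onto one another; the regularizers themselves are not shown here.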

Wang Luo, Tianzhu Zhang, Wenfei Yang, Jingen Liu, Tao Mei, Feng Wu, Yongdong Zhang · 2021

Related benchmarks

Task                         | Dataset                 | Metric      | Result | Rank
Temporal Action Localization | THUMOS-14 (test)        | mAP@IoU 0.3 | 54.9   | 308
Temporal Action Localization | ActivityNet 1.3 (val)   | mAP@IoU 0.5 | 38.3   | 257
Temporal Action Localization | ActivityNet 1.2 (val)   | mAP@IoU 0.5 | 42.0   | 110
Temporal Action Localization | ActivityNet v1.3 (test) | mAP@IoU 0.5 | 38.3   | 47
Temporal Action Localization | ActivityNet 1.2         | mAP@IoU 0.5 | 42.0   | 32
