
Adversarial Background-Aware Loss for Weakly-supervised Temporal Activity Localization

About

Temporally localizing activities within untrimmed videos has been extensively studied in recent years. Despite recent advances, existing methods for weakly-supervised temporal activity localization struggle to recognize when an activity is not occurring. To address this issue, we propose a novel method named A2CL-PT. Two triplets of the feature space are considered in our approach: one triplet is used to learn discriminative features for each activity class, and the other is used to distinguish the features where no activity occurs (i.e., background features) from activity-related features for each video. To further improve performance, we build our network using two parallel branches that operate in an adversarial way: the first branch localizes the most salient activities of a video, and the second finds other supplementary activities in the non-localized parts of the video. Extensive experiments on the THUMOS14 and ActivityNet datasets demonstrate that our proposed method is effective. Specifically, the average mAP over IoU thresholds from 0.1 to 0.9 on the THUMOS14 dataset is significantly improved from 27.9% to 30.0%.
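The background-aware triplet idea above can be illustrated with a generic angular (cosine-similarity) triplet hinge loss: an anchor feature is pulled toward a positive (same-class or activity) feature and pushed away from a negative (background) feature by a margin. This is a minimal sketch, not the paper's exact A2CL-PT formulation; the function name, toy vectors, and margin value are illustrative assumptions.

```python
import numpy as np

def angular_triplet_loss(anchor, positive, negative, margin=0.5):
    """Hinge loss on cosine similarity (illustrative sketch, not the
    paper's exact loss): penalize the anchor unless it is at least
    `margin` more similar to the positive than to the negative."""
    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(0.0, cos(anchor, negative) - cos(anchor, positive) + margin)

# Toy 2-D features: the anchor is nearly parallel to the positive
# (activity) vector and orthogonal to the negative (background) vector.
anchor = np.array([1.0, 0.0])
positive = np.array([0.9, 0.1])
negative = np.array([0.0, 1.0])

loss = angular_triplet_loss(anchor, positive, negative)  # satisfied triplet -> 0.0
```

With the roles of positive and negative swapped, the margin is violated and the loss becomes positive, which is the gradient signal that separates background features from activity features.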

Kyle Min, Jason J. Corso • 2020

Related benchmarks

Task                            | Dataset                 | Metric        | Result | Rank
Temporal Action Localization    | THUMOS14 (test)         | AP @ IoU=0.5  | 30.1   | 319
Temporal Action Localization    | THUMOS-14 (test)        | mAP@0.3       | 48.1   | 308
Temporal Action Localization    | ActivityNet 1.3 (val)   | AP@0.5        | 36.8   | 257
Temporal Action Localization    | THUMOS 2014             | mAP@0.30      | 48.1   | 93
Temporal Action Localization    | ActivityNet v1.3 (test) | mAP @ IoU=0.5 | 36.8   | 47
Temporal Action Localization    | ActivityNet 1.3         | Average mAP   | 22.5   | 32
Temporal Activity Localization  | ActivityNet 1.3 (test)  | mAP@0.5       | 36.8   | 21
