
Learning to Refactor Action and Co-occurrence Features for Temporal Action Localization

About

The main challenge of Temporal Action Localization is to retrieve subtle human actions from various co-occurring ingredients, e.g., context and background, in an untrimmed video. While prior approaches have achieved substantial progress by devising advanced action detectors, they still suffer from these co-occurring ingredients, which often dominate the actual action content in videos. In this paper, we explore two orthogonal but complementary aspects of a video snippet, i.e., the action features and the co-occurrence features. Specifically, we develop a novel auxiliary task that decouples these two types of features within a video snippet and recombines them to generate a new feature representation with more salient action information for accurate action localization. We term our method RefactorNet: it first explicitly factorizes the action content and regularizes its co-occurrence features, and then synthesizes a new action-dominated video representation. Extensive experimental results and ablation studies on THUMOS14 and ActivityNet v1.3 demonstrate that our new representation, combined with a simple action detector, significantly improves action localization performance.
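The refactoring idea in the abstract can be illustrated with a minimal sketch: factorize each snippet feature into an action component and a co-occurrence component, down-weight the co-occurrence part, and fuse the two back into a new representation. All names, dimensions, and the random projection matrices below are hypothetical stand-ins for the learned encoders described in the paper, not the authors' actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical snippet feature dimension.
D = 128

# Random projections standing in for the learned action / co-occurrence
# encoders and the fusion layer (the paper learns these end to end).
W_action = rng.standard_normal((D, D)) / np.sqrt(D)
W_cooc = rng.standard_normal((D, D)) / np.sqrt(D)
W_fuse = rng.standard_normal((2 * D, D)) / np.sqrt(2 * D)

def refactor_snippet(x, alpha=0.1):
    """Factorize a snippet feature x into action and co-occurrence parts,
    suppress the co-occurrence part (alpha < 1), and fuse them into a new
    action-dominated representation."""
    f_action = np.tanh(x @ W_action)   # action component
    f_cooc = np.tanh(x @ W_cooc)       # co-occurrence (context/background) component
    fused = np.concatenate([f_action, alpha * f_cooc], axis=-1)
    return fused @ W_fuse              # action-dominated snippet feature

x = rng.standard_normal(D)   # one untrimmed-video snippet feature
z = refactor_snippet(x)
print(z.shape)  # (128,)
```

The resulting feature `z` would then be fed to an ordinary action detector in place of the original snippet feature; the key design choice is that suppression happens in feature space, before detection.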

Kun Xia, Le Wang, Sanping Zhou, Nanning Zheng, Wei Tang • 2022

Related benchmarks

Task                         | Dataset               | Metric  | Result | Rank
Temporal Action Localization | THUMOS-14 (test)      | mAP@0.3 | 70.7   | 308
Temporal Action Localization | ActivityNet 1.3 (val) | AP@0.5  | 56.6   | 257
Temporal Action Localization | THUMOS-14 (test)      | mAP@0.3 | 70.7   | 36
