
End-to-end Learning of Action Detection from Frame Glimpses in Videos

About

In this work we introduce a fully end-to-end approach for action detection in videos that learns to directly predict the temporal bounds of actions. Our intuition is that the process of detecting actions is naturally one of observation and refinement: observing moments in video, and refining hypotheses about when an action is occurring. Based on this insight, we formulate our model as a recurrent neural network-based agent that interacts with a video over time. The agent observes video frames and decides both where to look next and when to emit a prediction. Since backpropagation is not adequate in this non-differentiable setting, we use REINFORCE to learn the agent's decision policy. Our model achieves state-of-the-art results on the THUMOS'14 and ActivityNet datasets while observing only a fraction (2% or less) of the video frames.
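Because the agent's choices of where to look and when to emit are discrete samples, gradients cannot flow through them, which is why the paper trains the decision policy with REINFORCE. As a minimal illustrative sketch (not the paper's model), the snippet below shows the core REINFORCE trick on a toy two-action problem: a softmax policy is updated using only sampled rewards and the gradient of the log-probability of the sampled action. All names and hyperparameters here are hypothetical choices for the demo.

```python
import numpy as np

# Toy REINFORCE sketch (illustrative only, not the paper's implementation):
# learn a softmax policy over two discrete actions from sampled rewards,
# the same estimator the paper uses for its non-differentiable
# observe/emit decisions.

rng = np.random.default_rng(0)
logits = np.zeros(2)   # policy parameters
lr = 0.5               # learning rate (hypothetical choice)

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

for step in range(200):
    probs = softmax(logits)
    action = rng.choice(2, p=probs)        # non-differentiable sampling step
    reward = 1.0 if action == 1 else 0.0   # toy reward: action 1 is "correct"
    # REINFORCE update: reward * d log pi(action) / d logits,
    # where d log softmax / d logits = onehot(action) - probs
    grad_logp = -probs
    grad_logp[action] += 1.0
    logits += lr * reward * grad_logp

p_rewarded = softmax(logits)[1]  # probability of the rewarded action after training
```

After training, the policy concentrates its probability mass on the rewarded action, even though no gradient ever flowed through the sampling step itself; the paper applies the same estimator to the agent's glimpse-location and prediction-emission decisions.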

Serena Yeung, Olga Russakovsky, Greg Mori, Li Fei-Fei • 2015

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Temporal Action Detection | THUMOS-14 (test) | mAP@tIoU=0.5 | 17.1 | 330 |
| Temporal Action Localization | THUMOS14 (test) | AP @ IoU=0.5 | 17.1 | 319 |
| Action Detection | THUMOS 2014 (test) | mAP (alpha=0.5) | 17.1 | 79 |
| Temporal Action Detection | THUMOS 14 | mAP@0.3 | 36 | 71 |
| Temporal Action Localization | THUMOS 14 | mAP@0.3 | 36 | 44 |
| Temporal Action Localization | THUMOS 2014 (test) | mAP (theta=0.5) | 17.1 | 35 |
| Action Localization | Thumos14 | mAP@0.5 | 17.1 | 34 |
| Action Recognition | ActivityNet | Accuracy | 62.8 | 22 |
| Temporal Action Detection | THUMOS 2014 (test) | mAP@0.1 | 48.9 | 8 |
| Action Recognition | FCVID | Accuracy | 71.7 | 6 |
Showing 10 of 12 benchmark rows.
