
Tripping through time: Efficient Localization of Activities in Videos

About

Localizing moments in untrimmed videos via language queries is a new and interesting task that requires the ability to accurately ground language into video. Previous works have approached this task by processing the entire video, often more than once, to localize relevant activities. In real-world applications of this task, such as video surveillance, efficiency is a key system requirement. In this paper, we present TripNet, an end-to-end system that uses a gated attention architecture to model fine-grained textual and visual representations in order to align text and video content. Furthermore, TripNet uses reinforcement learning to efficiently localize relevant activity clips in long videos by learning how to intelligently skip around the video. It extracts visual features for only a few frames to perform activity classification. In our evaluation on the Charades-STA, ActivityNet Captions, and TACoS datasets, we find that TripNet achieves high accuracy and saves processing time by looking at only 32-41% of the entire video.
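The skipping behavior described above can be illustrated with a minimal sketch. This is not the authors' code: the jump sizes, the stopping rule, and the oracle `score` function (a stand-in for TripNet's learned gated-attention alignment between the query and a video clip) are all hypothetical simplifications, used only to show how an agent can localize a moment while observing a small fraction of the frames.

```python
def localize(num_frames, target_start, target_end, clip_len=16, jumps=(64, 4)):
    """Skip through the video; return (predicted clip start, frames observed).

    target_start/target_end delimit the ground-truth moment, which the
    oracle score() below uses as a stand-in for a learned text-video
    alignment score.
    """
    def score(p):
        # Fraction of the observed clip [p, p + clip_len) that overlaps
        # the target moment (hypothetical proxy for gated attention).
        overlap = max(0, min(p + clip_len, target_end) - max(p, target_start))
        return overlap / clip_len

    pos, observed = 0, 0
    best_score, best_pos = -1.0, 0
    while pos < num_frames:
        observed += clip_len           # the agent "watches" one short clip
        s = score(pos)
        if s > best_score:
            best_score, best_pos = s, pos
        if s >= 1.0:                   # clip lies fully inside the moment: stop
            break
        # Coarse jump while no overlap is detected, fine step once close.
        pos += jumps[0] if s == 0 else jumps[1]
    return best_pos, observed
```

For example, `localize(1000, 400, 480)` finds a clip inside the moment after observing 128 of 1000 frames (about 13% of the video); in TripNet this policy is learned with reinforcement learning rather than hand-coded.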

Meera Hahn, Asim Kadav, James M. Rehg, Hans Peter Graf • 2019

Related benchmarks

Task | Dataset | Metric | Result | Rank
Moment Retrieval | Charades-STA (test) | R@0.5 | 36.61 | 172
Video Moment Retrieval | TACoS (test) | Recall@1 (IoU=0.5) | 19.17 | 70
Natural Language Video Localization | Charades-STA (test) | R@1 (IoU=0.5) | 36.61 | 61
Video Grounding | TACoS | Recall@1 (IoU=0.5) | 19.17 | 45
Video Grounding | ActivityNet Captions | R@1 (IoU=0.5) | 32.19 | 43
Video Grounding | TACoS | IoU@0.5 | 19.17 | 19
Video Grounding | ActivityNet Captions | IoU@0.5 | 32.19 | 14
Video Temporal Grounding | ActivityNet Captions (val) | Recall@0.5 | 32.19 | 10
