
UnLoc: A Unified Framework for Video Localization Tasks

About

While large-scale image-text pretrained models such as CLIP have been used for multiple video-level tasks on trimmed videos, their use for temporal localization in untrimmed videos remains relatively unexplored. We design a new approach for this called UnLoc, which uses pretrained image and text towers and feeds their tokens to a video-text fusion module. The outputs of the fusion module are then used to construct a feature pyramid in which each level connects to a head that predicts a per-frame relevancy score and start/end time displacements. Unlike previous works, our architecture enables Moment Retrieval, Temporal Localization, and Action Segmentation with a single-stage model, without the need for action proposals, motion-based pretrained features, or representation masking. Unlike specialized models, we achieve state-of-the-art results on all three localization tasks with a unified approach. Code will be available at: https://github.com/google-research/scenic.
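To make the prediction pipeline concrete, here is a minimal, hypothetical sketch of the feature-pyramid-plus-heads idea described above. The pooling scheme, head shape, and all names (`build_pyramid`, `head`, toy sizes) are illustrative assumptions, not the paper's actual implementation: fused per-frame tokens are downsampled into pyramid levels, and each level's frames pass through a head emitting a relevancy score and start/end displacements.

```python
import numpy as np

rng = np.random.default_rng(0)

def build_pyramid(tokens, num_levels=3):
    """Build a toy feature pyramid by 2x average-pooling frames at each level."""
    levels = [tokens]
    for _ in range(num_levels - 1):
        t = levels[-1]
        # pool adjacent frame pairs; drop a trailing odd frame if present
        t = t[: len(t) // 2 * 2].reshape(-1, 2, t.shape[-1]).mean(axis=1)
        levels.append(t)
    return levels

def head(tokens, w, b):
    """Per-frame head: a relevancy score plus start/end time displacements."""
    out = tokens @ w + b                         # (frames, 3)
    relevancy = 1.0 / (1.0 + np.exp(-out[:, 0])) # sigmoid score in [0, 1]
    start_disp, end_disp = out[:, 1], out[:, 2]
    return relevancy, start_disp, end_disp

T, D = 16, 8                          # toy frame count and token dimension
fused = rng.normal(size=(T, D))       # stand-in for video-text fusion output
w, b = rng.normal(size=(D, 3)), np.zeros(3)

for lvl, tok in enumerate(build_pyramid(fused)):
    rel, s, e = head(tok, w, b)
    print(f"level {lvl}: {tok.shape[0]} frames, relevancy shape {rel.shape}")
```

Coarser pyramid levels cover longer temporal extents with fewer frames, which is what lets a single set of heads localize both short and long segments.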

Shen Yan, Xuehan Xiong, Arsha Nagrani, Anurag Arnab, Zhonghao Wang, Weina Ge, David Ross, Cordelia Schmid • 2023

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Temporal Action Localization | ActivityNet 1.3 (val) | AP@0.5 | 59.3 | 257 |
| Moment Retrieval | QVHighlights (test) | R@1 (IoU=0.5) | 66.1 | 170 |
| Temporal Video Grounding | Charades-STA (test) | Recall@IoU=0.5 | 60.8 | 117 |
| Video Moment Retrieval | Charades-STA (test) | Recall@1 (IoU=0.5) | 60.8 | 77 |
| Temporal Grounding | Charades-STA (test) | Recall@1 (IoU=0.5) | 60.8 | 68 |
| Moment Retrieval | QVHighlights (val) | R@1 (IoU=0.5) | 66.1 | 53 |
| Video Moment Retrieval | Charades-STA | R1@0.5 | 60.8 | 44 |
| Action Segmentation | COIN (test) | Frame Accuracy | 72.8 | 23 |
| Temporal Action Localization | ActivityNet 1.3 (50%-50%) | -- | -- | 17 |
| Temporal Action Detection | ActivityNet (val) | -- | -- | 16 |

Showing 10 of 18 rows

Other info

Code: https://github.com/google-research/scenic