
Joint Visual-Temporal Embedding for Unsupervised Learning of Actions in Untrimmed Sequences

About

Understanding the structure of complex activities in untrimmed videos is a challenging task in the area of action recognition. One problem is that this task usually requires large amounts of hand-annotated minute- or even hour-long video data, and annotating such data is very time consuming and cannot easily be automated or scaled. To address this problem, this paper proposes an approach for the unsupervised learning of actions in untrimmed video sequences based on a joint visual-temporal embedding space. To this end, we combine a visual embedding based on a predictive U-Net architecture with a temporal continuous function. The resulting representation space allows detecting relevant action clusters based on their visual as well as their temporal appearance. The proposed method is evaluated on three standard benchmark datasets: Breakfast Actions, INRIA YouTube Instructional Videos, and 50 Salads. We show that the proposed approach is able to derive a meaningful visual and temporal embedding from the visual cues present in contiguous video frames and is suitable for the task of unsupervised temporal segmentation of actions.
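The core idea, pairing each frame's visual embedding with a continuous temporal coordinate and clustering in the joint space, can be illustrated with a minimal sketch. This is not the paper's exact pipeline: the U-Net features are replaced by toy placeholder vectors, and the `time_weight` parameter and tiny k-means routine are illustrative assumptions.

```python
import numpy as np

def joint_embedding(frame_features, time_weight=1.0):
    """Append a normalized timestamp to each frame's visual embedding.

    frame_features: (T, D) array of per-frame visual embeddings
    (in the paper these would come from the predictive U-Net; here
    they are given). The relative timestamp t/T stands in for the
    temporal continuous function.
    """
    T = len(frame_features)
    t = np.arange(T, dtype=float) / max(T - 1, 1)  # in [0, 1]
    return np.hstack([frame_features, time_weight * t[:, None]])

def kmeans(X, k, iters=50):
    """Tiny Lloyd's k-means for grouping frames into action clusters."""
    # deterministic init: centers spread uniformly along the sequence
    centers = X[np.linspace(0, len(X) - 1, k).astype(int)].copy()
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        # squared distances of every frame to every center
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(0)
    return labels

# toy sequence: three "actions" with distinct visual features over time
feats = np.concatenate([np.full((10, 4), v) for v in (0.0, 1.0, 2.0)])
feats += 0.05 * np.random.default_rng(1).standard_normal(feats.shape)
labels = kmeans(joint_embedding(feats), k=3)
```

With clearly separated toy features, the clusters recover the three temporal blocks; in the actual method the temporal coordinate additionally disambiguates visually similar frames that occur at different points in the activity.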

Rosaura G. VidalMata, Walter J. Scheirer, Anna Kukleva, David Cox, Hilde Kuehne • 2020

Related benchmarks

Task                                 Dataset                         Metric          Result   Rank
Action Segmentation                  Breakfast                       MoF             52.2     66
Action Segmentation                  Breakfast (test)                MoF             48.1     31
Action Segmentation                  Breakfast 14                    MoF             52.2     26
Temporal Action Segmentation         50 Salads granularity (Eval)    MoF             30.6     24
Action Segmentation                  50Salads mid granularity        MoF             24.2     19
Action Segmentation                  YouTube Instructions (test)     F1 Score (%)    29.9     17
Action Segmentation                  50 Salads Mid                   --              --       17
Action Segmentation                  YouTube Instructions            F1              29.9     16
Temporal Video Segmentation          Breakfast                       MoF             0.522    14
Unsupervised Activity Segmentation   50 Salads eval granularity      MoF             30.6     14
Showing 10 of 17 rows
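Most rows above report MoF (mean over frames), the fraction of frames whose predicted label matches the ground truth. For unsupervised methods the predicted cluster IDs are arbitrary, so they are first matched one-to-one to ground-truth classes; a minimal sketch of the metric follows (the brute-force matching is an illustrative assumption, evaluations typically use the Hungarian algorithm instead):

```python
from itertools import permutations

def mof(gt, pred):
    """Mean-over-Frames accuracy for unsupervised segmentation.

    Matches predicted cluster IDs to ground-truth classes by the best
    one-to-one mapping, feasible by brute force when the number of
    labels is small, then counts correctly labeled frames.
    """
    classes = sorted(set(gt))
    clusters = sorted(set(pred))
    best = 0
    for perm in permutations(classes, len(clusters)):
        mapping = dict(zip(clusters, perm))
        correct = sum(mapping[p] == g for p, g in zip(pred, gt))
        best = max(best, correct)
    return best / len(gt)

# toy example: cluster IDs are permuted and one boundary is off by one
gt   = [0] * 5 + [1] * 5 + [2] * 5
pred = [2] * 5 + [0] * 4 + [1] * 6
score = mof(gt, pred)  # 14 of 15 frames correct after matching
```

Note that MoF weights every frame equally, so long background or dominant actions can inflate it; this is one reason the YouTube Instructions rows report F1 over detected segments instead.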
