
Temporal Relational Modeling with Self-Supervision for Action Segmentation

About

Temporal relational modeling in video is essential for human action understanding, including action recognition and action segmentation. Although Graph Convolution Networks (GCNs) have shown promising advantages in relational reasoning on many tasks, it remains challenging to apply them effectively to long video sequences. The main reason is that the large number of nodes (i.e., video frames) makes it hard for GCNs to capture and model temporal relations in videos. To tackle this problem, in this paper, we introduce an effective GCN module, the Dilated Temporal Graph Reasoning Module (DTGRM), designed to model temporal relations and dependencies between video frames at various time spans. In particular, we capture and model temporal relations by constructing multi-level dilated temporal graphs whose nodes represent frames from different moments in the video. Moreover, to enhance the temporal reasoning ability of the proposed model, an auxiliary self-supervised task is proposed to encourage the dilated temporal graph reasoning module to find and correct wrong temporal relations in videos. Our DTGRM model outperforms state-of-the-art action segmentation models on three challenging datasets: 50Salads, Georgia Tech Egocentric Activities (GTEA), and the Breakfast dataset. The code is available at https://github.com/redwang/DTGRM.
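The core idea of a dilated temporal graph can be sketched as follows. This is a hedged illustration, not the authors' implementation: the function names, the dilation rates (1, 2, 4, 8), and the single ReLU graph-convolution layer are all assumptions chosen for clarity; the paper's DTGRM stacks richer reasoning modules on top of such graphs.

```python
import numpy as np

def dilated_temporal_adjacency(num_frames, dilation):
    """Adjacency linking each frame to the frames `dilation` steps away
    (plus a self-loop), then symmetrically normalized: D^{-1/2} A D^{-1/2}."""
    A = np.eye(num_frames)
    for i in range(num_frames):
        for j in (i - dilation, i + dilation):
            if 0 <= j < num_frames:
                A[i, j] = 1.0
    d = A.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A @ D_inv_sqrt

def graph_conv(X, A, W):
    """One generic graph-convolution layer: ReLU(A X W)."""
    return np.maximum(A @ X @ W, 0.0)

# Multi-level reasoning: sum features propagated over several dilation rates,
# so each frame aggregates context from both nearby and distant moments.
rng = np.random.default_rng(0)
T, C = 100, 64                      # frames, feature channels
X = rng.standard_normal((T, C))     # per-frame features (e.g., from an I3D backbone)
W = rng.standard_normal((C, C)) * 0.1
out = sum(graph_conv(X, dilated_temporal_adjacency(T, d), W)
          for d in (1, 2, 4, 8))    # illustrative dilation rates
print(out.shape)
```

Because a dilation-d graph has only O(T) edges, stacking a few dilation levels covers long time spans without the dense T x T adjacency that makes naive GCNs impractical on long videos.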

Dong Wang, Di Hu, Xingjian Li, Dejing Dou • 2020

Related benchmarks

| Task                         | Dataset         | Metric      | Result | Rank |
|------------------------------|-----------------|-------------|--------|------|
| Action Segmentation          | 50Salads        | Edit Distance | 72   | 114  |
| Action Segmentation          | Breakfast       | F1@10       | 68.7   | 107  |
| Temporal action segmentation | GTEA            | F1@10%      | 87.8   | 99   |
| Temporal action segmentation | Breakfast       | Accuracy    | 68.3   | 96   |
| Action Segmentation          | GTEA            | F1@10%      | 87.8   | 39   |
| Action Segmentation          | GTEA (test)     | F1@10%      | 87.8   | 25   |
| Action Segmentation          | GTEA (full)     | Edit Score  | 80.7   | 16   |
| Action Segmentation          | 50Salads (test) | F1@10       | 79.1   | 16   |
