
TricorNet: A Hybrid Temporal Convolutional and Recurrent Network for Video Action Segmentation

About

Action segmentation, as a milestone towards building automatic systems that understand untrimmed videos, has received considerable attention in recent years. It is typically modeled as a sequence labeling problem, but it differs intrinsically from text parsing or speech processing. In this paper, we introduce a novel hybrid temporal convolutional and recurrent network (TricorNet) with an encoder-decoder architecture: the encoder is a hierarchy of temporal convolutional kernels that capture the local motion changes of different actions; the decoder is a hierarchy of recurrent neural networks that learn and memorize long-term action dependencies after the encoding stage. Our model is simple but highly effective for video sequence labeling. Experimental results on three public action segmentation datasets show that the proposed model achieves superior performance over the state of the art.
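The encoder-decoder idea described above can be sketched in miniature: a temporal convolution over per-frame features captures local motion patterns, and a recurrent pass over the encoded sequence carries longer-range context before per-frame labels are read out. The sketch below is an illustrative NumPy toy with random weights, not the authors' implementation; all dimensions (`T`, `C_in`, `C_mid`, `H`, `n_classes`) and helper names are assumptions for demonstration.

```python
import numpy as np

def temporal_conv(x, kernel):
    """1-D temporal convolution over a (T, C_in) frame-feature sequence.

    kernel has shape (k, C_in, C_out); 'same' padding in time, ReLU output.
    """
    T, _ = x.shape
    k, _, C_out = kernel.shape
    pad = k // 2
    xp = np.pad(x, ((pad, pad), (0, 0)))
    out = np.zeros((T, C_out))
    for t in range(T):
        window = xp[t:t + k]                      # (k, C_in) local window
        out[t] = np.einsum('ki,kio->o', window, kernel)
    return np.maximum(out, 0)                     # ReLU

def simple_rnn(x, W_xh, W_hh):
    """Vanilla RNN over a (T, C) sequence; returns hidden states (T, H)."""
    T, _ = x.shape
    H = W_hh.shape[0]
    h = np.zeros(H)
    hs = np.zeros((T, H))
    for t in range(T):
        h = np.tanh(x[t] @ W_xh + h @ W_hh)       # carry long-term context
        hs[t] = h
    return hs

rng = np.random.default_rng(0)
T, C_in, C_mid, H, n_classes = 16, 8, 12, 10, 5   # toy sizes (assumed)

frames = rng.standard_normal((T, C_in))           # per-frame video features
enc = temporal_conv(frames, rng.standard_normal((5, C_in, C_mid)) * 0.1)
hid = simple_rnn(enc, rng.standard_normal((C_mid, H)) * 0.1,
                 rng.standard_normal((H, H)) * 0.1)
logits = hid @ (rng.standard_normal((H, n_classes)) * 0.1)
labels = logits.argmax(axis=1)                    # one action label per frame
print(labels.shape)                               # (16,): a label per frame
```

The real model stacks several such layers in a hierarchy (and uses learned weights and gated recurrent units rather than a vanilla RNN), but the flow is the same: convolve locally, then recur globally, then classify each frame.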

Li Ding, Chenliang Xu • 2017

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Action Segmentation | 50Salads | Edit Distance | 62.8 | 114 |
| Temporal Action Segmentation | 50Salads | Accuracy | 67.5 | 106 |
| Temporal Action Segmentation | GTEA | F1 Score @ 10% Threshold | 76 | 99 |
| Temporal Action Segmentation | 50 Salads granularity (Eval) | MoF | 73.4 | 24 |
| Action Segmentation | 50Salads mid granularity | MoF | 67.5 | 19 |
| Action Segmentation | JIGSAWS | Accuracy | 82.9 | 19 |
| Action Segmentation | 50 Salads Mid | Accuracy | 67.5 | 17 |
| Action Segmentation | GTEA | Accuracy | 64.8 | 15 |
