
Temporal Convolutional Networks: A Unified Approach to Action Segmentation

About

The dominant paradigm for video-based action segmentation is composed of two steps: first, for each frame, compute low-level features using Dense Trajectories or a Convolutional Neural Network that encode spatiotemporal information locally, and second, input these features into a classifier that captures high-level temporal relationships, such as a Recurrent Neural Network (RNN). While often effective, this decoupling requires specifying two separate models, each with their own complexities, and prevents capturing more nuanced long-range spatiotemporal relationships. We propose a unified approach, as demonstrated by our Temporal Convolutional Network (TCN), that hierarchically captures relationships at low-, intermediate-, and high-level time-scales. Our model achieves superior or competitive performance using video or sensor data on three public action segmentation datasets and can be trained in a fraction of the time it takes to train an RNN.
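The encoder-decoder structure described above, where temporal convolution and pooling progressively widen the receptive field before upsampling restores per-frame resolution, can be sketched in a few lines. This is a minimal pure-Python illustration of the forward pass only: the filter weights, layer count, and scalar (single-channel) features are illustrative assumptions, not the paper's hyperparameters.

```python
def conv1d(x, w):
    """Temporal convolution with 'same' zero padding (scalar channels)."""
    k = len(w)
    pad = k // 2
    xp = [0.0] * pad + list(x) + [0.0] * pad
    return [sum(w[j] * xp[i + j] for j in range(k)) for i in range(len(x))]

def relu(x):
    return [max(0.0, v) for v in x]

def max_pool(x, size=2):
    """Halve temporal resolution, doubling the next layer's receptive field."""
    return [max(x[i:i + size]) for i in range(0, len(x) - size + 1, size)]

def upsample(x, factor=2):
    """Repeat each value to restore temporal resolution in the decoder."""
    return [v for v in x for _ in range(factor)]

def tcn_forward(x):
    # Encoder: each conv+pool stage captures relationships at a
    # coarser time-scale than the one before it.
    smooth = [0.25, 0.5, 0.25]  # illustrative filter weights
    h = max_pool(relu(conv1d(x, smooth)))
    h = max_pool(relu(conv1d(h, smooth)))
    # Decoder: upsample+conv stages return to frame-rate resolution,
    # yielding one score per frame for segmentation.
    h = relu(conv1d(upsample(h), smooth))
    h = relu(conv1d(upsample(h), smooth))
    return h

# A toy 16-frame sequence with an "action boundary" at frame 8:
frames = [0.0] * 8 + [1.0] * 8
scores = tcn_forward(frames)
assert len(scores) == len(frames)  # per-frame output, same length as input
```

In the actual model each stage operates on multi-channel feature vectors with learned filters, and a final per-frame classifier produces action labels; the point of the sketch is only the hierarchy of time-scales that replaces the separate feature-extractor-plus-RNN pipeline.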

Colin Lea, René Vidal, Austin Reiter, Gregory D. Hager • 2016

Related benchmarks

Task                                    Dataset                    Result                      Rank
Human Activity Recognition              REALDISP                   F1: 95.11                   94
Action Triplet Recognition              CholecT50 (test)           Top-5 Accuracy: 54.5        30
Activity Recognition                    PAMAP2                     Accuracy: 95.27             22
Action Segmentation                     JIGSAWS                    Accuracy: 81.4              19
Blood glucose prediction                T1DEXI                     RMSE: 41.09                 19
SST forecasting                         OISST                      RMSE: 0.682                 18
Action Recognition                      JIGSAWS Suturing (LOSO)    Per-frame Accuracy: 79.6    18
Activity of Daily Living Recognition    Cairo                      Accuracy: 66.8              18
Activity of Daily Living Recognition    Milan                      Accuracy: 62.2              18
Activity of Daily Living Recognition    Kyoto7                     Accuracy: 44.2              18
Showing 10 of 24 rows
