
Cross-Enhancement Transformer for Action Segmentation

About

Temporal convolutions have been the paradigm of choice in action segmentation, enlarging long-term receptive fields by stacking convolution layers. However, deep stacks lose the local information needed for frame-level recognition. To address this problem, we propose a novel encoder-decoder structure, the Cross-Enhancement Transformer, which learns temporal structure representations effectively through an interactive self-attention mechanism. The convolutional feature maps from each encoder layer are concatenated with the features the decoder produces via self-attention, so local and global information over a sequence of frame actions are used simultaneously. In addition, a new loss function is proposed to enhance training by penalizing over-segmentation errors. Experiments show that our framework achieves state-of-the-art performance on three challenging datasets: 50Salads, Georgia Tech Egocentric Activities (GTEA), and Breakfast.
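The core idea above — fusing local temporal-convolution features with global self-attention features by concatenation — can be sketched in a few lines. This is a minimal numpy illustration, not the paper's implementation: the averaging kernel stands in for a learned dilated convolution, the attention has no learned projections, and `smoothing_loss` is a generic MS-TCN-style over-segmentation penalty assumed to approximate the spirit of the proposed loss.

```python
import numpy as np

def self_attention(x):
    # Global context: scaled dot-product self-attention over all frames.
    # x: (T, d) per-frame features (no learned Q/K/V projections here).
    d = x.shape[1]
    scores = x @ x.T / np.sqrt(d)                     # (T, T) frame similarities
    w = np.exp(scores - scores.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)                 # row-wise softmax
    return w @ x                                      # (T, d) globally mixed

def temporal_conv(x, k=3):
    # Local context: a k-frame averaging window, a stand-in for a
    # learned (dilated) temporal convolution layer.
    pad = k // 2
    xp = np.pad(x, ((pad, pad), (0, 0)), mode="edge")
    return np.stack([xp[t:t + k].mean(axis=0) for t in range(x.shape[0])])

def cross_enhance(x):
    # Cross-enhancement: concatenate the local (conv) and global
    # (self-attention) representations along the feature axis.
    return np.concatenate([temporal_conv(x), self_attention(x)], axis=1)

def smoothing_loss(logp, tau=4.0):
    # Hypothetical over-segmentation penalty: truncated MSE on
    # frame-to-frame differences of per-class log-probabilities.
    diff = np.clip(logp[1:] - logp[:-1], -tau, tau)
    return float(np.mean(diff ** 2))

x = np.random.default_rng(0).normal(size=(10, 4))  # 10 frames, 4-dim features
fused = cross_enhance(x)
print(fused.shape)  # (10, 8): local and global halves side by side
```

A classification head on `fused` then sees both the fine-grained neighborhood of each frame and its sequence-wide context, which is the complementarity the abstract claims.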

Jiahui Wang, Zhenyou Wang, Shanna Zhuang, Hui Wang• 2022

Related benchmarks

Task                 Dataset    Metric         Result  Rank
Action Segmentation  Breakfast  Accuracy       74.9    116
Action Segmentation  50Salads   Edit Distance  81.7    114
Action Segmentation  GTEA       Accuracy       80.3    49
Action Segmentation  GTEA       F1@10          91.8    23
