
Bridge-Prompt: Towards Ordinal Action Understanding in Instructional Videos

About

Action recognition models have shown a promising capability to classify human actions in short video clips. In real scenarios, multiple correlated human actions commonly occur in particular orders, forming semantically meaningful human activities. Conventional action recognition approaches focus on analyzing single actions; however, they fail to fully reason about the contextual relations between adjacent actions, which provide potential temporal logic for understanding long videos. In this paper, we propose a prompt-based framework, Bridge-Prompt (Br-Prompt), to model the semantics across adjacent actions, so that it simultaneously exploits both out-of-context and contextual information from a series of ordinal actions in instructional videos. More specifically, we reformulate the individual action labels as integrated text prompts for supervision, which bridge the gap between individual action semantics. The generated text prompts are paired with corresponding video clips, and together co-train the text encoder and the video encoder via a contrastive approach. The learned vision encoder has a stronger capability for ordinal-action-related downstream tasks, e.g., action segmentation and human activity recognition. We evaluate our approach on several video datasets: Georgia Tech Egocentric Activities (GTEA), 50Salads, and the Breakfast dataset. Br-Prompt achieves state-of-the-art results on multiple benchmarks. Code is available at https://github.com/ttlmh/Bridge-Prompt
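The two core ideas in the abstract — turning a sequence of ordinal action labels into integrated text prompts, and co-training video and text encoders contrastively on matched pairs — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the prompt templates are invented for illustration (the paper's exact wording may differ), and the loss is a generic CLIP-style symmetric InfoNCE over pre-computed feature vectors rather than the full Br-Prompt training pipeline.

```python
import numpy as np

def ordinal_prompts(actions):
    """Reformulate a sequence of action labels as integrated text prompts.

    Hypothetical templates for illustration; the actual Br-Prompt
    prompt wording may differ.
    """
    ordinals = ["first", "second", "third", "fourth", "fifth",
                "sixth", "seventh", "eighth", "ninth", "tenth"]
    prompts = [f"This video contains {len(actions)} actions in total."]
    for i, action in enumerate(actions):
        prompts.append(f"The {ordinals[i]} action is {action}.")
    return prompts

def contrastive_loss(video_feats, text_feats, temperature=0.07):
    """Symmetric InfoNCE loss over matched (video clip, text prompt) pairs.

    Row i of `video_feats` is assumed to match row i of `text_feats`;
    all other pairings in the batch serve as negatives.
    """
    # L2-normalize so the dot product is cosine similarity
    v = video_feats / np.linalg.norm(video_feats, axis=1, keepdims=True)
    t = text_feats / np.linalg.norm(text_feats, axis=1, keepdims=True)
    logits = v @ t.T / temperature           # (B, B) similarity matrix
    labels = np.arange(len(v))               # matched pairs on the diagonal

    def xent(l):
        # numerically stable cross-entropy against the diagonal targets
        l = l - l.max(axis=1, keepdims=True)
        log_prob = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_prob[labels, labels].mean()

    # average the video-to-text and text-to-video directions
    return 0.5 * (xent(logits) + xent(logits.T))
```

In a real training loop the two feature matrices would come from the video encoder and text encoder respectively, and the loss gradient would update both; here random vectors stand in to show the pairing logic.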

Muheng Li, Lei Chen, Yueqi Duan, Zhilan Hu, Jianjiang Feng, Jie Zhou, Jiwen Lu • 2022

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Action Segmentation | 50Salads | Edit Distance | 83.8 | 114 |
| Temporal action segmentation | 50Salads | Accuracy | 88.1 | 106 |
| Temporal action segmentation | GTEA | F1@10% | 91.6 | 99 |
| Action Segmentation | GTEA | F1@10% | 94.1 | 39 |
| Action Segmentation | GTEA | F1@10 | 94.1 | 23 |
| Temporal action segmentation | 50Salads | F1@10 | 89.2 | 22 |
| Temporal action segmentation | GTEA | F1@10% | 94.1 | 19 |
| Human Activity Recognition | Breakfast | Accuracy | 80 | 14 |
