
Distinguishing Visually Similar Actions: Prompt-Guided Semantic Prototype Modulation for Few-Shot Action Recognition

About

Few-shot action recognition aims to enable models to learn new action categories quickly from a handful of labeled samples, addressing data scarcity in real-world applications. Current research faces three core challenges: (1) temporal modeling, where models are prone to interference from irrelevant static background information and struggle to capture the essence of dynamic action features; (2) visual similarity, where categories with subtle visual differences are difficult to distinguish; and (3) the modality gap between visual-textual support prototypes and visual-only queries, which complicates alignment within a shared embedding space. To address these challenges, this paper proposes CLIP-SPM, a framework with three components: (1) the Hierarchical Synergistic Motion Refinement (HSMR) module, which aligns deep and shallow motion features to improve temporal modeling and reduce static background interference; (2) the Semantic Prototype Modulation (SPM) strategy, which generates query-relevant text prompts to bridge the modality gap and integrates them with visual features, enhancing the discriminability of similar actions; and (3) the Prototype-Anchor Dual Modulation (PADM) method, which refines support prototypes and aligns query features with a global semantic anchor, improving consistency across support and query samples. Comprehensive experiments on standard benchmarks, including Kinetics, SSv2-Full, SSv2-Small, UCF101, and HMDB51, show that CLIP-SPM achieves competitive performance under 1-shot, 3-shot, and 5-shot settings. Extensive ablation studies and visual analyses further validate each component's effectiveness and its contribution to addressing the core challenges. The source code and models are publicly available on GitHub.
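For intuition, the sketch below shows one way the prototype-modulation idea can be realized: visual support prototypes (mean-pooled CLIP-style clip features) are fused with text embeddings of class prompts through a learned gate, and queries are scored by cosine similarity against the modulated prototypes. This is a minimal illustration under assumed 512-d features; the class name `SemanticPrototypeModulation`, the gating design, and the temperature `tau` are hypothetical stand-ins, not the paper's implementation, which additionally generates query-relevant prompts (SPM) and applies anchor-based alignment (PADM).

```python
# Minimal sketch of semantic prototype modulation for few-shot action
# recognition. Assumes CLIP-style visual/text encoders producing 512-d
# features; all module names and the gating fusion are illustrative
# assumptions, not the authors' released code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SemanticPrototypeModulation(nn.Module):
    """Fuses class-prompt text embeddings into visual support prototypes."""
    def __init__(self, dim: int = 512):
        super().__init__()
        # Per-channel gate deciding how much text semantics to inject.
        self.gate = nn.Sequential(nn.Linear(2 * dim, dim), nn.Sigmoid())

    def forward(self, proto_v: torch.Tensor, proto_t: torch.Tensor) -> torch.Tensor:
        # proto_v: [n_way, dim] visual prototypes (mean of support features)
        # proto_t: [n_way, dim] text embeddings of class prompts
        g = self.gate(torch.cat([proto_v, proto_t], dim=-1))
        return g * proto_v + (1.0 - g) * proto_t  # modulated prototypes

def episode_logits(support: torch.Tensor, text: torch.Tensor,
                   query: torch.Tensor, spm: SemanticPrototypeModulation,
                   tau: float = 0.07) -> torch.Tensor:
    # support: [n_way, k_shot, dim] pooled clip features per class
    # text:    [n_way, dim] prompt embeddings; query: [n_query, dim]
    proto_v = support.mean(dim=1)          # visual prototype per class
    proto = spm(proto_v, text)             # inject prompt semantics
    proto = F.normalize(proto, dim=-1)
    query = F.normalize(query, dim=-1)
    return query @ proto.t() / tau         # cosine-similarity logits

# Toy 5-way 1-shot episode with random features.
if __name__ == "__main__":
    torch.manual_seed(0)
    spm = SemanticPrototypeModulation(dim=512)
    support = torch.randn(5, 1, 512)
    text = torch.randn(5, 512)
    query = torch.randn(10, 512)
    print(episode_logits(support, text, query, spm).shape)  # torch.Size([10, 5])
```

The gated residual fusion is one common design choice for closing the visual-text modality gap: when the gate saturates toward 1 the prototype stays purely visual, so the model can fall back to standard prototypical matching for classes whose prompts are uninformative.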

Xiaoyang Li, Mingming Lu, Ruiqi Wang, Hao Li, Zewei Le • 2025

Related benchmarks

Task                 Dataset         Metric                      Result   Rank
Action Recognition   Kinetics        Accuracy (5-shot)           94.3     47
Action Recognition   SSv2 Few-shot   Top-1 Acc (5-way 1-shot)    66.7     42
Action Recognition   SSv2 Small      Top-1 Acc (1-shot)          57.8     26
Action Recognition   HMDB51          Top-1 Acc (1-shot)          78.2     22
Action Recognition   UCF101          Top-1 Acc (1-shot)          96.2     22
