
CSTA: CNN-based Spatiotemporal Attention for Video Summarization

About

Video summarization aims to generate a concise representation of a video, capturing its essential content and key moments while reducing its overall length. Although several methods employ attention mechanisms to handle long-term dependencies, they often fail to capture the visual significance inherent in frames. To address this limitation, we propose a CNN-based SpatioTemporal Attention (CSTA) method that stacks the features of all frames from a single video to form an image-like representation and applies a 2D CNN to these frame features. Our methodology relies on the CNN to comprehend inter- and intra-frame relations and to find crucial attributes in videos by exploiting its ability to learn absolute positions within images. In contrast to previous works that compromise efficiency by designing additional modules to focus on spatial importance, CSTA requires minimal computational overhead because it uses the CNN as a sliding window. Extensive experiments on two benchmark datasets (SumMe and TVSum) demonstrate that our proposed approach achieves state-of-the-art performance with fewer MACs than previous methods. Code is available at https://github.com/thswodnjs3/CSTA.
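The core idea can be sketched in a few lines of PyTorch: stack the per-frame feature vectors into a (T × D) single-channel "image", run a small 2D CNN over it to produce a spatiotemporal attention map, and reweight the features with that map. The kernel size, channel widths, and depth below are illustrative assumptions, not the authors' exact architecture; see the linked repository for the real implementation.

```python
# Minimal sketch of the CSTA idea (illustrative, not the paper's exact model):
# a 2D CNN slides over the stacked frame features and emits a per-element
# attention weight, so spatial (feature-dim) and temporal (frame-dim)
# importance are learned jointly with ordinary convolutions.
import torch
import torch.nn as nn

class CSTASketch(nn.Module):
    def __init__(self, kernel_size: int = 3):
        super().__init__()
        # Padding keeps the (T, D) shape, so the output is one
        # attention score per frame-feature element.
        self.attn = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size, padding=kernel_size // 2),
            nn.ReLU(),
            nn.Conv2d(8, 1, kernel_size, padding=kernel_size // 2),
            nn.Sigmoid(),  # weights in (0, 1)
        )

    def forward(self, frame_feats: torch.Tensor) -> torch.Tensor:
        # frame_feats: (T, D) -- one feature vector per frame.
        x = frame_feats.unsqueeze(0).unsqueeze(0)   # (1, 1, T, D)
        weights = self.attn(x)                      # (1, 1, T, D)
        return (x * weights).squeeze(0).squeeze(0)  # (T, D)

feats = torch.randn(64, 128)  # e.g. 64 frames, 128-dim features
out = CSTASketch()(feats)
print(out.shape)  # torch.Size([64, 128])
```

Because the convolution is shared across all positions, the attention cost grows only linearly with video length, which is the efficiency argument the abstract makes against adding separate spatial-attention modules.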

Jaewon Son, Jaehun Park, Kwangsu Kim • 2024

Related benchmarks

| Task | Dataset | Result | Rank |
|---|---|---|---|
| Video Summarization | TVSum | Kendall's τ: 0.194 | 55 |
| Video Summarization | SumMe | Kendall's τ: 0.246 | 32 |
| Video Summarization | TVSum | Kendall's τ: 0.194 | 24 |
| Video highlight detection | Mr.HiSum | mAP (ρ=50%): 61.12 | 14 |
| Video Summarization | SumMe | Rank: 1 | 8 |
| Video Summarization | TVSum | Rank: 2 | 8 |
