
Query Twice: Dual Mixture Attention Meta Learning for Video Summarization

About

Video summarization aims to select representative frames that retain high-level information, and is usually solved by predicting segment-wise importance scores via a softmax function. However, the softmax function struggles to retain high-rank representations for complex visual or sequential information, an issue known as the Softmax Bottleneck problem. In this paper, we propose a novel Dual Mixture Attention (DMASum) model with Meta Learning for video summarization that tackles the softmax bottleneck: the Mixture of Attention (MoA) layer effectively increases model capacity by applying self-query attention twice, capturing second-order changes in addition to the initial query-key attention, and a novel Single Frame Meta Learning rule is introduced to generalize better to small datasets with limited training sources. Furthermore, DMASum exploits both visual and sequential attention, connecting local key-frame and global attention in an accumulative way. We adopt the new evaluation protocol on two public datasets, SumMe and TVSum. Both qualitative and quantitative experiments show significant improvements over state-of-the-art methods.

Junyan Wang, Yang Bai, Yang Long, Bingzhang Hu, Zhenhua Chai, Yu Guan, Xiaolin Wei • 2020
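
To make the abstract's idea more concrete, here is a minimal, hypothetical PyTorch sketch of a mixture-of-softmaxes attention layer followed by a second self-query pass, loosely following the description above. Class and parameter names (MixtureOfAttention, QueryTwiceBlock, num_mixtures) are illustrative assumptions, not the authors' released implementation.

```python
# Illustrative sketch only; not the authors' DMASum code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MixtureOfAttention(nn.Module):
    """Attention whose weights are a mixture of softmaxes (higher-rank than a single softmax)."""

    def __init__(self, dim: int, num_mixtures: int = 4):
        super().__init__()
        self.num_mixtures = num_mixtures
        self.q_proj = nn.Linear(dim, dim * num_mixtures)  # one query head per mixture component
        self.k_proj = nn.Linear(dim, dim)
        self.v_proj = nn.Linear(dim, dim)
        self.prior = nn.Linear(dim, num_mixtures)          # mixture weights conditioned on the query
        self.scale = dim ** -0.5

    def forward(self, query: torch.Tensor, context: torch.Tensor) -> torch.Tensor:
        # query:   (B, T, D) segment features acting as queries
        # context: (B, S, D) segment features acting as keys/values
        B, T, D = query.shape
        q = self.q_proj(query).view(B, T, self.num_mixtures, D)     # (B, T, M, D)
        k = self.k_proj(context)                                    # (B, S, D)
        v = self.v_proj(context)                                    # (B, S, D)

        # One softmax attention map per mixture component.
        logits = torch.einsum("btmd,bsd->btms", q, k) * self.scale  # (B, T, M, S)
        attn = F.softmax(logits, dim=-1)

        # Mixture weights pi(m | query); the combined map sum_m pi_m * softmax_m
        # can have higher rank than any single softmax, easing the bottleneck.
        pi = F.softmax(self.prior(query), dim=-1).unsqueeze(-1)     # (B, T, M, 1)
        mixed_attn = (pi * attn).sum(dim=2)                         # (B, T, S)
        return torch.bmm(mixed_attn, v)                             # (B, T, D)


class QueryTwiceBlock(nn.Module):
    """'Query twice': an initial query-key pass, then a self-query pass on its output."""

    def __init__(self, dim: int, num_mixtures: int = 4):
        super().__init__()
        self.first_pass = MixtureOfAttention(dim, num_mixtures)
        self.second_pass = MixtureOfAttention(dim, num_mixtures)

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        first = self.first_pass(frames, frames)   # initial query-key attention
        second = self.second_pass(first, first)   # self-query attention on the attended output
        return frames + second                    # residual connection (an assumption)


if __name__ == "__main__":
    feats = torch.randn(2, 120, 256)              # 2 videos, 120 segments, 256-d features
    block = QueryTwiceBlock(dim=256)
    print(block(feats).shape)                     # torch.Size([2, 120, 256])
```

In this sketch the segment importance head is omitted; the block only illustrates how a query-conditioned mixture of softmax attention maps, applied twice, could raise the rank of the attention representation relative to a single softmax.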

Related benchmarks

Task | Dataset | Metric | Value | Rank
Video Summarization | TVSum | Kendall's Tau | 0.203 | 55
Video Summarization | TVSum (test) | F-score | 0.614 | 47
Video Summarization | SumMe (test) | F-score | 54.3 | 35
Video Summarization | SumMe | Kendall's Tau | 0.063 | 32
Video Summarization | TVSum | Kendall's Tau | 0.203 | 24
Video Summarization | SumMe (5-fold cross-validation) | F1 Score | 54.3 | 12
Video Summarization | TVSum Canonical (C, 5-fold cross-validation) | F1 Score | 61.4 | 10
Video Summarization | TVSum (5-fold cross-validation) | Kendall's Tau | 0.203 | 9
