
Bag of Visual Words and Fusion Methods for Action Recognition: Comprehensive Study and Good Practice

About

Video-based action recognition is one of the most important and challenging problems in computer vision research. The Bag of Visual Words (BoVW) model with local features has become the most popular method, obtaining state-of-the-art performance on several realistic datasets such as HMDB51, UCF50, and UCF101. BoVW is a general pipeline for constructing a global representation from a set of local features, and is mainly composed of five steps: (i) feature extraction, (ii) feature pre-processing, (iii) codebook generation, (iv) feature encoding, and (v) pooling and normalization. Each step has been studied independently in different scenarios, but the effect of each on action recognition remains unclear. Meanwhile, video data exhibits different views of visual patterns, such as static appearance and motion dynamics, and multiple descriptors are usually extracted to represent these views. Many feature fusion methods have been developed in other areas, but their influence on action recognition has never been investigated before. This paper aims to provide a comprehensive study of all steps in BoVW and of different fusion methods, and to uncover good practices for producing a state-of-the-art action recognition system. Specifically, we explore two kinds of local features, ten encoding methods, eight pooling and normalization strategies, and three fusion methods. We conclude that every step is crucial to the final recognition rate. Furthermore, based on our comprehensive study, we propose a simple yet effective representation, called the hybrid representation, which exploits the complementarity of different BoVW frameworks and local descriptors. Using this representation, we obtain state-of-the-art results on three challenging datasets: HMDB51 (61.1%), UCF50 (92.3%), and UCF101 (87.9%).
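The five BoVW steps listed above can be sketched end to end in a few lines. This is a minimal illustration, not the authors' implementation: it uses synthetic arrays in place of real video descriptors, a small k-means codebook, hard-assignment histogram encoding, and sum pooling with L1 normalization; all sizes and parameters are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# (i) feature extraction: stand-in for local video descriptors,
# one (n_descriptors, dim) array per video.
videos = [rng.normal(size=(200, 32)) for _ in range(5)]

# (ii) feature pre-processing: L2-normalize each local descriptor.
def l2_rows(x):
    return x / (np.linalg.norm(x, axis=1, keepdims=True) + 1e-12)

videos = [l2_rows(v) for v in videos]

# (iii) codebook generation: k-means over descriptors pooled from all videos.
k = 16
codebook = KMeans(n_clusters=k, n_init=10, random_state=0).fit(np.vstack(videos))

# (iv) feature encoding (hard assignment to the nearest visual word)
# + (v) sum pooling and L1 normalization into one global histogram per video.
def encode(video):
    words = codebook.predict(video)
    hist = np.bincount(words, minlength=k).astype(float)
    return hist / hist.sum()

reps = np.array([encode(v) for v in videos])
print(reps.shape)  # one k-dimensional global representation per video
```

In the paper's framework, steps (iv) and (v) are the axes of variation: the hard-assignment histogram here would be swapped for one of the ten studied encodings (e.g. Fisher vectors), and the L1 sum pooling for one of the eight pooling/normalization strategies.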

Xiaojiang Peng, Limin Wang, Xingxing Wang, Yu Qiao• 2014

Related benchmarks

Task                        Dataset                           Metric           Result  Rank
Action Recognition          UCF101                            Accuracy         87.9    365
Action Recognition          UCF101 (mean of 3 splits)         Accuracy         87.9    357
Action Recognition          UCF101 (test)                     Accuracy         87.9    307
Action Recognition          HMDB51 (test)                     Accuracy         61.1    249
Action Recognition          HMDB51                            3-Fold Accuracy  61.1    191
Video Classification        UCF101 (3-split average)          Accuracy         87.9    41
Action Recognition          UCF-101                           3-Fold Accuracy  87.9    32
Video Classification        HMDB-51                           Top-1 Accuracy   61.1    29
Action Similarity Labeling  ASLAN                             Accuracy         68.7    9
Activity Recognition        UCF101 (average across 3 splits)  Mean Accuracy    87.9    5
