
Hierarchical Modular Network for Video Captioning

About

Video captioning aims to generate natural language descriptions of video content, a task in which representation learning plays a crucial role. Existing methods are mainly developed within the supervised learning framework via word-by-word comparison of the generated caption against the ground-truth text, without fully exploiting linguistic semantics. In this work, we propose a hierarchical modular network to bridge video representations and linguistic semantics at three levels before generating captions. In particular, the hierarchy is composed of: (I) the entity level, which highlights objects that are most likely to be mentioned in captions; (II) the predicate level, which learns actions conditioned on the highlighted objects and is supervised by the predicate in captions; and (III) the sentence level, which learns a global semantic representation and is supervised by the whole caption. Each level is implemented by one module. Extensive experimental results show that the proposed method performs favorably against state-of-the-art models on two widely used benchmarks, achieving CIDEr scores of 104.0 on MSVD and 51.5 on MSR-VTT.
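The three-level hierarchy above can be sketched structurally as follows. This is a minimal illustrative sketch, not the paper's actual implementation: the real modules are learned neural networks with linguistic supervision, whereas the function names, toy features, and simple averaging below are assumptions made purely to show how each level conditions on the one beneath it.

```python
# Structural sketch of the three-level hierarchy (entity -> predicate -> sentence).
# All names and the toy feature values are illustrative assumptions.
from statistics import mean

def entity_module(object_scores, object_features, k=2):
    """Entity level: keep the k objects most likely to appear in the caption."""
    ranked = sorted(zip(object_scores, object_features), reverse=True)
    return [feat for _, feat in ranked[:k]]

def predicate_module(entities, motion_feature):
    """Predicate level: an action representation conditioned on the
    highlighted objects (in the paper, supervised by the caption's predicate)."""
    entity_context = mean(mean(e) for e in entities)
    return [m + entity_context for m in motion_feature]

def sentence_module(predicate_repr, global_feature):
    """Sentence level: a global semantic representation
    (in the paper, supervised by the whole caption)."""
    return [g + p for g, p in zip(global_feature, predicate_repr)]

# Toy end-to-end pass with made-up detection scores and features.
entities = entity_module([0.9, 0.2, 0.7], [[1.0, 0.0], [0.0, 0.0], [0.5, 0.5]])
pred = predicate_module(entities, [0.1, 0.2])
sent = sentence_module(pred, [1.0, 1.0])
```

The point of the sketch is the information flow: each module consumes the output of the level below, mirroring how the paper's predicate module conditions on highlighted entities and the sentence module aggregates everything into a caption-level representation.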

Hanhua Ye, Guorong Li, Yuankai Qi, Shuhui Wang, Qingming Huang, Ming-Hsuan Yang • 2021

Related benchmarks

| Task             | Dataset        | Result      | Rank |
|------------------|----------------|-------------|------|
| Video Captioning | MSVD           | CIDEr 104.0 | 128  |
| Video Captioning | MSR-VTT (test) | CIDEr 51.5  | 121  |
| Video Captioning | MSVD (test)    | CIDEr 104.0 | 111  |
| Video Captioning | MSR-VTT        | CIDEr 51.5  | 101  |
| Video Captioning | MSR-VTT (test) | CIDEr 51.5  | 61   |
