
Progressively Guide to Attend: An Iterative Alignment Framework for Temporal Sentence Grounding

About

A key to temporal sentence grounding (TSG) lies in learning effective alignment between vision and language features extracted from an untrimmed video and a sentence description. Existing methods mainly leverage vanilla soft attention to perform the alignment in a single step. However, such single-step attention is insufficient in practice, since complicated inter- and intra-modality relations usually require multi-step reasoning. In this paper, we propose an Iterative Alignment Network (IA-Net) for the TSG task, which iteratively interacts inter- and intra-modal features over multiple steps for more accurate grounding. Specifically, during the iterative reasoning process, we pad multi-modal features with learnable parameters to alleviate the nowhere-to-attend problem of non-matched frame-word pairs, and enhance the basic co-attention mechanism in a parallel manner. To further calibrate the misaligned attention caused by each reasoning step, we also devise a calibration module following each attention module to refine the alignment knowledge. With such an iterative alignment scheme, IA-Net can robustly capture fine-grained relations between the vision and language domains step by step, progressively reasoning about the temporal boundaries. Extensive experiments on three challenging benchmarks demonstrate that our proposed model outperforms state-of-the-art methods.
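The "nowhere-to-attend" fix described above can be illustrated with a minimal sketch: append a learnable padding vector to the attended set so that a frame with no matching word can route its attention mass to the pad slot instead of being forced to spread it over unrelated words. This is a simplified NumPy illustration assuming scaled dot-product attention; the function and variable names are hypothetical, and the paper's actual module additionally runs parallel co-attention with a calibration step.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def padded_cross_attention(video, words, pad_vec):
    """Frame-to-word attention with one extra 'pad' slot.

    video:   (T, d) frame features
    words:   (N, d) word features
    pad_vec: (d,)   learnable padding vector (a fixed array here)

    Each frame attends over the N words plus the pad slot, so
    non-matched frames can assign weight to the pad instead of
    to irrelevant words.
    """
    keys = np.vstack([words, pad_vec])                  # (N+1, d)
    scores = video @ keys.T / np.sqrt(video.shape[1])   # (T, N+1)
    attn = softmax(scores, axis=-1)                     # rows sum to 1
    return attn @ keys, attn                            # attended feats, weights

# Toy example: 4 frames, 3 words, 8-dim features.
rng = np.random.default_rng(0)
V = rng.normal(size=(4, 8))
W = rng.normal(size=(3, 8))
pad = np.zeros(8)
out, attn = padded_cross_attention(V, W, pad)
assert out.shape == (4, 8) and attn.shape == (4, 4)
assert np.allclose(attn.sum(axis=-1), 1.0)
```

In a trained model the pad vector would be a learned parameter (and the attention run in both frame-to-word and word-to-frame directions); zeros are used here only to keep the sketch self-contained.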

Daizong Liu, Xiaoye Qu, Pan Zhou • 2021

Related benchmarks

| Task            | Dataset              | Metric          | Result | Rank |
|-----------------|----------------------|-----------------|--------|------|
| Video Grounding | Charades-STA         | R@1 (IoU=0.5)   | 63.98  | 113  |
| Video Grounding | TACoS                | R@1 (IoU=0.5)   | 32.27  | 45   |
| Video Grounding | ActivityNet Captions | R@1 (IoU=0.5)   | 51.87  | 43   |
