
Augmented 2D-TAN: A Two-stage Approach for Human-centric Spatio-Temporal Video Grounding

About

We propose an effective two-stage approach to the task of language-based Human-centric Spatio-Temporal Video Grounding (HC-STVG). In the first stage, an Augmented 2D Temporal Adjacent Network (Augmented 2D-TAN) temporally grounds the target moment corresponding to the given description. We improve the original 2D-TAN in two ways: first, a temporal context-aware Bi-LSTM Aggregation Module aggregates clip-level representations, replacing the original max-pooling; second, we apply a Random Concatenation Augmentation (RCA) mechanism during training. In the second stage, we use a pretrained MDETR model to generate per-frame bounding boxes from the language query, and design a set of hand-crafted rules to select, for each frame within the grounded moment, the best-matching bounding box output by MDETR.
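The abstract does not spell out the RCA recipe, but one plausible form of Random Concatenation Augmentation is sketched below: two training videos are concatenated at the clip level, the query of one video is kept, and its temporal annotation is shifted when the other video is prepended. The sample structure (`clips`, `query`, `moment` as clip indices) is an assumption for illustration, not the paper's actual data format.

```python
import random

def random_concat_augment(sample_a, sample_b, rng):
    """Illustrative sketch of Random Concatenation Augmentation (RCA).

    Concatenate the clips of two videos, keep sample_a's query, and
    shift sample_a's temporal annotation when sample_b is prepended.
    The paper's exact RCA recipe may differ; this only shows the idea
    of augmenting temporal grounding data by concatenation.

    Each sample: {"clips": [...], "query": str, "moment": (start, end)},
    with `moment` given as inclusive clip indices.
    """
    if rng.random() < 0.5:
        # Append b after a: a's moment indices are unchanged.
        clips = sample_a["clips"] + sample_b["clips"]
        moment = sample_a["moment"]
    else:
        # Prepend b before a: shift a's moment by len(b's clips).
        offset = len(sample_b["clips"])
        clips = sample_b["clips"] + sample_a["clips"]
        start, end = sample_a["moment"]
        moment = (start + offset, end + offset)
    return {"clips": clips, "query": sample_a["query"], "moment": moment}
```

Because the grounded moment is annotated relative to the concatenated video, the model sees longer videos with more distractor content while the supervision stays consistent.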

Chaolei Tan, Zihang Lin, Jian-Fang Hu, Xiang Li, Wei-Shi Zheng • 2021

Related benchmarks

Task                             Dataset          Metric     Result  Rank
Spatio-Temporal Video Grounding  HCSTVG v2 (val)  m_vIoU     30.4    38
Spatio-Temporal Video Grounding  HC-STVG (val)    Mean vIoU  30.4    19
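The benchmark results above are reported in mean vIoU. As context, here is a minimal sketch of the vIoU metric as it is commonly defined for spatio-temporal grounding: the sum of per-frame box IoUs over the frames where prediction and ground truth temporally intersect, divided by the number of frames in their temporal union. The exact evaluation protocol of the HC-STVG benchmark may differ in detail.

```python
def box_iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def viou(pred_boxes, gt_boxes):
    """vIoU for one video (common spatio-temporal grounding form).

    pred_boxes / gt_boxes map frame index -> box. Per-frame IoU is
    summed over the temporal intersection of the two tracks and
    normalized by the size of their temporal union, so predicting
    frames outside the ground-truth moment is penalized.
    """
    inter_frames = pred_boxes.keys() & gt_boxes.keys()
    union_frames = pred_boxes.keys() | gt_boxes.keys()
    if not union_frames:
        return 0.0
    total = sum(box_iou(pred_boxes[t], gt_boxes[t]) for t in inter_frames)
    return total / len(union_frames)
```

Mean vIoU (m_vIoU) then averages this per-video score over the evaluation set.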
