
VSGNet: Spatial Attention Network for Detecting Human Object Interactions Using Graph Convolutions

About

Comprehensive visual understanding requires detection frameworks that can effectively learn and utilize object interactions while also analyzing objects individually. This is the main objective of the Human-Object Interaction (HOI) detection task. In particular, relative spatial reasoning and structural connections between objects are essential cues for analyzing interactions, which the proposed Visual-Spatial-Graph Network (VSGNet) architecture addresses. VSGNet extracts visual features from human-object pairs, refines those features with the spatial configuration of each pair, and utilizes the structural connections between the pair via graph convolutions. The performance of VSGNet is thoroughly evaluated on the Verbs in COCO (V-COCO) and HICO-DET datasets. Experimental results indicate that VSGNet outperforms state-of-the-art solutions by 8% (4 mAP) on V-COCO and 16% (3 mAP) on HICO-DET.
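The three-stage pipeline described above (visual features per human-object pair, spatial refinement, then graph convolution) can be sketched in a few lines of NumPy. This is a toy illustration under assumptions, not the authors' implementation: the feature dimension, the sigmoid spatial-attention scalar, and the single graph-convolution step over a two-node human-object graph are all simplifications chosen for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 8  # toy feature dimension (illustrative, not from the paper)

# Visual branch: per-box features for the human and the object
# (stand-ins for CNN region features).
h_feat = rng.standard_normal(D)
o_feat = rng.standard_normal(D)

# Spatial branch: an attention factor derived from the pair's spatial
# configuration; here just a sigmoid over a 2-D relative offset.
rel_offset = np.array([0.3, -0.1])
spatial_attention = 1.0 / (1.0 + np.exp(-rel_offset.sum()))

# Refine the visual features with the spatial attention (scaling).
h_refined = spatial_attention * h_feat
o_refined = spatial_attention * o_feat

# Graph branch: one graph-convolution step, ReLU(A_hat @ X @ W),
# over the fully connected 2-node human-object graph with self-loops.
X = np.stack([h_refined, o_refined])      # node features, shape (2, D)
A = np.ones((2, 2))                       # adjacency incl. self-loops
deg = A.sum(axis=1)
A_hat = A / np.sqrt(np.outer(deg, deg))   # symmetric normalization
W = rng.standard_normal((D, D)) * 0.1     # learnable weights (random here)
X_out = np.maximum(A_hat @ X @ W, 0.0)    # interaction-aware node features

print(X_out.shape)  # (2, 8)
```

In the full model these refined, graph-propagated pair features would feed an interaction classifier; here the sketch only shows how the spatial attention modulates the visual features before message passing.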

Oytun Ulutan, A S M Iftekhar, B. S. Manjunath • 2020

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Human-Object Interaction Detection | HICO-DET (test) | mAP (full) | 19.8 | 493 |
| Human-Object Interaction Detection | V-COCO (test) | AP (Role, Scenario 1) | 51.8 | 270 |
| Human-Object Interaction Detection | HICO-DET | mAP (Full) | 19.8 | 233 |
| Human-Object Interaction Detection | V-COCO 1.0 (test) | AP_role (#1) | 51.8 | 76 |
| HOI Detection | HICO-DET (test) | Box mAP (Full) | 19.8 | 32 |
| Human-Object Interaction Detection | V-COCO | Box mAP (Scenario 1) | 51.8 | 32 |
| HOI Detection | HICO-DET v1.0 (test) | mAP (Default, Full) | 19.8 | 29 |
| Human-Object Interaction Detection | V-COCO | AP (Role) | 51.8 | 23 |
| Human-Object Interaction Detection | V-COCO 8 (test) | AP Role (S1) | 51.8 | 11 |
| HOI Detection | V-COCO (test) | Scenario 1 | 51.8 | 10 |
