
Polysemy Deciphering Network for Robust Human-Object Interaction Detection

About

Human-Object Interaction (HOI) detection is important for human-centric scene understanding. Existing works tend to assume that the same verb has similar visual characteristics across different HOI categories, an assumption that ignores the verb's diverse semantic meanings. To address this issue, we propose a novel Polysemy Deciphering Network (PD-Net) that decodes the visual polysemy of verbs for HOI detection in three distinct ways. First, we refine features for HOI detection to be polysemy-aware through two novel modules: Language Prior-guided Channel Attention (LPCA) and Language Prior-based Feature Augmentation (LPFA). LPCA highlights the elements of human and object appearance features that are important for each HOI category to be identified, while LPFA augments human pose and spatial features using language priors, enabling the verb classifiers to receive language hints that reduce intra-class variation for the same verb. Second, we introduce a novel Polysemy-Aware Modal Fusion module (PAMF), which guides PD-Net to make decisions based on the feature types deemed more important according to the language priors. Third, we propose to relieve the verb polysemy problem by sharing verb classifiers among semantically similar HOI categories. Furthermore, to expedite research on the verb polysemy problem, we build a new benchmark dataset named HOI-VerbPolysemy (HOI-VP), which includes common verbs (predicates) that have diverse semantic meanings in the real world. Finally, by deciphering the visual polysemy of verbs, our approach is demonstrated to outperform state-of-the-art methods by significant margins on the HICO-DET, V-COCO, and HOI-VP benchmarks. Code and data are available at https://github.com/MuchHair/PD-Net.
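To make the LPCA idea concrete, below is a minimal PyTorch-style sketch of language prior-guided channel attention. It assumes the language prior for a candidate HOI category is available as a fixed-size embedding of its verb-object phrase (e.g., concatenated word vectors); the class name, dimensions, and MLP structure are illustrative assumptions, not the authors' implementation.

```python
# Sketch of language prior-guided channel attention (LPCA-style).
# The language prior gates appearance-feature channels so that the
# channels relevant to this HOI category's sense of the verb are kept.
import torch
import torch.nn as nn

class LanguagePriorChannelAttention(nn.Module):
    def __init__(self, feat_dim=2048, prior_dim=600, hidden_dim=512):
        super().__init__()
        # Map the language prior to a per-channel gate in [0, 1].
        self.mlp = nn.Sequential(
            nn.Linear(prior_dim, hidden_dim),
            nn.ReLU(inplace=True),
            nn.Linear(hidden_dim, feat_dim),
            nn.Sigmoid(),
        )

    def forward(self, appearance_feat, language_prior):
        # appearance_feat: (B, feat_dim) human or object appearance features
        # language_prior:  (B, prior_dim) embedding of the candidate HOI category
        attention = self.mlp(language_prior)
        return appearance_feat * attention  # polysemy-aware features

# Usage with random tensors standing in for real features and priors:
lpca = LanguagePriorChannelAttention()
feat = torch.randn(4, 2048)   # appearance features for 4 human-object pairs
prior = torch.randn(4, 600)   # e.g., concatenated verb/object word vectors
refined = lpca(feat, prior)   # (4, 2048)
```

The same gating pattern plausibly extends to PAMF, where weights would be produced per feature stream (appearance, pose, spatial) rather than per channel.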

Xubin Zhong, Changxing Ding, Xian Qu, Dacheng Tao • 2020

Related benchmarks

Task                                 Dataset                        Metric                  Result  Rank
Human-Object Interaction Detection   HICO-DET (test)                mAP (Full)              24.78   493
Human-Object Interaction Detection   V-COCO (test)                  AP (Role, Scenario 1)   53.3    270
Human-Object Interaction Detection   HICO-DET                       mAP (Full)              22.37   233
Human-Object Interaction Detection   HICO-DET Known Object (test)   mAP (Full)              26.86   112
Human-Object Interaction Detection   V-COCO 1.0 (test)              --                      --      76
Human-Object Interaction Detection   HICO-DET Zero-Shot             mAP (Default Unseen)    15.95   33
Human-Object Interaction Detection   HOI-VP                         mAP                     63.66   11

Other info

Code: https://github.com/MuchHair/PD-Net
