
QAHOI: Query-Based Anchors for Human-Object Interaction Detection

About

Human-object interaction (HOI) detection, a downstream task of object detection, requires localizing pairs of humans and objects and extracting the semantic relationships between them from an image. Recently, one-stage approaches have become a new trend for this task due to their high efficiency. However, these approaches focus on detecting possible interaction points or filtering human-object pairs, ignoring the variability in the location and size of different objects across spatial scales. To address this problem, we propose a transformer-based method, QAHOI (Query-Based Anchors for Human-Object Interaction detection), which leverages a multi-scale architecture to extract features from different spatial scales and uses query-based anchors to predict all the elements of an HOI instance. We further find that a powerful backbone significantly increases accuracy for QAHOI, and QAHOI with a transformer-based backbone outperforms recent state-of-the-art methods by large margins on the HICO-DET benchmark. The source code is available at https://github.com/cjw2021/QAHOI.
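The core idea above, that each query-based anchor predicts every element of an HOI instance in a single pass, can be sketched in a few lines of NumPy. This is an illustrative toy, not the authors' implementation: the head structure, dimensions, and random weights are assumptions, with only the HICO-DET class counts (80 object classes, 117 verb classes) taken from the benchmark.

```python
import numpy as np

# Hypothetical sketch of QAHOI-style prediction heads: each query-based
# anchor's decoder embedding is mapped to all elements of an HOI instance
# (human box, object box, object class, verb class). Dimensions and the
# random "learned" weights are illustrative assumptions.

rng = np.random.default_rng(0)
num_queries, d_model = 100, 256              # anchors / embedding width (assumed)
num_obj_classes, num_verb_classes = 80, 117  # HICO-DET: 80 objects, 117 verbs

# Decoder output: one embedding per query-based anchor.
embeddings = rng.standard_normal((num_queries, d_model))

def linear_head(x, out_dim, rng):
    """A stand-in for a learned linear projection head."""
    w = rng.standard_normal((x.shape[-1], out_dim)) / np.sqrt(x.shape[-1])
    return x @ w

human_boxes  = linear_head(embeddings, 4, rng)   # (cx, cy, w, h) per anchor
object_boxes = linear_head(embeddings, 4, rng)
obj_logits   = linear_head(embeddings, num_obj_classes, rng)
verb_logits  = linear_head(embeddings, num_verb_classes, rng)

# Each of the 100 anchors yields a complete HOI candidate in one pass,
# so no separate pairing or interaction-point matching stage is needed.
print(human_boxes.shape, object_boxes.shape, obj_logits.shape, verb_logits.shape)
```

This is what makes the method one-stage: pairing humans with objects and classifying the interaction are all read off per anchor, rather than assembled from separately detected parts.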

Junwen Chen, Keiji Yanai • 2021

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Human-Object Interaction Detection | HICO-DET (test) | mAP (Full) | 35.78 | 493 |
| Human-Object Interaction Detection | V-COCO (test) | AP (Role, Scenario 1) | 58.2 | 270 |
| Human-Object Interaction Detection | HICO-DET | mAP (Full) | 35.78 | 233 |
| Human-Object Interaction Detection | HICO-DET Known Object (test) | mAP (Full) | 37.59 | 112 |
| HOI Detection | HICO-DET | mAP (Rare) | 29.8 | 34 |
| HOI Detection | HICO-DET v1 (test) | mAP (Rare) | 29.8 | 24 |
| Human-Object Interaction Detection | HOI-SDC | mAP (Role) | 19.55 | 5 |

Other info

Code
