
MSTR: Multi-Scale Transformer for End-to-End Human-Object Interaction Detection

About

Human-Object Interaction (HOI) detection is the task of identifying a set of <human, object, interaction> triplets from an image. Recent work proposed transformer encoder-decoder architectures that successfully eliminated the need for many hand-designed components in HOI detection through end-to-end training. However, they are limited to single-scale feature resolution, yielding suboptimal performance in scenes where humans, objects, and their interactions appear at vastly different scales and distances. To tackle this problem, we propose a Multi-Scale TRansformer (MSTR) for HOI detection powered by two novel HOI-aware deformable attention modules, Dual-Entity attention and Entity-conditioned Context attention. While naively applying existing deformable attention to HOI detection incurs a large drop in performance, the proposed attention modules of MSTR learn to effectively attend to sampling points that are essential for identifying interactions. In experiments, MSTR achieves new state-of-the-art performance on two HOI detection benchmarks.
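For readers unfamiliar with the mechanism the abstract builds on, the sketch below illustrates plain multi-scale deformable attention: each query predicts a small set of sampling offsets and weights per feature level, and aggregates bilinearly sampled features around a reference point. This is a minimal, hypothetical simplification for illustration only; class name, dimensions, and the single-head design are assumptions, and it does not reproduce the paper's Dual-Entity or Entity-conditioned Context attention.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DeformableAttentionSketch(nn.Module):
    """Single-head multi-scale deformable attention (illustrative sketch).

    Each query predicts n_points sampling offsets and attention weights
    per feature level, then aggregates bilinearly sampled features.
    """
    def __init__(self, dim, n_levels=2, n_points=4):
        super().__init__()
        self.n_levels, self.n_points = n_levels, n_points
        self.offset_proj = nn.Linear(dim, n_levels * n_points * 2)
        self.weight_proj = nn.Linear(dim, n_levels * n_points)
        self.out_proj = nn.Linear(dim, dim)

    def forward(self, query, ref_points, feats):
        # query: (B, Q, C); ref_points: (B, Q, 2) in [0, 1]
        # feats: list of n_levels feature maps, each (B, C, H_l, W_l)
        B, Q, C = query.shape
        offsets = self.offset_proj(query).view(B, Q, self.n_levels, self.n_points, 2)
        weights = self.weight_proj(query).view(B, Q, self.n_levels * self.n_points)
        weights = weights.softmax(-1).view(B, Q, self.n_levels, self.n_points)
        out = query.new_zeros(B, Q, C)
        for lvl, feat in enumerate(feats):
            # Sampling locations mapped from [0, 1] to grid_sample's [-1, 1] range.
            loc = (ref_points[:, :, None, :] + offsets[:, :, lvl]) * 2 - 1
            sampled = F.grid_sample(feat, loc, align_corners=False)  # (B, C, Q, K)
            out += (sampled * weights[:, :, lvl].unsqueeze(1)).sum(-1).permute(0, 2, 1)
        return self.out_proj(out)
```

The key point motivating MSTR is that the offsets and weights here are learned per query, so an HOI-aware variant can steer sampling toward the human, the object, and the context between them rather than a generic neighborhood.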

Bumsoo Kim, Jonghwan Mun, Kyoung-Woon On, Minchul Shin, Junhyun Lee, Eun-Sol Kim • 2022

Related benchmarks

Task: Human-Object Interaction Detection (all rows)

Dataset                      | Metric                | Result | Rank
HICO-DET (test)              | mAP (Full)            | 31.17  | 493
V-COCO (test)                | AP (Role, Scenario 1) | 62     | 270
HICO-DET                     | mAP (Full)            | 34.02  | 233
HICO-DET Known Object (test) | mAP (Full)            | 34.02  | 112
V-COCO 1.0 (test)            | AP_role (#1)          | 62     | 76
V-COCO                       | AP^1 Role             | 62     | 65
