
Detecting Any Human-Object Interaction Relationship: Universal HOI Detector with Spatial Prompt Learning on Foundation Models

About

Human-object interaction (HOI) detection aims to comprehend the intricate relationships between humans and objects, predicting ⟨human, action, object⟩ triplets, and serves as the foundation for numerous computer vision tasks. The complexity and diversity of human-object interactions in the real world, however, pose significant challenges for both annotation and recognition, particularly for recognizing interactions in an open-world context. This study explores universal interaction recognition in an open-world setting through the use of vision-language (VL) foundation models and large language models (LLMs). The proposed method is dubbed UniHOI. We conduct a deep analysis of the three hierarchical features inherent in visual HOI detectors and propose a method for high-level relation extraction aimed at VL foundation models, which we call HO prompt-based learning. Our design includes an HO Prompt-guided Decoder (HOPD), which associates the high-level relation representations in the foundation model with the various HO pairs in the image. Furthermore, we use an LLM (i.e., GPT) for interaction interpretation, generating a richer linguistic understanding of complex HOIs. For open-category interaction recognition, our method supports either of two input types: an interaction phrase or an interpretive sentence. Our efficient architecture design and learning methods effectively unleash the potential of VL foundation models and LLMs, allowing UniHOI to surpass all existing methods by a substantial margin, under both supervised and zero-shot settings. The code and pre-trained weights are available at: https://github.com/Caoyichao/UniHOI
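The open-category recognition described above — matching each detected human-object pair against free-form interaction phrases or interpretive sentences — can be sketched as a cosine-similarity match between HO-pair embeddings and text embeddings from a VL foundation model. The function name, embedding dimension, and random features below are hypothetical illustration, not the authors' implementation:

```python
import numpy as np

def score_interactions(ho_pair_feats: np.ndarray, phrase_embs: np.ndarray) -> np.ndarray:
    """Cosine-similarity scores between each human-object pair embedding
    and each candidate interaction-phrase embedding.

    ho_pair_feats: (num_pairs, dim) visual embeddings of HO pairs
    phrase_embs:   (num_phrases, dim) text embeddings of interaction phrases
    returns:       (num_pairs, num_phrases) similarity matrix
    """
    # L2-normalize both sets so the dot product is cosine similarity
    ho = ho_pair_feats / np.linalg.norm(ho_pair_feats, axis=-1, keepdims=True)
    ph = phrase_embs / np.linalg.norm(phrase_embs, axis=-1, keepdims=True)
    return ho @ ph.T

# Toy example: 2 HO pairs scored against 3 candidate interaction phrases,
# e.g. "riding a bicycle", "holding a cup", "kicking a ball".
rng = np.random.default_rng(0)
pair_feats = rng.normal(size=(2, 8))
phrase_embs = rng.normal(size=(3, 8))
scores = score_interactions(pair_feats, phrase_embs)
best_phrase = scores.argmax(axis=1)  # predicted interaction index per pair
```

In the paper's setting, the text side could equally be an interpretive sentence generated by an LLM rather than a short phrase; the matching mechanism is the same.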

Yichao Cao, Qingfei Tang, Xiu Su, Chen Song, Shan You, Xiaobo Lu, Chang Xu • 2023

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Human-Object Interaction Detection | HICO-DET | mAP (Full) | 40.95 | 233 |
| Human-Object Interaction Detection | HICO-DET Known Object (test) | mAP (Full) | 43.26 | 112 |
| Human-Object Interaction Detection | HICO-DET (Rare First Unseen Combination, RF-UC) | mAP (Full) | 32.27 | 77 |
| Human-Object Interaction Detection | V-COCO 1.0 (test) | AP_role (#1) | 68.05 | 76 |
| Human-Object Interaction Detection | V-COCO | AP^1 Role | 68.1 | 65 |
| Human-Object Interaction Detection | HICO-DET Non-rare First Unseen Composition (NF-UC) | AP (Unseen) | 28.45 | 49 |
| Human-Object Interaction Detection | HICO-DET (NF-UC) | mAP (Full) | 31.79 | 40 |
| Human-Object Interaction Detection | HICO-DET (UO) | mAP (Full) | 31.56 | 31 |
| Human-Object Interaction Detection | HICO-DET (UV) | mAP (Full) | 34.68 | 30 |
| HOI Detection | HICO-DET v1.0 (test) | mAP (Default, Full) | 40.95 | 29 |

(10 of 15 rows shown)
