
Agglomerative Transformer for Human-Object Interaction Detection

About

We propose an agglomerative Transformer (AGER) that, for the first time, enables Transformer-based human-object interaction (HOI) detectors to flexibly exploit extra instance-level cues in a single-stage, end-to-end manner. AGER acquires instance tokens by dynamically clustering patch tokens and aligning cluster centers to instances with textual guidance, which brings two benefits. 1) Integrality: each instance token is encouraged to cover all discriminative feature regions of an instance, which significantly improves the extraction of different instance-level cues and leads to new state-of-the-art HOI detection performance of 36.75 mAP on HICO-Det. 2) Efficiency: the dynamic clustering mechanism allows AGER to generate instance tokens jointly with the feature learning of the Transformer encoder, eliminating the need for the additional object detector or instance decoder used in prior methods and thus enabling the extraction of desirable extra cues for HOI detection in a single-stage, end-to-end pipeline. Concretely, AGER reduces GFLOPs by 8.5% and improves FPS by 36%, even compared to a vanilla DETR-like pipeline without extra cue extraction.
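The core idea above — soft-assigning patch tokens to cluster centers and updating each center as the weighted mean of its patches, so that a center gathers all regions belonging to one instance — can be sketched as follows. This is a minimal conceptual illustration, not the authors' implementation: the function name, the fixed iteration count, and the plain dot-product similarity are assumptions, and the paper's textual guidance and joint training with the encoder are omitted.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def cluster_patch_tokens(patch_tokens, centers, iters=3):
    """Illustrative dynamic clustering of patch tokens into instance tokens.

    patch_tokens: (N, D) patch features from the Transformer encoder.
    centers:      (K, D) initial cluster centers (one per instance slot).

    Each iteration soft-assigns every patch to the centers by scaled
    dot-product similarity, then updates each center as the
    assignment-weighted mean of the patches, so a center is encouraged
    to aggregate all feature regions of "its" instance.
    """
    d = patch_tokens.shape[1]
    for _ in range(iters):
        sim = patch_tokens @ centers.T / np.sqrt(d)     # (N, K) similarities
        assign = softmax(sim, axis=1)                   # soft assignment per patch
        # Normalize per center so each center is a convex combination of patches.
        weights = assign / (assign.sum(axis=0, keepdims=True) + 1e-8)
        centers = weights.T @ patch_tokens              # (K, D) updated centers
    return centers, assign
```

In an actual detector these updates would be differentiable attention-style layers inside the encoder rather than a NumPy loop; the sketch only shows why no separate object detector or instance decoder is needed to obtain instance-level tokens.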

Danyang Tu, Wei Sun, Guangtao Zhai, Wei Shen • 2023

Related benchmarks

Task | Dataset | Result | Rank
--- | --- | --- | ---
Human-Object Interaction Detection | HICO-DET (test) | mAP (Full): 36.75 | 493
Human-Object Interaction Detection | V-COCO (test) | AP (Role, Scenario 1): 65.7 | 270
Human-Object Interaction Detection | HICO-DET | mAP (Full): 36.75 | 233
Human-Object Interaction Detection | HICO-DET Known Object (test) | mAP (Full): 39.84 | 112
Human-Object Interaction Detection | V-COCO | AP^1 Role: 65.7 | 65
HOI Detection | V-COCO | AP Role 1: 65.7 | 40
HOI Detection | HICO-DET | mAP (Default Full): 36.75 | 21
