
What if Agents Could Imagine? Reinforcing Open-Vocabulary HOI Comprehension through Generation

About

Multimodal Large Language Models have shown promising capabilities in bridging visual and textual reasoning, yet their reasoning in Open-Vocabulary Human-Object Interaction (OV-HOI) remains limited by cross-modal hallucinations and occlusion-induced ambiguity. To address this, we propose ImagineAgent, an agentic framework that harmonizes cognitive reasoning with generative imagination for robust visual understanding. Specifically, our method constructs cognitive maps that explicitly model plausible relationships between detected entities and candidate actions. It then dynamically invokes tools, including retrieval augmentation, image cropping, and diffusion models, to gather domain-specific knowledge and enriched visual evidence, thereby achieving cross-modal alignment in ambiguous scenarios. Moreover, we propose a composite reward that balances prediction accuracy and tool efficiency. Evaluations on the SWIG-HOI and HICO-DET datasets demonstrate state-of-the-art performance while requiring approximately 20% of the training data used by existing methods, validating the robustness and efficiency of our approach.
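The abstract mentions a composite reward balancing prediction accuracy and tool efficiency. The paper's exact formulation is not given here, so the following is only a minimal sketch under assumed choices: a binary accuracy term with a linear per-tool-call penalty, weighted by a hypothetical coefficient `lambda_cost`.

```python
def composite_reward(correct: bool, num_tool_calls: int,
                     lambda_cost: float = 0.1) -> float:
    """Sketch of a composite reward: accuracy term minus a tool-use penalty.

    The linear form and the 0.1 weighting are illustrative assumptions,
    not the formulation from the ImagineAgent paper.
    """
    accuracy_term = 1.0 if correct else 0.0
    efficiency_penalty = lambda_cost * num_tool_calls
    return accuracy_term - efficiency_penalty


# A correct prediction that invoked three tools earns less than one that
# needed none, which is the trade-off the abstract describes.
reward_with_tools = composite_reward(correct=True, num_tool_calls=3)
reward_no_tools = composite_reward(correct=True, num_tool_calls=0)
```

Under this assumed form, an agent is rewarded for being right but discouraged from invoking tools it does not need, which matches the stated goal of balancing accuracy against tool efficiency.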

Zhenlong Yuan, Xiangyan Qu, Jing Tang, Rui Chen, Lei Sun, Ruidong Chen, Hongwei Yu, Chengxuan Qian, Xiangxiang Chu, Shuo Li, Yuyin Zhou • 2026

Related benchmarks

Task                                Dataset                   Result            Rank
Human-Object Interaction Detection  HICO-DET (test)           -                 493
Human-Object Interaction Detection  SWIG-HOI Non-rare (test)  mAP 22.89         11
Human-Object Interaction Detection  SWIG-HOI Rare (test)      mAP 18.02         11
Human-Object Interaction Detection  HICO-DET (Unseen)         mAP 29.71         10
Human-Object Interaction Detection  HICO-DET (Full)           mAP (Full) 28.96  10
Human-Object Interaction Detection  SWIG-HOI Unseen (test)    mAP 12.53         9
Human-Object Interaction Detection  SWIG-HOI (Full)           mAP 17.75         8
