
TeHOR: Text-Guided 3D Human and Object Reconstruction with Textures

About

Joint reconstruction of 3D human and object from a single image is an active research area, with pivotal applications in robotics and digital content creation. Despite recent advances, existing approaches suffer from two fundamental limitations. First, their reconstructions rely heavily on physical contact information, which inherently cannot capture non-contact human-object interactions, such as gazing at or pointing toward an object. Second, the reconstruction process is primarily driven by local geometric proximity, neglecting the human and object appearances that provide global context crucial for understanding holistic interactions. To address these issues, we introduce TeHOR, a framework built upon two core designs. First, beyond contact information, our framework leverages text descriptions of human-object interactions to enforce semantic alignment between the 3D reconstruction and its textual cues, enabling reasoning over a wider spectrum of interactions, including non-contact cases. Second, we incorporate appearance cues of the 3D human and object into the alignment process to capture holistic contextual information, thereby ensuring visually plausible reconstructions. As a result, our framework produces accurate and semantically coherent reconstructions, achieving state-of-the-art performance.
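The core idea of the text-guided alignment can be sketched as follows: embed a rendering of the 3D reconstruction and the interaction description in a shared space, then penalize low cosine similarity between the two. This is a minimal illustration only; the encoders below are stand-in random-projection stubs, since TeHOR's actual encoders and loss formulation are not specified here.

```python
import numpy as np

def embed_stub(x: np.ndarray, seed: int) -> np.ndarray:
    """Stand-in for an image/text encoder: fixed random projection + L2 norm.
    (Hypothetical; a real system would use learned encoders in a shared space.)"""
    rng = np.random.default_rng(seed)
    proj = rng.standard_normal((x.size, 8))
    v = x.ravel() @ proj
    return v / np.linalg.norm(v)

def alignment_loss(render: np.ndarray, text_feat: np.ndarray) -> float:
    """1 - cosine similarity between the rendered-view and text embeddings;
    minimizing this pulls the reconstruction toward its textual description."""
    img = embed_stub(render, seed=0)
    txt = embed_stub(text_feat, seed=1)
    return float(1.0 - img @ txt)

render = np.ones((4, 4))          # toy stand-in for a rendered reconstruction
text = np.array([0.2, 0.5, 0.3])  # toy stand-in for encoded description features
loss = alignment_loss(render, text)
print(0.0 <= loss <= 2.0)  # cosine loss lies in [0, 2]
```

Because the similarity is computed in a joint embedding space rather than from contact points, such a loss can in principle supervise non-contact interactions (e.g. gazing or pointing) as well.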

Hyeongjin Nam, Daniel Sungho Jung, Kyoung Mu Lee • 2026

Related benchmarks

Task | Dataset | Metric | Result | Rank
Joint Human and Object Reconstruction | BEHAVE (test) | CD (SMPL) (cm) | 5.241 | 8
Joint 3D human and object reconstruction | Open3DHOI (test) | CD Human | 4.403 | 6
3D human-object reconstruction | Open3DHOI (non-contact scenarios) | CD Human | 4.958 | 5
Appearance-Text Alignment | Open3DHOI (test) | CLIPScore | 0.706 | 5
3D human-object reconstruction | Open3DHOI (seen object categories) | CD (Human) | 2.511 | 2
3D human-object reconstruction | Open3DHOI (whole object categories) | CD (Human) | 2.582 | 2
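The CLIPScore entry in the benchmark list is an image-text similarity metric. A commonly used reference-free definition rescales the clipped cosine similarity of the two CLIP embeddings by a weight w = 2.5, though the leaderboard value may report the raw cosine instead; the sketch below shows the formula on toy embedding vectors, not actual CLIP features.

```python
import numpy as np

def clip_score(img_emb: np.ndarray, txt_emb: np.ndarray, w: float = 2.5) -> float:
    """Rescaled, clipped cosine similarity between an image and a text embedding.
    (Illustrative formula; real CLIPScore uses embeddings from a CLIP model.)"""
    cos = img_emb @ txt_emb / (np.linalg.norm(img_emb) * np.linalg.norm(txt_emb))
    return float(w * max(cos, 0.0))

img = np.array([0.6, 0.8, 0.0])  # toy image embedding (unit norm)
txt = np.array([1.0, 0.0, 0.0])  # toy text embedding (unit norm)
print(clip_score(img, txt))  # ≈ 2.5 * 0.6 = 1.5
```

Clipping at zero keeps the score non-negative, and the rescaling simply spreads typical cosine values over a wider range.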
