Diffusion-Guided Reconstruction of Everyday Hand-Object Interaction Clips

About

We tackle the task of reconstructing hand-object interactions from short video clips. Given an input video, our approach casts 3D inference as a per-video optimization and recovers a neural 3D representation of the object shape, as well as the time-varying motion and hand articulation. While the input video naturally provides some multi-view cues to guide 3D inference, these are insufficient on their own due to occlusions and limited viewpoint variation. To obtain accurate 3D reconstructions, we augment the multi-view signals with generic data-driven priors. Specifically, we learn a diffusion network to model the conditional distribution of (geometric) renderings of objects conditioned on hand configuration and category label, and leverage it as a prior to guide the novel-view renderings of the reconstructed scene. We empirically evaluate our approach on egocentric videos across 6 object categories and observe significant improvements over prior single-view and multi-view methods. Finally, we demonstrate our system's ability to reconstruct arbitrary clips from YouTube, showing both first- and third-person interactions.
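The diffusion guidance described above can be viewed as score-distillation-style optimization: the scene parameters are updated so that noised novel-view renderings look plausible to the learned denoiser. Below is a minimal, self-contained sketch of that idea, where a toy analytic Gaussian "prior" stands in for the learned diffusion network and an identity map stands in for the differentiable renderer. All names, constants, and the update rule here are illustrative assumptions, not the paper's actual implementation.

```python
import random

# Toy "learned" prior over renderings: N(MU, SIGMA^2). In the real system
# this would be a conditional diffusion network over geometric renderings.
MU, SIGMA = 2.0, 0.5

def eps_pred(x_t, alpha_bar):
    """Closed-form optimal noise prediction for a Gaussian data prior.

    For x_t = sqrt(ab)*x0 + sqrt(1-ab)*eps with x0 ~ N(MU, SIGMA^2), the
    marginal is N(sqrt(ab)*MU, ab*SIGMA^2 + 1 - ab), whose score is
    analytic; eps_hat = -sqrt(1 - ab) * score(x_t).
    """
    var_t = alpha_bar * SIGMA ** 2 + (1.0 - alpha_bar)
    return (1.0 - alpha_bar) ** 0.5 * (x_t - alpha_bar ** 0.5 * MU) / var_t

def sds_step(theta, lr=0.05, rng=random):
    """One score-distillation update: noise the 'rendering' (identity
    renderer here), query the prior's denoiser, and push theta along the
    residual (eps_hat - eps)."""
    alpha_bar = rng.uniform(0.2, 0.98)          # random diffusion time
    eps = rng.gauss(0.0, 1.0)
    x_t = alpha_bar ** 0.5 * theta + (1.0 - alpha_bar) ** 0.5 * eps
    grad = eps_pred(x_t, alpha_bar) - eps       # weighting w(t) set to 1
    return theta - lr * grad

if __name__ == "__main__":
    random.seed(0)
    theta = -3.0                                # initial scene parameter
    for _ in range(4000):
        theta = sds_step(theta)
    print(theta)                                # drifts toward the prior mean MU
```

In expectation, the residual (eps_hat - eps) is proportional to (theta - MU), so repeated updates pull the parameter toward regions the prior considers likely; in the full system the same signal is backpropagated through a neural renderer into the object shape, pose, and hand articulation.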

Yufei Ye, Poorvi Hebbar, Abhinav Gupta, Shubham Tulsiani • 2023

Related benchmarks

Task                            Dataset              Metric                  Result   Rank
3D Hand-Object Reconstruction   HO3D Full View v3    CDr (cm^2)              68.8     5
HOI Reconstruction              HOI4D (test)         F5 Accuracy             62       5
HOI Reconstruction              DexYCB (test)        Contact Distance (cm)   7.2      4
3D Hand-Object Reconstruction   HO3D v3 (train)      MPJPE (mm)              32.3     3
