EgoHandICL: Egocentric 3D Hand Reconstruction with In-Context Learning
About
Robust 3D hand reconstruction in egocentric vision is challenging due to depth ambiguity, self-occlusion, and complex hand-object interactions. Prior methods mitigate these issues by scaling training data or adding auxiliary cues, but they often struggle in unseen contexts. We present EgoHandICL, the first in-context learning (ICL) framework for 3D hand reconstruction that improves semantic alignment, visual consistency, and robustness under challenging egocentric conditions. EgoHandICL introduces complementary exemplar retrieval guided by vision-language models (VLMs), an ICL-tailored tokenizer for multimodal context, and a masked autoencoder (MAE)-based architecture trained with hand-guided geometric and perceptual objectives. Experiments on ARCTIC and EgoExo4D show consistent gains over state-of-the-art methods. We also demonstrate real-world generalization and improve EgoVLM hand-object interaction reasoning by using reconstructed hands as visual prompts. Code and data: https://github.com/Nicous20/EgoHandICL
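The paragraph above names the main components (VLM-guided exemplar retrieval, an ICL context tokenizer, and an MAE-based decoder with hand-guided objectives). The sketch below only illustrates that data flow under assumed interfaces: `retrieve_exemplars`, `ContextTokenizer`, `MAEHandDecoder`, and all dimensions are hypothetical stand-ins, not the released implementation.

```python
# Hypothetical sketch of an ICL-style reconstruction pass:
# retrieve exemplars -> tokenize multimodal context -> decode hand parameters.
import torch
import torch.nn as nn
import torch.nn.functional as F


def retrieve_exemplars(query_emb, bank_embs, k=4):
    """Pick the k most similar exemplars by cosine similarity (stand-in for VLM-guided retrieval)."""
    sims = F.cosine_similarity(query_emb.unsqueeze(0), bank_embs, dim=-1)
    return sims.topk(k).indices


class ContextTokenizer(nn.Module):
    """Projects query/exemplar image features and exemplar hand parameters into one token space."""
    def __init__(self, img_dim=768, hand_dim=61, d_model=256):
        super().__init__()
        self.img_proj = nn.Linear(img_dim, d_model)
        self.hand_proj = nn.Linear(hand_dim, d_model)

    def forward(self, query_feat, exemplar_feats, exemplar_hands):
        tokens = [self.img_proj(query_feat).unsqueeze(1),   # query token
                  self.img_proj(exemplar_feats),            # exemplar image tokens
                  self.hand_proj(exemplar_hands)]           # exemplar hand tokens
        return torch.cat(tokens, dim=1)                     # (B, 1 + 2K, d_model)


class MAEHandDecoder(nn.Module):
    """Transformer over context tokens; regresses MANO-style pose/shape from the query token."""
    def __init__(self, d_model=256, hand_dim=61):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, hand_dim)

    def forward(self, tokens):
        return self.head(self.encoder(tokens)[:, 0])        # predict from the query token


# Toy usage: a batch of 2 queries and a bank of 16 exemplars.
B, K, img_dim, hand_dim = 2, 4, 768, 61
query_feat = torch.randn(B, img_dim)
bank_feats = torch.randn(16, img_dim)
bank_hands = torch.randn(16, hand_dim)

idx = retrieve_exemplars(query_feat[0], bank_feats, k=K)
tokenizer, decoder = ContextTokenizer(img_dim, hand_dim), MAEHandDecoder(hand_dim=hand_dim)
tokens = tokenizer(query_feat,
                   bank_feats[idx].unsqueeze(0).expand(B, -1, -1),
                   bank_hands[idx].unsqueeze(0).expand(B, -1, -1))
pred_hand = decoder(tokens)  # (B, hand_dim) MANO-style parameters
print(pred_hand.shape)
```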
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| 3D Hand Mesh Reconstruction | EgoExo4D (General Setting) | MPJPE (mm) | 21.1 | 5 |
| 3D Hand Mesh Reconstruction | EgoExo4D (Bimanual Setting) | P-MPJPE (mm) | 7.5 | 5 |
| 3D Hand Mesh Reconstruction | ARCTIC (General Setting) | P-MPJPE (mm) | 4 | 5 |
| 3D Hand Mesh Reconstruction | ARCTIC (Bimanual Setting) | P-MPVPE (mm) | 3.7 | 5 |
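For reference, MPJPE is the mean per-joint position error and the `P-` prefix denotes Procrustes (rigid + scale) alignment of the prediction before measuring; the benchmark values are reported in millimetres. A minimal NumPy sketch of both metrics is given below; the helper names are illustrative, not from this repository.

```python
# Sketch of MPJPE and Procrustes-aligned MPJPE (P-MPJPE) for (J, 3) joint arrays in mm.
import numpy as np


def mpjpe(pred, gt):
    """Mean per-joint position error: average Euclidean distance over joints."""
    return np.linalg.norm(pred - gt, axis=-1).mean()


def p_mpjpe(pred, gt):
    """MPJPE after similarity (Procrustes) alignment of the prediction to the ground truth."""
    mu_p, mu_g = pred.mean(0), gt.mean(0)
    p, g = pred - mu_p, gt - mu_g
    # Optimal rotation and scale via SVD (orthogonal Procrustes with scale).
    U, S, Vt = np.linalg.svd(p.T @ g)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:      # avoid reflections
        Vt[-1] *= -1
        S[-1] *= -1
        R = Vt.T @ U.T
    scale = S.sum() / (p ** 2).sum()
    aligned = scale * p @ R.T + mu_g
    return mpjpe(aligned, gt)


# Toy example with 21 hand joints.
gt = np.random.rand(21, 3) * 100
pred = gt + np.random.randn(21, 3) * 5
print(f"MPJPE: {mpjpe(pred, gt):.1f} mm, P-MPJPE: {p_mpjpe(pred, gt):.1f} mm")
```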