
PAWS: Perception of Articulation in the Wild at Scale from Egocentric Videos

About

Articulation perception aims to recover the motion and structure of articulated objects (e.g., drawers and cupboards), and is fundamental to 3D scene understanding in robotics, simulation, and animation. Existing learning-based methods rely heavily on supervised training with high-quality 3D data and manual annotations, limiting scalability and diversity. To address this limitation, we propose PAWS, a method that directly extracts object articulations from hand-object interactions in large-scale in-the-wild egocentric videos. We evaluate our method on public datasets, including HD-EPIC and Arti4D, achieving significant improvements over baselines. We further demonstrate that the extracted articulations benefit downstream tasks, including fine-tuning 3D articulation prediction models and enabling robot manipulation. See the project website at https://aaltoml.github.io/PAWS/.

Yihao Wang, Yang Miao, Wenshuai Zhao, Wenyan Yang, Zihan Wang, Joni Pajarinen, Luc Van Gool, Danda Pani Paudel, Juho Kannala, Xi Wang, Arno Solin • 2026

Related benchmarks

Task                     Dataset            Result                 Rank
Articulation Estimation  HD-EPIC 64 (test)  Match (%): 71.43       5
Articulation Estimation  Arti4D 82 (test)   Match Rate (%): 48.02  4
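The match-rate metric above reports the percentage of predicted articulations that agree with the ground truth. The exact matching criterion is not specified on this page, so the sketch below is a hypothetical illustration: it counts a prediction as a match when the joint type is correct and the predicted axis direction is within an angular threshold of the ground-truth axis (the 15-degree threshold and the dictionary format are assumptions, not taken from the paper).

```python
import math

def axis_angle_deg(a, b):
    """Angle in degrees between two 3D direction vectors (sign-agnostic)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    # abs(): an articulation axis has no preferred sign
    cos = abs(dot) / (na * nb)
    return math.degrees(math.acos(max(-1.0, min(1.0, cos))))

def match_rate(preds, gts, angle_thresh_deg=15.0):
    """Percentage of predictions whose joint type matches the ground truth
    and whose axis lies within angle_thresh_deg of the ground-truth axis.
    The criterion and threshold are illustrative assumptions."""
    matched = 0
    for p, g in zip(preds, gts):
        if (p["type"] == g["type"]
                and axis_angle_deg(p["axis"], g["axis"]) <= angle_thresh_deg):
            matched += 1
    return 100.0 * matched / len(gts)

# Example: one well-estimated revolute axis, one badly off by 90 degrees
preds = [{"type": "revolute", "axis": (0.0, 0.0, 1.0)},
         {"type": "revolute", "axis": (1.0, 0.0, 0.0)}]
gts   = [{"type": "revolute", "axis": (0.0, 0.05, 1.0)},
         {"type": "revolute", "axis": (0.0, 1.0, 0.0)}]
print(match_rate(preds, gts))  # 50.0
```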
