
Pose Priors from Language Models

About

Language is often used to describe physical interaction, yet most 3D human pose estimation methods overlook this rich source of information. We bridge this gap by leveraging large multimodal models (LMMs) as priors for reconstructing contact poses, offering a scalable alternative to methods that rely on human annotations or motion capture data. Our approach extracts contact-relevant descriptors from an LMM and translates them into tractable losses that constrain 3D human pose optimization. Despite its simplicity, the method produces compelling reconstructions for both two-person interaction and self-contact scenarios, accurately capturing the semantics of physical and social interaction. These results demonstrate that LMMs can serve as powerful tools for contact prediction and pose estimation. Our code is publicly available at https://prosepose.github.io.
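The approach above turns LMM outputs into optimization constraints. A minimal illustrative sketch of that idea (not the released implementation; the joint names, descriptor format, and `contact_loss` helper are invented for this example):

```python
import numpy as np

def contact_loss(joints_a, joints_b, contact_pairs):
    """Penalize squared distance between joint pairs an LMM marks as touching.

    joints_a, joints_b: dicts mapping joint names to 3D positions (hypothetical format).
    contact_pairs: list of (joint_in_a, joint_in_b) pairs derived from LMM descriptors.
    """
    return sum(
        float(np.sum((joints_a[ja] - joints_b[jb]) ** 2))
        for ja, jb in contact_pairs
    )

# A descriptor like "person A's left hand touches person B's right shoulder"
# becomes a joint-pair constraint:
pairs = [("left_hand", "right_shoulder")]
A = {"left_hand": np.array([0.1, 1.2, 0.3])}
B = {"right_shoulder": np.array([0.1, 1.2, 0.3])}
# Coincident joints satisfy the constraint, so the loss term is zero.
```

In an optimization loop, a term like this would be added to standard reprojection and pose-prior losses and minimized over the body model's parameters.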

Sanjay Subramanian, Evonne Ng, Lea Müller, Dan Klein, Shiry Ginosar, Trevor Darrell • 2024

Related benchmarks

Task                            Dataset               Metric     Result  Rank
Two-person Pose Refinement      Hi4D 50 (test)        PA-MPJPE   88      6
Two-person Pose Refinement      FlickrCI3D 10 (test)  PA-MPJPE   58      6
Two-person Pose Refinement      CHI3D 10 (val)        PA-MPJPE   69      6
One-person 3D pose refinement   MOYO (test)           PA-MPJPE   82      3
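The benchmarks above report PA-MPJPE (Procrustes-Aligned Mean Per-Joint Position Error; lower is better), which measures joint error after removing global rotation, translation, and scale. A minimal sketch of the metric, assuming joints are given as N×3 arrays:

```python
import numpy as np

def pa_mpjpe(pred, gt):
    """Mean per-joint position error after similarity (Procrustes) alignment.

    pred, gt: (N, 3) arrays of predicted and ground-truth joint positions.
    """
    # Center both joint sets
    P = pred - pred.mean(axis=0)
    G = gt - gt.mean(axis=0)
    # Optimal rotation via SVD (Kabsch/Umeyama)
    U, S, Vt = np.linalg.svd(P.T @ G)
    if np.linalg.det(Vt.T @ U.T) < 0:  # guard against reflections
        Vt[-1] *= -1
        S[-1] *= -1
    R = Vt.T @ U.T
    scale = S.sum() / (P ** 2).sum()   # optimal uniform scale
    aligned = scale * P @ R.T + gt.mean(axis=0)
    return float(np.linalg.norm(aligned - gt, axis=1).mean())
```

By construction, a prediction that differs from the ground truth only by a rotation, translation, and uniform scale scores (numerically) zero.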

Other info

Code
