
PhysPT: Physics-aware Pretrained Transformer for Estimating Human Dynamics from Monocular Videos

About

While current methods have shown promising progress on estimating 3D human motion from monocular videos, their motion estimates are often physically unrealistic because they mainly consider kinematics. In this paper, we introduce Physics-aware Pretrained Transformer (PhysPT), which improves kinematics-based motion estimates and infers motion forces. PhysPT exploits a Transformer encoder-decoder backbone to effectively learn human dynamics in a self-supervised manner. Moreover, it incorporates physics principles governing human motion. Specifically, we build a physics-based body representation and contact force model. We leverage them to impose novel physics-inspired training losses (i.e., force loss, contact loss, and Euler-Lagrange loss), enabling PhysPT to capture physical properties of the human body and the forces it experiences. Experiments demonstrate that, once trained, PhysPT can be directly applied to kinematics-based estimates to significantly enhance their physical plausibility and generate favourable motion forces. Furthermore, we show that these physically meaningful quantities translate into improved accuracy of an important downstream task: human action recognition.
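To make the Euler-Lagrange loss concrete, here is a minimal sketch of how such a physics residual can be scored over a joint-angle trajectory. This is an illustration under simplifying assumptions (a constant mass matrix, central finite differences for accelerations, and contact forces folded into the generalized force term), not the paper's actual formulation; all function names are invented for this example.

```python
import numpy as np

def finite_diff_accel(q, dt):
    """Approximate joint accelerations q'' with central finite differences.
    q: (T, d) joint-angle trajectory sampled at interval dt.
    Returns an (T-2, d) array aligned with frames 1..T-2."""
    return (q[2:] - 2.0 * q[1:-1] + q[:-2]) / dt**2

def euler_lagrange_loss(q, tau, M, h, dt):
    """Mean squared residual of the Euler-Lagrange equation
        M q'' + h = tau
    over a trajectory, where tau collects all generalized forces
    (actuation plus contact) and h collects bias terms such as
    gravity and Coriolis forces.
    q: (T, d) joint angles; tau, h: (T-2, d); M: (d, d) mass matrix."""
    qdd = finite_diff_accel(q, dt)
    residual = qdd @ M.T + h - tau
    return float(np.mean(residual**2))
```

A trajectory that exactly satisfies the dynamics drives the loss to zero; during training, minimizing this residual pushes the estimated motion and forces toward physical consistency.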

Yufei Zhang, Jeffrey O. Kephart, Zijun Cui, Qiang Ji • 2024

Related benchmarks

Task                        Dataset            Metric            Result   Rank
3D Human Pose Estimation    Human3.6M (test)   MPJPE (Average)   52.7     547
3D Human Pose Estimation    Human3.6M          MPJPE             52.7     160
Action Recognition          Penn-Action (test) Accuracy          98       27
3D Human Motion Estimation  3DOH               MJE               53       7
Global Pose Estimation      MotionPRO          MPJPE             56.4     7
Global Motion Recovery      Human3.6M (test)   G-MPJPE           335.7    2

Other info

Code
