
Mesh Graphormer

About

We present a graph-convolution-reinforced transformer, named Mesh Graphormer, for 3D human pose and mesh reconstruction from a single image. Recently, both transformers and graph convolutional neural networks (GCNNs) have shown promising progress in human mesh reconstruction. Transformer-based approaches are effective at modeling non-local interactions among 3D mesh vertices and body joints, whereas GCNNs are good at exploiting neighborhood vertex interactions based on a pre-specified mesh topology. In this paper, we study how to combine graph convolutions and self-attention in a transformer to model both local and global interactions. Experimental results show that our proposed method, Mesh Graphormer, significantly outperforms previous state-of-the-art methods on multiple benchmarks, including the Human3.6M, 3DPW, and FreiHAND datasets. Code and pre-trained models are available at https://github.com/microsoft/MeshGraphormer.
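The core idea in the abstract — self-attention for global (non-local) interactions among vertices and joints, graph convolution for local interactions over the pre-specified mesh topology — can be sketched in a few lines of NumPy. This is an illustrative single-head block under simplifying assumptions (no multi-head attention, no layer norm, made-up parameter names), not the paper's actual implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    # Global interactions: every vertex/joint token attends to every other.
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    return softmax(scores) @ V

def graph_conv(X, A, Wg):
    # Local interactions: aggregate features only over mesh-topology
    # neighbors, using the symmetrically normalized adjacency matrix.
    A_hat = A + np.eye(A.shape[0])                  # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return d_inv_sqrt @ A_hat @ d_inv_sqrt @ X @ Wg

def graphormer_block(X, A, params):
    # A self-attention sub-layer followed by a graph-convolution
    # sub-layer, each with a residual connection.
    h = X + self_attention(X, *params["attn"])
    return h + graph_conv(h, A, params["graph"])

# Toy usage: 5 vertex tokens with 4-dim features on a ring-shaped mesh.
rng = np.random.default_rng(0)
N, d = 5, 4
X = rng.standard_normal((N, d))
A = np.zeros((N, N))
for i in range(N):                                  # ring adjacency
    A[i, (i + 1) % N] = A[(i + 1) % N, i] = 1.0
params = {
    "attn": tuple(rng.standard_normal((d, d)) for _ in range(3)),
    "graph": rng.standard_normal((d, d)),
}
out = graphormer_block(X, A, params)                # shape (5, 4) preserved
```

The residual structure keeps the token shape unchanged, so such blocks can be stacked like ordinary transformer layers, with the adjacency matrix fixed by the mesh topology throughout.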

Kevin Lin, Lijuan Wang, Zicheng Liu • 2021

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| 3D Human Pose Estimation | Human3.6M (test) | -- | -- | 547 |
| 3D Human Pose Estimation | 3DPW (test) | PA-MPJPE | 45.6 | 505 |
| 3D Human Mesh Recovery | 3DPW (test) | PA-MPJPE | 45.6 | 264 |
| 3D Human Pose Estimation | Human3.6M | MPJPE | 51.2 | 160 |
| 3D Human Pose and Shape Estimation | 3DPW (test) | MPJPE-PA | 45.6 | 158 |
| 3D Hand Reconstruction | FreiHAND (test) | F@15mm | 98.7 | 148 |
| Human Mesh Recovery | 3DPW | PA-MPJPE | 45.6 | 123 |
| 3D Human Mesh Recovery | Human3.6M (test) | PA-MPJPE | 34.5 | 120 |
| 3D Human Pose and Shape Estimation | Human3.6M (test) | PA-MPJPE | 34.5 | 119 |
| 3D Human Pose Estimation | 3DPW | PA-MPJPE | 45.6 | 119 |
Showing 10 of 50 rows

Other info

Code
