BCNet: Learning Body and Cloth Shape from A Single Image
About
In this paper, we consider the problem of automatically reconstructing garment and body shapes from a single near-front-view RGB image. To this end, we propose a layered garment representation on top of SMPL and, as a novel contribution, make the skinning weights of the garment independent of the body mesh, which significantly improves the expressive power of our garment model. Compared with existing methods, our method supports more garment categories and recovers more accurate geometry. To train our model, we construct two large-scale datasets with ground-truth body and garment geometries as well as paired color images. Compared with single-mesh or non-parametric representations, our method achieves more flexible control with separate meshes, enabling applications such as re-posing, garment transfer, and garment texture mapping. Code and some data are available at https://github.com/jby1993/BCNet.
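The core idea above, posing a garment mesh with its own skinning weights rather than weights copied from the nearest body vertices, boils down to standard linear blend skinning applied to the garment layer. A minimal numpy sketch (the function name and array shapes are illustrative, not the authors' API; in BCNet the garment weights would come from a learned predictor):

```python
import numpy as np

def linear_blend_skinning(verts, weights, joint_transforms):
    """Pose a mesh with linear blend skinning (LBS).

    Each vertex is moved by a per-vertex blend of the rigid joint
    transforms. For a garment layer, `weights` can be predicted
    independently of the body mesh, as proposed in the paper.

    verts:            (V, 3) rest-pose vertex positions
    weights:          (V, J) skinning weights, each row sums to 1
    joint_transforms: (J, 4, 4) homogeneous world transform per joint
    returns:          (V, 3) posed vertex positions
    """
    V = verts.shape[0]
    # Homogeneous coordinates: append a 1 to every vertex.
    homo = np.concatenate([verts, np.ones((V, 1))], axis=1)          # (V, 4)
    # Blend the 4x4 joint transforms per vertex:
    # (V, J) @ (J, 16) -> (V, 16) -> (V, 4, 4)
    blended = (weights @ joint_transforms.reshape(-1, 16)).reshape(V, 4, 4)
    # Apply each vertex's blended transform to that vertex.
    posed = np.einsum('vij,vj->vi', blended, homo)                   # (V, 4)
    return posed[:, :3]
```

Because the garment has its own weight matrix, a garment vertex hovering between two body parts can blend joints freely instead of inheriting the weights of a single nearby body vertex, which is what enables the flexible re-posing and garment transfer mentioned above.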
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| 3D clothed human reconstruction | 3DPW | Chamfer Distance: 118.8 | 6 |
| 3D Mesh Reconstruction | BUFF (rough A-pose) | Upper Error: 1.07 | 5 |
| Single-view 3D shape reconstruction | 3D garment dataset | Chamfer Dist.: 4.69 | 4 |
| 3D Garment Reconstruction | Synthetic Sequence Female1 | CD (cm): 3.184 | 4 |
| 3D Garment Reconstruction | Synthetic Sequence Female3 | CD (cm): 3.447 | 4 |
| 3D Garment Reconstruction | Synthetic Sequence Male1 | CD (cm): 2.929 | 4 |
| 3D Garment Reconstruction | Synthetic Sequence Male2 | CD (cm): 5.234 | 4 |
| 3D Mesh Reconstruction | Digital Wardrobe | Upper Component Error: 1.44 | 3 |
| 3D clothed human reconstruction | MSCOCO | Accuracy (Upper Body): 41.5 | 3 |