Learning Compositional Representation for 4D Captures with Neural ODE
About
Learning-based representations have become key to the success of many computer vision systems. While many 3D representations have been proposed, how to represent a dynamically changing 3D object remains an open problem. In this paper, we introduce a compositional representation for 4D captures, i.e., a deforming 3D object over a temporal span, that disentangles shape, initial state, and motion. Each component is represented by a latent code produced by a trained encoder. To model the motion, a neural Ordinary Differential Equation (ODE) is trained to update the initial state conditioned on the learned motion code, and a decoder takes the shape code and the updated state code to reconstruct the 3D model at each time stamp. To encourage the network to effectively decouple these components, we propose an Identity Exchange Training (IET) strategy. Extensive experiments demonstrate that the proposed method outperforms state-of-the-art deep-learning-based methods on 4D reconstruction and yields significant improvements on downstream tasks, including motion transfer and motion completion.
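The pipeline the abstract describes — separate encoders for shape, initial state, and motion, a neural ODE that advances the state conditioned on the motion code, and a decoder that combines shape and state — can be sketched as below. This is a minimal NumPy illustration, not the paper's implementation: the latent dimensions are hypothetical, random linear maps stand in for trained networks, and a fixed-step Euler loop stands in for a proper ODE solver.

```python
import numpy as np

rng = np.random.default_rng(0)

def net(in_dim, out_dim):
    """Tiny random linear-plus-tanh map standing in for a trained network."""
    W = rng.standard_normal((in_dim, out_dim)) * 0.1
    return lambda x: np.tanh(x @ W)

# Hypothetical dimensions (not from the paper).
D_IN, D_SHAPE, D_STATE, D_MOTION, D_OUT = 16, 8, 8, 8, 32

shape_enc  = net(D_IN, D_SHAPE)               # shape (identity) code
state_enc  = net(D_IN, D_STATE)               # initial-state code
motion_enc = net(D_IN, D_MOTION)              # motion code
ode_fn     = net(D_STATE + D_MOTION, D_STATE) # ds/dt, conditioned on motion
decoder    = net(D_SHAPE + D_STATE, D_OUT)    # 3D model at time t (placeholder)

def integrate_state(s0, m, t, steps=20):
    """Euler integration of the neural ODE ds/dt = f(s, m) from 0 to t."""
    s, dt = s0, t / steps
    for _ in range(steps):
        s = s + dt * ode_fn(np.concatenate([s, m]))
    return s

# Fake input features for one 4D capture.
x = rng.standard_normal(D_IN)
c_shape, s0, c_motion = shape_enc(x), state_enc(x), motion_enc(x)

# Reconstruct at several time stamps by advancing the state code forward.
recons = [decoder(np.concatenate([c_shape, integrate_state(s0, c_motion, t)]))
          for t in (0.0, 0.5, 1.0)]
print([r.shape for r in recons])
```

Because the shape code is fixed while only the state code is integrated, swapping the motion code between two captures (as in IET) would, in a trained model, transfer one subject's motion onto another's shape.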
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Dynamic Human Body Modeling | D-FAUST 5 (Seen Individual) | Chamfer Distance | 0.068 | 18 |
| Dynamic Human Body Modeling | D-FAUST 5 (Unseen Individual) | IoU | 6.99e+3 | 12 |
| 4D Reconstruction | D-FAUST S1: Seen Individuals, Unseen Motions (test) | Chamfer Distance (x10^-3) | 166.7 | 7 |
| 4D Reconstruction | D-FAUST S2: Unseen Individuals, Seen Motions (test) | Chamfer Distance | 2.22e-4 | 7 |
| Motion Retargeting | CAPE (test) | PA-MPJPE | 52.2 | 5 |
| Shape and Motion Recovery | CAPE (test) | PA-MPJPE | 49.8 | 5 |
| Future Prediction | CAPE | PA-MPJPE | 91.9 | 4 |
| 4D Reconstruction | CAPE (test) | IoU | 62.9 | 3 |
| Future Prediction | CAPE (test) | IoU | 64 | 3 |
| Motion Completion | CAPE (test) | IoU | 76.6 | 3 |