NeMF: Neural Motion Fields for Kinematic Animation
About
We present an implicit neural representation to learn the spatio-temporal space of kinematic motions. Unlike previous work that represents motion as discrete sequential samples, we propose to express the vast motion space as a continuous function over time, hence the name Neural Motion Fields (NeMF). Specifically, we use a neural network to learn this function for miscellaneous sets of motions, designed as a generative model conditioned on a temporal coordinate $t$ and a random vector $z$ that controls the style. The model is then trained as a Variational Autoencoder (VAE) with motion encoders to sample the latent space. We train our model on a diverse human motion dataset and a quadruped dataset to demonstrate its versatility, and finally deploy it as a generic motion prior to solve task-agnostic problems, showing its superiority in various motion generation and editing applications such as motion interpolation, in-betweening, and re-navigating. More details can be found on our project page: https://cs.yale.edu/homes/che/projects/nemf/.
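The core idea above — a single network mapping a temporal coordinate $t$ and a style latent $z$ to a pose, so that a whole clip is sampled by sweeping $t$ while holding $z$ fixed — can be sketched as follows. This is a minimal illustration, not the paper's architecture: the layer sizes, latent dimension, and pose dimension are hypothetical, and a real implementation would use a trained VAE decoder rather than random weights.

```python
import numpy as np

rng = np.random.default_rng(0)

class NeuralMotionField:
    """Toy continuous motion field f(t, z) -> pose.

    Sizes are illustrative only (not NeMF's actual network):
    an 8-d style latent, one hidden layer, and a 24-d pose vector.
    """
    def __init__(self, latent_dim=8, hidden_dim=32, pose_dim=24):
        self.W1 = rng.standard_normal((1 + latent_dim, hidden_dim)) * 0.1
        self.b1 = np.zeros(hidden_dim)
        self.W2 = rng.standard_normal((hidden_dim, pose_dim)) * 0.1
        self.b2 = np.zeros(pose_dim)

    def __call__(self, t, z):
        # Condition on the temporal coordinate and the style latent jointly.
        x = np.concatenate([[t], z])
        h = np.tanh(x @ self.W1 + self.b1)
        return h @ self.W2 + self.b2  # pose at continuous time t

field = NeuralMotionField()
z = rng.standard_normal(8)  # one style vector shared across the whole clip
# Because f is continuous in t, any frame rate can be sampled from the same z.
poses = np.stack([field(t, z) for t in np.linspace(0.0, 1.0, 60)])
print(poses.shape)  # (60, 24): a 60-frame motion decoded from one latent
```

In a VAE setup, a motion encoder would produce the distribution over $z$ during training; at test time, sampling or optimizing $z$ while querying arbitrary values of $t$ is what enables applications like in-betweening and interpolation.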
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Motion In-betweening | LaFAN1 (test) | L2Q | 0.18 | 77 |
| Motion clips in-betweening | Motion in-betweening dataset (test) | FID | 0.024 | 15 |
| Sparse keyframe in-betweening | AIST++ | FID | 0.085 | 12 |
| Motion Reconstruction | AMASS (test) | MRE | 5.988 | 3 |
| Motion Synthesis | AMASS (test) | FID | 6.508 | 3 |