EventNeuS: 3D Mesh Reconstruction from a Single Event Camera
About
Event cameras offer a compelling alternative to RGB cameras in many scenarios. While there are recent works on event-based novel-view synthesis, dense 3D mesh reconstruction remains scarcely explored, and existing event-based techniques are severely limited in their 3D reconstruction accuracy. To address this limitation, we present EventNeuS, a self-supervised neural model for learning 3D representations from monocular colour event streams. Our approach combines, for the first time, signed distance function (SDF) and density field learning with event-based supervision. Furthermore, we introduce spherical harmonics encodings into our model for enhanced handling of view-dependent effects. EventNeuS outperforms existing approaches by a significant margin, achieving on average 34% lower Chamfer distance and 31% lower mean absolute error than the best previous method.
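To make the event-based supervision concrete, below is a minimal sketch of the event-integration loss commonly used in event-based neural rendering: the rendered log-intensity change between two timestamps is compared against the change implied by the accumulated event polarities. The function name, the contrast threshold `C`, and the per-ray batching are illustrative assumptions, not the exact formulation used by EventNeuS.

```python
import torch

def event_supervision_loss(log_intensity_t0, log_intensity_t1, event_sum, C=0.25):
    """Sketch of an event-integration loss (assumed formulation).

    log_intensity_t0/t1: rendered log intensities for a batch of rays at
        timestamps t0 and t1, shape (N,).
    event_sum: signed sum of event polarities per pixel over [t0, t1], shape (N,).
    C: assumed event contrast threshold (sensor-dependent; illustrative value).
    """
    predicted_change = log_intensity_t1 - log_intensity_t0  # change the model renders
    observed_change = C * event_sum                         # change the events imply
    return torch.mean((predicted_change - observed_change) ** 2)
```

The spherical harmonics encoding can likewise be sketched: a fixed basis evaluated on unit view directions, whose features are fed to the colour branch so appearance can vary with viewpoint. A degree-2 real SH basis (9 coefficients) is shown for illustration; the degree actually used by EventNeuS is not stated here.

```python
import torch

def sh_encoding(dirs):
    """Real spherical harmonics basis up to degree 2, evaluated on unit
    view directions; maps (N, 3) -> (N, 9)."""
    x, y, z = dirs[:, 0], dirs[:, 1], dirs[:, 2]
    return torch.stack([
        0.28209479 * torch.ones_like(x),   # l=0
        0.48860251 * y,                    # l=1, m=-1
        0.48860251 * z,                    # l=1, m= 0
        0.48860251 * x,                    # l=1, m= 1
        1.09254843 * x * y,                # l=2, m=-2
        1.09254843 * y * z,                # l=2, m=-1
        0.31539157 * (3.0 * z * z - 1.0),  # l=2, m= 0
        1.09254843 * x * z,                # l=2, m= 1
        0.54627421 * (x * x - y * y),      # l=2, m= 2
    ], dim=-1)
```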
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| 3D surface reconstruction | NeRF synthetic dataset (Chair) | Chamfer distance (CD) | 0.04 | 4 |
| Novel View Synthesis | NeRF synthetic dataset (Mic) | PSNR | 30.57 | 2 |
| Novel View Synthesis | NeRF synthetic dataset (Lego) | PSNR | 24.34 | 2 |
| Novel View Synthesis | NeRF synthetic dataset (Drums) | PSNR | 28.65 | 2 |
| Novel View Synthesis | NeRF synthetic dataset (Chair) | PSNR | 30.94 | 2 |
| Novel View Synthesis | NeRF synthetic dataset (Hotdog) | PSNR | 28.35 | 2 |
| Novel View Synthesis | NeRF synthetic dataset (Average) | PSNR | 28.57 | 2 |
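For reference, the Chamfer distance (CD) reported above is the standard symmetric point-set metric; a minimal sketch follows. The sampling density and any normalization used in the actual evaluation protocol are not specified here, so treat this as the generic formulation only.

```python
import torch

def chamfer_distance(pred_points, gt_points):
    """Symmetric Chamfer distance between two point clouds of shape (N, 3)
    and (M, 3): mean nearest-neighbour distance in both directions."""
    d = torch.cdist(pred_points, gt_points)  # (N, M) pairwise Euclidean distances
    return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()
```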