Motion-from-Blur: 3D Shape and Motion Estimation of Motion-blurred Objects in Videos
About
We propose a method for jointly estimating the 3D motion, 3D shape, and appearance of highly motion-blurred objects from a video. To this end, we model the blurred appearance of a fast-moving object in a generative fashion by parametrizing its 3D position, rotation, velocity, acceleration, bounces, shape, and texture over the duration of a predefined time window spanning multiple frames. Using differentiable rendering, we estimate all parameters by minimizing the pixel-wise reprojection error to the input video, backpropagating through a rendering pipeline that accounts for motion blur by averaging the graphics output over short time intervals. For that purpose, we also estimate the camera exposure gap time within the same optimization. To account for abrupt motion changes such as bounces, we model the motion trajectory as a piecewise polynomial and estimate the time of the bounce with sub-frame accuracy. Experiments on established benchmark datasets demonstrate that our method outperforms previous methods for fast-moving object deblurring and 3D reconstruction.
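The two core ideas, rendering motion blur as the average of sharp renders at sub-frame times and modeling the trajectory as a piecewise polynomial joined at a sub-frame bounce time, can be sketched in 1D. This is a minimal illustrative toy, not the paper's differentiable pipeline; the function names, the one-pixel "renderer", and the restitution parameter are all assumptions for the sake of the example:

```python
import numpy as np

def position(t, t_bounce, p0, v0, a, restitution=0.7):
    """Piecewise-quadratic 1D trajectory with one bounce at t_bounce.

    Before the bounce the position follows p0 + v0*t + a*t^2/2; at the
    (sub-frame) bounce time the velocity is reflected and damped, and a
    second polynomial segment continues from the bounce position.
    """
    if t < t_bounce:
        return p0 + v0 * t + 0.5 * a * t**2
    # position and reflected velocity at the bounce (continuity constraint)
    pb = p0 + v0 * t_bounce + 0.5 * a * t_bounce**2
    vb = -(v0 + a * t_bounce) * restitution
    dt = t - t_bounce
    return pb + vb * dt + 0.5 * a * dt**2

def render_blurred(width, t0, t1, n_sub, **traj_kwargs):
    """Motion-blurred 1D 'frame': average of sharp renders over the exposure.

    Each sub-frame render places a one-pixel object at its trajectory
    position; averaging n_sub such renders over [t0, t1] smears the
    object along its path, mimicking exposure-time motion blur.
    """
    img = np.zeros(width)
    for t in np.linspace(t0, t1, n_sub):
        sharp = np.zeros(width)
        x = int(round(position(t, **traj_kwargs)))
        if 0 <= x < width:
            sharp[x] = 1.0
        img += sharp
    return img / n_sub
```

In the actual method the analogous averaging happens inside a differentiable renderer, so the trajectory coefficients and the bounce time can be optimized by gradient descent on the reprojection error; the sketch above only shows the forward model.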
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Fast moving object deblurring | Falling Objects | PSNR | 27.54 | 7 |
| Fast moving object deblurring | TbD-3D Dataset | PSNR | 26.57 | 7 |
| Fast moving object deblurring | TbD Dataset | PSNR | 26.63 | 7 |
| 3D reconstruction of fast moving objects | Synthetic dataset, at most 90° rotation over 3 frames (large rotation) | Translational Error | 20 | 2 |
| 3D reconstruction of fast moving objects | Synthetic dataset, at most 30° rotation over 3 frames (small rotation) | Translational Error | 8.8 | 2 |