# E-CIR: Event-Enhanced Continuous Intensity Recovery

## About
A camera begins to sense light the moment we press the shutter button. During the exposure interval, relative motion between the scene and the camera causes motion blur, a common undesirable visual artifact. This paper presents E-CIR, which converts a blurry image into a sharp video represented as a parametric function from time to intensity. E-CIR leverages events as an auxiliary input. We discuss how to exploit the temporal event structure to construct the parametric bases. We demonstrate how to train a deep learning model to predict the function coefficients. To improve the appearance consistency, we further introduce a refinement module to propagate visual features among consecutive frames. Compared to state-of-the-art event-enhanced deblurring approaches, E-CIR generates smoother and more realistic results. The implementation of E-CIR is available at https://github.com/chensong1995/E-CIR.
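The core idea above can be sketched as follows: each pixel's intensity over the exposure is a parametric function of time, and the network predicts its coefficients over a set of temporal bases. The sketch below is a minimal illustration, not E-CIR's actual implementation: it assumes a simple monomial basis `phi_k(t) = t^k` (the paper derives its bases from the temporal event structure), and the `reconstruct_frames` helper and its shapes are hypothetical.

```python
import numpy as np

def reconstruct_frames(coeffs, timestamps):
    """Evaluate per-pixel polynomial intensity functions.

    coeffs: (K, H, W) array of per-pixel basis coefficients
            (in E-CIR these would be predicted by the network).
    timestamps: (T,) array of query times in [0, 1].
    Returns: (T, H, W) array of reconstructed sharp frames.
    """
    K = coeffs.shape[0]
    # Monomial basis matrix, shape (T, K): basis[t, k] = t**k.
    # The actual paper constructs bases from event timestamps instead.
    basis = np.stack([timestamps ** k for k in range(K)], axis=1)
    # Contract over the basis dimension: one frame per query time.
    return np.einsum('tk,khw->thw', basis, coeffs)

# Toy example: 3 coefficients per pixel on a 2x2 image.
coeffs = np.zeros((3, 2, 2))
coeffs[0] = 0.5          # constant brightness everywhere
coeffs[1, 0, 0] = 0.2    # linear ramp at one pixel
frames = reconstruct_frames(coeffs, np.linspace(0.0, 1.0, 5))
print(frames.shape)
```

Querying the function at many timestamps yields the sharp video; averaging those frames over the exposure would approximate the original blurry input, which is the constraint that ties the representation to the observed image.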
## Related benchmarks
| Task | Dataset | Metric | Value | Rank |
|---|---|---|---|---|
| Motion Deblurring | MVSEC (single-frame prediction) | PSNR | 27.792 | 11 |
| Motion Deblurring | DSEC-large (single-frame prediction) | PSNR | 21.46 | 11 |
| Motion Deblurring | MVSEC | PSNR | 26.773 | 10 |
| Motion Deblurring | StEIC | PSNR | 20.706 | 10 |
| Motion Deblurring | StEIC (single-frame prediction) | PSNR | 22.076 | 10 |
| Motion Deblurring | DSEC-large | PSNR | 21.089 | 10 |
| Video Deblurring | REDS (val) | MSE | 0.114 | 4 |