
HyperE2VID: Improving Event-Based Video Reconstruction via Hypernetworks

About

Event-based cameras are becoming increasingly popular for their ability to capture high-speed motion with low latency and high dynamic range. However, generating videos from events remains challenging due to the highly sparse and varying nature of event data. To address this, in this study, we propose HyperE2VID, a dynamic neural network architecture for event-based video reconstruction. Our approach uses hypernetworks to generate per-pixel adaptive filters guided by a context fusion module that combines information from event voxel grids and previously reconstructed intensity images. We also employ a curriculum learning strategy to train the network more robustly. Our comprehensive experimental evaluations across various benchmark datasets reveal that HyperE2VID not only surpasses current state-of-the-art methods in terms of reconstruction quality but also achieves this with fewer parameters, reduced computational requirements, and accelerated inference times.
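To make the core idea concrete, below is a minimal NumPy sketch of per-pixel dynamic filtering driven by a hypernetwork. This is an illustration, not the paper's implementation: the hypernetwork is reduced to a single random linear map, the fused context is random data standing in for the output of the context fusion module, and the softmax normalisation of each filter is an assumption rather than a detail taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

H, W, K = 8, 8, 3   # spatial size and kernel size (toy values)
C_CTX = 4           # channels of the fused context

# Fused context: in HyperE2VID this comes from a context fusion module
# combining the event voxel grid with the previously reconstructed frame.
# Here it is random data for illustration only.
context = rng.standard_normal((H, W, C_CTX))

# "Hypernetwork": a single linear map from each pixel's context vector to
# that pixel's K*K filter weights. (The real model uses a learned network;
# this random matrix is a stand-in.)
W_hyper = rng.standard_normal((C_CTX, K * K)) * 0.1
filters = (context @ W_hyper).reshape(H, W, K, K)  # one filter per pixel

# Softmax-normalise each per-pixel filter so its weights sum to 1
# (a common choice for dynamic filtering; an assumption here).
filters = np.exp(filters - filters.max(axis=(2, 3), keepdims=True))
filters /= filters.sum(axis=(2, 3), keepdims=True)

# Apply the per-pixel filters to a feature map (dynamic convolution).
feat = rng.standard_normal((H, W))
pad = K // 2
padded = np.pad(feat, pad, mode="edge")
out = np.empty((H, W))
for i in range(H):
    for j in range(W):
        out[i, j] = np.sum(padded[i:i + K, j:j + K] * filters[i, j])

print(out.shape)  # one adaptively filtered value per pixel
```

The key contrast with a standard convolution is that the filter weights here vary per pixel and are regenerated for every input, which is what lets the network adapt to the sparse, spatially varying statistics of event data.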

Burak Ercan, Onur Eker, Canberk Saglam, Aykut Erdem, Erkut Erdem • 2023

Related benchmarks

Task                                   | Dataset                 | Result      | Rank
Video Reconstruction                   | HQF                     | MSE 0.031   | 38
Video Reconstruction                   | MVSEC                   | MSE 0.076   | 22
Event-based video frame reconstruction | Real-world              | MSE 0.0632  | 10
Event-based video frame reconstruction | Synthetic               | MSE 0.0727  | 10
Video Frame Prediction                 | BS-ERGB 1 frame (test)  | PSNR 18.79  | 10
Video Frame Prediction                 | BS-ERGB 3 frames (test) | PSNR 18.78  | 10
Video Frame Reconstruction             | Synthetic               | FID 224.4   | 10
Video Frame Prediction                 | HS-ERGB 7 frames (test) | PSNR 18.2   | 10
Video Frame Prediction                 | GoPro 7 frames          | PSNR 9.72   | 10
Video Frame Prediction                 | GoPro 15 frames         | PSNR 9.62   | 10

Showing 10 of 22 rows.

Other info

Code
