
Blind Video Temporal Consistency via Deep Video Prior

About

Applying image processing algorithms independently to each video frame often leads to temporal inconsistency in the resulting video. To address this issue, we present a novel and general approach for blind video temporal consistency. Our method is trained directly on a single pair of original and processed videos rather than on a large dataset. Unlike most previous methods, which enforce temporal consistency with optical flow, we show that temporal consistency can be achieved by training a convolutional network on a video with the Deep Video Prior. Moreover, a carefully designed iteratively reweighted training strategy is proposed to address the challenging multimodal inconsistency problem. We demonstrate the effectiveness of our approach on 7 computer vision tasks on videos. Extensive quantitative and perceptual experiments show that our approach obtains superior performance to state-of-the-art methods on blind video temporal consistency. Our source code is publicly available at github.com/ChenyangLEI/deep-video-prior.
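The iteratively reweighted training strategy mentioned above can be illustrated with a simplified sketch. This is not the paper's implementation: it is a hypothetical pure-Python illustration of one plausible reading of the idea, in which the network produces two candidate outputs (a "main" frame and an "outlier" frame), each pixel is assigned to whichever output is closer to the processed target, and the loss is reweighted accordingly so the main branch learns the dominant mode of a multimodal target.

```python
def l1(a, b):
    """Per-pixel L1 distance between two flattened frames (lists of floats)."""
    return [abs(x - y) for x, y in zip(a, b)]

def irt_loss(pred_main, pred_outlier, target):
    """Simplified iteratively reweighted loss (illustrative sketch only).

    Each pixel gets a hard confidence weight: 1 if the main prediction is
    at least as close to the processed target as the outlier prediction,
    else 0. The weighted sum lets the main branch fit the dominant mode
    while the outlier branch absorbs inconsistent pixels.
    """
    d_main = l1(pred_main, target)
    d_out = l1(pred_outlier, target)
    conf = [1.0 if dm <= do else 0.0 for dm, do in zip(d_main, d_out)]
    per_pixel = [c * dm + (1.0 - c) * do for c, dm, do in zip(conf, d_main, d_out)]
    return sum(per_pixel) / len(per_pixel)
```

For example, with target pixels [0.0, 0.0], a main prediction [0.0, 1.0] and an outlier prediction [1.0, 0.0], each pixel is matched to its closer branch and the loss is 0.0, even though neither branch alone fits both pixels.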

Chenyang Lei, Yazhou Xing, Qifeng Chen · 2020

Related benchmarks

Task                          Dataset                              Result                   Rank
Brightness de-flickering      VFHQ 1.0 (test)                      FVD 14.53                5
Pixel De-flickering           AI-generated videos SD-x4-upscaler   FVD 15.09                5
Video Temporal Consistency    Blind Deflickering Dataset           Dehazing (Ewarp) 0.109   5
