
Deep multi-scale video prediction beyond mean square error

About

Learning to predict future images from a video sequence involves the construction of an internal representation that models the image evolution accurately, and therefore, to some degree, its content and dynamics. This is why pixel-space video prediction may be viewed as a promising avenue for unsupervised feature learning. In addition, while optical flow has been a widely studied problem in computer vision for a long time, future frame prediction is rarely approached. Still, many vision applications could benefit from knowledge of the next frames of a video, which does not require the complexity of tracking every pixel's trajectory. In this work, we train a convolutional network to generate future frames given an input sequence. To deal with the inherently blurry predictions obtained from the standard Mean Squared Error (MSE) loss function, we propose three different and complementary feature learning strategies: a multi-scale architecture, an adversarial training method, and an image gradient difference loss function. We compare our predictions to different published results based on recurrent neural networks on the UCF101 dataset.
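The image gradient difference loss mentioned above penalizes the mismatch between the spatial gradients of the predicted and target frames, which sharpens edges that a plain MSE loss tends to blur. A minimal NumPy sketch of that idea (the function name and `alpha` default are illustrative, not taken from the paper's code):

```python
import numpy as np

def gradient_difference_loss(pred, target, alpha=1.0):
    """Gradient difference loss (GDL) between two 2-D images.

    Compares the absolute horizontal and vertical intensity
    differences of the prediction against those of the target,
    raised to the power alpha, and sums the penalties.
    """
    # Vertical (row-to-row) gradients of each image
    dy_pred = np.abs(np.diff(pred, axis=0))
    dy_true = np.abs(np.diff(target, axis=0))
    # Horizontal (column-to-column) gradients of each image
    dx_pred = np.abs(np.diff(pred, axis=1))
    dx_true = np.abs(np.diff(target, axis=1))
    # Penalize gradient mismatch rather than raw pixel mismatch
    return (np.abs(dy_pred - dy_true) ** alpha).sum() + \
           (np.abs(dx_pred - dx_true) ** alpha).sum()
```

In practice this term is combined with the pixel-wise loss and the adversarial term; on its own it is zero for any prediction whose edges match the target's, even under a constant brightness offset.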

Michael Mathieu, Camille Couprie, Yann LeCun · 2015

Related benchmarks

| Task                        | Dataset                                        | Metric         | Result | Rank |
|-----------------------------|------------------------------------------------|----------------|--------|------|
| Video Interpolation         | UCF-101 (test)                                 | PSNR           | 32.8   | 65   |
| Video Prediction            | UCF Sports t+1 (test)                          | PSNR           | 26.42  | 32   |
| Video Prediction            | Caltech Pedestrian 10 -> 1 (test)              | SSIM           | 0.847  | 31   |
| Next-frame prediction       | Caltech Pedestrian, transfer from KITTI (test) | SSIM           | 84.7   | 29   |
| Video Prediction            | UCF Sports 4 frames -> 6 frames                | PSNR           | 26.42  | 22   |
| Video Prediction            | MMNIST                                         | MSE            | 0.0275 | 12   |
| Precipitation nowcasting    | HKO-7 (test)                                   | CSI (r >= 0.5) | 51.12  | 12   |
| Next-frame prediction       | Caltech Pedestrian (test)                      | SSIM           | 88.1   | 10   |
| Next-frame video prediction | VanHateren (train)                             | PSNR (dB)      | 31.16  | 7    |
| Video Prediction            | DAVIS (train)                                  | PSNR           | 25.78  | 7    |
Showing 10 of 17 rows
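Most of the benchmark results above are reported as peak signal-to-noise ratio (PSNR) in decibels, computed from the mean squared error between the predicted and ground-truth frames. A minimal sketch, assuming 8-bit images with a peak value of 255 (the helper name is illustrative):

```python
import numpy as np

def psnr(reference, estimate, data_range=255.0):
    """Peak signal-to-noise ratio in dB between two images.

    Higher is better; identical images give infinite PSNR.
    data_range is the maximum possible pixel value.
    """
    diff = reference.astype(np.float64) - estimate.astype(np.float64)
    mse = np.mean(diff ** 2)
    if mse == 0:
        return float('inf')
    return 10.0 * np.log10(data_range ** 2 / mse)
```

SSIM, the other metric in the table, additionally compares local luminance, contrast, and structure rather than raw pixel error, so the two metrics often rank methods differently.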