
VCT: A Video Compression Transformer

About

We show how transformers can be used to vastly simplify neural video compression. Previous methods rely on a growing number of architectural biases and priors, including motion prediction and warping operations, resulting in complex models. Instead, we independently map input frames to representations and use a transformer to model their dependencies, letting it predict the distribution of future representations given the past. The resulting video compression transformer outperforms previous methods on standard video compression datasets. Experiments on synthetic data show that our model learns to handle complex motion patterns such as panning, blurring, and fading purely from data. Our approach is easy to implement, and we release code to facilitate future research.
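The core idea above — encode each frame independently to a discrete representation, then let a temporal model predict the distribution of the next representation given past ones, so that well-predicted content costs few bits under entropy coding — can be illustrated with a toy sketch. This is not the authors' code: the "encoder" is a simple quantizer and the "transformer" is a smoothed histogram standing in for a learned conditional model; all names here are illustrative.

```python
import math

def frame_to_tokens(frame, num_bins=4):
    """Stand-in "image encoder": quantize values in [0, 1) to discrete tokens."""
    return [min(int(p * num_bins), num_bins - 1) for p in frame]

def predict_next_dist(past_tokens, num_bins=4, smoothing=1.0):
    """Stand-in for the transformer: a smoothed histogram over past tokens.
    A real model would condition on the past with causal attention."""
    counts = [smoothing] * num_bins
    for t in past_tokens:
        counts[t] += 1
    total = sum(counts)
    return [c / total for c in counts]

def bits_to_code(frames):
    """Total bits to entropy-code each frame's tokens under the model:
    each token t with predicted probability p costs -log2(p) bits."""
    bits = 0.0
    past = []
    for frame in frames:
        tokens = frame_to_tokens(frame)
        dist = predict_next_dist(past)
        bits += sum(-math.log2(dist[t]) for t in tokens)
        past.extend(tokens)
    return bits

# A static clip becomes cheap to code: after the first frame, the model
# assigns high probability to the repeated tokens. Unpredictable content
# stays near the uniform-distribution cost.
static = [[0.1, 0.2, 0.1, 0.2]] * 3
noisy = [[0.1, 0.6, 0.3, 0.9], [0.8, 0.2, 0.95, 0.4], [0.5, 0.05, 0.7, 0.3]]
print(bits_to_code(static) < bits_to_code(noisy))  # True
```

The key point the sketch captures is that no motion estimation or warping appears anywhere: temporal redundancy is exploited purely by the conditional distribution the predictor assigns to the next representation.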

Fabian Mentzer, George Toderici, David Minnen, Sung-Jin Hwang, Sergi Caelles, Mario Lucic, Eirikur Agustsson • 2022

Related benchmarks

Task               | Dataset         | Metric                | Result | Rank
Video Compression  | MCL-JCV         | BD-Rate (PSNR)        | -17.03 | 60
Video Compression  | UVG             | BD-Rate (PSNR)        | -34.28 | 49
Video Compression  | UVG (test)      | BD-Bitrate (PSNR)     | 65.49  | 30
Video Compression  | MCL-JCV (test)  | BD-Bitrate (PSNR)     | 44.92  | 26
Video Compression  | 1080p videos    | Encoding Latency (s)  | 1.564  | 14

Other info

Code
