
HyTIP: Hybrid Temporal Information Propagation for Masked Conditional Residual Video Coding

About

Most frame-based learned video codecs can be interpreted as recurrent neural networks (RNNs) propagating reference information along the temporal dimension. This work revisits the limitations of the current approaches from an RNN perspective. The output-recurrence methods, which propagate decoded frames, are intuitive but impose dual constraints on the output decoded frames, leading to suboptimal rate-distortion performance. In contrast, the hidden-to-hidden connection approaches, which propagate latent features within the RNN, offer greater flexibility but require large buffer sizes. To address these issues, we propose HyTIP, a learned video coding framework that combines both mechanisms. Our hybrid buffering strategy uses explicit decoded frames and a small number of implicit latent features to achieve competitive coding performance. Experimental results show that our HyTIP outperforms the sole use of either output-recurrence or hidden-to-hidden approaches. Furthermore, it achieves comparable performance to state-of-the-art methods but with a much smaller buffer size, and outperforms VTM 17.0 (Low-delay B) in terms of PSNR-RGB and MS-SSIM-RGB. The source code of HyTIP is available at https://github.com/NYCU-MAPL/HyTIP.
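To make the three propagation styles in the abstract concrete, here is a toy sketch (not the authors' implementation; all functions and mixing weights are illustrative stand-ins) contrasting output recurrence, hidden-to-hidden propagation, and the hybrid buffering idea, with plain Python lists standing in for frames and latent features.

```python
def output_recurrence_step(prev_decoded, current):
    # Output recurrence: propagate only the previously decoded frame
    # (explicit buffer). The decoded frame must serve both display and
    # temporal prediction, the dual constraint the abstract points to.
    return [0.5 * p + 0.5 * c for p, c in zip(prev_decoded, current)]

def hidden_to_hidden_step(hidden, current):
    # Hidden-to-hidden: propagate latent features only (implicit buffer).
    # More flexible, but the hidden state is typically much larger
    # than a single decoded frame, inflating the buffer size.
    return [0.9 * h + 0.1 * c for h, c in zip(hidden, current)]

def hybrid_step(prev_decoded, hidden, current):
    # Hybrid buffering (schematic): combine the explicit decoded frame
    # with a small implicit latent state, then update both.
    decoded = [0.5 * p + 0.25 * h + 0.25 * c
               for p, h, c in zip(prev_decoded, hidden, current)]
    new_hidden = [0.8 * h + 0.2 * d for h, d in zip(hidden, decoded)]
    return decoded, new_hidden
```

The point of the hybrid step is that the implicit state can stay small (a few latent channels) because the explicit decoded frame already carries most of the reference information.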

Yi-Hsin Chen, Yi-Chen Yao, Kuan-Wei Ho, Chun-Hung Wu, Huu-Tai Phung, Martin Benjak, Jörn Ostermann, Wen-Hsiao Peng • 2025

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Video Compression | MCL-JCV | - | 79 |
| Video Compression | UVG | BD-Rate -23.74 | 23 |
| Video Compression | HEVC Class D | BD-Rate -25.28 | 23 |
| Video Compression | HEVC Class B | BD-Rate -15.92 | 23 |
| Video Compression | HEVC Class C | BD-Rate -5.99 | 23 |
| Video Compression | HEVC Class E | BD-Rate 1.57 | 23 |
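As a rough illustration of what the BD-Rate figures in the table mean (average percent bitrate change against an anchor codec at equal quality, negative is better), here is a simplified sketch. The rate-distortion points are hypothetical, and this variant interpolates log-bitrate vs. PSNR piecewise-linearly, whereas the standard Bjøntegaard metric fits cubic polynomials.

```python
import math

def bd_rate_linear(anchor, test):
    """Simplified BD-Rate: average horizontal (log-bitrate) gap between
    two rate-distortion curves over their common quality range.
    Each input is a list of (bitrate_kbps, psnr_db) points."""
    def log_rate_at(points, q):
        # Piecewise-linear interpolation of log(bitrate) at quality q.
        pts = sorted(points, key=lambda t: t[1])
        for (r0, q0), (r1, q1) in zip(pts, pts[1:]):
            if q0 <= q <= q1:
                w = (q - q0) / (q1 - q0)
                return (1 - w) * math.log(r0) + w * math.log(r1)
        raise ValueError("quality out of interpolation range")

    # Integrate over the overlapping PSNR range of the two curves.
    lo = max(min(q for _, q in anchor), min(q for _, q in test))
    hi = min(max(q for _, q in anchor), max(q for _, q in test))
    n = 100
    diffs = [log_rate_at(test, lo + (hi - lo) * i / n)
             - log_rate_at(anchor, lo + (hi - lo) * i / n)
             for i in range(n + 1)]
    avg = sum(diffs) / len(diffs)
    return (math.exp(avg) - 1.0) * 100.0  # percent bitrate change
```

For example, a test codec that needs 20% less bitrate than the anchor at every quality level yields a BD-Rate of -20%, matching the sign convention in the table above.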
