
Balanced Rate-Distortion Optimization in Learned Image Compression

About

Learned image compression (LIC) using deep learning architectures has seen significant advancements, yet standard rate-distortion (R-D) optimization often encounters imbalanced updates due to diverse gradients of the rate and distortion objectives. This imbalance can lead to suboptimal optimization, where one objective dominates, thereby reducing overall compression efficiency. To address this challenge, we reformulate R-D optimization as a multi-objective optimization (MOO) problem and introduce two balanced R-D optimization strategies that adaptively adjust gradient updates to achieve more equitable improvements in both rate and distortion. The first proposed strategy utilizes a coarse-to-fine gradient descent approach along standard R-D optimization trajectories, making it particularly suitable for training LIC models from scratch. The second proposed strategy analytically addresses the reformulated optimization as a quadratic programming problem with an equality constraint, which is ideal for fine-tuning existing models. Experimental results demonstrate that both proposed methods enhance the R-D performance of LIC models, achieving around a 2% BD-Rate reduction with acceptable additional training cost, leading to a more balanced and efficient optimization process. Code will be available at https://gitlab.com/viper-purdue/Balanced-RD.
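To illustrate the general principle behind balancing two gradients in a two-objective MOO setting, the sketch below computes the closed-form min-norm convex combination of a rate gradient and a distortion gradient (the two-task MGDA solution). This is a generic illustration under our own assumptions, not the paper's exact algorithm; the function name `balanced_direction` is hypothetical.

```python
import numpy as np

def balanced_direction(g_rate, g_dist):
    """Min-norm convex combination of two gradients (two-task MGDA case).

    Returns g = a*g_rate + (1-a)*g_dist with a in [0, 1] chosen to
    minimize ||g||^2, so neither objective's gradient dominates the
    update direction. NOTE: a generic MOO sketch, not the paper's method.
    """
    diff = g_rate - g_dist
    denom = float(diff @ diff)
    if denom == 0.0:
        # Gradients are identical: any convex weight gives the same result.
        return g_rate.copy()
    # Minimizer of ||a*g_rate + (1-a)*g_dist||^2 over a, then clipped to [0, 1].
    a = float((g_dist - g_rate) @ g_dist) / denom
    a = min(1.0, max(0.0, a))
    return a * g_rate + (1.0 - a) * g_dist
```

For orthogonal unit gradients the result is their midpoint, i.e. equal weight on rate and distortion; when one gradient is a shorter scaled copy of the other, clipping selects the shorter one.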

Yichi Zhang, Zhihao Duan, Yuning Huang, Fengqing Zhu • 2025

Related benchmarks

Task               Dataset    Result         Rank
Image Compression  Kodak      --             50
Image Compression  Tecnick    --             36
Image Compression  CLIC 2022  BD-Rate -1.87  6

Other info

Code
