
Lossy Common Information in a Learnable Gray-Wyner Network

About

Many computer vision tasks share substantial overlapping information, yet conventional codecs tend to ignore this, leading to redundant and inefficient representations. The Gray-Wyner network, a classical concept from information theory, offers a principled framework for separating common and task-specific information. Inspired by this idea, we develop a learnable three-channel codec that disentangles shared information from task-specific details across multiple vision tasks. We characterize the limits of this approach through the notion of lossy common information, and propose an optimization objective that balances inherent tradeoffs in learning such representations. Through comparisons of three codec architectures on two-task scenarios spanning six vision benchmarks, we demonstrate that our approach substantially reduces redundancy and consistently outperforms independent coding. These results highlight the practical value of revisiting Gray-Wyner theory in modern machine learning contexts, bridging classic information theory with task-driven representation learning.
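To make the idea concrete, here is a minimal numpy sketch of a Gray-Wyner-style three-channel split on toy data. Everything here is illustrative and hypothetical (random linear maps stand in for the learned codecs; the simple energy term stands in for a rate estimate) and is not the paper's actual model or objective: one common channel serves both tasks, while each task keeps a small private channel, and the objective trades distortion against a rate proxy.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two correlated "task" views sharing a common component.
common = rng.normal(size=(128, 4))
x1 = np.hstack([common, rng.normal(size=(128, 2))])  # task-1 input, shape (128, 6)
x2 = np.hstack([common, rng.normal(size=(128, 2))])  # task-2 input, shape (128, 6)

# Hypothetical three-channel codec as fixed random linear maps:
# E_c encodes the common channel from the concatenated pair,
# E_1 / E_2 encode the task-private channels.
E_c = rng.normal(size=(12, 3)) * 0.3
E_1 = rng.normal(size=(6, 2)) * 0.3
E_2 = rng.normal(size=(6, 2)) * 0.3
# D_1 / D_2 reconstruct each task input from (common, private) codes.
D_1 = rng.normal(size=(5, 6)) * 0.3
D_2 = rng.normal(size=(5, 6)) * 0.3

def gray_wyner_objective(x1, x2, lam=0.1):
    """Distortion + lambda * rate-proxy objective (illustrative only)."""
    z_c = np.hstack([x1, x2]) @ E_c   # common channel, sent once
    z_1 = x1 @ E_1                    # task-1 private channel
    z_2 = x2 @ E_2                    # task-2 private channel
    xhat1 = np.hstack([z_c, z_1]) @ D_1
    xhat2 = np.hstack([z_c, z_2]) @ D_2
    distortion = np.mean((x1 - xhat1) ** 2) + np.mean((x2 - xhat2) ** 2)
    # Rate proxy: channel "energy". The common channel is counted once
    # even though it serves both tasks, which is where the saving over
    # independent coding comes from.
    rate = np.mean(z_c ** 2) + np.mean(z_1 ** 2) + np.mean(z_2 ** 2)
    return distortion + lam * rate

loss = gray_wyner_objective(x1, x2)
print(loss)
```

In an actual learnable codec, the linear maps would be trained networks and the rate term an entropy-model estimate; the structure of the objective (two distortions plus three channel rates, with the common channel shared) is what the sketch is meant to convey.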

Anderson de Andrade, Alon Harell, Ivan V. Bajić • 2026

Related benchmarks

Task                   Dataset      Result   Rank
Object Detection       COCO 2017    -        279
Semantic Segmentation  Cityscapes   11.384   16
Keypoint Detection     COCO 2017    20.845   13
