VisualDeltas: Learning Preferences from Visual Quality Perturbations
About
We present VisualDeltas, a lightweight preference-learning framework that extracts supervision from visual quality variations in multimodal data. By leveraging the systematic impact of image quality on visual perception and reasoning, VisualDeltas induces informative preference signals without relying on human annotations or external teachers. The framework supports both label-free and label-based regimes, enabling flexible use of available supervision when present. Across diverse multimodal benchmarks and model scales, VisualDeltas consistently outperforms rejection-sampling fine-tuning and improves generalization, and extends naturally to a range of visual degradations.
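The core idea above — turning controlled image degradations into preference pairs — can be sketched as follows. This is a minimal, hypothetical illustration (the function names, the specific noise degradation, and the `model_answer` callable are assumptions, not the paper's actual implementation): a response conditioned on the clean image is treated as preferred over the same model's response conditioned on a degraded copy, yielding a label-free preference signal.

```python
import numpy as np

def degrade(image: np.ndarray, noise_std: float = 25.0, seed: int = 0) -> np.ndarray:
    """Apply additive Gaussian noise as one example of a visual quality
    perturbation (the framework extends to a range of degradations)."""
    rng = np.random.default_rng(seed)
    noisy = image.astype(np.float64) + rng.normal(0.0, noise_std, size=image.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)

def make_preference_pair(image: np.ndarray, model_answer) -> dict:
    """Build a label-free preference pair: the answer from the clean image
    is 'chosen', the answer from the degraded image is 'rejected'.
    `model_answer` is a hypothetical callable wrapping a multimodal model."""
    chosen = model_answer(image)
    rejected = model_answer(degrade(image))
    return {"chosen": chosen, "rejected": rejected}
```

These pairs can then be fed to any standard preference-optimization objective; when ground-truth labels are available, they can additionally filter or re-rank the pairs (the label-based regime).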
Hailiang Huang, Yihao Liu, Shengyue Guan, Haoze Li, Sujian Li • 2026
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Visual Question Answering | GQA | Accuracy | 47.5 | 505 |
| Table Question Answering | HiTab | Accuracy | 71.91 | 121 |
| Table Question Answering | WikiTQ | Accuracy | 69.9 | 118 |
| Visual Question Answering | VQA | Accuracy | 68.2 | 52 |
| Mathematical Visual Question Answering | MathVision | Accuracy | 25.66 | 34 |