
CPPO: Contrastive Perception for Vision Language Policy Optimization

About

We introduce CPPO, a Contrastive Perception Policy Optimization method for finetuning vision-language models (VLMs). While reinforcement learning (RL) has advanced reasoning in language models, extending it to multimodal reasoning requires improving both perception and reasoning. Prior works tackle this challenge mainly with explicit perception rewards, but disentangling perception tokens from reasoning tokens is difficult: existing approaches rely on extra LLMs, ground-truth data, forcing the policy model to separate perception from reasoning, or applying rewards indiscriminately to all output tokens. CPPO addresses this problem by detecting perception tokens via entropy shifts in the model's outputs under perturbed input images. CPPO then extends the RL objective with a Contrastive Perception Loss (CPL) that enforces consistency under information-preserving perturbations and sensitivity under information-removing ones. Experiments show that CPPO surpasses previous perception-rewarding methods while avoiding extra models, making training more efficient and scalable.
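A minimal sketch of the two mechanisms described above, assuming a PyTorch policy model that exposes per-token logits for a sampled response under three views of the input image (clean, information-preserving perturbation, information-removing perturbation). All names (`token_entropy`, `detect_perception_tokens`, `contrastive_perception_loss`) and the threshold/margin values are illustrative assumptions, not the paper's implementation:

```python
import torch
import torch.nn.functional as F


def token_entropy(logits: torch.Tensor) -> torch.Tensor:
    """Per-token entropy of the next-token distribution; logits: (seq, vocab)."""
    log_p = F.log_softmax(logits, dim=-1)
    return -(log_p.exp() * log_p).sum(dim=-1)


def detect_perception_tokens(logits_clean: torch.Tensor,
                             logits_perturbed: torch.Tensor,
                             tau: float = 0.5) -> torch.Tensor:
    """Mark tokens whose entropy shifts when the input image is perturbed.

    Tokens whose output distribution depends strongly on the visual input
    are treated as perception tokens; tau is a hypothetical threshold.
    """
    shift = (token_entropy(logits_perturbed) - token_entropy(logits_clean)).abs()
    return shift > tau  # boolean mask over the response tokens


def contrastive_perception_loss(logp_clean: torch.Tensor,
                                logp_preserve: torch.Tensor,
                                logp_remove: torch.Tensor,
                                mask: torch.Tensor,
                                margin: float = 1.0) -> torch.Tensor:
    """Contrastive Perception Loss, applied only on detected perception tokens.

    logp_*: per-token log-probs of the sampled response under each image view.
    Consistency term: stay close under an information-preserving perturbation.
    Sensitivity term: the response should become less likely once visual
    information is removed (hinge with a hypothetical margin).
    """
    if not mask.any():  # no perception tokens detected on this sample
        return logp_clean.new_zeros(())
    consistency = ((logp_clean - logp_preserve) ** 2)[mask].mean()
    sensitivity = F.relu(margin - (logp_clean - logp_remove))[mask].mean()
    return consistency + sensitivity
```

In training, this term would be added to the RL objective with a weighting coefficient (a hypothetical `lambda_cpl`), so the task reward still drives reasoning while the contrastive term regularizes perception.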

Ahmad Rezaei, Mohsen Gholami, Saeed Ranjbar Alvar, Kevin Cannons, Mohammad Asiful Hossain, Zhou Weimin, Shunbo Zhou, Yong Zhang, Mohammad Akbari · 2026

Related benchmarks

Task                      Dataset          Metric    Result  Rank
Mathematical Reasoning    WeMath           Accuracy  44.8    75
Mathematical Reasoning    MathVerse        Accuracy  46.5    39
Visual Logical Reasoning  LogicVista       Accuracy  48.2    28
Mathematical Reasoning    DynaMath         Accuracy  56.9    18
Mathematical Reasoning    MathVista        Accuracy  72.2    18
Mathematical Reasoning    MathVision       Accuracy  29.9    18
Visual Reasoning          MMMU Pro Vision  Accuracy  39.0    18
