
CASHEW: Stabilizing Multimodal Reasoning via Iterative Trajectory Aggregation

About

Vision-language models achieve strong performance across a wide range of multimodal understanding and reasoning tasks, yet their multi-step reasoning remains unstable. Repeated sampling over the same input often produces divergent reasoning trajectories and inconsistent final predictions. To address this, we introduce two complementary approaches inspired by test-time scaling: (1) CASHEW, an inference-time framework that stabilizes reasoning by iteratively aggregating multiple candidate trajectories into higher-quality reasoning traces, with explicit visual verification filtering hallucinated steps and grounding reasoning in visual evidence, and (2) CASHEW-RL, a learned variant that internalizes this aggregation behavior within a single model. CASHEW-RL is trained using Group Sequence Policy Optimization (GSPO) with a composite reward that encourages correct answers grounded in minimal yet sufficient visual evidence, while adaptively allocating reasoning effort based on task difficulty. This training objective enables robust self-aggregation at inference. Extensive experiments on 13 image understanding, video understanding, and video reasoning benchmarks show significant performance improvements, including gains of up to +23.6 percentage points on ScienceQA and +8.1 percentage points on EgoSchema.
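The inference-time loop described above can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the function names (`sample_trajectory`, `visually_verify`, `aggregate`) are hypothetical placeholders, and stubs stand in for the vision-language model and the visual verifier. The structure — sample several candidate trajectories, filter steps that fail visual verification, merge the survivors into a higher-quality trace, and repeat — follows the abstract's description of CASHEW.

```python
import random

def sample_trajectory(prompt, seed):
    """Stub for one sampled reasoning trajectory (a list of step strings).
    In the real system this would be a VLM decoding pass over the input."""
    rng = random.Random(seed)
    return [f"step-{i}-{rng.randint(0, 9)}" for i in range(3)]

def visually_verify(step):
    """Stub visual verifier: keep only steps grounded in visual evidence.
    Here we arbitrarily treat steps ending in '-7' as hallucinated."""
    return not step.endswith("-7")

def aggregate(trajectories):
    """Merge candidate traces into one trace: at each position, drop
    unverified steps and keep the most common surviving step."""
    merged = []
    for steps in zip(*trajectories):
        kept = [s for s in steps if visually_verify(s)]
        if kept:
            merged.append(max(set(kept), key=kept.count))
    return merged

def cashew(prompt, n_candidates=4, n_rounds=2):
    """Iterative trajectory aggregation: each round re-samples candidates
    and aggregates them together with the current best trace."""
    trace = sample_trajectory(prompt, seed=0)
    for r in range(n_rounds):
        candidates = [sample_trajectory(prompt, seed=r * 10 + k)
                      for k in range(n_candidates)]
        trace = aggregate(candidates + [trace[:3]] if len(trace) == 3
                          else candidates)
    return trace

trace = cashew("What object is on the table?")
```

In the real framework the verifier consults the image or video itself, and CASHEW-RL trains the model (via GSPO with a composite reward) to perform this aggregation internally in a single pass rather than through repeated sampling.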

Chaoyu Li, Deeparghya Dutta Barua, Fei Tao, Pooyan Fazli • 2026

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Object Hallucination Evaluation | POPE | Accuracy 90.2 | 935 |
| Multimodal Evaluation | MME | -- | 557 |
| Multimodal Question Answering | ScienceQA | Accuracy 97.8 | 35 |
| Image Understanding | SEED-Bench Image | Accuracy 0.808 | 20 |
