Decoding the Pulse of Reasoning VLMs in Multi-Image Understanding Tasks
About
Multi-image reasoning remains a significant challenge for vision-language models (VLMs). We investigate a previously overlooked phenomenon: during chain-of-thought (CoT) generation, the text-to-image (T2I) attention of reasoning VLMs exhibits diffuse "pulses" — sporadic, unfocused attention patterns that fail to concentrate on task-relevant images. We further reveal a systematic positional bias in how attention is allocated across images. Motivated by these observations, we propose PulseFocus, a training-free, inference-time method that structures CoT reasoning into interleaved plan/focus blocks with soft attention gating. By forcing the model to explicitly plan which image to examine and then gating decode-time attention toward the referenced image, PulseFocus sharpens attention focus and yields consistent improvements on multi-image benchmarks such as BLINK (+3.7%) and MuirBench (+1.07%).
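To make the gating step concrete, below is a minimal PyTorch sketch of soft T2I attention gating at decode time. It is an illustration under assumptions, not the released implementation: the function name `gate_t2i_attention`, the masks, and the default `gate` strength are hypothetical. The idea is that once a plan block names an image, logits of image tokens belonging to other images are down-weighted before the softmax, which is equivalent to multiplying their post-softmax attention by `gate`.

```python
import torch

def gate_t2i_attention(attn_scores: torch.Tensor,
                       image_token_mask: torch.Tensor,
                       focused_image_mask: torch.Tensor,
                       gate: float = 0.3) -> torch.Tensor:
    """Softly gate decode-time text-to-image attention toward one image.

    attn_scores:        [batch, heads, 1, seq] pre-softmax logits for the
                        current decode step.
    image_token_mask:   [batch, seq] bool, True for any image token.
    focused_image_mask: [batch, seq] bool, True only for tokens of the image
                        named in the current plan block.
    gate:               weight in (0, 1] applied to non-focused image tokens
                        (hypothetical default).
    """
    # Tokens to suppress: image tokens that do not belong to the focused image.
    suppress = image_token_mask & ~focused_image_mask              # [batch, seq]
    # Multiplying post-softmax attention by `gate` is equivalent to adding
    # log(gate) to the pre-softmax logits of the suppressed positions.
    bias = torch.where(suppress,
                       torch.log(torch.tensor(gate)),
                       torch.tensor(0.0))
    return attn_scores + bias.unsqueeze(1).unsqueeze(2)            # broadcast over heads / query


if __name__ == "__main__":
    # Toy check: 8 tokens; tokens 0-3 belong to image 1, 4-5 to image 2, 6-7 are text.
    scores = torch.zeros(1, 2, 1, 8)                               # uniform logits
    img_mask = torch.tensor([[1, 1, 1, 1, 1, 1, 0, 0]], dtype=torch.bool)
    focus_mask = torch.tensor([[1, 1, 1, 1, 0, 0, 0, 0]], dtype=torch.bool)
    gated = gate_t2i_attention(scores, img_mask, focus_mask, gate=0.3)
    print(gated.softmax(dim=-1)[0, 0, 0])                         # mass shifts toward image-1 tokens
```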
Related benchmarks
| Task | Dataset | Result (Accuracy) | Rank |
|---|---|---|---|
| Multi-image Reasoning | MuirBench | 57.88 | 61 |
| Multimodal Reasoning | BLINK | 56.4 | 15 |