Thinking with Images via Self-Calling Agent
About
Thinking-with-images paradigms have showcased remarkable visual reasoning capability by integrating visual information as dynamic elements into the Chain-of-Thought (CoT). However, optimizing interleaved multimodal CoT (iMCoT) through reinforcement learning remains challenging, as it relies on scarce high-quality reasoning data. In this study, we propose Self-Calling Chain-of-Thought (sCoT), a novel visual reasoning paradigm that reformulates iMCoT as a language-only CoT with self-calling. Specifically, a main agent decomposes the complex visual reasoning task into atomic subtasks and invokes its virtual replicas, i.e., parameter-sharing subagents, to solve them in isolated contexts. sCoT enjoys substantial training effectiveness and efficiency, as it requires no explicit interleaving between modalities. To enhance optimization, sCoT employs group-relative policy optimization to reinforce effective reasoning behaviors. Experiments on HR-Bench 4K show that sCoT improves overall reasoning performance by up to $1.9\%$ with $\sim 75\%$ fewer GPU hours compared to strong baseline approaches. Code is available at https://github.com/YWenxi/think-with-images-through-self-calling.
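The self-calling pattern described above can be sketched in a few lines. This is a minimal illustration, not the repository's implementation: the class and method names (`SelfCallingAgent`, `decompose`, `solve_subtask`) are assumptions, and the "model" here is a toy stub standing in for the shared-parameter VLM.

```python
from typing import Callable, List

class SelfCallingAgent:
    def __init__(self, model: Callable[[List[str]], str]):
        # One shared model: the main agent and its subagents use the same
        # parameters; only their conversation contexts differ.
        self.model = model

    def decompose(self, task: str) -> List[str]:
        # Main agent: split a complex task into atomic subtasks.
        # (Toy stub: split on ';' -- a real agent would prompt the model.)
        return [t.strip() for t in task.split(";") if t.strip()]

    def solve_subtask(self, subtask: str) -> str:
        # Subagent call: a fresh, isolated context containing only the
        # subtask, so the main agent's long context never leaks in.
        isolated_context = [subtask]
        return self.model(isolated_context)

    def run(self, task: str) -> str:
        # Language-only CoT in the main context: decompose, delegate,
        # then aggregate the subagents' answers.
        answers = [self.solve_subtask(s) for s in self.decompose(task)]
        return " | ".join(answers)

# Toy "model": echoes the last message of its (isolated) context.
def toy_model(context: List[str]) -> str:
    return f"answer({context[-1]})"

agent = SelfCallingAgent(toy_model)
print(agent.run("find the red sign; read its text"))
# → answer(find the red sign) | answer(read its text)
```

The key property mirrored here is that each subagent call sees only its own subtask, keeping subcall contexts isolated from the main agent's trace while all calls share one set of model parameters.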
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Hallucination Evaluation | POPE | -- | -- | 132 |
| Optical Character Recognition Evaluation | OCRBench | Score | 0.845 | 46 |
| Visual Grounding | RefCOCO+ | Accuracy @ 0.5 IoU | 81.97 | 20 |
| Visual Grounding | RefCOCOg | Accuracy | 82.96 | 17 |
| Visual Reasoning | V* | Overall Score | 91.6 | 10 |
| Visual Reasoning | HR-Bench-4K | FSP | 0.933 | 7 |
| Visual Reasoning | HR-Bench-8K | FSP | 87 | 7 |