TikArt: Aperture-Guided Observation for Fine-Grained Visual Reasoning via Reinforcement Learning
About
We address fine-grained visual reasoning in multimodal large language models (MLLMs), where the key evidence may reside in tiny objects, cluttered regions, or subtle markings that are lost under a single global image encoding. We introduce TikArt (Thinking Aperture), an aperture-guided agent that casts multi-step vision-language reasoning as a decision process over regions of interest. TikArt follows a Think-Aperture-Observe loop, alternating between language generation and two aperture actions: Zoom extracts a rectangular crop, while Segment invokes SAM2 to obtain a mask-based crop for irregular targets. After every action, the model must produce an explicit observation, turning local visual cues into persistent linguistic memory. Built on Qwen3-VL-8B, TikArt optimizes its reasoning policy with AGRPO, a GRPO-style reinforcement learning algorithm with a two-stage curriculum: it first warms up the segmentation action, then jointly optimizes visual math, fine-grained VQA, and segmentation with rewards that couple task success with purposeful aperture use. Experiments on V*, HR-Bench-4K/8K, MME-RealWorld-Lite, MMStar, RefCOCO, and ReasonSeg show consistent gains over the backbone, and the learned policy produces interpretable aperture trajectories for high-resolution reasoning.
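To make the control flow concrete, here is a minimal Python sketch of the Think-Aperture-Observe loop. The `ApertureStep` record, the `policy.next_step` / `policy.describe` methods, the `segmenter.predict` SAM2 wrapper, and the `mask_to_crop` helper are all hypothetical names for illustration, not the released TikArt interfaces:

```python
from dataclasses import dataclass
from typing import Any, List, Tuple

import numpy as np

# Hypothetical action record for illustration; the paper does not specify
# TikArt's actual action serialization.
@dataclass
class ApertureStep:
    action: str                      # "zoom", "segment", or "answer"
    box: Tuple[int, int, int, int]   # proposed region (x1, y1, x2, y2)
    text: str = ""                   # reasoning text or the final answer

def mask_to_crop(image, mask: np.ndarray) -> np.ndarray:
    """Hypothetical helper: zero out pixels outside a boolean HxW mask,
    then crop to the mask's bounding box."""
    arr = np.array(image)
    arr[~mask] = 0
    ys, xs = np.where(mask)
    return arr[ys.min():ys.max() + 1, xs.min():xs.max() + 1]

def think_aperture_observe(policy, segmenter, image, question: str, max_steps: int = 6):
    """Run the Think-Aperture-Observe loop: think in language, take an
    aperture action, then verbalize an observation of the returned crop."""
    context: List[Tuple[str, Any]] = [("image", image), ("text", question)]
    for _ in range(max_steps):
        step = policy.next_step(context)       # assumed: parses one think+action turn
        if step.action == "answer":
            return step.text
        if step.action == "zoom":
            crop = image.crop(step.box)        # rectangular crop (PIL-style)
        else:                                  # "segment"
            mask = segmenter.predict(image, step.box)  # assumed SAM2 wrapper
            crop = mask_to_crop(image, mask)   # mask-based crop for irregular targets
        # Forced observation: the crop is described in words so that local
        # visual evidence persists as linguistic memory in the context.
        obs = policy.describe(crop)            # assumed interface
        context += [("image", crop), ("text", f"Observation: {obs}")]
    return policy.next_step(context).text      # budget exhausted: force an answer
```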
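On the RL side, the sketch below shows how a GRPO-style group advantage can couple task success with purposeful aperture use, as the abstract describes. The reward shaping and the `bonus` coefficient are illustrative assumptions, not the published AGRPO reward:

```python
import numpy as np

def trajectory_reward(correct: bool, used_aperture: bool, bonus: float = 0.2) -> float:
    """Illustrative reward: 1.0 for a correct answer, plus a small bonus when
    the correct answer was reached through at least one Zoom/Segment action
    (an assumed form of coupling success with purposeful aperture use)."""
    r = float(correct)
    if correct and used_aperture:
        r += bonus
    return r

def grpo_advantages(rewards: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """GRPO-style advantage: normalize each rollout's reward against the
    group of rollouts sampled for the same prompt."""
    return (rewards - rewards.mean()) / (rewards.std() + eps)

# Four rollouts for one question: two correct (one via an aperture action),
# two incorrect. The aperture-guided correct rollout gets the top advantage.
group = np.array([
    trajectory_reward(correct=True,  used_aperture=True),
    trajectory_reward(correct=True,  used_aperture=False),
    trajectory_reward(correct=False, used_aperture=True),
    trajectory_reward(correct=False, used_aperture=False),
])
print(grpo_advantages(group))
```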
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Multimodal Understanding | MMStar | -- | -- | 197 |
| Reasoning Segmentation | ReasonSeg (test) | gIoU | 73.8 | 102 |
| Referring Segmentation | RefCOCO (val) | cIoU | 77.1 | 51 |
| Multimodal Understanding | MME-RealWorld-Lite | Overall Score | 56.97 | 34 |
| High-Resolution Visual Understanding | HR-Bench-8K | FSP | 89.25 | 29 |
| High-Resolution Perception | V* | Overall Score | 89.53 | 20 |
| High-Resolution Perception | HR-Bench-4K | Overall Score | 82.25 | 19 |