
TikArt: Aperture-Guided Observation for Fine-Grained Visual Reasoning via Reinforcement Learning

About

We address fine-grained visual reasoning in multimodal large language models (MLLMs), where key evidence may reside in tiny objects, cluttered regions, or subtle markings that are lost under a single global image encoding. We introduce TikArt (Thinking Aperture), an aperture-guided agent that casts multi-step vision-language reasoning as a decision process over regions of interest. TikArt follows a Think-Aperture-Observe loop, alternating between language generation and two aperture actions: Zoom extracts rectangular crops, while Segment invokes SAM2 to obtain mask-based crops for irregular targets. After every action, the model must produce an explicit observation, turning local visual cues into persistent linguistic memory. Built on Qwen3-VL-8B, TikArt optimizes its reasoning policy with AGRPO, a GRPO-style reinforcement learning algorithm with a two-stage curriculum: it warms up segmentation actions and then jointly optimizes visual math, fine-grained VQA, and segmentation, using rewards that couple task success with purposeful aperture use. Experiments on V*, HR-Bench-4K/8K, MME-RealWorld-Lite, MMStar, RefCOCO, and ReasonSeg show consistent gains over the backbone and yield interpretable aperture trajectories for high-resolution reasoning.
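The Think-Aperture-Observe loop described above can be sketched as a small decision-process skeleton. This is a hedged illustration, not the paper's implementation: the function names (`zoom`, `segment`, `run_episode`), the policy interface, and the observation format are all hypothetical, images are plain nested lists to stay dependency-free, and `segment` is a trivial mask-crop stand-in for the SAM2 call the paper actually uses.

```python
def zoom(image, box):
    """Zoom action: extract a rectangular crop given (x1, y1, x2, y2)."""
    x1, y1, x2, y2 = box
    return [row[x1:x2] for row in image[y1:y2]]

def segment(image, mask):
    """Segment action: mask-based crop for irregular targets.
    A toy stand-in for SAM2; pixels outside the mask are zeroed."""
    return [
        [px if m else 0 for px, m in zip(irow, mrow)]
        for irow, mrow in zip(image, mask)
    ]

def run_episode(image, policy, max_steps=8):
    """Alternate language generation with aperture actions. After every
    action the model must emit an explicit observation, which is kept in
    the trajectory as persistent linguistic memory for later steps."""
    trajectory = []
    view = image
    for _ in range(max_steps):
        thought, action, arg, done = policy(view, trajectory)
        if done:
            break
        view = zoom(view, arg) if action == "zoom" else segment(view, arg)
        observation = f"{action} -> region of size {len(view)}x{len(view[0])}"
        trajectory.append({"think": thought,
                           "action": action,
                           "observe": observation})
    return trajectory
```

In the paper this trajectory, together with task success and a purposeful-aperture reward, is what the GRPO-style AGRPO objective optimizes; the sketch only shows the environment side of that loop.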

Hao Ding, Zhichuan Yang, Weijie Ge, Ziqin Gao, Chaoyi Lu, Lei Zhao • 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Multimodal Understanding | MMStar | - | - | 197 |
| Reasoning Segmentation | ReasonSeg (test) | gIoU | 73.8 | 102 |
| Referring Segmentation | RefCOCO (val) | cIoU | 77.1 | 51 |
| Multimodal Understanding | MME-RealWorld-Lite | Overall Score | 56.97 | 34 |
| High-resolution Visual Understanding | HR-Bench-8K | FSP | 89.25 | 29 |
| High-resolution Perception | V* | Overall Score | 89.53 | 20 |
| High-resolution Perception | HR-Bench-4K | Overall Score | 82.25 | 19 |
