
Lang2Act: Fine-Grained Visual Reasoning through Self-Emergent Linguistic Toolchains

About

Visual Retrieval-Augmented Generation (VRAG) enhances Vision-Language Models (VLMs) by incorporating external visual documents to address a given query. Existing VRAG frameworks usually depend on rigid, pre-defined external tools to extend the perceptual capabilities of VLMs, typically by explicitly separating visual perception from subsequent reasoning. However, this decoupled design can cause unnecessary loss of visual information, particularly when image-based operations such as cropping are applied. In this paper, we propose Lang2Act, which enables fine-grained visual perception and reasoning through self-emergent linguistic toolchains. Rather than invoking fixed external engines, Lang2Act collects self-emergent actions as linguistic tools and leverages them to enhance the visual perception capabilities of VLMs. To support this mechanism, we design a two-stage Reinforcement Learning (RL)-based training framework. The first stage optimizes VLMs to self-explore high-quality actions for constructing a reusable linguistic toolbox, and the second stage further optimizes VLMs to effectively exploit these linguistic tools for downstream reasoning. Experimental results demonstrate the effectiveness of Lang2Act in substantially enhancing the visual perception capabilities of VLMs, achieving performance improvements of over 4%. All code and data are available at https://github.com/NEUIR/Lang2Act.
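The two-stage scheme in the abstract can be sketched in a few lines of Python. This is a minimal, hypothetical illustration, not the authors' actual implementation: all names (`LinguisticToolbox`, `stage1_explore`, `stage2_reason`), the reward threshold, and the toy exploration traces are assumptions made for the sketch. Stage 1 filters self-explored actions by a quality signal and keeps the best linguistic instruction per action; stage 2 assembles the collected tools into a perception-then-reasoning prompt for the query.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of Lang2Act's two-stage mechanism.
# Names, thresholds, and traces are illustrative, not the paper's API.


@dataclass
class LinguisticToolbox:
    """Reusable linguistic tools collected during stage 1."""
    tools: dict = field(default_factory=dict)  # name -> (instruction, reward)

    def add(self, name: str, instruction: str, reward: float) -> None:
        # Keep only the highest-reward instruction per action name.
        if name not in self.tools or reward > self.tools[name][1]:
            self.tools[name] = (instruction, reward)


def stage1_explore(traces, toolbox, quality_threshold=0.5):
    """Stage 1: self-explore actions; retain high-quality ones as tools."""
    for name, instruction, reward in traces:
        if reward >= quality_threshold:
            toolbox.add(name, instruction, reward)
    return toolbox


def stage2_reason(query, toolbox):
    """Stage 2: exploit collected tools to build a perception-then-reasoning prompt."""
    steps = [f"[{name}] {instr}" for name, (instr, _) in toolbox.tools.items()]
    return "\n".join(steps + [f"Answer the query: {query}"])


# Toy run with made-up exploration traces (name, instruction, reward).
toolbox = stage1_explore(
    [
        ("locate", "Describe the page region relevant to the query.", 0.9),
        ("locate", "Name the section containing the answer.", 0.6),
        ("read", "Transcribe the text in the located region.", 0.8),
        ("guess", "Answer without looking at the document.", 0.1),  # filtered out
    ],
    LinguisticToolbox(),
)
prompt = stage2_reason("What is the reported accuracy?", toolbox)
```

In this sketch the toolbox is just a keyed dictionary of instructions; in the paper the toolbox is built and exploited via RL-optimized VLM policies rather than a fixed filter.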

Yuqi Xiong, Chunyi Peng, Zhipeng Xu, Zhenghao Liu, Zulong Chen, Yukun Yan, Shuo Wang, Yu Gu, Ge Yu • 2026

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Multimodal Document Question Answering | MMLongBench-Doc | Acc (TXT Evidence): 43.96 | 30 |
| Document Visual Question Answering | MMLongBench-Doc | Accuracy: 36.55 | 29 |
| Visual Question Answering | SlideVQA | Single Accuracy: 83.62 | 28 |
| Multimodal Document Reasoning | SlideVQA, MMLongBench-Doc, and ViDoSeek | Average Score: 55.6 | 14 |
| Video Document Seeking | ViDoSeek | Single Score: 44.34 | 14 |
| Visual Question Answering | ViDoSeek | Single Accuracy: 0.7425 | 14 |
