
FRISM: Fine-Grained Reasoning Injection via Subspace-Level Model Merging for Vision-Language Models

About

Efficiently enhancing the reasoning capabilities of Vision-Language Models (VLMs) by merging them with Large Reasoning Models (LRMs) has emerged as a promising direction. However, existing methods typically operate at a coarse-grained layer level, which often leads to a trade-off between injecting reasoning capabilities and preserving visual capabilities. To address this limitation, we propose FRISM (Fine-grained Reasoning Injection via Subspace-level model Merging), a fine-grained reasoning injection framework based on subspace-level model merging. Observing that reasoning capabilities are encoded in distinct subspaces, FRISM decomposes LRM task vectors via Singular Value Decomposition (SVD) and adaptively tunes the scaling coefficient of each subspace through learning, realizing fine-grained reasoning injection. Furthermore, we introduce a label-free self-distillation learning strategy with a dual-objective optimization using common vision-language perception datasets. Extensive experiments demonstrate that FRISM effectively improves reasoning capabilities without compromising the model's original visual capabilities, consistently achieving state-of-the-art performance across diverse visual reasoning benchmarks.
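The core merging step described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: it assumes the task vector is taken per weight matrix, and the learned per-subspace coefficients (`coeffs` below) are assumed to come from the paper's self-distillation optimization, which is not reproduced here.

```python
import numpy as np

def frism_merge(w_vlm: np.ndarray, w_delta: np.ndarray, coeffs: np.ndarray) -> np.ndarray:
    """Subspace-level merging sketch: decompose a task vector with SVD
    and rescale each rank-1 subspace by its own coefficient before
    adding it back to the VLM weight matrix.

    w_vlm:   VLM weight matrix.
    w_delta: LRM task vector (reasoning-model weights minus base weights).
    coeffs:  learned scaling coefficient per singular subspace
             (hypothetical values; in FRISM these are tuned by
             label-free self-distillation).
    """
    # SVD splits the task vector into orthogonal rank-1 subspaces.
    u, s, vt = np.linalg.svd(w_delta, full_matrices=False)
    # Scale each subspace individually (fine-grained injection),
    # instead of one global coefficient per layer (coarse-grained).
    s_scaled = s * coeffs
    return w_vlm + (u * s_scaled) @ vt
```

Setting every coefficient to 1 recovers plain task-vector addition, and setting them all to 0 leaves the VLM untouched; the learned coefficients interpolate between these extremes per subspace.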

Chenyu Huang, Peng Ye, Xudong Tan, Jinhan Mu, Shenghe Zheng, Li Shen, Tao Chen • 2026

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Vision-Language Reasoning | VL Reasoning Benchmarks | MVista Score: 74 | 28 |
| Vision-Language Perception | VL Perception Benchmarks | TextVQA: 85.5 | 28 |
| Vision-Language Reasoning | VL Reasoning Benchmarks (MathVista, MVerse, MathVision, MMMU, R1-OV, MMStar) | MathVista Acc: 79.8 | 25 |
| Vision-Language Perception | VL Perception Benchmarks (TextVQA, POPE, Seed-Bench) | TextVQA Score: 85 | 25 |
