
GRASP: Guided Region-Aware Sparse Prompting for Adapting MLLMs to Remote Sensing

About

In recent years, Multimodal Large Language Models (MLLMs) have made significant progress in visual question answering tasks. However, directly applying existing fine-tuning methods to remote sensing (RS) images often leads to issues such as overfitting on background noise or neglecting target details. This is primarily due to the large-scale variations, sparse target distributions, and complex regional semantic features inherent in RS images. These challenges limit the effectiveness of MLLMs in RS tasks. To address these challenges, we propose a parameter-efficient fine-tuning (PEFT) strategy called Guided Region-Aware Sparse Prompting (GRASP). GRASP introduces spatially structured soft prompts associated with spatial blocks extracted from a frozen visual token grid. Through a question-guided sparse fusion mechanism, GRASP dynamically aggregates task-specific context into a compact global prompt, enabling the model to focus on relevant regions while filtering out background noise. Extensive experiments on multiple RSVQA benchmarks show that GRASP achieves competitive performance compared to existing fine-tuning and prompt-based methods while maintaining high parameter efficiency.
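The question-guided sparse fusion described above can be sketched roughly as follows. All names, shapes, and the dot-product scoring rule here are illustrative assumptions for a minimal demonstration, not the paper's implementation: per-region soft prompts are scored against a pooled question embedding, only the top-k regions are kept, and their prompts are softmax-weighted into one compact global prompt.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: R spatial regions, D embedding dim, keep top-k regions.
R, D, K = 16, 64, 4

# One soft prompt per spatial block of the frozen visual token grid.
# Random stand-ins here; in training these would be learnable parameters.
region_prompts = rng.normal(scale=0.02, size=(R, D))

def grasp_fuse(question_emb, region_prompts, k=K):
    """Question-guided sparse fusion (sketch): score each region prompt
    against the question, keep the top-k, and softmax-weight them into
    a single compact global prompt."""
    scores = region_prompts @ question_emb             # (R,) relevance per region
    top_idx = np.argsort(scores)[-k:]                  # indices of the top-k regions
    top_scores = scores[top_idx]
    weights = np.exp(top_scores - top_scores.max())    # stable softmax over kept regions
    weights /= weights.sum()
    global_prompt = weights @ region_prompts[top_idx]  # (D,) fused global prompt
    return global_prompt, top_idx

question_emb = rng.normal(size=D)  # stand-in for a pooled question embedding
prompt, kept = grasp_fuse(question_emb, region_prompts)
```

Because only k of R region prompts contribute per question, irrelevant background regions are filtered out of the fused prompt, which is the sparsity the abstract refers to.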

Qigan Sun, Chaoning Zhang, Jianwei Zhang, Xudong Wang, Jiehui Xie, Pengcheng Zheng, Haoyu Wang, Sungyoung Lee, Chi-lok Andy Tai, Yang Yang, Heng Tao Shen • 2026

Related benchmarks

Task                                     | Dataset            | Result                  | Rank
Visual Question Answering                | RSVQA-HR (test)    | HR Presence Score: 79.2 | 17
Remote Sensing Visual Question Answering | RSVQA-LR (unified) | Count: 31.1             | 8
Remote Sensing Visual Question Answering | RSVQA-HR (unified) | Count: 62.7             | 8
Remote Sensing Visual Question Answering | RSIVQA (unified)   | Yes/No Accuracy: 93.1   | 8
Visual Question Answering                | RSVQA-LR (test)    | Count Accuracy: 32.8    | 8
Visual Question Answering                | RSIVQA (test)      | Accuracy (Yes/No): 92.4 | 8
