
Filter Images First, Generate Instructions Later: Pre-Instruction Data Selection for Visual Instruction Tuning

About

Visual instruction tuning (VIT) for large vision-language models (LVLMs) requires training on expansive datasets of image-instruction pairs, which can be costly. Recent efforts in VIT data selection aim to select a small subset of high-quality image-instruction pairs, reducing VIT runtime while maintaining performance comparable to full-scale training. However, a major challenge that is often overlooked is that generating instructions from unlabeled images for VIT is itself highly expensive. Most existing VIT datasets rely heavily on human annotations or paid services such as the GPT API, which prevents users with constrained resources from creating VIT datasets for custom applications. To address this, we introduce Pre-Instruction Data Selection (PreSel), a more practical data selection paradigm that directly selects the most beneficial unlabeled images and generates instructions only for the selected images. PreSel first estimates the relative importance of each vision task within VIT datasets to derive task-wise sampling budgets. It then clusters image features within each task and selects the most representative images under that budget. This approach reduces computational overhead both for instruction generation during VIT data formation and for LVLM fine-tuning. By generating instructions for only 15% of the images, PreSel achieves performance comparable to full-data VIT on the LLaVA-1.5 and Vision-Flan datasets. Project page: https://bardisafa.github.io/PreSel
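The selection pipeline described above (task-wise budgets from importance estimates, then per-task clustering to pick representative images) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the importance weights, the plain k-means clustering, and the "nearest image per centroid" rule are all assumptions made for the sketch.

```python
import numpy as np

def task_budgets(importance, total_budget):
    """Split the image-selection budget across tasks in proportion to
    (hypothetical) per-task importance weights."""
    w = np.asarray(importance, dtype=float)
    w = w / w.sum()
    b = np.floor(w * total_budget).astype(int)
    # Hand leftover slots to the most important tasks first.
    for i in np.argsort(-w)[: total_budget - b.sum()]:
        b[i] += 1
    return b

def select_representatives(feats, k, iters=20, seed=0):
    """Tiny k-means over image features for one task; return the indices
    of the images closest to each centroid (duplicates removed, so fewer
    than k indices may come back)."""
    rng = np.random.default_rng(seed)
    cent = feats[rng.choice(len(feats), size=k, replace=False)]
    for _ in range(iters):
        # Squared distances: (n_images, k)
        d = ((feats[:, None, :] - cent[None, :, :]) ** 2).sum(-1)
        assign = d.argmin(1)
        for j in range(k):
            members = feats[assign == j]
            if len(members):
                cent[j] = members.mean(0)
    d = ((feats[:, None, :] - cent[None, :, :]) ** 2).sum(-1)
    return np.unique(d.argmin(0))  # one nearest image per centroid
```

In this sketch, instructions would then be generated only for the images whose indices `select_representatives` returns for each task, which is where the cost saving over labeling the full pool comes from.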

Bardia Safaei, Faizan Siddiqui, Jiacong Xu, Vishal M. Patel, Shao-Yuan Lo • 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Object Hallucination Evaluation | POPE | Accuracy | 87.2 | 1455 |
| Visual Question Answering | VQA v2 | Accuracy | 76.5 | 1362 |
| Visual Question Answering | TextVQA | Accuracy | 55.2 | 1285 |
| Text-based Visual Question Answering | TextVQA | Accuracy | 55.2 | 807 |
| Multimodal Evaluation | MME | Score | 1.50e+3 | 658 |
| Multimodal Reasoning | MM-Vet | MM-Vet Score | 29.6 | 431 |
| Multimodal Capability Evaluation | MM-Vet | Score | 37.7 | 345 |
| Visual Question Answering | GQA | Mean Accuracy | 57.9 | 196 |
| Visual Question Answering | GQA | Score | 41.9 | 193 |
| Multimodal Evaluation | MM-Vet | Score | 29.1 | 180 |

(Showing 10 of 28 rows.)
