
DA-VPT: Semantic-Guided Visual Prompt Tuning for Vision Transformers

About

Visual Prompt Tuning (VPT) has emerged as a promising Parameter-Efficient Fine-Tuning (PEFT) approach for Vision Transformer (ViT) models: it fine-tunes a small set of learnable prompt tokens while keeping most model parameters frozen. Recent research has explored modifying the connection structures of the prompts. However, the fundamental correlation and distribution between the prompts and image tokens remain unexplored. In this paper, we leverage metric learning techniques to investigate how the distribution of prompts affects fine-tuning performance. Specifically, we propose a novel framework, Distribution-Aware Visual Prompt Tuning (DA-VPT), which guides the distribution of the prompts by learning a distance metric from their class-related semantic data. Our method demonstrates that the prompts can serve as an effective bridge for sharing semantic information between image patches and the class token. We extensively evaluate our approach on popular benchmarks for both recognition and segmentation tasks. The results demonstrate that our approach enables more effective and efficient fine-tuning of ViT models by leveraging semantic information to guide the learning of the prompts, leading to improved performance on various downstream vision tasks.
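The abstract describes two ingredients: learnable prompt tokens prepended to a frozen ViT block, and a metric-learning objective that shapes the prompt distribution using class labels. The sketch below is a minimal, hypothetical PyTorch illustration of that idea; the class names, the single-block setup, and the simple contrastive loss are our assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class PromptedBlock(nn.Module):
    """A frozen attention block with learnable prompt tokens prepended (VPT-style).

    Hypothetical sketch: only `self.prompts` is trainable; the backbone
    attention and norm parameters are frozen, as in PEFT.
    """

    def __init__(self, dim=64, num_heads=4, num_prompts=5):
        super().__init__()
        self.prompts = nn.Parameter(torch.randn(num_prompts, dim) * 0.02)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)
        # Freeze backbone parameters; only the prompts remain trainable.
        for p in self.attn.parameters():
            p.requires_grad = False
        for p in self.norm.parameters():
            p.requires_grad = False

    def forward(self, x):
        # x: [B, N, D] patch tokens; prepend prompts so they can mediate
        # information flow between patches (and, in a full ViT, the class token).
        b = x.size(0)
        p = self.prompts.unsqueeze(0).expand(b, -1, -1)   # [B, P, D]
        z = torch.cat([p, x], dim=1)                      # [B, P+N, D]
        h = self.norm(z)
        out, _ = self.attn(h, h, h)
        z = z + out
        n_p = p.size(1)
        return z[:, n_p:], z[:, :n_p]  # updated patch tokens, prompt states


def prompt_metric_loss(prompt_states, labels, margin=0.5):
    """A simple contrastive surrogate for semantic-guided prompt distributions:
    pull prompt features of same-class samples together and push apart those
    of different classes. Illustrative only; DA-VPT's exact metric may differ.
    """
    feats = F.normalize(prompt_states.mean(dim=1), dim=-1)  # [B, D]
    sim = feats @ feats.t()                                  # cosine similarities
    same = labels.unsqueeze(0) == labels.unsqueeze(1)
    pos = (1.0 - sim)[same].mean()            # same class: similarity -> 1
    neg = F.relu(sim - margin)[~same].mean()  # different class: cap similarity
    return pos + neg
```

In use, the metric loss would be added to the task loss during fine-tuning; since the backbone is frozen, gradients reach only the prompt parameters.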

Li Ren, Chen Chen, Liqiang Wang, Kien Hua • 2025

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Semantic segmentation | ADE20K (val) | mIoU 46.5 | 2731 |
| Semantic segmentation | PASCAL Context (val) | -- | 323 |
| Visual Task Adaptation | VTAB 1K | Average Accuracy 76.14 | 78 |
| Fine-grained Visual Categorization | FGVC | Mean Accuracy 91.94 | 40 |

Other info

Code
