
Vision Transformer Adapters for Generalizable Multitask Learning

About

We introduce the first multitasking vision transformer adapters that learn generalizable task affinities which can be applied to novel tasks and domains. Integrated into an off-the-shelf vision transformer backbone, our adapters can simultaneously solve multiple dense vision tasks in a parameter-efficient manner, unlike existing multitasking transformers that are parametrically expensive. In contrast to concurrent methods, we do not require retraining or fine-tuning whenever a new task or domain is added. We introduce a task-adapted attention mechanism within our adapter framework that combines gradient-based task similarities with attention-based ones. The learned task affinities generalize to the following settings: zero-shot task transfer, unsupervised domain adaptation, and generalization without fine-tuning to novel domains. We demonstrate that our approach outperforms not only the existing convolutional neural network-based multitasking methods but also the vision transformer-based ones. Our project page is at https://ivrl.github.io/VTAGML.
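The abstract describes a task-adapted attention mechanism that blends gradient-based task similarities with attention-based ones. The paper does not spell out the combination rule here, so the following is only a minimal NumPy sketch under stated assumptions: `task_feats` stands in for pooled per-task adapter features, `grad_sim` for precomputed cosine similarities between per-task gradients, and the elementwise product with renormalization is one simple (hypothetical) way to fuse the two affinity sources.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def task_adapted_attention(task_feats, grad_sim):
    """Blend attention-based task affinities with gradient-based ones.

    task_feats: (T, D) one pooled feature vector per task (assumption).
    grad_sim:   (T, T) cosine similarities between per-task gradients,
                standing in for the paper's gradient-based affinities.
    Returns affinity-weighted task representations and the (T, T)
    combined affinity matrix.
    """
    T, D = task_feats.shape
    # Attention-based affinities: scaled dot-product between task features.
    attn = softmax(task_feats @ task_feats.T / np.sqrt(D), axis=-1)
    # Fuse the two sources; elementwise product + renormalization is one
    # simple choice, not necessarily the authors' exact rule.
    combined = attn * softmax(grad_sim, axis=-1)
    combined /= combined.sum(axis=-1, keepdims=True)
    # Each task's representation becomes an affinity-weighted mixture.
    return combined @ task_feats, combined
```

Because the combined matrix is row-normalized, each task's output is a convex mixture of all task features, which is what lets affinities learned on one task set be reused on novel tasks without retraining.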

Deblina Bhattacharjee, Sabine Süsstrunk, Mathieu Salzmann • 2023

Related benchmarks

Task                   | Dataset                                           | Result      | Rank
Semantic segmentation  | SYNTHIA-to-Cityscapes (SYN2CS), 16 classes (val)  | --          | 50
Semantic segmentation  | VKITTI2 -> Cityscapes, 8 classes                  | mIoU 70.93  | 19
Depth estimation       | SYNTHIA to Cityscapes (val)                       | RMSE 6.99   | 12
Depth estimation       | Virtual KITTI 2 to Cityscapes (val)               | RMSE 8.66   | 12
