CoLA: Cross-Modal Low-rank Adaptation for Multimodal Downstream Tasks
About
Foundation models have revolutionized AI, but adapting them efficiently to multimodal tasks remains a significant challenge, particularly in dual-stream architectures built from unimodal encoders such as DINO and BERT. Parameter-Efficient Fine-Tuning (PEFT) methods like Low-Rank Adaptation (LoRA) enable lightweight adaptation, yet they operate in isolation within each modality, limiting their ability to capture cross-modal interactions. In this paper, we take a step toward bridging this gap with Cross-Modal Low-Rank Adaptation (CoLA), a novel PEFT framework that extends LoRA by introducing a dedicated inter-modal adaptation pathway alongside the standard intra-modal one. This dual-path design enables CoLA to adapt unimodal foundation models to multimodal tasks effectively, without interference between modality-specific and cross-modal learning. We evaluate CoLA across a range of vision-language (RefCOCO, RefCOCO+, RefCOCOg) and audio-visual (AVE, AVS) benchmarks, where it consistently outperforms LoRA, achieving relative gains of around 3% and 2%, respectively, while maintaining parameter efficiency. Notably, CoLA enables the first multi-task PEFT framework for visual grounding, bridging a key gap in efficient multimodal adaptation.
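The dual-path idea can be illustrated with a minimal NumPy sketch: a frozen unimodal projection receives both a standard intra-modal LoRA update and a second low-rank pathway fed by the other modality's features. The dimensions, variable names, and the exact placement of the cross-modal pathway here are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
d_v, d_t, r = 64, 48, 4  # hypothetical vision dim, text dim, low-rank r

# Frozen pretrained projection inside the vision encoder.
W_v = rng.standard_normal((d_v, d_v))

# Intra-modal LoRA factors (B zero-initialized, as in standard LoRA).
A_intra = rng.standard_normal((r, d_v))
B_intra = np.zeros((d_v, r))

# Hypothetical inter-modal factors: a low-rank map from text features
# into the vision layer, giving the cross-modal adaptation pathway.
A_inter = rng.standard_normal((r, d_t))
B_inter = np.zeros((d_v, r))

def cola_forward(x_v, x_t, alpha=1.0):
    """Frozen path + intra-modal LoRA update + cross-modal low-rank update."""
    frozen = W_v @ x_v
    intra = B_intra @ (A_intra @ x_v)   # modality-specific adaptation
    inter = B_inter @ (A_inter @ x_t)   # cross-modal adaptation
    return frozen + alpha * (intra + inter)

x_v = rng.standard_normal(d_v)
x_t = rng.standard_normal(d_t)
y = cola_forward(x_v, x_t)

# With both B factors zero-initialized, the adapted layer starts out
# identical to the frozen pretrained layer.
assert np.allclose(y, W_v @ x_v)
```

Keeping the intra- and inter-modal updates as separate factor pairs is what lets modality-specific and cross-modal learning proceed without interfering with each other's parameters.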
Related benchmarks
| Task | Dataset | Accuracy (%) | Leaderboard Rank |
|---|---|---|---|
| Referring Expression Comprehension | RefCOCO+ (val) | 79.6 | 354 |
| Referring Expression Comprehension | RefCOCO (val) | 89.4 | 344 |
| Referring Expression Comprehension | RefCOCO (testA) | 91.0 | 342 |
| Referring Expression Comprehension | RefCOCOg (test) | 81.8 | 300 |
| Referring Expression Comprehension | RefCOCOg (val) | 81.7 | 300 |
| Referring Expression Segmentation | RefCOCO (testA) | -- | 257 |
| Referring Expression Comprehension | RefCOCO+ (testB) | 71.9 | 244 |
| Referring Expression Segmentation | RefCOCO+ (testA) | -- | 230 |
| Referring Expression Segmentation | RefCOCO+ (val) | -- | 223 |
| Referring Expression Comprehension | RefCOCO+ (testA) | 84.7 | 216 |