Convolution Meets LoRA: Parameter Efficient Finetuning for Segment Anything Model
About
The Segment Anything Model (SAM) stands as a foundational framework for image segmentation. While it exhibits remarkable zero-shot generalization in typical scenarios, its advantage diminishes when applied to specialized domains such as medical imagery and remote sensing. To address this limitation, this paper introduces Conv-LoRA, a simple yet effective parameter-efficient fine-tuning approach. By integrating ultra-lightweight convolutional parameters into Low-Rank Adaptation (LoRA), Conv-LoRA injects image-related inductive biases into the plain ViT encoder, further reinforcing SAM's local prior assumption. Notably, Conv-LoRA not only preserves SAM's extensive segmentation knowledge but also revives its capacity for learning high-level image semantics, which is constrained by SAM's foreground-background segmentation pretraining. Comprehensive experimentation across diverse benchmarks spanning multiple domains underscores Conv-LoRA's superiority in adapting SAM to real-world semantic segmentation tasks.
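The core idea above can be sketched as a LoRA adapter whose low-rank bottleneck is routed through a lightweight convolution, so the frozen ViT linear layer gains a local inductive bias. The sketch below is a minimal illustration under stated assumptions, not the paper's implementation: the class name `ConvLoRALinear`, the single depthwise convolution (the paper uses a mixture-of-experts of convolutions), and the square-token-grid assumption are all simplifications introduced here.

```python
import torch
import torch.nn as nn

class ConvLoRALinear(nn.Module):
    """Illustrative sketch: LoRA with a lightweight conv in the low-rank bottleneck.

    Simplified relative to the paper: one depthwise conv instead of an MoE of convs.
    """

    def __init__(self, base: nn.Linear, rank: int = 4, alpha: float = 1.0, kernel_size: int = 3):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # keep the pretrained SAM weights frozen

        self.down = nn.Linear(base.in_features, rank, bias=False)
        # Ultra-lightweight depthwise conv: injects local (image-related) inductive bias
        self.conv = nn.Conv2d(rank, rank, kernel_size,
                              padding=kernel_size // 2, groups=rank)
        self.up = nn.Linear(rank, base.out_features, bias=False)
        nn.init.zeros_(self.up.weight)  # adapter starts as a no-op, as in standard LoRA
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, N, C) token sequence from a ViT block; assumes N is a square grid
        b, n, _ = x.shape
        h = w = int(n ** 0.5)
        z = self.down(x)                                  # (B, N, r)
        z = z.transpose(1, 2).reshape(b, -1, h, w)        # (B, r, h, w)
        z = self.conv(z)                                  # local mixing in low-rank space
        z = z.reshape(b, -1, n).transpose(1, 2)           # (B, N, r)
        return self.base(x) + self.scale * self.up(z)
```

In a typical LoRA setup, such a module would replace (wrap) the query/value projections inside the ViT encoder's attention blocks, leaving everything else frozen; only `down`, `conv`, and `up` are trained.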
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Semantic segmentation | Cityscapes | 62.43 mIoU | 218 |
| Semantic segmentation | CamVid | 66.96 mIoU | 70 |
| Semantic segmentation | ISBI 2012 | 79.87 mIoU | 13 |
| Semantic segmentation | Kvasir-SEG | 85.2 mIoU | 13 |
| Semantic segmentation | M-Building | 77.32 mIoU | 9 |
| Semantic segmentation | Trans10K | 86.47 mIoU | 9 |
| Semantic segmentation | Synapse | 43.41 mIoU | 9 |