
Autoadaptive Medical Segment Anything Model

About

Medical image segmentation is a key task in the imaging workflow, influencing many image-based decisions. Traditional, fully-supervised segmentation models rely on large amounts of labeled training data, typically obtained through manual annotation, which can be an expensive, time-consuming, and error-prone process. This signals a need for accurate, automatic, and annotation-efficient methods of training these models. We propose ADA-SAM (automated, domain-specific, and adaptive segment anything model), a novel multitask learning framework for medical image segmentation that leverages class activation maps from an auxiliary classifier to guide the predictions of the semi-supervised segmentation branch, which is based on the Segment Anything (SAM) framework. Additionally, our ADA-SAM model employs a novel gradient feedback mechanism to create a learnable connection between the segmentation and classification branches by using the segmentation gradients to guide and improve the classification predictions. We validate ADA-SAM on real-world clinical data collected during rehabilitation trials, and demonstrate that our proposed method outperforms both fully-supervised and semi-supervised baselines by double digits in limited label settings. Our code is available at: https://github.com/tbwa233/ADA-SAM.
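The two core ideas in the abstract — class activation maps (CAMs) from an auxiliary classifier guiding the segmentation branch, and segmentation gradients feeding back into the classifier — can be illustrated with a toy model. This is a minimal sketch under assumptions, not the authors' implementation: the architecture, layer shapes, and the loss-combination form of the feedback are all illustrative stand-ins (the real ADA-SAM is built on SAM and uses a learnable gradient connection).

```python
# Toy sketch (NOT the ADA-SAM implementation) of two ideas from the abstract:
# (1) classifier CAMs used as a spatial prior for the segmentation branch, and
# (2) segmentation gradients flowing back into the classification branch.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyADASAM(nn.Module):
    """Illustrative two-branch model; all names and shapes are hypothetical."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.backbone = nn.Conv2d(1, 8, 3, padding=1)        # shared features
        self.cls_conv = nn.Conv2d(8, n_classes, 1)           # per-class maps -> CAMs
        self.seg_head = nn.Conv2d(8 + n_classes, 1, 3, padding=1)

    def forward(self, x):
        feats = F.relu(self.backbone(x))
        cam = self.cls_conv(feats)                           # B x C x H x W class maps
        logits = cam.mean(dim=(2, 3))                        # global average pool -> class logits
        # CAM guides segmentation as extra input channels (spatial prior)
        seg = self.seg_head(torch.cat([feats, cam.softmax(1)], dim=1))
        return logits, seg

model = ToyADASAM()
x = torch.randn(2, 1, 32, 32)
y_cls = torch.tensor([0, 1])
y_seg = torch.rand(2, 1, 32, 32)

logits, seg = model(x)
seg_loss = F.binary_cross_entropy_with_logits(seg, y_seg)
cls_loss = F.cross_entropy(logits, y_cls)

# Crude stand-in for gradient feedback: backpropagating the combined loss lets
# the segmentation objective's gradients reach the classifier's CAM layer.
(seg_loss + cls_loss).backward()
```

Here the feedback is just a shared backward pass through a joint loss; the paper's mechanism makes that connection learnable rather than fixed.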

Tyler Ward, Meredith K. Owen, O'Kira Coleman, Brian Noehren, Abdullah-Al-Zubaer Imran • 2025

Related benchmarks

Task | Dataset | Result | Rank
Anatomical Structure Segmentation | Combined laparoscopic datasets (Dresden, CholecSeg8k, AutoLaparoT3, EndoScapes-CVS201, M2caiSeg) (test) | P1: 82.37 | 16
Laparoscopic Segmentation | Gynsurg (unseen) | Dice (C2): 37.56 | 16
Surgical Instrument Segmentation | Surgical Instrument combined (test) | P3 Dice: 83.26 | 16
Tissue Segmentation | Combined (Dresden, CholecSeg8k, AutoLaparoT3, EndoScapes-CVS201, M2caiSeg) (test) | Dice P2: 80.65 | 16
