
SAM-COD: SAM-guided Unified Framework for Weakly-Supervised Camouflaged Object Detection

About

Most Camouflaged Object Detection (COD) methods rely heavily on mask annotations, which are time-consuming and labor-intensive to acquire. Existing weakly-supervised COD approaches perform significantly worse than fully-supervised methods and struggle to simultaneously support all existing types of camouflaged object labels, including scribbles, bounding boxes, and points. Even the Segment Anything Model (SAM) struggles with weakly-supervised COD: it typically encounters prompt incompatibility with scribble labels, extreme responses, semantically erroneous responses, and unstable feature representations, producing unsatisfactory results in camouflaged scenes. To mitigate these issues, we propose a unified COD framework, termed SAM-COD, which is capable of supporting arbitrary weakly-supervised labels. SAM-COD employs a prompt adapter to handle scribbles as prompts for SAM. Meanwhile, we introduce response filter and semantic matcher modules to improve the quality of the masks produced by SAM under COD prompts. To alleviate the negative impact of inaccurate mask predictions, a new prompt-adaptive knowledge distillation strategy ensures reliable feature representations. To validate the effectiveness of our approach, we conduct extensive experiments on three mainstream COD benchmarks. The results demonstrate the superiority of our method over state-of-the-art weakly-supervised and even fully-supervised methods.
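The abstract does not detail how the prompt adapter converts scribbles into a form SAM accepts. As a rough illustration of the underlying compatibility problem, the sketch below shows a naive baseline (not the paper's adapter): sampling foreground points from a binary scribble mask so they can be fed to SAM's point-prompt interface. The function name and sampling scheme are hypothetical, chosen only for illustration.

```python
import numpy as np

def scribble_to_point_prompts(scribble_mask, num_points=8, seed=0):
    """Naively turn a binary scribble mask into SAM-style point prompts.

    Returns (coords, labels): coords is (N, 2) in (x, y) order, labels
    is all ones (foreground), matching the convention of SAM's
    point_coords / point_labels inputs. This is an illustrative
    baseline, not the SAM-COD prompt adapter.
    """
    ys, xs = np.nonzero(scribble_mask)          # pixels covered by the scribble
    rng = np.random.default_rng(seed)
    n = min(num_points, len(xs))
    idx = rng.choice(len(xs), size=n, replace=False)
    coords = np.stack([xs[idx], ys[idx]], axis=1)
    labels = np.ones(n, dtype=np.int64)         # 1 = foreground point
    return coords, labels

# Toy scribble: a short horizontal stroke on a 16x16 canvas.
mask = np.zeros((16, 16), dtype=bool)
mask[8, 3:12] = True
coords, labels = scribble_to_point_prompts(mask, num_points=4)
```

Uniform sampling like this discards the stroke's connectivity, which is one reason a learned adapter is needed for scribbles rather than treating them as loose point sets.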

Huafeng Chen, Pengxu Wei, Guangqian Guo, Shan Gao • 2024

Related benchmarks

Task                          Dataset            Result      Rank
Camouflaged Object Detection  CAMO 1.0 (test)    MAE: 0.06   23
Camouflaged Object Detection  COD10K 1.0 (test)  MAE: 0.029  23
Camouflaged Object Detection  NC4K 1.0           MAE: 0.039  21
