
CamoFormer: Masked Separable Attention for Camouflaged Object Detection

About

Identifying and segmenting camouflaged objects from their background is challenging. Inspired by the multi-head self-attention in Transformers, we present a simple masked separable attention (MSA) for camouflaged object detection. We first separate the multi-head self-attention into three parts, which are responsible for distinguishing the camouflaged objects from the background using different mask strategies. Furthermore, we propose to progressively capture high-resolution semantic representations with a simple top-down decoder built on the proposed MSA to attain precise segmentation results. These components, together with a backbone encoder, form a new model dubbed CamoFormer. Extensive experiments show that CamoFormer surpasses all existing state-of-the-art methods on three widely used camouflaged object detection benchmarks, with on average around 5% relative improvement over previous methods in terms of S-measure and weighted F-measure.
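The three-way split described above can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the function names, the hard boolean foreground mask, and the even three-way channel split are assumptions made for clarity. The idea is that one group of heads attends only within the (predicted) foreground, one only within the background, and one attends everywhere, and their outputs are concatenated.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def masked_attention(q, k, v, keep):
    """Scaled dot-product attention restricted to key positions where keep is True.

    q, k, v: (n, d) arrays; keep: (n,) boolean over key positions.
    """
    scores = q @ k.T / np.sqrt(q.shape[-1])
    scores = np.where(keep[None, :], scores, -1e9)  # suppress masked-out keys
    return softmax(scores) @ v

def msa_sketch(q, k, v, fg_mask):
    """Toy masked separable attention (hypothetical sketch).

    Channels are split into three groups: foreground-masked attention,
    background-masked attention, and plain full attention; the three
    outputs are concatenated along the channel axis.
    """
    d = q.shape[-1] // 3
    fg = masked_attention(q[:, :d], k[:, :d], v[:, :d], fg_mask)
    bg = masked_attention(q[:, d:2*d], k[:, d:2*d], v[:, d:2*d], ~fg_mask)
    full = masked_attention(q[:, 2*d:], k[:, 2*d:], v[:, 2*d:],
                            np.ones_like(fg_mask))
    return np.concatenate([fg, bg, full], axis=-1)
```

In the actual model the mask would come from an intermediate segmentation prediction rather than a fixed boolean vector, and the branches operate per head inside a Transformer block.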

Bowen Yin, Xuying Zhang, Qibin Hou, Bo-Yuan Sun, Deng-Ping Fan, Luc Van Gool • 2022

Related benchmarks

Task                          | Dataset                    | Result                      | Rank
Camouflaged Object Detection  | COD10K (test)              | S-measure (S_alpha): 0.872  | 174
Camouflaged Object Detection  | Chameleon                  | --                          | 96
Camouflaged Object Detection  | CAMO (test)                | --                          | 85
Camouflaged Object Detection  | CAMO 250 (test)            | M (Mean Score): 0.046       | 59
Camouflaged Object Detection  | CAMO 1.0 (test)            | MAE: 0.046                  | 23
Camouflaged Object Detection  | COD10K 1.0 (test)          | MAE: 0.023                  | 23
Camouflaged Object Detection  | NC4K 1.0                   | MAE: 0.03                   | 21
Camouflaged Object Detection  | COD10K 2026 images (test)  | S-measure (Sm): 0.869       | 20
Camouflaged Object Detection  | NC4K 4121 images (test)    | Sm: 0.892                   | 17
Concealed Defect Detection    | CDS2K (test)               | S_alpha: 0.589              | 7
