
CamoFormer: Masked Separable Attention for Camouflaged Object Detection

About

Identifying and segmenting camouflaged objects from their background is challenging. Inspired by the multi-head self-attention in Transformers, we present a simple masked separable attention (MSA) for camouflaged object detection. We first separate the multi-head self-attention into three parts, each responsible for distinguishing camouflaged objects from the background using a different masking strategy. Furthermore, we progressively capture high-resolution semantic representations with a simple top-down decoder built on the proposed MSA to attain precise segmentation results. These structures plus a backbone encoder form a new model, dubbed CamoFormer. Extensive experiments show that CamoFormer surpasses all existing state-of-the-art methods on three widely used camouflaged object detection benchmarks, with on average around 5% relative improvement over previous methods in terms of S-measure and weighted F-measure.
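The separation described above can be illustrated with a minimal single-layer sketch. This is not the authors' implementation: the branch names (foreground-masked, background-masked, plain attention), the random toy projections, the 0.5 threshold on the coarse prediction, and the simple averaging fusion are all assumptions made for illustration.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def masked_separable_attention(x, pred):
    """Toy sketch of masked separable attention (MSA).

    x    : (N, C) flattened feature tokens
    pred : (N,) rough per-token foreground probability from a coarse prediction

    The attention is split into three branches with different mask strategies:
    one restricted to foreground keys, one to background keys, and one plain.
    """
    N, C = x.shape
    rng = np.random.default_rng(0)
    # Toy projections; a real model would learn these weights.
    Wq, Wk, Wv = (rng.standard_normal((C, C)) / np.sqrt(C) for _ in range(3))
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = q @ k.T / np.sqrt(C)              # (N, N) attention logits

    fg = pred > 0.5                            # assumed foreground threshold
    neg = -1e9                                 # mask value before softmax
    f_scores = np.where(fg[None, :], scores, neg)   # keys limited to foreground
    b_scores = np.where(~fg[None, :], scores, neg)  # keys limited to background

    out_f = softmax(f_scores) @ scores.shape[0] ** 0 * softmax(f_scores) @ v if False else softmax(f_scores) @ v
    out_b = softmax(b_scores) @ v
    out_p = softmax(scores) @ v                # plain, unmasked attention
    # Fuse the three branches; simple averaging stands in for a learned fusion.
    return (out_f + out_b + out_p) / 3.0
```

In the full model the three branches correspond to groups of attention heads rather than whole layers, and the masks come from an intermediate prediction map refined by the top-down decoder.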

Bowen Yin, Xuying Zhang, Qibin Hou, Bo-Yuan Sun, Deng-Ping Fan, Luc Van Gool • 2022

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Camouflaged Object Detection | COD10K (test) | S-measure (S_alpha) | 0.872 | 224 |
| Camouflaged Object Detection | COD10K | S-measure (S_alpha) | 0.868 | 178 |
| Camouflaged Object Detection | Chameleon | S-measure (S_alpha) | 91 | 150 |
| Camouflaged Object Detection | CAMO (test) | -- | -- | 111 |
| Camouflaged Object Detection | NC4K | M Score | 0.031 | 67 |
| Camouflaged Object Detection | CAMO 250 (test) | M (Mean Score) | 0.046 | 59 |
| Camouflaged Object Detection | NC4K | Sm | 89.2 | 58 |
| Camouflaged Object Detection | CAMO | M Score | 0.043 | 37 |
| Camouflaged Object Detection | CAMO 1.0 (test) | MAE | 0.046 | 23 |
| Camouflaged Object Detection | COD10K 1.0 (test) | MAE | 0.023 | 23 |

(10 of 17 rows shown)
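The M / MAE scores in the table are mean absolute error between the predicted saliency map and the binary ground-truth mask, both scaled to [0, 1] (lower is better). A minimal sketch of that metric, written here for illustration (S-measure is more involved and omitted):

```python
import numpy as np

def mae(pred, gt):
    """Mean absolute error between a predicted map and the ground-truth mask.

    pred : float array in [0, 1], the model's per-pixel prediction
    gt   : binary array (0 or 1), the ground-truth camouflaged-object mask
    """
    pred = np.asarray(pred, dtype=np.float64)
    gt = np.asarray(gt, dtype=np.float64)
    return float(np.abs(pred - gt).mean())
```

Benchmark scores average this value over all images in the test set.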
