Texture-guided Saliency Distilling for Unsupervised Salient Object Detection
About
Deep Learning-based Unsupervised Salient Object Detection (USOD) mainly relies on noisy saliency pseudo labels generated by traditional handcrafted methods or pre-trained networks. To cope with this label-noise problem, one class of methods focuses only on easy samples with reliable labels, but ignores the valuable knowledge in hard samples. In this paper, we propose a novel USOD method that mines rich and accurate saliency knowledge from both easy and hard samples. First, we propose a Confidence-aware Saliency Distilling (CSD) strategy that weights samples according to their confidence, guiding the model to distill saliency knowledge progressively from easy samples to hard ones. Second, we propose a Boundary-aware Texture Matching (BTM) strategy that refines the boundaries of noisy labels by matching the textures around the predicted boundary. Extensive experiments on RGB, RGB-D, RGB-T, and video SOD benchmarks show that our method achieves state-of-the-art USOD performance.
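The easy-to-hard idea behind CSD can be illustrated with a minimal sketch. This is a hypothetical toy implementation, not the paper's actual loss: it assumes confidence is measured as a pixel's distance from the 0.5 decision boundary, and that a scalar `progress` in [0, 1] shifts the per-sample weight from confident (easy) pixels early in training toward uncertain (hard) pixels later.

```python
import numpy as np

def confidence_weights(preds, progress):
    """Toy confidence-aware sample weighting (illustrative only).

    preds:    predicted saliency probabilities in [0, 1]
    progress: training progress in [0, 1]; as it grows, weight shifts
              from easy (high-confidence) to hard (low-confidence) samples
    """
    # Confidence: distance from the 0.5 decision boundary, rescaled to [0, 1].
    conf = np.abs(preds - 0.5) * 2.0
    # Linearly interpolate: early training favors confident predictions,
    # late training favors uncertain ones, so hard samples are learned last.
    return (1.0 - progress) * conf + progress * (1.0 - conf)
```

For a confident prediction of 0.9, the weight starts high and decays as training progresses, while an ambiguous prediction near 0.5 is ignored early and emphasized late, mirroring the progressive distillation schedule described above.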
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Camouflaged Object Detection | COD10K (test) | S-measure (S_alpha) | 0.6428 | 224 |
| Skin Lesion Segmentation | ISIC 2017 (test) | Dice Score | 61.35 | 113 |
| Camouflaged Object Detection | CAMO (test) | E_phi | 0.7071 | 111 |
| Skin Lesion Segmentation | ISIC 2018 (test) | Dice Score | 75.01 | 87 |
| Camouflaged Object Detection | NC4K (test) | Sm | 0.7131 | 68 |
| Camouflaged Object Detection | Chameleon (test) | -- | -- | 66 |
| Skin Lesion Segmentation | PH2 (test) | DSC | 78.21 | 34 |