
SalFBNet: Learning Pseudo-Saliency Distribution via Feedback Convolutional Networks

About

Despite their significant representation capabilities, feed-forward-only convolutional neural networks (CNNs) may ignore the intrinsic relationships and potential benefits of feedback connections in vision tasks such as saliency detection. In this work, we propose a feedback-recursive convolutional framework (SalFBNet) for saliency detection. The proposed feedback model learns abundant contextual representations by bridging a recursive pathway from higher-level feature blocks back to low-level layers. Moreover, we create a large-scale Pseudo-Saliency dataset to alleviate the data deficiency in saliency detection. We first use the proposed feedback model to learn the saliency distribution from pseudo-ground-truth, and then fine-tune it on existing eye-fixation datasets. Furthermore, we present a novel Selective Fixation and Non-Fixation Error (sFNE) loss that helps the proposed feedback model learn more distinguishable eye-fixation-based features. Extensive experimental results show that SalFBNet, with fewer parameters, achieves competitive results on public saliency detection benchmarks, demonstrating the effectiveness of the proposed feedback model and Pseudo-Saliency data. Source code and the Pseudo-Saliency dataset are available at https://github.com/gqding/SalFBNet
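The core architectural idea above — routing higher-level features back to the low-level stage through a recursive pathway — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the block structure, dimensions, and the `forward` / `block` names are assumptions, and simple linear maps stand in for convolutional blocks.

```python
import numpy as np

rng = np.random.default_rng(0)

def block(x, w):
    # Stand-in for a convolutional feature block (linear map + ReLU).
    return np.maximum(0.0, x @ w)

d = 8
w_mid = rng.standard_normal((d, d)) * 0.1
w_high = rng.standard_normal((d, d)) * 0.1
# Hypothetical fusion weight: the low-level stage consumes the input
# concatenated with the fed-back high-level features.
w_feedback = rng.standard_normal((2 * d, d)) * 0.1

def forward(x, steps=2):
    """Recursive forward pass: on each extra step, high-level features
    are bridged back to the low-level stage (the feedback pathway)."""
    feedback = np.zeros_like(x)
    high = None
    for _ in range(steps):
        low = block(np.concatenate([x, feedback], axis=-1), w_feedback)
        mid = block(low, w_mid)
        high = block(mid, w_high)
        feedback = high  # feed high-level features back to the next pass
    return high

x = rng.standard_normal((4, d))
out = forward(x, steps=2)
```

With `steps=1` the feedback input stays zero and the model degenerates to a plain feed-forward pass; additional steps let the low-level stage see context accumulated at higher levels.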

Guanqun Ding, Nevrez Imamoglu, Ali Caglayan, Masahiro Murakawa, Ryosuke Nakamura • 2021

Related benchmarks

Task                          Dataset                               Metric      Result   Rank
Saliency Prediction           SALICON (test)                        NSS         1.952    25
Saliency Prediction           SALICON LSUN'17 competition (test)    CC          0.892    18
Saliency Prediction           Generic Efficiency Benchmark          FLOPS (G)   76.29    10
Saliency Heatmap Prediction   CAT2000                               CC          0.703    5
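The NSS and CC metrics reported in the benchmarks above are standard saliency-evaluation measures: NSS is the mean of the z-scored predicted saliency map at fixated pixels, and CC is the Pearson correlation between predicted and ground-truth saliency maps. A minimal sketch (the function names and the small epsilon guard are our own choices, not from the paper):

```python
import numpy as np

def nss(sal, fixations):
    """Normalized Scanpath Saliency: mean of the z-scored saliency map
    at fixated locations (higher is better)."""
    z = (sal - sal.mean()) / (sal.std() + 1e-8)
    return z[fixations.astype(bool)].mean()

def cc(sal, gt):
    """Linear Correlation Coefficient between predicted and ground-truth
    saliency maps (1.0 is a perfect match)."""
    a = sal - sal.mean()
    b = gt - gt.mean()
    return (a * b).sum() / (np.sqrt((a**2).sum() * (b**2).sum()) + 1e-8)

# Toy example: a single salient peak, fixated at the peak.
sal = np.zeros((5, 5))
sal[2, 2] = 1.0
fix = np.zeros((5, 5))
fix[2, 2] = 1.0
```

Here the prediction perfectly matches itself (`cc(sal, sal)` is ~1.0), and placing the fixation on the map's peak yields a strongly positive NSS.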
