
Contextual Encoder-Decoder Network for Visual Saliency Prediction

About

Predicting salient regions in natural images requires the detection of objects that are present in a scene. To develop robust representations for this challenging task, high-level visual features at multiple spatial scales must be extracted and augmented with contextual information. However, existing models aimed at explaining human fixation maps do not incorporate such a mechanism explicitly. Here we propose an approach based on a convolutional neural network pre-trained on a large-scale image classification task. The architecture forms an encoder-decoder structure and includes a module with multiple convolutional layers at different dilation rates to capture multi-scale features in parallel. Moreover, we combine the resulting representations with global scene information for accurately predicting visual saliency. Our model achieves competitive and consistent results across multiple evaluation metrics on two public saliency benchmarks, and we demonstrate the effectiveness of the suggested approach on five datasets and selected examples. Compared to state-of-the-art approaches, the network is based on a lightweight image classification backbone and hence presents a suitable choice for applications with limited computational resources, such as (virtual) robotic systems, to estimate human fixations across complex natural scenes.
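The key architectural idea in the abstract is a module that applies several convolutions with different dilation rates in parallel, so that features at multiple spatial scales are captured from the same input. The NumPy sketch below is a minimal, single-channel illustration of that mechanism, not the authors' implementation: the actual network uses learned multi-channel filters inside a pre-trained encoder-decoder, whereas here the filters, dilation rates, and helper names (`dilated_conv2d`, `multi_scale_module`) are hypothetical and chosen only to show how a larger dilation rate enlarges the receptive field without adding parameters.

```python
import numpy as np

def dilated_conv2d(x, kernel, rate):
    """'Same'-padded 2-D convolution of a single-channel map `x`
    with `kernel`, sampling taps `rate` pixels apart (dilation)."""
    kh, kw = kernel.shape
    ph, pw = (kh - 1) * rate // 2, (kw - 1) * rate // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.zeros_like(x, dtype=float)
    H, W = x.shape
    for i in range(H):
        for j in range(W):
            acc = 0.0
            for u in range(kh):
                for v in range(kw):
                    # dilation: kernel taps are spaced `rate` pixels apart
                    acc += kernel[u, v] * xp[i + u * rate, j + v * rate]
            out[i, j] = acc
    return out

def multi_scale_module(x, kernels, rates):
    """Run one dilated branch per rate in parallel and stack the
    resulting feature maps along a new channel axis (concatenation)."""
    return np.stack([dilated_conv2d(x, k, r)
                     for k, r in zip(kernels, rates)])
```

With a 3x3 kernel, a branch at rate 1 covers a 3x3 neighborhood while a branch at rate 4 covers 9x9, so concatenating the branch outputs gives the decoder access to fine detail and broader context at once; this is the multi-scale capture the abstract describes.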

Alexander Kroner, Mario Senden, Kurt Driessens, Rainer Goebel • 2019

Related benchmarks

Task                       | Dataset                             | Metric | Result | Rank
Saliency Prediction        | MIT300 (test)                       | CC     | 0.79   | 56
Saliency Prediction        | SALICON (test)                      | NSS    | 1.931  | 25
Visual Saliency Prediction | CAT2000 (test)                      | CC     | 0.87   | 19
Saliency Prediction        | MIT1003 (test)                      | NSS    | 2.8007 | 18
Saliency Prediction        | SALICON LSUN'17 competition (test)  | CC     | 0.899  | 18
Saliency Prediction        | SalECI (test)                       | CC     | 0.459  | 11
Saliency Prediction        | CAT2000 Natural scene               | CC     | 0.866  | 8
