Depth Saliency Based on
Anisotropic Center-surround Difference

Ran Ju, Ling Ge, Wenjing Geng, Tongwei Ren, and Gangshan Wu

Figure 1. Saliency detection aims at rapidly finding the most conspicuous regions in a scene. We demonstrate that the depth cue has a powerful impact on visual attention. First row: color image (left view), disparity map, and the saliency map generated by our method. Second row: saliency results generated by three depth-based methods. Third row: results generated by three color-based methods.


Most previous work on saliency detection has been dedicated to 2D images. Recently it has been shown that 3D visual information supplies a powerful cue for saliency analysis. In this paper, we propose a novel saliency method that works on depth images, based on anisotropic center-surround difference. Instead of depending on absolute depth, we measure the saliency of a point by how much it stands out from its surroundings, which takes the global depth structure into consideration. In addition, two common priors based on depth and location are used for refinement. The proposed method runs in O(N) time, and an evaluation on a dataset of over 1,000 stereo images shows that our method outperforms the state of the art.
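The core idea above can be illustrated with a minimal sketch: for each pixel, scan outward along several directions and accumulate how much the pixel's depth differs from the depths encountered on each ray. This is only a toy illustration of the center-surround principle, not the paper's exact formulation; the scan length, the number of directions, and the use of the per-ray mean are all assumptions. With a fixed scan length and direction count, the cost stays linear in the number of pixels.

```python
import numpy as np

def acsd_saliency(depth, scan_len=50, directions=8):
    """Toy anisotropic center-surround difference on a depth map.

    For each pixel, scan along `directions` evenly spaced rays for up to
    `scan_len` steps, and sum the absolute difference between the pixel's
    depth and the mean depth found along each ray. A pixel that stands out
    from its surroundings (rather than one with extreme absolute depth)
    receives a high score. Parameters and aggregation are illustrative
    assumptions, not the authors' published formulation.
    """
    h, w = depth.shape
    sal = np.zeros((h, w), dtype=np.float64)
    angles = np.linspace(0.0, 2.0 * np.pi, directions, endpoint=False)
    for theta in angles:
        dy, dx = np.sin(theta), np.cos(theta)
        for y in range(h):
            for x in range(w):
                vals = []
                for t in range(1, scan_len + 1):
                    yy = int(round(y + t * dy))
                    xx = int(round(x + t * dx))
                    if 0 <= yy < h and 0 <= xx < w:
                        vals.append(depth[yy, xx])
                    else:
                        break  # ray left the image
                if vals:
                    sal[y, x] += abs(depth[y, x] - np.mean(vals))
    # normalize to [0, 1] for display
    return sal / sal.max() if sal.max() > 0 else sal
```

A small protruding region in an otherwise flat depth map receives a high score under this scheme, while pixels in uniform areas score near zero, which mirrors the "how much a point stands out from its surroundings" intuition.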


Figure 2. Comparisons with state-of-the-art methods. The first column shows the left views of the stereo images. The second and third columns show the depth images and ground-truth salient object masks, respectively. The next three columns are the saliency results of color image based methods. The last four columns show the results of depth saliency methods.


  • Ran Ju, Ling Ge, Wenjing Geng, Tongwei Ren, and Gangshan Wu. Depth saliency based on anisotropic center-surround difference. IEEE International Conference on Image Processing (ICIP'14), Paris, France, 2014. (poster) [pdf, 2.0MB]
  • Ran Ju, Yang Liu, Ling Ge, Tongwei Ren, and Gangshan Wu. Depth-aware salient object detection based on anisotropic center-surround difference. Accepted by Signal Processing: Image Communication.


  • Our NJU2000 dataset (234MB) consists of 2,000 stereo images, together with their corresponding depth maps and manually labeled ground truth. The dataset is built for evaluating salient object detection methods that use depth information. The depth maps, generated using Sun's flow method (see our paper), are packed together with the images. You may either use our depth maps or compute depth with other methods, since the stereo images are provided.
  • The C++ source code (6.06MB) of the conference version is available.
  • The C++ source code including salient object detection and segmentation is available upon request. For evaluation, a Windows executable package is freely available here (2.5MB).
  • Source-level optimized code for SLIC, which runs 5x faster than the original version.

Please cite our paper if you use our dataset or code.


  • Context-aware saliency detection. Stas Goferman, Lihi Zelnik-Manor, and Ayellet Tal. IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 34, no. 10, pp. 1915–1926, 2012.
  • Global contrast based salient region detection. Ming-Ming Cheng, Guo-Xin Zhang, Niloy J Mitra, Xiaolei Huang, and Shi-Min Hu. IEEE Conference on Computer Vision and Pattern Recognition. IEEE, 2011, pp. 409–416.
  • What makes a patch distinct? Ran Margolin, Ayellet Tal, and Lihi Zelnik-Manor. IEEE Conference on Computer Vision and Pattern Recognition. IEEE, 2013, pp. 1139–1146.
  • Mesh saliency. Chang Ha Lee, Amitabh Varshney, and David W. Jacobs. ACM Transactions on Graphics. ACM, 2005, vol. 24, pp. 659–666.
  • Leveraging stereopsis for saliency analysis. Yuzhen Niu, Yujie Geng, Xueqing Li, and Feng Liu. IEEE Conference on Computer Vision and Pattern Recognition. IEEE, 2012, pp. 454–461.
  • Depth matters: Influence of depth cues on visual saliency. Congyan Lang, Tam V. Nguyen, Harish Katti, Karthik Yadati, Mohan Kankanhalli, and Shuicheng Yan. European Conference on Computer Vision, pp. 101–115. Springer, 2012.
  • RGBD salient object detection: A benchmark and algorithms. Houwen Peng, Bing Li, Weihua Xiong, Weiming Hu, and Rongrong Ji. European Conference on Computer Vision, pp. 92–109. Springer, 2014.