Abstract
Symmetry has been identified as an important indicator of visual attention. In this paper, we propose a novel saliency prediction method based on the fast radial symmetry transform (FRST) and its generalization (GFRST). We make two contributions. First, we propose a novel saliency predictor based on FRST. Unlike most previous approaches, it does not require a full set of visual features (intensity, color, orientation) but uses only symmetry and center bias to model human fixations at the behavioral level. The new model is shown to achieve higher prediction accuracy at lower computational cost than an existing symmetry-based saliency prediction method. Second, we propose using GFRST to predict visual attention. GFRST is shown to outperform FRST, as it can detect symmetries distorted by parallel projection.
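For intuition, the building block named in the abstract, FRST (Loy and Zelinsky, 2003), lets each strong-gradient pixel "vote" at the point its gradient direction points toward, a distance n away; radially symmetric regions collect many coincident votes. The sketch below is a simplified NumPy illustration under our own assumptions (bright symmetry only, no Gaussian smoothing of the per-radius maps; the function name and parameter defaults are ours, not the paper's):

```python
import numpy as np

def frst(image, radii, alpha=2.0, beta=0.1):
    """Simplified fast radial symmetry transform (bright symmetry only)."""
    gy, gx = np.gradient(image.astype(float))
    mag = np.hypot(gx, gy)
    h, w = image.shape
    out = np.zeros((h, w))
    # consider only pixels with a sufficiently strong gradient
    ys, xs = np.nonzero(mag > beta * mag.max())
    uy = gy[ys, xs] / mag[ys, xs]  # unit gradient directions
    ux = gx[ys, xs] / mag[ys, xs]
    for n in radii:
        O = np.zeros((h, w))  # orientation (vote-count) projection image
        M = np.zeros((h, w))  # gradient-magnitude projection image
        # each pixel votes n steps along its gradient direction
        py = np.clip(np.round(ys + n * uy).astype(int), 0, h - 1)
        px = np.clip(np.round(xs + n * ux).astype(int), 0, w - 1)
        np.add.at(O, (py, px), 1.0)
        np.add.at(M, (py, px), mag[ys, xs])
        O = np.minimum(O, n)  # cap vote counts, as in the original transform
        if M.max() > 0:
            out += (O / n) ** alpha * (M / M.max())
    return out / len(radii)
```

On an image containing a bright disk, the response peaks near the disk's center. In the paper's setting, such symmetry maps are combined with a center-bias term to predict fixations, and GFRST generalizes the voting step so that symmetries distorted by parallel projection can still be detected.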
Acknowledgments
The work described in this paper was supported by a Research Studentship and a grant from CityU (Project No. 7004240). We thank Mr. Yang Lou for proofreading the manuscript.
Ethics declarations
Conflict of Interest
Jiayu Liang and Shiu Yin Yuen declare that they have no conflict of interest.
Human and Animal Rights
This article does not contain any studies with human participants or animals performed by any of the authors.
Cite this article
Liang, J., Yuen, S.Y. A Novel Saliency Prediction Method Based on Fast Radial Symmetry Transform and Its Generalization. Cogn Comput 8, 693–702 (2016). https://doi.org/10.1007/s12559-016-9406-8