Abstract
Matching the abilities of natural vision is an appealing goal for artificial vision systems, especially in robotics applications, where visual perception of the surrounding environment is a key requirement. Focusing on visual attention as a core problem of autonomous visual perception, we propose a model of artificial visual attention that combines a statistical formulation of visual saliency with genetic optimization. Computationally, the model relies on center-surround statistical feature calculations and a nonlinear fusion of the resulting maps. Its statistical foundation and bottom-up nature allow it to be used without prior information while resting on a solid theoretical basis. The eye-fixation paradigm was adopted as the evaluation benchmark, using the MIT1003 and Toronto image datasets for experimental validation. The reported experimental results show scores that challenge the current best algorithms in the field, with a faster execution speed for our approach.
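The center-surround principle mentioned in the abstract can be illustrated with a minimal sketch: saliency is approximated as the absolute difference between a fine-scale (center) and a coarse-scale (surround) local statistic of the image. The kernel sizes, the choice of the local mean as the statistic, and the min-max normalization below are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def box_blur(img, k):
    """Local mean with a k x k box filter, computed via an integral image."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    # Integral image with a leading row/column of zeros.
    ii = np.zeros((padded.shape[0] + 1, padded.shape[1] + 1))
    ii[1:, 1:] = padded.cumsum(axis=0).cumsum(axis=1)
    h, w = img.shape
    # Window sum at each pixel from four integral-image corners.
    return (ii[k:k + h, k:k + w] - ii[:h, k:k + w]
            - ii[k:k + h, :w] + ii[:h, :w]) / (k * k)

def center_surround_saliency(img, center_k=3, surround_k=9):
    """Saliency map as |center mean - surround mean|, rescaled to [0, 1]."""
    diff = np.abs(box_blur(img, center_k) - box_blur(img, surround_k))
    rng = diff.max() - diff.min()
    return (diff - diff.min()) / rng if rng > 0 else diff
```

In a full model, several such maps (e.g., per color channel or per statistical feature) would be computed and combined through a nonlinear fusion whose weights could be tuned by a genetic algorithm, as the abstract suggests.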
Copyright information
© 2015 Springer International Publishing Switzerland
Cite this paper
Kachurka, V., Madani, K., Sabourin, C., Golovko, V. (2015). From Human Eye Fixation to Human-like Autonomous Artificial Vision. In: Rojas, I., Joya, G., Catala, A. (eds) Advances in Computational Intelligence. IWANN 2015. Lecture Notes in Computer Science, vol 9094. Springer, Cham. https://doi.org/10.1007/978-3-319-19258-1_15
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-19257-4
Online ISBN: 978-3-319-19258-1
eBook Packages: Computer Science (R0)