From Human Eye Fixation to Human-like Autonomous Artificial Vision

  • Conference paper
  • First Online:
Advances in Computational Intelligence (IWANN 2015)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 9094)

Included in the following conference series: International Work-Conference on Artificial Neural Networks (IWANN)

Abstract

Matching the skills of natural vision is an appealing prospect for artificial vision systems, especially in robotics applications where visual perception of the surrounding environment is a key requirement. Focusing on the visual attention problem for autonomous visual perception, in this work we propose a model of artificial visual attention that combines a statistical formulation of visual saliency with genetic optimization. Computationally, the model relies on center-surround statistical feature calculations and a nonlinear fusion of the resulting maps. Its statistical foundation and bottom-up nature also allow it to be used without prior information, while resting on a solid theoretical basis. The eye-fixation paradigm was adopted as the evaluation benchmark, with the MIT1003 and Toronto image datasets used for experimental validation. The reported results show scores that challenge the currently best algorithms in this field, while our approach runs faster.
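
To make the pipeline described in the abstract concrete, below is a minimal Python sketch of a center-surround statistical saliency map with a simple nonlinear fusion step. It is only an illustration of the general idea, not the authors' exact formulation: the per-channel Gaussian statistics, the symmetric KL-like center-surround distance, the window sizes, and the `weights`/`gamma` fusion parameters (stand-ins for quantities a genetic algorithm could tune) are all assumptions.

```python
# Hypothetical sketch of center-surround statistical saliency with nonlinear
# fusion; window sizes, fusion weights, and the exponent are illustrative only.
import numpy as np
from scipy.ndimage import uniform_filter


def local_stats(channel, size):
    """Local mean and variance over a square window of side `size`."""
    mean = uniform_filter(channel, size)
    sq_mean = uniform_filter(channel ** 2, size)
    var = np.maximum(sq_mean - mean ** 2, 1e-6)
    return mean, var


def center_surround_map(channel, center=7, surround=31):
    """Symmetric KL divergence between center and surround Gaussian fits."""
    mc, vc = local_stats(channel, center)
    ms, vs = local_stats(channel, surround)
    kl_cs = 0.5 * (vc / vs + (ms - mc) ** 2 / vs - 1.0 + np.log(vs / vc))
    kl_sc = 0.5 * (vs / vc + (mc - ms) ** 2 / vc - 1.0 + np.log(vc / vs))
    return kl_cs + kl_sc


def saliency(image, weights=(1.0, 1.0, 1.0), gamma=2.0):
    """Per-channel center-surround maps fused nonlinearly (max of weighted,
    powered maps); `weights` and `gamma` stand in for parameters that a
    genetic algorithm could optimize."""
    image = image.astype(np.float64) / 255.0
    maps = [w * center_surround_map(image[..., c]) ** gamma
            for c, w in enumerate(weights)]
    fused = np.maximum.reduce(maps)
    fused -= fused.min()
    return fused / (fused.max() + 1e-12)
```

For the eye-fixation benchmark mentioned above, a common way to score such a map is an ROC analysis that treats saliency values at fixated pixels as positives and all remaining pixels as negatives. The snippet below sketches that scoring under the same caveats: it is a simplified AUC variant, not the exact benchmark code used with MIT1003 or Toronto.

```python
# Simplified AUC scoring of a saliency map against a boolean fixation mask.
from sklearn.metrics import roc_auc_score


def fixation_auc(saliency_map, fixation_mask):
    labels = fixation_mask.ravel().astype(int)  # 1 at fixated pixels
    return roc_auc_score(labels, saliency_map.ravel())
```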


Author information

Corresponding author

Correspondence to Viachaslau Kachurka.

Copyright information

© 2015 Springer International Publishing Switzerland

About this paper

Cite this paper

Kachurka, V., Madani, K., Sabourin, C., Golovko, V. (2015). From Human Eye Fixation to Human-like Autonomous Artificial Vision. In: Rojas, I., Joya, G., Catala, A. (eds) Advances in Computational Intelligence. IWANN 2015. Lecture Notes in Computer Science, vol. 9094. Springer, Cham. https://doi.org/10.1007/978-3-319-19258-1_15

  • DOI: https://doi.org/10.1007/978-3-319-19258-1_15

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-19257-4

  • Online ISBN: 978-3-319-19258-1

  • eBook Packages: Computer Science (R0)
