A Task-Driven Eye Tracking Dataset for Visual Attention Analysis

  • Conference paper
Advanced Concepts for Intelligent Vision Systems (ACIVS 2015)

Part of the book series: Lecture Notes in Computer Science ((LNIP,volume 9386))

Abstract

To facilitate research in visual attention analysis, we design and establish a new task-driven eye tracking dataset of 47 subjects. Inspired by psychological findings that human visual behavior depends tightly on the executed task, we carefully design specific tasks in accordance with the contents of 111 images covering various semantic categories, such as text, facial expression, texture, pose, and gaze. This yields a dataset of 111 fixation density maps and over 5,000 scanpaths. Moreover, we provide baseline results for thirteen state-of-the-art saliency models, and we discuss how tasks and image contents influence human visual behavior. The task-driven eye tracking dataset, with its fixation density maps and scanpaths, will be made publicly available.
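Fixation density maps of the kind described above are commonly built by accumulating recorded fixation points into a per-pixel histogram and smoothing it with a Gaussian kernel. The following is a minimal sketch of that standard construction, not the paper's own protocol; the function name, the `sigma` bandwidth (often chosen to approximate one degree of visual angle), and the normalization to a probability map are our assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fixation_density_map(fixations, height, width, sigma=25.0):
    """Build a smoothed fixation density map.

    fixations: iterable of (x, y) pixel coordinates of fixation points.
    sigma: Gaussian smoothing bandwidth in pixels (assumed value).
    Returns a (height, width) map normalized to sum to 1.
    """
    density = np.zeros((height, width), dtype=np.float64)
    for x, y in fixations:
        xi, yi = int(round(x)), int(round(y))
        # Ignore fixations that fall outside the image bounds.
        if 0 <= yi < height and 0 <= xi < width:
            density[yi, xi] += 1.0
    # Smooth the point histogram into a continuous density.
    density = gaussian_filter(density, sigma=sigma)
    total = density.sum()
    return density / total if total > 0 else density
```

Normalizing to a unit sum lets the map be treated as a probability distribution over image locations, which is the form most saliency evaluation metrics expect.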



Author information

Corresponding author

Correspondence to Yingyue Xu.


Copyright information

© 2015 Springer International Publishing Switzerland

About this paper

Cite this paper

Xu, Y., Hong, X., He, Q., Zhao, G., Pietikäinen, M. (2015). A Task-Driven Eye Tracking Dataset for Visual Attention Analysis. In: Battiato, S., Blanc-Talon, J., Gallo, G., Philips, W., Popescu, D., Scheunders, P. (eds) Advanced Concepts for Intelligent Vision Systems. ACIVS 2015. Lecture Notes in Computer Science, vol 9386. Springer, Cham. https://doi.org/10.1007/978-3-319-25903-1_55

  • DOI: https://doi.org/10.1007/978-3-319-25903-1_55

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-25902-4

  • Online ISBN: 978-3-319-25903-1

  • eBook Packages: Computer Science, Computer Science (R0)
