Research Article · Open Access

Exploring the Effects of Scanpath Feature Engineering for Supervised Image Classification Models

Published: 18 May 2023

Abstract

Image classification models are becoming a popular method of analysis for scanpath classification. To implement these models, gaze data must first be reconfigured into a 2D image. However, this step receives relatively little attention in the literature, as focus is mostly placed on model configuration. As standard model architectures become more accessible to the wider eye-tracking community, we highlight the importance of carefully choosing feature representations within scanpath images, as they may heavily affect classification accuracy. To illustrate this point, we create thirteen sets of scanpath designs incorporating different eye-tracking feature representations from data recorded during a task-based viewing experiment. We evaluate each scanpath design by passing the sets of images through a standard pre-trained deep learning model as well as an SVM image classifier. Results from our primary experiment show an average accuracy improvement of 25 percentage points between the best-performing set and one baseline set.
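The pipeline the abstract describes, rasterizing gaze data into a 2D image and then feeding it to an image classifier, can be sketched as follows. This is a minimal illustration with synthetic fixations and an SVM on raw pixels, not the authors' thirteen designs or their pre-trained deep model; encoding fixation duration as pixel intensity is only one of many possible feature representations of the kind the paper compares.

```python
import numpy as np
from sklearn.svm import SVC

def scanpath_to_image(fixations, size=32):
    """Rasterize fixations (x, y, duration), with x and y in [0, 1],
    into a size-by-size grayscale image.

    Here duration is encoded as pixel intensity; this choice of
    feature representation is exactly the design decision the paper
    argues can heavily affect classification accuracy.
    """
    img = np.zeros((size, size))
    for x, y, dur in fixations:
        col = min(int(x * size), size - 1)
        row = min(int(y * size), size - 1)
        img[row, col] += dur
    m = img.max()
    return img / m if m > 0 else img

# Synthetic data: class 0 fixates the left half of the stimulus,
# class 1 the right half (a stand-in for two viewing tasks).
rng = np.random.default_rng(0)

def make_scanpath(cls, n_fix=10):
    x = rng.uniform(0.0, 0.5, n_fix) + (0.5 if cls else 0.0)
    y = rng.uniform(0.0, 1.0, n_fix)
    dur = rng.uniform(0.1, 1.0, n_fix)
    return np.stack([x, y, dur], axis=1)

# Build a small labeled set of scanpath images, flattened to pixel vectors.
X = np.array([scanpath_to_image(make_scanpath(c)).ravel()
              for c in (0, 1) for _ in range(20)])
y = np.array([c for c in (0, 1) for _ in range(20)])

clf = SVC().fit(X, y)
print(clf.score(X, y))  # training accuracy on this separable toy data
```

Swapping `scanpath_to_image` for a different rendering (e.g., drawing saccade lines between fixations, or dropping the duration channel) while keeping the classifier fixed is the style of comparison the paper carries out.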

Supplemental Material

MP4 File: Presentation video


Published In

Proceedings of the ACM on Human-Computer Interaction, Volume 7, Issue ETRA
May 2023, 234 pages
EISSN: 2573-0142
DOI: 10.1145/3597645
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

Publisher

Association for Computing Machinery

New York, NY, United States

Author Tags

  1. computer vision
  2. eye movements and cognition
  3. feature engineering
  4. image processing
  5. machine learning
  6. scanpaths
  7. signal processing
  8. visual search behavior


Cited By

  • (2024) A review of machine learning in scanpath analysis for passive gaze-based interaction. Frontiers in Artificial Intelligence, 7. DOI: 10.3389/frai.2024.1391745. Online publication date: 5-Jun-2024.
  • (2024) Exploring Communication Dynamics: Eye-tracking Analysis in Pair Programming of Computer Science Education. Proceedings of the 2024 Symposium on Eye Tracking Research and Applications, 1-7. DOI: 10.1145/3649902.3653942. Online publication date: 4-Jun-2024.
  • (2024) From Lenses to Living Rooms: A Policy Brief on Eye Tracking in XR Before the Impending Boom. 2024 IEEE International Conference on Artificial Intelligence and eXtended and Virtual Reality (AIxVR), 90-96. DOI: 10.1109/AIxVR59861.2024.00020. Online publication date: 17-Jan-2024.
  • (2024) A Trainable Feature Extractor Module for Deep Neural Networks and Scanpath Classification. Pattern Recognition, 292-304. DOI: 10.1007/978-3-031-78201-5_19. Online publication date: 2-Dec-2024.
  • (2023) Leveraging gaze for potential error prediction in AI-support systems: An exploratory analysis of interaction with a simulated robot. Companion Publication of the 25th International Conference on Multimodal Interaction, 56-60. DOI: 10.1145/3610661.3617163. Online publication date: 9-Oct-2023.
