DOI: 10.1145/3588015.3588412
Research-article · Open access

Bridging the Gap: Gaze Events as Interpretable Concepts to Explain Deep Neural Sequence Models

Published: 30 May 2023

Abstract

Recent work in XAI for eye tracking data has evaluated the suitability of feature attribution methods to explain the output of deep neural sequence models for the task of oculomotoric biometric identification. These methods provide saliency maps that highlight important input features of a specific eye gaze sequence. However, to date, the localization of these attributions has not been analyzed quantitatively across entire datasets. In this work, we employ established gaze event detection algorithms for fixations and saccades and quantitatively evaluate the impact of these events by determining their concept influence. Input features that belong to saccades are shown to be substantially more important than features that belong to fixations. By dissecting saccade events into sub-events, we show that gaze samples close to the saccadic peak velocity are the most influential. We further investigate the effect of event properties such as saccadic amplitude or fixational dispersion on the resulting concept influence.
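
The central quantity, concept influence, relates the attribution mass that a saliency map assigns to the samples of a gaze event (e.g., a saccade) to the fraction of the sequence those samples occupy. The sketch below illustrates this idea in Python; it is not the authors' implementation. The velocity-threshold saccade detector (in the spirit of I-VT, Salvucci & Goldberg 2000), the 30 deg/s threshold, the synthetic data, and all function names are illustrative assumptions.

    # Illustrative sketch only, not the paper's implementation.
    # Assumptions: `gaze` is a (T, 2) array of gaze positions in degrees of
    # visual angle sampled at `sampling_rate` Hz; `saliency` is a (T,)
    # attribution map produced by some feature attribution method; saccades
    # are detected with a plain velocity threshold (I-VT style), whereas the
    # paper uses established event detection algorithms.
    import numpy as np

    def angular_velocity(gaze, sampling_rate):
        """Per-sample angular velocity in deg/s via first differences."""
        steps = np.diff(gaze, axis=0, prepend=gaze[:1])
        return np.linalg.norm(steps, axis=1) * sampling_rate

    def saccade_mask(gaze, sampling_rate, threshold_deg_s=30.0):
        """Boolean mask: True where velocity exceeds the saccade threshold."""
        return angular_velocity(gaze, sampling_rate) > threshold_deg_s

    def concept_influence(saliency, concept_mask):
        """Attribution mass on the concept's samples, normalized by the
        concept's share of the sequence length; values > 1 indicate the
        concept is over-proportionally important."""
        s = np.abs(saliency)
        mass_share = s[concept_mask].sum() / s.sum()
        size_share = concept_mask.mean()
        return mass_share / size_share

    # Toy usage: synthetic gaze random walk and a random saliency map.
    rng = np.random.default_rng(0)
    gaze = np.cumsum(rng.normal(0.0, 0.02, size=(1000, 2)), axis=0)
    saliency = rng.random(1000)
    sacc = saccade_mask(gaze, sampling_rate=1000.0)
    print("saccade influence:", concept_influence(saliency, sacc))
    print("fixation influence:", concept_influence(saliency, ~sacc))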

Supplemental Material

  • MP4 File: Presentation video (short version)
  • PDF File: Appendix




Published In

ETRA '23: Proceedings of the 2023 Symposium on Eye Tracking Research and Applications
May 2023
441 pages
ISBN:9798400701504
DOI:10.1145/3588015
This work is licensed under a Creative Commons Attribution 4.0 International License.

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 30 May 2023


Author Tags

  1. concept influence
  2. explainability
  3. eye movements
  4. time-series
  5. xai

Qualifiers

  • Research-article
  • Research
  • Refereed limited

Funding Sources

  • German Federal Ministry of Education and Research (BMBF)

Conference

ETRA '23

Acceptance Rates

Overall Acceptance Rate 69 of 137 submissions, 50%

Article Metrics

  • Total Citations: 0
  • Total Downloads: 661
  • Downloads (last 12 months): 468
  • Downloads (last 6 weeks): 120

Reflects downloads up to 29 Nov 2024.
