
Comparing and Combining Interaction Data and Eye-tracking Data for the Real-time Prediction of User Cognitive Abilities in Visualization Tasks

Published: 30 May 2020

Abstract

Previous work has shown that some user cognitive abilities relevant to processing information visualizations can be predicted from eye-tracking data. This type of user modeling is important for devising visualizations that can detect a user's abilities and adapt to them during the interaction. In this article, we extend previous user modeling work by investigating, for the first time, interaction data as an alternative source for predicting cognitive abilities during visualization processing when collecting eye-tracking data is not feasible. We present an extensive comparison of user models based solely on eye-tracking data, solely on interaction data, and on a combination of the two. Although eye-tracking data generate the most accurate predictions, our results show that interaction data can still outperform a majority-class baseline, meaning that adaptation for interactive visualizations could be enabled from interaction data alone when eye tracking is not feasible. Furthermore, we found that interaction data can predict several cognitive abilities more accurately than eye-tracking data at the very beginning of the task, which is valuable for delivering adaptation early on. We also examine the value of multimodal classifiers that combine interaction and eye-tracking data, with promising results for some of our target cognitive abilities. In addition, we extend the types of visualizations considered and the set of cognitive abilities that can be predicted from either eye-tracking or interaction data. Finally, we evaluate how noise in gaze data affects prediction accuracy and find that retaining rather noisy gaze datapoints can yield predictions equal to or even better than those obtained by discarding them, a novel and important contribution for devising adaptive visualizations in real settings, where eye-tracking data are typically noisier than in the laboratory.
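As a purely illustrative sketch of the kind of comparison described in the abstract, the snippet below contrasts unimodal classifiers (interaction-only, gaze-only) and a multimodal classifier built by feature concatenation against a majority-class baseline. Everything here is an assumption for illustration: the synthetic features, the nearest-centroid classifier, and the 60/40 class split are invented and are not the authors' actual features, models, or results.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_modality(labels, shift, n_features=4):
    # Synthetic features whose per-class means differ by `shift` (invented data).
    return rng.normal(loc=shift * labels[:, None], scale=1.0,
                      size=(labels.size, n_features))

def nearest_centroid_fit(X, y):
    # One centroid per class; a deliberately simple stand-in classifier.
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def nearest_centroid_predict(model, X):
    classes = np.array(sorted(model))
    dists = np.stack([np.linalg.norm(X - model[c], axis=1) for c in classes])
    return classes[np.argmin(dists, axis=0)]

def accuracy(y_true, y_pred):
    return float(np.mean(y_true == y_pred))

# Binary "cognitive ability" labels (e.g., low/high), imbalanced ~60/40.
y = (rng.random(300) < 0.4).astype(int)
X_inter = make_modality(y, shift=0.8)   # interaction-data features
X_gaze = make_modality(y, shift=1.2)    # eye-tracking features
X_both = np.hstack([X_inter, X_gaze])   # multimodal fusion by concatenation

# Majority-class baseline: always predict the most frequent label.
majority = accuracy(y, np.full_like(y, np.bincount(y).argmax()))

for name, X in [("interaction", X_inter), ("gaze", X_gaze), ("both", X_both)]:
    half = y.size // 2  # simple holdout split: train on first half, test on second
    model = nearest_centroid_fit(X[:half], y[:half])
    acc = accuracy(y[half:], nearest_centroid_predict(model, X[half:]))
    print(f"{name}: {acc:.2f} vs baseline {majority:.2f}")
```

Feature concatenation is only one fusion strategy; decision-level fusion (combining the outputs of per-modality classifiers) is a common alternative when the modalities are not always available together.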


Cited By

  • (2024) Nonlinear Perception Characteristics Analysis of Ocean White Noise Based on Deep Learning Algorithms. Mathematics 12, 18, 2892. DOI: 10.3390/math12182892. Online publication date: 17-Sep-2024
  • (2024) Leveraging Machine Learning to Analyze Semantic User Interactions in Visual Analytics. Information 15, 6, 351. DOI: 10.3390/info15060351. Online publication date: 13-Jun-2024
  • (2024) Cognitive state detection with eye tracking in the field: an experience sampling study and its lessons learned. i-com 23, 1, 109-129. DOI: 10.1515/icom-2023-0035. Online publication date: 15-Apr-2024



Published In

ACM Transactions on Interactive Intelligent Systems  Volume 10, Issue 2
June 2020
155 pages
ISSN:2160-6455
EISSN:2160-6463
DOI:10.1145/3403610

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 30 May 2020
Accepted: 01 November 2019
Revised: 01 October 2019
Received: 01 July 2018
Published in TIIS Volume 10, Issue 2


Author Tags

  1. User modeling
  2. classification
  3. cognitive abilities
  4. data quality
  5. eye tracking
  6. information visualization
  7. interaction data
  8. user-adaptive interaction

Qualifiers

  • Research-article
  • Research
  • Refereed

Funding Sources

  • MITACS
  • Envision Sustainability Tools Inc

