Abstract
Enhancing computers with the facility to perceive and recognise the user's feelings and abilities, as well as aspects of the task at hand, is a key element in the creation of Intelligent Human-Computer Interaction. Many studies have focused on predicting users' cognitive and affective states, along with other human factors such as usability and user experience, in order to achieve high-quality interaction. However, a complementary approach is needed that enables computers to perceive more about the task a user is currently performing. This paper presents a study of user-driven, task-based classification, in which the classification algorithm uses features extracted from visual input modalities, i.e. facial expression via webcam and eye gaze behaviour via eye-tracker. In the experiments presented herein, the dataset employed by the model comprises four different computer-based tasks. Using a Support Vector Machine-based classifier across 42 subjects, the average classification accuracy achieved is 85.52% when utilising facial-based features as the input feature vector and 49.65% when using eye gaze-based features, while a combination of both types of features yields an average classification accuracy of 87.63%.
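To make the feature-fusion setup concrete, the following minimal sketch (not the authors' code) trains an SVM on concatenated facial and gaze feature vectors. The feature dimensions, the randomly generated data, and the four task labels are illustrative assumptions, and scikit-learn's SVC, itself a wrapper around LIBSVM, stands in for whatever SVM implementation the authors used.

```python
# Minimal sketch of SVM-based task classification with feature-level fusion.
# All data here is synthetic; dimensions and labels are assumptions, not the
# paper's actual dataset.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_samples = 200                                   # hypothetical labelled windows
facial = rng.normal(size=(n_samples, 68 * 2))     # e.g. 68 facial landmarks (x, y)
gaze = rng.normal(size=(n_samples, 12))           # e.g. fixation/saccade statistics
labels = rng.integers(0, 4, size=n_samples)       # four computer-based tasks

# Feature-level fusion: concatenate the two modalities per sample.
fused = np.hstack([facial, gaze])

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
scores = cross_val_score(clf, fused, labels, cv=5)
print(f"mean cross-validated accuracy: {scores.mean():.2%}")
```

Running the same pipeline on `facial` or `gaze` alone, rather than `fused`, reproduces the kind of single-modality versus combined-modality comparison the abstract reports.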