Abstract
Determining the visual focus of attention has recently gained increased interest in computer vision for developing non-verbal communication based systems. In this paper, we propose a computer vision based approach to classify the focus of attention of a human in a multi-object scenario. Head pose is used to determine the current focus of attention. To classify the different attentional directions, the system is trained using supervised machine learning and geometrical analysis techniques. The proposed system is trained on more than 7 h of live video covering 9 head poses, comprising 435,000 frames. The proposed attention classification model achieved 97.00% accuracy on a test set of 81,000 video frames and a visual focus of attention accuracy close to 95.00% in multi-object scenarios.
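The abstract does not describe the implementation, so the following is only a minimal sketch of the general idea of head-pose-based attention classification: a supervised classifier maps per-frame head-pose features to one of nine attentional directions. The feature representation, class names, and the scikit-learn SVM pipeline are assumptions for illustration, not the authors' actual method.

```python
# Minimal sketch (assumptions, not the authors' pipeline): classify a
# per-frame head-pose feature vector (e.g., estimated yaw/pitch/roll)
# into one of nine attentional directions with a supervised model.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Nine hypothetical head-pose classes (labels assumed for illustration).
CLASSES = [
    "front", "left", "right", "up", "down",
    "up-left", "up-right", "down-left", "down-right",
]

def train_attention_classifier(X_train, y_train):
    """X_train: (n_samples, n_features) head-pose features per frame;
    y_train: integer class indices 0..8."""
    model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
    model.fit(X_train, y_train)
    return model

def predict_focus(model, pose_features):
    """Map one frame's head-pose features to an attention direction label."""
    idx = model.predict(np.asarray(pose_features).reshape(1, -1))[0]
    return CLASSES[int(idx)]
```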
Acknowledgments
This work was supported by ICT Division, People’s Republic of Bangladesh.
Copyright information
© 2020 Springer Nature Switzerland AG
About this paper
Cite this paper
Afroze, S., Hoque, M.M. (2020). Classification of Attentional Focus Based on Head Pose in Multi-object Scenario. In: Vasant, P., Zelinka, I., Weber, GW. (eds) Intelligent Computing and Optimization. ICO 2019. Advances in Intelligent Systems and Computing, vol 1072. Springer, Cham. https://doi.org/10.1007/978-3-030-33585-4_35
DOI: https://doi.org/10.1007/978-3-030-33585-4_35
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-33584-7
Online ISBN: 978-3-030-33585-4
eBook Packages: Intelligent Technologies and Robotics (R0)