DOI: 10.1145/3409251.3411716
Work in Progress

Face2Multi-modal: In-vehicle Multi-modal Predictors via Facial Expressions

Published: 21 September 2020

Abstract

Toward intelligent Human-Vehicle Interaction systems and innovative Human-Vehicle Interaction designs, drivers’ in-vehicle physiological data has been explored as an essential data source. However, equipping drivers with multiple biosensors is widely considered user-unfriendly and impractical during driving. The lack of a practical approach to accessing physiological data has hindered the wider adoption of advanced biosignal-driven designs (e.g., driver monitoring systems). Hence, the demand for a user-friendly approach to measuring drivers’ body statuses has become more intense.
In this Work-in-Progress, we present Face2Multi-modal, an in-vehicle multi-modal data-stream predictor driven by facial expressions alone. More specifically, we explore the estimation of drivers’ Heart Rate and Skin Conductance, as well as Vehicle Speed, from facial video. We believe Face2Multi-modal provides a user-friendly alternative for acquiring drivers’ physiological status and vehicle status, which could serve as a building block for many current and future personalized Human-Vehicle Interaction designs. More details and updates on Face2Multi-modal are available at https://github.com/unnc-ucc/Face2Multimodal/.
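The abstract describes regressing several data streams (heart rate, skin conductance, vehicle speed) from facial imagery. The core idea can be sketched as a shared feature extractor feeding one regression head per predicted stream. The sketch below is purely illustrative: the random-projection "backbone", feature dimensions, and least-squares heads are assumptions standing in for the authors' unspecified model, not their implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a learned facial feature extractor:
# a fixed random projection from flattened 32x32 face crops to 64-d features.
W_backbone = rng.normal(size=(32 * 32, 64))

def extract_features(faces):
    """faces: (n, 32, 32) grayscale crops -> (n, 64) feature vectors."""
    return np.tanh(faces.reshape(len(faces), -1) @ W_backbone)

# Synthetic stand-in data: face crops plus three target streams
# (heart rate, skin conductance, vehicle speed) -- purely illustrative.
faces = rng.normal(size=(200, 32, 32))
targets = rng.normal(size=(200, 3))  # columns: HR, GSR, speed

# One least-squares linear head per predicted stream, on shared features.
X = extract_features(faces)
heads, *_ = np.linalg.lstsq(X, targets, rcond=None)  # shape (64, 3)

def predict(faces):
    """Return per-frame estimates of [heart_rate, skin_conductance, speed]."""
    return extract_features(faces) @ heads

est = predict(faces[:5])
print(est.shape)  # (5, 3): one 3-stream estimate per face frame
```

The multi-head layout mirrors the paper's framing: a single face-derived representation is reused across all predicted modalities, so adding another target stream only adds another head.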




Published In

AutomotiveUI '20: 12th International Conference on Automotive User Interfaces and Interactive Vehicular Applications
September 2020, 116 pages
ISBN: 9781450380669
DOI: 10.1145/3409251

Publisher

Association for Computing Machinery, New York, NY, United States


Author Tags

  1. Computer Vision
  2. Ergonomics
  3. Human-Vehicle Interactions

Qualifiers

  • Work in progress
  • Research
  • Refereed limited

Conference

AutomotiveUI '20

Acceptance Rates

Overall Acceptance Rate 248 of 566 submissions, 44%

Cited By

  • (2023) Driver Model for Take-Over-Request in Autonomous Vehicles. Adjunct Proceedings of the 31st ACM Conference on User Modeling, Adaptation and Personalization, 317–324. DOI: 10.1145/3563359.3596994
  • (2023) Facial Emotion Recognition Method Based on Canny Edge Detection Using Convolutional Neural Network. 2023 International Conference on Computing, Communication, and Intelligent Systems (ICCCIS), 425–430. DOI: 10.1109/ICCCIS60361.2023.10425079
  • (2023) InMyFace. Information Fusion 99:C. DOI: 10.1016/j.inffus.2023.101886
  • (2023) BROOK Dataset: A Playground for Exploiting Data-Driven Techniques in Human-Vehicle Interactive Designs. HCI in Mobility, Transport, and Automotive Systems, 191–209. DOI: 10.1007/978-3-031-35908-8_14
  • (2023) FIGCONs: Exploiting FIne-Grained CONstructs of Facial Expressions for Efficient and Accurate Estimation of In-Vehicle Drivers’ Statistics. HCI in Mobility, Transport, and Automotive Systems, 3–17. DOI: 10.1007/978-3-031-35908-8_1
  • (2023) Characterizing and Optimizing Differentially-Private Techniques for High-Utility, Privacy-Preserving Internet-of-Vehicles. HCI in Mobility, Transport, and Automotive Systems, 31–50. DOI: 10.1007/978-3-031-35678-0_3
  • (2022) Towards Implicit Interaction in Highly Automated Vehicles – A Systematic Literature Review. Proceedings of the ACM on Human-Computer Interaction 6:MHCI, 1–21. DOI: 10.1145/3546726
  • (2022) A Design Space for Human Sensor and Actuator Focused In-Vehicle Interaction Based on a Systematic Literature Review. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 6:2, 1–51. DOI: 10.1145/3534617
  • (2022) Human–Machine Interaction in Intelligent and Connected Vehicles: A Review of Status Quo, Issues, and Opportunities. IEEE Transactions on Intelligent Transportation Systems 23:9, 13954–13975. DOI: 10.1109/TITS.2021.3127217
  • (2022) Oneiros-OpenDS: An Interactive and Extensible Toolkit for Agile and Automated Developments of Complicated Driving Scenes. HCI in Mobility, Transport, and Automotive Systems, 88–107. DOI: 10.1007/978-3-031-04987-3_6
