Gaze Zone Classification for Driving Studies Using YOLOv8 Image Classification
Figure 1. Four images from one of the five drivers in the Lisa2 dataset [36].
Figure 2. Photograph of the setup. Two webcams were attached to a laptop that controlled data collection and were placed on the driver seat. Small round stickers in different colours helped the participant fixate on the different gaze zones. The position of the sticker for the right window is indicated; other stickers visible in this image mark the speedometer, the centre console, and the right mirror.
Figure 3. Examples of images of looking and pointing in a different context. A total of 10 targets were selected around the screen that the webcam was attached to and in other parts of the room. Note that between recording sessions the actor changed the blue jacket for a red jacket.
Figure 4. Accuracy per model trained on individual drivers for the Lisa2 dataset without glasses. Accuracy is defined as the percentage of predictions that agree with the annotated label (also known as ‘top-1’ accuracy).
Figure 5. Confusion matrices for each combination of the driver used during training and the driver used for the test images, based on the validation sets.
Figure 6. Accuracy per driver for models trained on different numbers of drivers for the Lisa2 dataset without glasses.
Figure 7. Four images from one of the five drivers in the Lisa2 dataset, now with glasses.
Figure 8. (a) Accuracy per driver on images with glasses when trained on images without glasses or on images with glasses. (b) Accuracy per driver on images with and without glasses when trained on images with and without glasses. Images are from the Lisa2 dataset.
Figure 9. Examples of images of the male driver, with and without glasses, recorded with our own app.
Figure 10. (a) Zone classification accuracy for the male and female driver for smaller (320 × 240) and larger (640 × 480) images (both without sunglasses). Each model was trained on that particular combination of driver and image size and then applied to the validation set (seen during training) and the test set (not seen during training). (b) Accuracy per driver for a model trained on the same driver, a model trained on the other driver, or a model trained on both drivers. Performance is computed across the training, validation, and test sets. (c) Accuracy for the male driver, with or without sunglasses, for models trained on images without sunglasses, with sunglasses, or with both (‘Both’). Performance is computed across the training, validation, and test sets.
Figure 11. Zone classification accuracy when an actor was looking or pointing at objects inside a living room. Between recordings, the actor changed from a red to a blue jacket, or vice versa. The change of jacket reduced accuracy by around 5% (pointing) to 10% (looking) if these images were not included during training (‘both’ refers to training sets that included both red and blue jacket images).
Figure 12. Screenshots from the first app, which can be used to instruct participants to look at particular gaze zones, to collect images from the webcam, to extract frames, and to structure the images into the folders required for image classification. Note that a section of the window is shown in both images for better visibility.
Figure 13. Screenshots from the second app, which can be used to train the models and to generate the required file structure and annotations for object detection. Note that we did not use the object detection functionality in the present tests, because it is computationally more expensive and the image classification already reached near-perfect performance. Each image shows a section of the original screen for better visibility.
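The gaze zone models referred to in the captions above are standard YOLOv8 image-classification models trained on a class-per-folder dataset. As a point of reference only, the sketch below shows how such a model might be trained and evaluated with the Ultralytics Python package; the folder names, model size, and hyperparameter values are illustrative assumptions, not the exact settings used in this study.

```python
# Minimal sketch: training and evaluating a YOLOv8 classification model on gaze zones.
# Assumes the Ultralytics package (pip install ultralytics) and a dataset laid out as
#   gaze_zones/train/<zone_name>/*.jpg
#   gaze_zones/val/<zone_name>/*.jpg
# Paths and hyperparameters are illustrative, not the paper's exact settings.
from ultralytics import YOLO

# Start from a pretrained classification checkpoint (nano variant shown here).
model = YOLO("yolov8n-cls.pt")

# Train on the folder-structured dataset; each subfolder name becomes a gaze-zone class.
model.train(data="gaze_zones", epochs=50, imgsz=224)

# Evaluate on the validation split; top1 is the fraction of images whose highest-scoring
# class matches the annotated gaze zone (the 'top-1' accuracy reported in the figures).
metrics = model.val()
print(f"top-1 accuracy: {metrics.top1:.3f}")

# Predict the gaze zone for a single new frame.
result = model("example_frame.jpg")[0]
predicted_zone = result.names[result.probs.top1]
print("predicted zone:", predicted_zone)
```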
Abstract
1. Introduction
2. Related Work
2.1. Annotation Methods
2.2. Datasets
2.3. Previous Modelling Approaches
3. Methods
3.1. Datasets
3.2. Model Training
4. Results
4.1. Lisa2
4.2. Own Driver Dataset
4.3. Own Living Room Dataset
5. Apps
6. Discussion
7. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Sajid Hasan, A.; Jalayer, M.; Heitmann, E.; Weiss, J. Distracted driving crashes: A review on data collection, analysis, and crash prevention methods. Transp. Res. Rec. 2022, 2676, 423–434. [Google Scholar] [CrossRef]
- Klauer, S.G.; Guo, F.; Simons-Morton, B.G.; Ouimet, M.C.; Lee, S.E.; Dingus, T.A. Distracted driving and risk of road crashes among novice and experienced drivers. N. Engl. J. Med. 2014, 370, 54–59. [Google Scholar] [CrossRef] [PubMed]
- Dingus, T.A.; Klauer, S.G.; Neale, V.L.; Petersen, A.; Lee, S.E.; Sudweeks, J.; Perez, M.A.; Hankey, J.; Ramsey, D.; Gupta, S.; et al. The 100-Car Naturalistic Driving Study, Phase II-Results of the 100-Car Field Experiment; National Technical Information Service: Springfield, VA, USA, 2006. [Google Scholar]
- Hanowski, R.J.; Perez, M.A.; Dingus, T.A. Driver distraction in long-haul truck drivers. Transp. Res. Part F Traffic Psychol. Behav. 2005, 8, 441–458. [Google Scholar] [CrossRef]
- Cades, D.M.; Crump, C.; Lester, B.D.; Young, D. Driver distraction and advanced vehicle assistive systems (ADAS): Investigating effects on driver behavior. In Proceedings of the Advances in Human Aspects of Transportation: Proceedings of the AHFE 2016 International Conference on Human Factors in Transportation, Walt Disney World®, Orlando, FL, USA, 27–31 July 2016; Springer: New York, NY, USA, 2017; pp. 1015–1022. [Google Scholar]
- Hungund, A.P.; Pai, G.; Pradhan, A.K. Systematic review of research on driver distraction in the context of advanced driver assistance systems. Transp. Res. Rec. 2021, 2675, 756–765. [Google Scholar] [CrossRef]
- Xu, Q.; Guo, T.Y.; Shao, F.; Jiang, X.J. Division of area of fixation interest for real vehicle driving tests. Math. Probl. Eng. 2017, 2017, 3674374. [Google Scholar] [CrossRef]
- Vehlen, A.; Standard, W.; Domes, G. How to choose the size of facial areas of interest in interactive eye tracking. PLoS ONE 2022, 17, e0263594. [Google Scholar] [CrossRef]
- Vlakveld, W.; Doumen, M.; van der Kint, S. Driving and gaze behavior while texting when the smartphone is placed in a mount: A simulator study. Transp. Res. Part F Traffic Psychol. Behav. 2021, 76, 26–37. [Google Scholar] [CrossRef]
- Desmet, C.; Diependaele, K. An eye-tracking study on the road examining the effects of handsfree phoning on visual attention. Transp. Res. Part F Traffic Psychol. Behav. 2019, 60, 549–559. [Google Scholar] [CrossRef]
- Ledezma, A.; Zamora, V.; Sipele, Ó.; Sesmero, M.P.; Sanchis, A. Implementing a gaze tracking algorithm for improving advanced driver assistance systems. Electronics 2021, 10, 1480. [Google Scholar] [CrossRef]
- Yang, Y.; Liu, C.; Chang, F.; Lu, Y.; Liu, H. Driver gaze zone estimation via head pose fusion assisted supervision and eye region weighted encoding. IEEE Trans. Consum. Electron. 2021, 67, 275–284. [Google Scholar] [CrossRef]
- Lavalliere, M.; Laurendeau, D.; Simoneau, M.; Teasdale, N. Changing lanes in a simulator: Effects of aging on the control of the vehicle and visual inspection of mirrors and blind spot. Traffic Inj. Prev. 2011, 12, 191–200. [Google Scholar] [CrossRef] [PubMed]
- Pan, Y.; Zhang, Q.; Zhang, Y.; Ge, X.; Gao, X.; Yang, S.; Xu, J. Lane-change intention prediction using eye-tracking technology: A systematic review. Appl. Ergon. 2022, 103, 103775. [Google Scholar] [CrossRef] [PubMed]
- Tijerina, L.; Garrott, W.R.; Stoltzfus, D.; Parmer, E. Eye glance behavior of van and passenger car drivers during lane change decision phase. Transp. Res. Rec. 2005, 1937, 37–43. [Google Scholar] [CrossRef]
- Vasli, B.; Martin, S.; Trivedi, M.M. On driver gaze estimation: Explorations and fusion of geometric and data driven approaches. In Proceedings of the 2016 IEEE 19th International Conference on Intelligent Transportation Systems (ITSC), Rio de Janeiro, Brazil, 1–4 November 2016; pp. 655–660. [Google Scholar]
- Fridman, L.; Lee, J.; Reimer, B.; Victor, T. ‘Owl’ and ‘Lizard’: Patterns of head pose and eye pose in driver gaze classification. IET Comput. Vis. 2016, 10, 308–314. [Google Scholar] [CrossRef]
- Choi, I.H.; Hong, S.K.; Kim, Y.G. Real-time categorization of driver’s gaze zone using the deep learning techniques. In Proceedings of the 2016 International Conference on Big Data and Smart Computing (BigComp), IEEE, Hong Kong, China, 18–20 January 2016; pp. 143–148. [Google Scholar]
- Barnard, Y.; Utesch, F.; van Nes, N.; Eenink, R.; Baumann, M. The study design of UDRIVE: The naturalistic driving study across Europe for cars, trucks and scooters. Eur. Transp. Res. Rev. 2016, 8, 14. [Google Scholar] [CrossRef]
- Eenink, R.; Barnard, Y.; Baumann, M.; Augros, X.; Utesch, F. UDRIVE: The European naturalistic driving study. In Proceedings of the Transport Research Arena, IFSTTAR, Paris, France, 14–16 April 2014. [Google Scholar]
- van Nes, N.; Bärgman, J.; Christoph, M.; van Schagen, I. The potential of naturalistic driving for in-depth understanding of driver behavior: UDRIVE results and beyond. Saf. Sci. 2019, 119, 11–20. [Google Scholar] [CrossRef]
- Guyonvarch, L.; Lecuyer, E.; Buffat, S. Evaluation of safety critical event triggers in the UDrive data. Saf. Sci. 2020, 132, 104937. [Google Scholar] [CrossRef]
- Seppelt, B.D.; Seaman, S.; Lee, J.; Angell, L.S.; Mehler, B.; Reimer, B. Glass half-full: On-road glance metrics differentiate crashes from near-crashes in the 100-Car data. Accid. Anal. Prev. 2017, 107, 48–62. [Google Scholar] [CrossRef]
- Peng, Y.; Boyle, L.N.; Hallmark, S.L. Driver’s lane keeping ability with eyes off road: Insights from a naturalistic study. Accid. Anal. Prev. 2013, 50, 628–634. [Google Scholar] [CrossRef]
- Tivesten, E.; Dozza, M. Driving context and visual-manual phone tasks influence glance behavior in naturalistic driving. Transp. Res. Part F Traffic Psychol. Behav. 2014, 26, 258–272. [Google Scholar] [CrossRef]
- Jansen, R.J.; van der Kint, S.T.; Hermens, F. Does agreement mean accuracy? Evaluating glance annotation in naturalistic driving data. Behav. Res. Methods 2021, 53, 430–446. [Google Scholar] [CrossRef] [PubMed]
- Titz, J.; Scholz, A.; Sedlmeier, P. Comparing eye trackers by correlating their eye-metric data. Behav. Res. Methods 2018, 50, 1853–1863. [Google Scholar] [CrossRef]
- Lara-Alvarez, C.; Gonzalez-Herrera, F. Testing multiple polynomial models for eye-tracker calibration. Behav. Res. Methods 2020, 52, 2506–2514. [Google Scholar] [CrossRef] [PubMed]
- Shih, S.W.; Wu, Y.T.; Liu, J. A calibration-free gaze tracking technique. In Proceedings of the 15th International Conference on Pattern Recognition, ICPR-2000, IEEE, Barcelona, Spain, 3–8 September 2000; Volume 4, pp. 201–204. [Google Scholar]
- Klefenz, F.; Husar, P.; Krenzer, D.; Hess, A. Real-time calibration-free autonomous eye tracker. In Proceedings of the 2010 IEEE International Conference on Acoustics, Speech and Signal Processing, IEEE, Dallas, TX, USA, 14–19 March 2010; pp. 762–765. [Google Scholar]
- Chuang, M.C.; Bala, R.; Bernal, E.A.; Paul, P.; Burry, A. Estimating gaze direction of vehicle drivers using a smartphone camera. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Columbus, OH, USA, 23–28 June 2014; pp. 165–170. [Google Scholar]
- Jocher, G.; Chaurasia, A.; Qiu, J. Ultralytics YOLOv8. 2023. Available online: https://github.com/ultralytics/ultralytics (accessed on 10 November 2024).
- Hermens, F. Automatic object detection for behavioural research using YOLOv8. Behav. Res. Methods 2024, 56, 7307–7330. [Google Scholar] [CrossRef]
- Ultralytics. Ultralytics YOLO11. 2024. Available online: https://docs.ultralytics.com/models/yolo11/ (accessed on 4 November 2024).
- Hermens, F.; Walker, R. The influence of social and symbolic cues on observers’ gaze behaviour. Br. J. Psychol. 2016, 107, 484–502. [Google Scholar] [CrossRef]
- Rangesh, A.; Zhang, B.; Trivedi, M.M. Driver gaze estimation in the real world: Overcoming the eyeglass challenge. In Proceedings of the 2020 IEEE Intelligent Vehicles Symposium (IV), Las Vegas, NV, USA, 19 October–13 November 2020. [Google Scholar]
- Sharma, P.K.; Chakraborty, P. A review of driver gaze estimation and application in gaze behavior understanding. Eng. Appl. Artif. Intell. 2024, 133, 108117. [Google Scholar] [CrossRef]
- Rahman, M.S.; Venkatachalapathy, A.; Sharma, A.; Wang, J.; Gursoy, S.V.; Anastasiu, D.; Wang, S. Synthetic distracted driving (syndd1) dataset for analyzing distracted behaviors and various gaze zones of a driver. Data Brief 2023, 46, 108793. [Google Scholar] [CrossRef]
- Lee, S.J.; Jo, J.; Jung, H.G.; Park, K.R.; Kim, J. Real-time gaze estimator based on driver’s head orientation for forward collision warning system. IEEE Trans. Intell. Transp. Syst. 2011, 12, 254–267. [Google Scholar] [CrossRef]
- Wang, Y.; Zhao, T.; Ding, X.; Bian, J.; Fu, X. Head pose-free eye gaze prediction for driver attention study. In Proceedings of the 2017 IEEE International Conference on Big Data and Smart Computing (BigComp), Jeju Island, Republic of Korea, 13–16 February 2017; pp. 42–46. [Google Scholar]
- Jha, S.; Busso, C. Analyzing the relationship between head pose and gaze to model driver visual attention. In Proceedings of the 2016 IEEE 19th International Conference on Intelligent Transportation Systems (ITSC), Rio de Janeiro, Brazil, 1–4 November 2016; pp. 2157–2162. [Google Scholar]
- Shah, S.M.; Sun, Z.; Zaman, K.; Hussain, A.; Shoaib, M.; Pei, L. A driver gaze estimation method based on deep learning. Sensors 2022, 22, 3959. [Google Scholar] [CrossRef]
- Ghosh, S.; Dhall, A.; Sharma, G.; Gupta, S.; Sebe, N. Speak2label: Using domain knowledge for creating a large scale driver gaze zone estimation dataset. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Virtual, 11–17 October 2021; pp. 2896–2905. [Google Scholar]
- Vora, S.; Rangesh, A.; Trivedi, M.M. On generalizing driver gaze zone estimation using convolutional neural networks. In Proceedings of the 2017 IEEE Intelligent Vehicles Symposium (IV), Los Angeles, CA, USA, 11–14 June 2017; pp. 849–854. [Google Scholar]
- Wang, J.; Li, W.; Li, F.; Zhang, J.; Wu, Z.; Zhong, Z.; Sebe, N. 100-driver: A large-scale, diverse dataset for distracted driver classification. IEEE Trans. Intell. Transp. Syst. 2023, 24, 7061–7072. [Google Scholar] [CrossRef]
- Kübler, T.C.; Fuhl, W.; Wagner, E.; Kasneci, E. 55 rides: Attention annotated head and gaze data during naturalistic driving. In Proceedings of the ACM Symposium on Eye Tracking Research and Applications, Stuttgart, Germany, 25–29 May 2021; pp. 1–8. [Google Scholar]
- Camberg, S.; Hüllermeier, E. An Extensive Analysis of Different Approaches to Driver Gaze Classification. IEEE Trans. Intell. Transp. Syst. 2024, 25, 16435–16448. [Google Scholar] [CrossRef]
- Fridman, L.; Langhans, P.; Lee, J.; Reimer, B. Driver gaze region estimation without use of eye movement. IEEE Intell. Syst. 2016, 31, 49–56. [Google Scholar] [CrossRef]
- Vora, S.; Rangesh, A.; Trivedi, M.M. Driver gaze zone estimation using convolutional neural networks: A general framework and ablative analysis. IEEE Trans. Intell. Veh. 2018, 3, 254–265. [Google Scholar] [CrossRef]
- Martin, S.; Vora, S.; Yuen, K.; Trivedi, M.M. Dynamics of driver’s gaze: Explorations in behavior modeling and maneuver prediction. IEEE Trans. Intell. Veh. 2018, 3, 141–150. [Google Scholar] [CrossRef]
- Wang, Y.; Yuan, G.; Mi, Z.; Peng, J.; Ding, X.; Liang, Z.; Fu, X. Continuous driver’s gaze zone estimation using rgb-d camera. Sensors 2019, 19, 1287. [Google Scholar] [CrossRef]
- Ribeiro, R.F.; Costa, P.D. Driver gaze zone dataset with depth data. In Proceedings of the 2019 14th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2019), Lille, France, 14–18 May 2019; pp. 1–5. [Google Scholar]
- Nuevo, J.; Bergasa, L.M.; Jiménez, P. RSMAT: Robust simultaneous modeling and tracking. Pattern Recognit. Lett. 2010, 31, 2455–2463. [Google Scholar] [CrossRef]
- Rong, Y.; Akata, Z.; Kasneci, E. Driver intention anticipation based on in-cabin and driving scene monitoring. In Proceedings of the 2020 IEEE 23rd International Conference on Intelligent Transportation Systems (ITSC), Rhodes, Greece, 20–23 September 2020; pp. 1–8. [Google Scholar]
- Cheng, Y.; Zhu, Y.; Wang, Z.; Hao, H.; Liu, Y.; Cheng, S.; Wang, X.; Chang, H.J. What Do You See in Vehicle? Comprehensive Vision Solution for In-Vehicle Gaze Estimation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 17–21 June 2024; pp. 1556–1565. [Google Scholar]
- Kasahara, I.; Stent, S.; Park, H.S. Look Both Ways: Self-supervising Driver Gaze Estimation and Road Scene Saliency. In Proceedings of the Computer Vision—ECCV 2022: 17th European Conference, Tel Aviv, Israel, 23–27 October 2022; Proceedings, Part XIII; Springer: New York, NY, USA, 2022; pp. 126–142. [Google Scholar]
- Diaz-Chito, K.; Hernández-Sabaté, A.; López, A.M. A reduced feature set for driver head pose estimation. Appl. Soft Comput. 2016, 45, 98–107. [Google Scholar] [CrossRef]
- Dari, S.; Kadrileev, N.; Hüllermeier, E. A neural network-based driver gaze classification system with vehicle signals. In Proceedings of the 2020 International Joint Conference on Neural Networks (IJCNN), IEEE, Glasgow, UK, 19–24 July 2020; pp. 1–7. [Google Scholar]
- Ghosh, S.; Hayat, M.; Dhall, A.; Knibbe, J. Mtgls: Multi-task gaze estimation with limited supervision. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Waikoloa, HI, USA, 3–8 January 2022; pp. 3223–3234. [Google Scholar]
- Tawari, A.; Chen, K.H.; Trivedi, M.M. Where is the driver looking: Analysis of head, eye and iris for robust gaze zone estimation. In Proceedings of the 17th International IEEE Conference on Intelligent Transportation Systems (ITSC), Qingdao, China, 8–11 October 2014; pp. 988–994. [Google Scholar]
- Tawari, A.; Trivedi, M.M. Robust and continuous estimation of driver gaze zone by dynamic analysis of multiple face videos. In Proceedings of the 2014 IEEE Intelligent Vehicles Symposium Proceedings, Ypsilanti, MI, USA, 8–11 June 2014; pp. 344–349. [Google Scholar]
- Vicente, F.; Huang, Z.; Xiong, X.; De la Torre, F.; Zhang, W.; Levi, D. Driver gaze tracking and eyes off the road detection system. IEEE Trans. Intell. Transp. Syst. 2015, 16, 2014–2027. [Google Scholar] [CrossRef]
- Naqvi, R.A.; Arsalan, M.; Batchuluun, G.; Yoon, H.S.; Park, K.R. Deep learning-based gaze detection system for automobile drivers using a NIR camera sensor. Sensors 2018, 18, 456. [Google Scholar] [CrossRef]
- Banerjee, S.; Joshi, A.; Turcot, J.; Reimer, B.; Mishra, T. Driver glance classification in-the-wild: Towards generalization across domains and subjects. In Proceedings of the 2021 16th IEEE International Conference on Automatic Face and Gesture Recognition (FG 2021), Jodhpur, India, 15–18 December 2021; pp. 1–8. [Google Scholar]
- Yoon, H.S.; Baek, N.R.; Truong, N.Q.; Park, K.R. Driver gaze detection based on deep residual networks using the combined single image of dual near-infrared cameras. IEEE Access 2019, 7, 93448–93461. [Google Scholar] [CrossRef]
- Bany Muhammad, M.; Yeasin, M. Eigen-CAM: Visual explanations for deep convolutional neural networks. SN Comput. Sci. 2021, 2, 47. [Google Scholar] [CrossRef]
- Selvaraju, R.R.; Cogswell, M.; Das, A.; Vedantam, R.; Parikh, D.; Batra, D. Grad-cam: Visual explanations from deep networks via gradient-based localization. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 618–626. [Google Scholar]
- Rangesh, A.; Zhang, B.; Trivedi, M.M. Gaze Preserving CycleGANs for Eyeglass Removal & Persistent Gaze Estimation. arXiv 2020, arXiv:2002.02077. [Google Scholar]
- Ji, Q.; Yang, X. Real time visual cues extraction for monitoring driver vigilance. In Proceedings of the International Conference on Computer Vision Systems, Vancouver, BC, Canada, 7–14 July 2001; Springer: New York, NY, USA, 2001; pp. 107–124. [Google Scholar]
- Hu, B.; Zheng, Z.; Liu, P.; Yang, W.; Ren, M. Unsupervised eyeglasses removal in the wild. IEEE Trans. Cybern. 2020, 51, 4373–4385. [Google Scholar] [CrossRef] [PubMed]
- Viso AI. YOLOv8: A Complete Guide. 2023. Available online: https://viso.ai/deep-learning/yolov8-guide/ (accessed on 4 November 2024).
- Deng, J.; Dong, W.; Socher, R.; Li, L.J.; Li, K.; Fei-Fei, L. Imagenet: A large-scale hierarchical image database. In Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; pp. 248–255. [Google Scholar]
- Macháček, D.; Dabre, R.; Bojar, O. Turning Whisper into Real-Time Transcription System. In Proceedings of the 13th International Joint Conference on Natural Language Processing and the 3rd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics: System Demonstrations, Bali, Indonesia, 1–4 November 2023; Saha, S., Sujaini, H., Eds.; pp. 17–24. Available online: https://aclanthology.org/2023.ijcnlp-demo.3.pdf (accessed on 10 November 2024).
- Serengil, S.I. Deepface: A Lightweight Face Recognition and Facial Attribute Analysis Framework (Age, Gender, Emotion, Race) for Python. 2024. Available online: https://github.com/serengil/deepface (accessed on 10 November 2024).
- Dingus, T.A. Estimates of prevalence and risk associated with inattention and distraction based upon in situ naturalistic data. Ann. Adv. Automot. Med. 2014, 58, 60. [Google Scholar]
- Lollett, C.; Hayashi, H.; Kamezaki, M.; Sugano, S. A Robust Driver’s Gaze Zone Classification using a Single Camera for Self-occlusions and Non-aligned Head and Eyes Direction Driving Situations. In Proceedings of the 2020 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Toronto, ON, Canada, 11–14 October 2020; pp. 4302–4308. [Google Scholar]
- Doshi, A.; Trivedi, M.M. Head and eye gaze dynamics during visual attention shifts in complex environments. J. Vis. 2012, 12, 9. [Google Scholar] [CrossRef]
- Zhang, Y. A Review of Image Style Transfer Using Generative Adversarial Networks Techniques. Anal. Metaphys. 2024, 23, 131–142. [Google Scholar]
| Zone | Instruction |
|---|---|
| 1 | Look forward |
| 2 | Look left |
| 3 | Look right |
| 4 | Look at the interior mirror |
| 5 | Look at the left side mirror |
| 6 | Look at the right side mirror |
| 7 | Look over your left shoulder |
| 8 | Look over your right shoulder |
| 9 | Look straight down at the dashboard |
| 10 | Look down to the centre console |
| 11 | Look forward and to the left |
| 12 | Look forward and to the right |
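The twelve instructions above were read to the driver to elicit fixations on the corresponding gaze zones. Purely as an illustration of how such a zone-instruction list could drive data collection into the class-per-folder layout expected for image classification, the sketch below uses OpenCV to show each instruction and save webcam frames into one folder per zone. This is a hedged sketch, not the code of the recording app described in Section 5; folder names, frame counts, and timing are assumptions.

```python
# Sketch of a data-collection loop based on the zone instructions above.
# Uses OpenCV for webcam capture; folder names and frame counts are illustrative,
# and this is not the actual recording app described in the paper.
import time
from pathlib import Path
import cv2

ZONES = {
    1: "Look forward",
    2: "Look left",
    3: "Look right",
    4: "Look at the interior mirror",
    5: "Look at the left side mirror",
    6: "Look at the right side mirror",
    7: "Look over your left shoulder",
    8: "Look over your right shoulder",
    9: "Look straight down at the dashboard",
    10: "Look down to the centre console",
    11: "Look forward and to the left",
    12: "Look forward and to the right",
}

def record_zones(root="gaze_zones/train", frames_per_zone=30, delay=0.1):
    cap = cv2.VideoCapture(0)                      # default webcam
    for zone, instruction in ZONES.items():
        print(f"Zone {zone}: {instruction}")
        input("Press Enter when the participant is fixating...")
        out_dir = Path(root) / f"zone_{zone:02d}"  # one folder per gaze-zone class
        out_dir.mkdir(parents=True, exist_ok=True)
        for i in range(frames_per_zone):
            ok, frame = cap.read()
            if ok:
                cv2.imwrite(str(out_dir / f"frame_{i:03d}.jpg"), frame)
            time.sleep(delay)
    cap.release()

if __name__ == "__main__":
    record_zones()
```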
| Zone | Description |
|---|---|
| 1 | Webcam on top of the screen |
| 2 | Phone left of the screen on the table |
| 3 | Notebook right of the screen on the table |
| 4 | Computer left of the screen on the same table |
| 5 | Door handle right of the screen |
| 6 | Door stop on the floor right of the screen |
| 7 | Plant far left of the screen |
| 8 | Picture above the screen |
| 9 | Socket left of and above the screen |
| 10 | Cup in front of the screen on the table |
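Given a trained classifier and a folder of labelled test images (whether the driving zones or the living-room targets listed above), per-zone performance such as the confusion matrices in Figure 5 can be tabulated from the model's predictions. The sketch below assumes the Ultralytics package together with scikit-learn; the weight and dataset paths are placeholders, not the paths used in this study.

```python
# Sketch: building a per-zone confusion matrix from a trained classifier's predictions.
# Assumes a test set laid out as gaze_zones/test/<zone_name>/*.jpg; paths are placeholders.
from pathlib import Path
from ultralytics import YOLO
from sklearn.metrics import confusion_matrix, accuracy_score

model = YOLO("runs/classify/train/weights/best.pt")   # hypothetical path to trained weights

y_true, y_pred = [], []
test_root = Path("gaze_zones/test")
for class_dir in sorted(p for p in test_root.iterdir() if p.is_dir()):
    for image_path in class_dir.glob("*.jpg"):
        result = model(str(image_path), verbose=False)[0]
        y_true.append(class_dir.name)                  # annotated zone (folder name)
        y_pred.append(result.names[result.probs.top1]) # predicted zone

labels = sorted(set(y_true))
print("accuracy:", accuracy_score(y_true, y_pred))
print(confusion_matrix(y_true, y_pred, labels=labels))
```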
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Hermens, F.; Anker, W.; Noten, C. Gaze Zone Classification for Driving Studies Using YOLOv8 Image Classification. Sensors 2024, 24, 7254. https://doi.org/10.3390/s24227254