Towards Automatic Object Detection and Activity Recognition in Indoor Climbing
Figure 1. Framework for object detection and activity inference: data collection with eye-tracking glasses, frame extraction and small-scale manual annotation, and hold and grasp detection (YOLOv5). Tobii Glasses 2 image by Tobii AB (Stockholm, Sweden).
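The pipeline in Figure 1 can be prototyped with standard tooling: decode the scene-camera video, sample frames, and run a fine-tuned YOLOv5 model on each frame. The following is a minimal sketch, not the authors' exact implementation; the weights file `climbing_best.pt` is a hypothetical placeholder for a model fine-tuned on the hold/grasp classes, and the loading call uses the public YOLOv5 PyTorch Hub API.

```python
import cv2
import torch

# Load a YOLOv5 model fine-tuned on the climbing classes (hold, grasp,
# foot grasp); 'climbing_best.pt' is a hypothetical placeholder path.
model = torch.hub.load("ultralytics/yolov5", "custom", path="climbing_best.pt")
model.conf = 0.25  # minimum confidence for reported detections

cap = cv2.VideoCapture("scene_camera.mp4")  # eye tracker's scene video
frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if frame_idx % 5 == 0:  # sample every 5th frame to reduce compute
        results = model(frame[..., ::-1])  # BGR -> RGB, as the hub API expects
        detections = results.xyxy[0].cpu().numpy()  # rows: x1, y1, x2, y2, conf, cls
        print(frame_idx, detections)
    frame_idx += 1
cap.release()
```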
Figure 2. Example object classes: hold (**left**), grasp (**middle**), and foot grasp (**right**). The frames illustrate characteristics typical of mobile eye tracking in the climbing context: low image quality, low illumination, narrow field of view, and distortion.
Figure 3. The climber’s view while ascending and before the final jump, with detected holds (red), grasps (green), and the climber’s fixations and saccades (blue). The bounding boxes show the detected objects (holds; red box) and the inferred action (grasp; green box) with the detection confidence.
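Overlays like those in Figure 3 can be rendered directly onto scene frames with OpenCV. The sketch below assumes detections arrive as `(x1, y1, x2, y2, confidence, class_id)` tuples and that the gaze point has already been mapped into frame coordinates; both formats are assumptions made for illustration.

```python
import cv2

COLORS = {"hold": (0, 0, 255), "grasp": (0, 255, 0)}  # BGR: red, green

def draw_overlay(frame, detections, class_names, gaze_xy=None):
    """Draw detection boxes with confidence labels and the current gaze point."""
    for x1, y1, x2, y2, conf, cls in detections:
        label = class_names[int(cls)]
        color = COLORS.get(label, (255, 255, 255))
        cv2.rectangle(frame, (int(x1), int(y1)), (int(x2), int(y2)), color, 2)
        cv2.putText(frame, f"{label} {conf:.2f}", (int(x1), int(y1) - 5),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.5, color, 1)
    if gaze_xy is not None:  # fixation location in frame coordinates
        cv2.circle(frame, (int(gaze_xy[0]), int(gaze_xy[1])), 8, (255, 0, 0), 2)
    return frame
```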
Figure 4. Fixation count during route preview, climbing, and final touch. Fixation count (blue) indicates moments of increased focus (lower count) and visual exploration (higher count) alongside grasps. Automatically detected grasps (grey) are aligned with manually coded grasps (purple) that were visible in the eye tracker’s field of view. Grasps in red were annotated from preceding frames, as the climbers grasped those holds without looking at them. Taken together, eye movements and grasps reveal moments of ascent and immobility and the corresponding focus and/or visual exploration.
Figure 5. Comparison of automatic grasp detections (blue) and manually coded grasps (purple and red) for two high-skilled climbers. Purple bars denote grasps captured in the video frame, while red bars denote grasps occurring outside the scene camera’s field of view. Even when a grasp was performed out of view, the detections captured the grasping hand or foot in the following frames.
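Quantifying the agreement shown in Figures 4 and 5 reduces to matching two sets of time intervals. The helper below is a hypothetical sketch that pairs each manually coded grasp with the automatic detection overlapping it most in time; the paper's exact matching criterion may differ, and the `(start, end)` representation in seconds is an assumption.

```python
def interval_overlap(a, b):
    """Overlap in seconds between two (start, end) intervals."""
    return max(0.0, min(a[1], b[1]) - max(a[0], b[0]))

def match_grasps(detected, coded, min_overlap=0.1):
    """Pair each manually coded grasp with the detected interval that
    overlaps it most, requiring at least `min_overlap` seconds in common."""
    matches = []
    for c in coded:
        best = max(detected, key=lambda d: interval_overlap(c, d), default=None)
        if best is not None and interval_overlap(c, best) >= min_overlap:
            matches.append((c, best))
    recall = len(matches) / len(coded) if coded else 0.0
    return matches, recall
```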
Figure 6. Time series of climbing and eye-tracking metrics for one participant at the beginning (dark blue), middle (blue), and end (light blue) of the climb. The metrics indicate experienced difficulty; for example, the main crux of the route occurred in the first third, which is apparent in the peak values of grasp duration, fixation count, and total fixation duration.
Figure 7. Grasping duration (**left**) and total fixation duration (**right**) of four expert climbers at the start (1), middle (2), and end (3) of the climbing route. While all expert climbers completed the routes at approximately the same pace, their grasping and total fixation durations either decreased or increased over time, suggesting different climbing and visual strategies.
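The per-segment metrics in Figures 6 and 7 (grasp duration, fixation count, total fixation duration) can be derived from event lists. The sketch below assumes fixations and grasps are available as `(start, end)` times in seconds and splits the climb into equal thirds by time; the paper's actual segmentation may differ.

```python
def segment_metrics(fixations, grasps, t0, t1, n_segments=3):
    """Aggregate eye-tracking and grasp metrics per route segment,
    where events are (start, end) tuples in seconds."""
    seg_len = (t1 - t0) / n_segments
    rows = []
    for i in range(n_segments):
        lo, hi = t0 + i * seg_len, t0 + (i + 1) * seg_len
        fix = [f for f in fixations if lo <= f[0] < hi]
        grs = [g for g in grasps if lo <= g[0] < hi]
        rows.append({
            "segment": i + 1,
            "fixation_count": len(fix),
            "total_fixation_duration": sum(f[1] - f[0] for f in fix),
            "mean_grasp_duration": (sum(g[1] - g[0] for g in grs) / len(grs)
                                    if grs else 0.0),
        })
    return rows
```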
Abstract
1. Introduction
2. Background
2.1. Mobile Eye Tracking in Climbing Performance
2.2. Deep Learning in Performance Analysis of Climbing
3. Materials and Method
3.1. Case Study in Indoor Climbing
3.2. Data Processing and Small-Scale Manual Coding
3.3. Downstream Task: Object Detection and Activity Recognition in Indoor Climbing
3.4. Performance Analysis in Indoor Climbing
4. Results
4.1. RQ1: Automatic and Manual Detection of Holds and Grasps
4.2. RQ2: Visual Attention in Climbing Performance
4.3. RQ3: Grasping in Climbing Performance
5. Discussion
Limitations and Future Research
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Hüttermann, S.; Noël, B.; Memmert, D. Eye tracking in high-performance sports: Evaluation of its application in expert athletes. Int. J. Comput. Sci. Sport 2018, 17, 182–203. [Google Scholar] [CrossRef]
- Giles, L.V.; Rhodes, E.C.; Taunton, J.E. The Physiology of Rock Climbing. Sports Med. 2006, 36, 529–545. [Google Scholar] [CrossRef] [PubMed]
- Button, C.; Orth, D.; Davids, K.; Seifert, L. The influence of hold regularity on perceptual-motor behaviour in indoor climbing. Eur. J. Sport Sci. 2018, 18, 1090–1099. [Google Scholar] [CrossRef] [PubMed]
- Saul, D.; Steinmetz, G.; Lehmann, W.; Schilling, A.F. Determinants for success in climbing: A systematic review. J. Exerc. Sci. Fit. 2019, 17, 91–100. [Google Scholar] [CrossRef] [PubMed]
- Shiro, K.; Egawa, K.; Rekimoto, J.; Miyaki, T. Interposer: Visualizing interpolated movements for bouldering training. In Proceedings of the Conference on Human Factors in Computing Systems, Glasgow, Scotland, 4–9 May 2019; pp. 1–6. [Google Scholar] [CrossRef]
- Ivanova, I.; Andric, M.; Janes, A.; Ricci, F.; Zini, F. Climbing Activity Recognition and Measurement with Sensor Data Analysis. In Proceedings of the Companion Publication of the 2020 International Conference on Multimodal Interaction, Online, 20–29 October 2020; pp. 245–249. [Google Scholar] [CrossRef]
- Sasaki, K.; Shiro, K.; Rekimoto, J. ExemPoser: Predicting Poses of Experts as Examples for Beginners in Climbing Using a Neural Network. In Proceedings of the ACM International Conference Proceeding Series, Kaiserslautern, Germany, 6 June 2020. [Google Scholar] [CrossRef]
- Breen, M.; Reed, T.; Nishitani, Y.; Jones, M.; Breen, H.M.; Breen, M.S. Wearable and Non-Invasive Sensors for Rock Climbing Applications: Science-Based Training and Performance Optimization. Sensors 2023, 23, 5080. [Google Scholar] [CrossRef] [PubMed]
- Chen, S.X.; Benet-Martínez, V.; Bond, M.H. Bicultural Identity, Bilingualism, and Psychological Adjustment in Multicultural Societies: Immigration-Based and Globalization-Based Acculturation. J. Pers. 2008, 76, 803–838. [Google Scholar] [CrossRef]
- Schmidt, A.; Orth, D.; Seifert, L. Collection of Visual Data in Climbing Experiments for Addressing the Role of Multi-modal Exploration in Motor Learning Efficiency. In Advanced Concepts for Intelligent Vision Systems; Blanc-Talon, J., Distante, C., Philips, W., Popescu, D., Scheunders, P., Eds.; Springer International Publishing: Berlin/Heidelberg, Germany, 2016; Volume 10016, pp. 674–684. Available online: http://link.springer.com/10.1007/978-3-319-48680-2_59 (accessed on 23 September 2024).
- Ladha, C.; Hammerla, N.Y.; Olivier, P.; Plötz, T. ClimbAX: Skill assessment for climbing enthusiasts. In Proceedings of the 2013 ACM International Joint Conference on Pervasive and Ubiquitous Computing, Zurich, Switzerland, 8–12 September 2013; pp. 235–244. [Google Scholar] [CrossRef]
- Schmidt, R.A.; Lee, T.D.; Winstein, C.; Wulf, G.; Zelaznik, H.N. Motor Control and Learning: A Behavioral Emphasis; Human Kinetics: Champaign, IL, USA, 2018. [Google Scholar]
- Otte, F.W.; Davids, K.; Millar, S.-K.; Klatt, S. When and How to Provide Feedback and Instructions to Athletes?—How Sport Psychology and Pedagogy Insights Can Improve Coaching Interventions to Enhance Self-Regulation in Training. Front. Psychol. 2020, 11, 1444. [Google Scholar] [CrossRef]
- Richter, J.; Beltrán, R.; Köstermeyer, G.; Heinkel, U. Human Climbing and Bouldering Motion Analysis: A Survey on Sensors, Motion Capture, Analysis Algorithms, Recent Advances and Applications. In Proceedings of the 15th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, Valletta, Malta, 27–29 February 2020; pp. 751–758. [Google Scholar] [CrossRef]
- Mencarini, E.; Rapp, A.; Tirabeni, L.; Zancanaro, M. Designing Wearable Systems for Sports: A Review of Trends and Opportunities in Human–Computer Interaction. IEEE Trans. Human-Machine Syst. 2019, 49, 314–325. [Google Scholar] [CrossRef]
- Kosmalla, F.; Daiber, F.; Krüger, A. ClimbSense: Automatic Climbing Route Recognition using Wrist-worn Inertia Measurement Units. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, Seoul, Republic of Korea, 18–23 April 2015; pp. 2033–2042. [Google Scholar] [CrossRef]
- Seifert, L.; Orth, D.; Boulanger, J.; Dovgalecs, V.; Hérault, R.; Davids, K. Climbing Skill and Complexity of Climbing Wall Design: Assessment of Jerk as a Novel Indicator of Performance Fluency. J. Appl. Biomech. 2014, 30, 619–625. [Google Scholar] [CrossRef]
- Whiting, E.; Ouf, N.; Makatura, L.; Mousas, C.; Shu, Z.; Kavan, L. Environment-Scale Fabrication: Replicating Outdoor Climbing Experiences. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, Denver, CO, USA, 6–11 May 2017; pp. 1794–1804. [Google Scholar] [CrossRef]
- Pandurevic, D.; Draga, P.; Sutor, A.; Hochradel, K. Analysis of Competition and Training Videos of Speed Climbing Athletes Using Feature and Human Body Keypoint Detection Algorithms. Sensors 2022, 22, 2251. [Google Scholar] [CrossRef]
- Grushko, A.I.; Leonov, S.V. The Usage of Eye-tracking Technologies in Rock-climbing. Procedia Soc. Behav. Sci. 2014, 146, 169–174. [Google Scholar] [CrossRef]
- Hartkop, E.; Wickens, C.D.; Keller, J.; McLaughlin, A.C. Foraging for Handholds: Attentional Scanning Varies by Expertise in Rock Climbing. Proc. Hum. Factors Ergon. Soc. Annu. Meet. 2019, 63, 1948–1952. [Google Scholar] [CrossRef]
- Seifert, L.; Dicks, M.; Wittmann, F.; Wolf, P. The influence of skill and task complexity on perception of nested affordances. Atten. Percept. Psychophys. 2021, 83, 3240–3249. [Google Scholar] [CrossRef] [PubMed]
- Seifert, L.; Hacques, G.; Komar, J. The Ecological Dynamics Framework: An Innovative Approach to Performance in Extreme Environments: A Narrative Review. Int. J. Environ. Res. Public Health 2022, 19, 2753. [Google Scholar] [CrossRef]
- Button, C.; Orth, D.; Davids, K.; Seifert, L. Visual-motor skill in climbing. In The Science of Climbing and Mountaineering; Routledge: Oxford, UK, 2016; p. 210. [Google Scholar]
- Whitaker, M.M.; Pointon, G.D.; Tarampi, M.R.; Rand, K.M. Expertise effects on the perceptual and cognitive tasks of indoor rock climbing. Mem. Cogn. 2020, 48, 494–510. [Google Scholar] [CrossRef]
- Mahanama, B.; Jayawardana, Y.; Rengarajan, S.; Jayawardena, G.; Chukoskie, L.; Snider, J.; Jayarathna, S. Eye Movement and Pupil Measures: A Review. Front. Comput. Sci. 2022, 3, 733531. [Google Scholar] [CrossRef]
- Holmqvist, K.; Nyström, M.; Andersson, R.; Dewhurst, R.; Jarodzka, H.; Van de Weijer, J. Eye Tracking: A Comprehensive Guide to Methods and Measures; OUP Oxford: Oxford, UK, 2011. [Google Scholar]
- Li, F.; Xu, G.; Feng, S. Eye Tracking Analytics for Mental States Assessment—A Review. In Proceedings of the 2021 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Melbourne, VIC, Australia, 17–20 October 2021; pp. 2266–2271. [Google Scholar] [CrossRef]
- Torkamani-Azar, M.; Lee, A.; Bednarik, R. Methods and Measures for Mental Stress Assessment in Surgery: A Systematic Review of 20 Years of Literature. IEEE J. Biomed. Health Inform. 2022, 26, 4436–4449. [Google Scholar] [CrossRef]
- Tolvanen, O.; Elomaa, A.-P.; Itkonen, M.; Vrzakova, H.; Bednarik, R.; Huotarinen, A. Eye-Tracking Indicators of Workload in Surgery: A Systematic Review. J. Investig. Surg. 2022, 35, 1340–1349. [Google Scholar] [CrossRef]
- Vickers, J.N. Perception, Cognition, and Decision Training: The Quiet Eye in Action; Human Kinetics: Champaign, IL, USA, 2007. [Google Scholar]
- Button, C.; Seifert, L.; Chow, J.Y.; Davids, K.; Araujo, D. Dynamics of Skill Acquisition: An Ecological Dynamics Approach; Human Kinetics Publishers: Champaign, IL, USA, 2020. [Google Scholar]
- Wright, E.; Pinyan, E.C.; Wickens, C.D.; Keller, J.; McLaughlin, A.C. Assessing Dynamic Value for Safety Gear During a Rock Climbing Task. Proc. Hum. Factors Ergon. Soc. Annu. Meet. 2018, 62, 1707–1711. [Google Scholar] [CrossRef]
- Hacques, G.; Dicks, M.; Komar, J.; Seifert, L. Visual control during climbing: Variability in practice fosters a proactive gaze pattern. PLoS ONE 2022, 17, e0269794. [Google Scholar] [CrossRef]
- Beltrán, R.B.; Richter, J.; Köstermeyer, G.; Heinkel, U. Climbing Technique Evaluation by Means of Skeleton Video Stream Analysis. Sensors 2023, 23, 8216. [Google Scholar] [CrossRef] [PubMed]
- Kredel, R.; Vater, C.; Klostermann, A.; Hossner, E.-J. Eye-Tracking Technology and the Dynamics of Natural Gaze Behavior in Sports: A Systematic Review of 40 Years of Research. Front. Psychol. 2017, 8, 1845. [Google Scholar] [CrossRef] [PubMed]
- Seifert, L.; Cordier, R.; Orth, D.; Courtine, Y.; Croft, J.L. Role of route previewing strategies on climbing fluency and exploratory movements. PLoS ONE 2017, 12, e0176306. [Google Scholar] [CrossRef] [PubMed]
- Hacques, G.; Komar, J.; Seifert, L. Learning and transfer of perceptual-motor skill: Relationship with gaze and behavioral exploration. Atten. Percept. Psychophys. 2021, 83, 2303–2319. [Google Scholar] [CrossRef] [PubMed]
- Marigold, D.S.; Patla, A.E. Gaze fixation patterns for negotiating complex ground terrain. Neuroscience 2007, 144, 302–313. [Google Scholar] [CrossRef]
- Nieuwenhuys, A.; Pijpers, J.R.; Oudejans, R.R.; Bakker, F.C. The Influence of Anxiety on Visual Attention in Climbing. J. Sport Exerc. Psychol. 2008, 30, 171–185. [Google Scholar] [CrossRef]
- Mitchell, J.; Maratos, F.A.; Giles, D.; Taylor, N.; Butterworth, A.; Sheffield, D. The Visual Search Strategies Underpinning Effective Observational Analysis in the Coaching of Climbing Movement. Front. Psychol. 2020, 11, 1025. [Google Scholar] [CrossRef]
- Zhu, Y.; Li, X.; Liu, C.; Zolfaghari, M.; Xiong, Y.; Wu, C.; Zhang, Z.; Tighe, J.; Manmatha, R.; Li, M. A Comprehensive Study of Deep Video Action Recognition. arXiv 2020, arXiv:2012.06567. [Google Scholar]
- Wu, X.; Sahoo, D.; Hoi, S.C.H. Recent advances in deep learning for object detection. Neurocomputing 2020, 396, 39–64. [Google Scholar] [CrossRef]
- Thomas, G.; Gade, R.; Moeslund, T.B.; Carr, P.; Hilton, A. Computer vision for sports: Current applications and research topics. Comput. Vis. Image Underst. 2017, 159, 3–18. [Google Scholar] [CrossRef]
- Naik, B.T.; Hashmi, M.F.; Bokde, N.D. A Comprehensive Review of Computer Vision in Sports: Open Issues, Future Trends and Research Directions. Appl. Sci. 2022, 12, 4429. [Google Scholar] [CrossRef]
- Vasudevan, V.; Gounder, M.S. A Systematic Review on Machine Learning-Based Sports Video Summarization Techniques. In Smart Computer Vision; Kumar, B.V., Sivakumar, P., Surendiran, B., Ding, J., Eds.; Springer International Publishing: Berlin/Heidelberg, Germany, 2023; pp. 1–34. [Google Scholar] [CrossRef]
- Zhao, J.; Li, X.; Liu, C.; Bing, S.; Chen, H.; Snoek, C.G.; Tighe, J. TubeR: Tube-Transformer for Action Detection. arXiv 2021, arXiv:2104.00969. [Google Scholar]
- Terven, J.; Córdova-Esparza, D.-M.; Romero-González, J.-A. A Comprehensive Review of YOLO Architectures in Computer Vision: From YOLOv1 to YOLOv8 and YOLO-NAS. Mach. Learn. Knowl. Extr. 2023, 5, 1680–1716. [Google Scholar] [CrossRef]
- Hussain, M. YOLOv1 to v8: Unveiling Each Variant–A Comprehensive Review of YOLO. IEEE Access 2024, 12, 42816–42833. [Google Scholar] [CrossRef]
- Şah, M.; Direkoğlu, C. Review and evaluation of player detection methods in field sports. Multimed. Tools Appl. 2021, 82, 13141–13165. [Google Scholar] [CrossRef]
- Khobdeh, S.B.; Yamaghani, M.R.; Sareshkeh, S.K. Basketball action recognition based on the combination of YOLO and a deep fuzzy LSTM network. J. Supercomput. 2023, 80, 3528–3553. [Google Scholar] [CrossRef]
- Zhang, Y.; Chen, Z.; Wei, B. A Sport Athlete Object Tracking Based on Deep Sort and Yolo V4 in Case of Camera Movement. In Proceedings of the 2020 IEEE 6th International Conference on Computer and Communications (ICCC), Chengdu, China, 11–14 December 2020; pp. 1312–1316. [Google Scholar] [CrossRef]
- Cao, Z.; Liao, T.; Song, W.; Chen, Z.; Li, C. Detecting the shuttlecock for a badminton robot: A YOLO based approach. Expert Syst. Appl. 2020, 164, 113833. [Google Scholar] [CrossRef]
- Mercier, J.; Ertz, O.; Bocher, E. Quantifying dwell time with location-based augmented reality: Dynamic AOI analysis on mobile eye tracking data with vision transformer. J. Eye Mov. Res. 2024, 17, 1–22. [Google Scholar] [CrossRef]
- Barz, M.; Bhatti, O.S.; Alam, H.M.T.; Nguyen, D.M.H.; Sonntag, D. Interactive Fixation-to-AOI Mapping for Mobile Eye Tracking Data based on Few-Shot Image Classification. In Proceedings of the 28th International Conference on Intelligent User Interfaces, Sydney, NSW, Australia, 27–31 March 2023; pp. 175–178. [Google Scholar] [CrossRef]
- Tzamaras, H.M.; Wu, H.-L.; Moore, J.Z.; Miller, S.R. Shifting Perspectives: A proposed framework for analyzing head-mounted eye-tracking data with dynamic areas of interest and dynamic scenes. Proc. Hum. Factors Ergon. Soc. Annu. Meet. 2023, 67, 953–958. [Google Scholar] [CrossRef]
- Bochkovskiy, A.; Wang, C.-Y.; Liao, H.-Y.M. YOLOv4: Optimal speed and accuracy of object detection. arXiv 2020, arXiv:2004.10934. [Google Scholar]
- Kumari, N.; Ruf, V.; Mukhametov, S.; Schmidt, A.; Kuhn, J.; Küchemann, S. Mobile Eye-Tracking Data Analysis Using Object Detection via YOLO v4. Sensors 2021, 21, 7668. [Google Scholar] [CrossRef] [PubMed]
- Wolf, J.; Hess, S.; Bachmann, D.; Lohmeyer, Q.; Meboldt, M. Automating areas of interest analysis in mobile eye tracking experiments based on machine learning. J. Eye Mov. Res. 2018, 11. [Google Scholar] [CrossRef] [PubMed]
- Blascheck, T.; Kurzhals, K.; Raschke, M.; Burch, M.; Weiskopf, D.; Ertl, T. State-of-the-Art of Visualization for Eye Tracking Data. In Proceedings of the Eurographics Conference on Visualization (EuroVis), Swansea, UK, 9–13 June 2014. [Google Scholar]
- Barz, M.; Sonntag, D. Automatic Visual Attention Detection for Mobile Eye Tracking Using Pre-Trained Computer Vision Models and Human Gaze. Sensors 2021, 21, 4143. [Google Scholar] [CrossRef] [PubMed]
- Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You only look once: Unified, real-time object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 779–788. [Google Scholar]
- Redmon, J.; Farhadi, A. YOLO9000: Better, faster, stronger. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 7263–7271. [Google Scholar]
- Richardson, M.; Petrini, K.; Proulx, M. Climb-o-Vision: A Computer Vision Driven Sensory Substitution Device for Rock Climbing. In Proceedings of the CHI Conference on Human Factors in Computing Systems Extended Abstracts, New Orleans, LA, USA, 29 April–5 May 2022; pp. 1–7. [Google Scholar] [CrossRef]
- Nguyen, T.-N.; Seifert, L.; Hacques, G.; Kölbl, M.H.; Chahir, Y. Vision-Based Global Localization of Points of Gaze in Sport Climbing. Int. J. Pattern Recognit. Artif. Intell. 2023, 37, 2355005. [Google Scholar] [CrossRef]
- Wang, C.-Y.; Liao, H.-Y.M.; Wu, Y.-H.; Chen, P.-Y.; Hsieh, J.-W.; Yeh, I.-H. CSPNet: A new backbone that can enhance learning capability of CNN. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, Seattle, WA, USA, 14–19 June 2020; pp. 390–391. [Google Scholar]
- He, K.; Zhang, X.; Ren, S.; Sun, J. Spatial Pyramid Pooling in Deep Convolutional Networks for Visual Recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2015, 37, 1904–1916. [Google Scholar] [CrossRef]
- Liu, S.; Qi, L.; Qin, H.; Shi, J.; Jia, J. Path aggregation network for instance segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 8759–8768. [Google Scholar]
- Gegenfurtner, A.; Lehtinen, E.; Säljö, R. Expertise Differences in the Comprehension of Visualizations: A Meta-Analysis of Eye-Tracking Research in Professional Domains. Educ. Psychol. Rev. 2011, 23, 523–552. [Google Scholar] [CrossRef]
- Friard, O.; Gamba, M. BORIS: A free, versatile open-source event-logging software for video/audio coding and live observations. Methods Ecol. Evol. 2016, 7, 1325–1330. [Google Scholar] [CrossRef]
- Bakdash, J.Z.; Marusich, L.R. Repeated Measures Correlation. Front. Psychol. 2017, 8, 456. [Google Scholar] [CrossRef]
- Cohen, R.G.; Rosenbaum, D.A. Where grasps are made reveals how grasps are planned: Generation and recall of motor plans. Exp. Brain Res. 2004, 157, 486–495. [Google Scholar] [CrossRef]
- Säfström, D.; Johansson, R.S.; Flanagan, J.R. Gaze behavior when learning to link sequential action phases in a manual task. J. Vis. 2014, 14, 3. [Google Scholar] [CrossRef]
- Mennie, N.; Hayhoe, M.; Sullivan, B. Look-ahead fixations: Anticipatory eye movements in natural tasks. Exp. Brain Res. 2006, 179, 427–442. [Google Scholar] [CrossRef] [PubMed]
- Land, M.F.; Mennie, N.; Rusted, J. The role of vision and eye movements in the control of activities of daily living. Perception 1999, 28, 1311–1328. [Google Scholar] [CrossRef] [PubMed]
- Terrier, R.; Forestier, N.; Berrigan, F.; Germain-Robitaille, M.; Lavallière, M.; Teasdale, N. Effect of terminal accuracy requirements on temporal gaze-hand coordination during fast discrete and reciprocal pointings. J. Neuroeng. Rehabil. 2011, 8, 10. [Google Scholar] [CrossRef] [PubMed]
- Vine, S.J.; Chaytor, R.J.; McGrath, J.S.; Masters, R.S.W.; Wilson, M.R. Gaze training improves the retention and transfer of laparoscopic technical skills in novices. Surg. Endosc. 2013, 27, 3205–3213. [Google Scholar] [CrossRef] [PubMed]
- Morenas, J.; del Campo, V.L.; López-García, S.; Flores, L. Influence of On-Sight and Flash Climbing Styles on Advanced Climbers’ Route Completion for Bouldering. Int. J. Environ. Res. Public Health 2021, 18, 12594. [Google Scholar] [CrossRef]
- De Brouwer, A.J.; Flanagan, J.R.; Spering, M. Functional Use of Eye Movements for an Acting System. Trends Cogn. Sci. 2021, 25, 252–263. [Google Scholar] [CrossRef]
- Chen, L.; Xia, C.; Zhao, Z.; Fu, H.; Chen, Y. AI-Driven Sensing Technology: Review. Sensors 2024, 24, 2958. [Google Scholar] [CrossRef]
- Guan, J.; Hao, Y.; Wu, Q.; Li, S.; Fang, Y. A Survey of 6DoF Object Pose Estimation Methods for Different Application Scenarios. Sensors 2024, 24, 1076. [Google Scholar] [CrossRef]
- Ravoor, P.C.; Sudarshan, T.S.B. Deep Learning Methods for Multi-Species Animal Re-identification and Tracking—A Survey. Comput. Sci. Rev. 2020, 38, 100289. [Google Scholar] [CrossRef]
- Ye, M.; Shen, J.; Lin, G.; Xiang, T.; Shao, L.; Hoi, S.C.H. Deep Learning for Person Re-Identification: A Survey and Outlook. IEEE Trans. Pattern Anal. Mach. Intell. 2021, 44, 2872–2893. [Google Scholar] [CrossRef]
- Yadav, A.; Vishwakarma, D.K. Deep learning algorithms for person re-identification: State-of-the-art and research challenges. Multimed. Tools Appl. 2023, 83, 22005–22054. [Google Scholar] [CrossRef]
- Tian, Z.; Qu, P.; Li, J.; Sun, Y.; Li, G.; Liang, Z.; Zhang, W. A Survey of Deep Learning-Based Low-Light Image Enhancement. Sensors 2023, 23, 7763. [Google Scholar] [CrossRef] [PubMed]
Correlations between grasping and gaze metrics (cell values are correlation coefficients, with p-values in parentheses):

| | Grasp Duration | Mean Fixation Duration | Total Fixation Duration | Fixation Count | Fixation Rate | Saccade Rate |
|---|---|---|---|---|---|---|
| Grasp Duration | | −0.075 (0.571) | 0.807 (<0.001) | 0.864 (<0.001) | −0.402 (0.001) | −0.344 (0.007) |
| Mean Fixation Duration | −0.075 (0.571) | | 0.326 (0.011) | −0.133 (0.311) | −0.158 (0.228) | −0.125 (0.342) |
| Total Fixation Duration | 0.807 (<0.001) | 0.326 (0.011) | | 0.838 (<0.001) | −0.115 (0.382) | −0.213 (0.103) |
| Fixation Count | 0.864 (<0.001) | −0.133 (0.311) | 0.838 (<0.001) | | 0.008 (0.953) | −0.091 (0.489) |
| Fixation Rate | −0.402 (0.001) | −0.158 (0.228) | −0.115 (0.382) | 0.008 (0.953) | | 0.506 (<0.001) |
| Saccade Rate | −0.344 (0.007) | −0.125 (0.342) | −0.213 (0.103) | −0.091 (0.489) | 0.506 (<0.001) | |
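Given that the reference list cites Bakdash and Marusich's repeated measures correlation, a table of this shape can be reproduced with the `pingouin` package; the sketch below is an assumption-based illustration, and the file, column, and subject names are hypothetical placeholders.

```python
import pandas as pd
import pingouin as pg

# Long-format data: one row per participant x route segment (hypothetical
# column names), with the metrics from the table above.
df = pd.read_csv("climbing_metrics.csv")

metrics = ["grasp_duration", "mean_fixation_duration",
           "total_fixation_duration", "fixation_count",
           "fixation_rate", "saccade_rate"]

# Repeated measures correlation for each metric pair, accounting for the
# non-independence of observations within climbers.
for i, x in enumerate(metrics):
    for y in metrics[i + 1:]:
        res = pg.rm_corr(data=df, x=x, y=y, subject="participant")
        print(f"{x} vs {y}: r = {res['r'].iloc[0]:.3f}, "
              f"p = {res['pval'].iloc[0]:.3f}")
```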
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Vrzáková, H.; Koskinen, J.; Andberg, S.; Lee, A.; Amon, M.J. Towards Automatic Object Detection and Activity Recognition in Indoor Climbing. Sensors 2024, 24, 6479. https://doi.org/10.3390/s24196479