
DOI: 10.1145/3529190.3529203

Learn from the Best: Harnessing Expert Skill and Knowledge to Teach Unskilled Workers

Published: 11 July 2022

Abstract

Experts make complex skills look easy, but learning from experts is not only a matter of observation; it also requires feedback and reflection. Whereas industrial tasks in manufacturing and assembly assume a standardized work procedure supported by precision-manufactured parts, domains in which natural products are processed demand a high degree of background knowledge and skill from workers because of high within-product variability. The potential of assistants in these domains to transfer expert knowledge to novice workers has rarely been explored. In this paper, we investigate how, in the little-studied domain of food processing, expert know-how in accomplishing a complex task can be analyzed with state-of-the-art machine learning techniques in a multi-modal manner, so that specific features can be detected and tracked to instruct beginners and provide them with feedback. We report on the performance and limitations of our approach to activity tracking and discuss its feasibility. A final review with the expert provided additional insights, which we integrated into our approach. We conclude with a summarized framework for capturing and conveying expert knowledge in the industrial domain.
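
The abstract does not detail the implementation, but the author tags (activity tracking, gaze detection, machine learning, neural networks) point to vision-based tracking of the worker's hands and gaze. The sketch below is an illustration only, not the authors' pipeline: it shows how per-frame hand-landmark features could be extracted from a workstation video with MediaPipe Hands and condensed into a smoothed wrist-speed signal. The function name `wrist_speed`, the smoothing window, and the choice of MediaPipe/OpenCV are assumptions made for this example.

```python
# Illustrative sketch (assumed tooling, not the paper's actual pipeline):
# extract a per-frame wrist-movement signal from a workstation video.
import cv2
import mediapipe as mp
import numpy as np

mp_hands = mp.solutions.hands

def wrist_speed(video_path: str, smooth: int = 5) -> np.ndarray:
    """Return a smoothed per-frame wrist-movement signal (assumed feature)."""
    cap = cv2.VideoCapture(video_path)
    positions = []  # (x, y) of the first detected hand's wrist, frame by frame
    with mp_hands.Hands(static_image_mode=False,
                        max_num_hands=2,
                        min_detection_confidence=0.5) as hands:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if results.multi_hand_landmarks:
                wrist = results.multi_hand_landmarks[0].landmark[0]  # landmark 0 = wrist
                positions.append((wrist.x, wrist.y))
            elif positions:
                positions.append(positions[-1])  # hold last position if detection drops out
    cap.release()
    pos = np.asarray(positions)
    if len(pos) < 2:
        return np.zeros(0)
    speed = np.linalg.norm(np.diff(pos, axis=0), axis=1)  # frame-to-frame displacement
    kernel = np.ones(smooth) / smooth
    return np.convolve(speed, kernel, mode="same")        # moving-average smoothing
```

Such a signal from a trainee could, for instance, be aligned with an expert's recording of the same work step (e.g. via dynamic time warping) to flag segments that deviate; this is one plausible way, not the paper's stated method, of turning captured expert motion into feedback.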

Published In

PETRA '22: Proceedings of the 15th International Conference on PErvasive Technologies Related to Assistive Environments
June 2022, 704 pages
ISBN: 9781450396318
DOI: 10.1145/3529190
Publisher: Association for Computing Machinery, New York, NY, United States

Author Tags

1. activity detection
2. activity tracking
3. assistant
4. expert know-how
5. food processing
6. gaze detection
7. machine learning
8. neural networks

Funding Sources

• Bundesministerium für Bildung und Forschung

Cited By

• (2024) Does the Medium Matter? A Comparison of Augmented Reality Media in Instructing Novices to Perform Complex, Skill-Based Manual Tasks. Proceedings of the ACM on Human-Computer Interaction 8, EICS, 1–28. https://doi.org/10.1145/3660249
• (2022) Trends of Augmented Reality for Agri-Food Applications. Sensors 22, 21, 8333. https://doi.org/10.3390/s22218333
• (2022) A Machine Learning Framework for Classification of Expert and Non-Experts Radiologists using Eye Gaze Data. 2022 IEEE 7th International Conference on Recent Advances and Innovations in Engineering (ICRAIE), 314–320. https://doi.org/10.1109/ICRAIE56454.2022.10054277
• (2022) Towards AI-Enabled Assistant Design Through Grassroots Modeling: Insights from a Practical Use Case in the Industrial Sector. Perspectives in Business Informatics Research, 96–110. https://doi.org/10.1007/978-3-031-16947-2_7
