MyPGI - a methodology to yield personalized gestural interaction

  • Long Paper
  • Published in: Universal Access in the Information Society

Abstract

People with speech and motor impairments may experience difficulties in interaction and learning, among other situations that can lead to emotional, social, and cognitive problems. Augmentative and alternative communication (AAC) is a research area that involves using non-oral modes as a complement to or substitute for spoken language. AAC systems supported by computer vision (CV) can benefit from recognizing the user’s remaining functional movements as an alternative approach to interaction design. This article presents the complete MyPGI, a Methodology to yield Personalized Gestural Interaction. MyPGI guides the design of AAC systems for people with motor and speech difficulties, using CV techniques and machine learning to enable personalized and noninvasive gestural interaction. The methodology was used to develop a low-cost AAC system named PGCA (Personal Gesture Communication Assistant), which was employed in experiments conducted with volunteers, including students with motor and speech difficulties. Experiments, interviews, and a usability evaluation were conducted to assess the feasibility of the methodology and of the system developed. The results suggest that the methodology is promising for supporting the design of AAC systems capable of enabling personalized gestural interaction, and they also reveal the benefits of this approach, its technical challenges, and means to overcome them. The results further add knowledge about specific challenges and needs of the target audience. The MyPGI methodology, developed over several iterations and evaluations, is capable of supporting the design of AAC systems that enable personalized gestural interaction. This article presents an overview of the methodological steps performed, the results obtained, and future perspectives for the methodology.
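
The full method is described in the article body. Purely as an illustration of what personalized, camera-based gestural interaction can look like in practice, the sketch below trains a small per-user gesture classifier from webcam motion using a motion-history image, HOG features, and an SVM; this is a minimal sketch of a common computer-vision gesture-recognition pipeline, not the authors' PGCA implementation, and every parameter value, prompt, and helper name in it is hypothetical.

```python
# Illustrative sketch only: a per-user ("personalized") gesture classifier
# built from webcam motion, in the spirit of common CV gesture pipelines
# (motion history image + HOG features + SVM). Not the authors' PGCA system;
# all parameters, prompts, and helper names are hypothetical.
import time

import cv2
import numpy as np
from sklearn.svm import SVC

MHI_DURATION = 0.5        # seconds a motion trace persists in the history image
SIZE = (128, 128)         # working resolution (hypothetical choice)

def update_mhi(mhi, prev_gray, gray, t):
    """Stamp newly moving pixels with the current time and fade old motion."""
    diff = cv2.absdiff(gray, prev_gray)
    _, mask = cv2.threshold(diff, 30, 1, cv2.THRESH_BINARY)
    mhi[mask == 1] = t
    mhi[mhi < t - MHI_DURATION] = 0
    return mhi

def mhi_features(mhi, t):
    """Normalize the motion history image to 0..255 and extract HOG features."""
    norm = np.clip((mhi - (t - MHI_DURATION)) / MHI_DURATION, 0.0, 1.0)
    img = (norm * 255).astype(np.uint8)
    hog = cv2.HOGDescriptor(SIZE, (32, 32), (16, 16), (16, 16), 9)
    return hog.compute(img).ravel()

def capture_sample(cap, seconds=2.0):
    """Capture one gesture sample from the webcam as a single feature vector."""
    mhi = np.zeros(SIZE, np.float32)
    _, frame = cap.read()
    prev = cv2.cvtColor(cv2.resize(frame, SIZE), cv2.COLOR_BGR2GRAY)
    start = time.time()
    while time.time() - start < seconds:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(cv2.resize(frame, SIZE), cv2.COLOR_BGR2GRAY)
        mhi = update_mhi(mhi, prev, gray, time.time() - start)
        prev = gray
    return mhi_features(mhi, time.time() - start)

if __name__ == "__main__":
    cap = cv2.VideoCapture(0)
    X, y = [], []
    # Enroll a few examples of each gesture the user can comfortably perform,
    # so the classifier is trained on that user's own residual movements.
    for label, name in enumerate(["gesture A", "gesture B"]):
        for i in range(5):
            input(f"Press Enter, then perform {name} (sample {i + 1}/5)...")
            X.append(capture_sample(cap))
            y.append(label)
    clf = SVC(kernel="linear").fit(np.array(X), np.array(y))
    input("Press Enter, then perform a gesture to classify...")
    print("Predicted gesture:", clf.predict([capture_sample(cap)])[0])
    cap.release()
```

In a personalized setting such as the one the abstract describes, the key point is that the gesture vocabulary and the training samples come from the individual user rather than from a generic gesture set, so whatever residual movements that user can produce become the interaction primitives.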

Data availability statement

The dataset generated and/or analyzed during the current study is available from the corresponding author on reasonable request.

Notes

  1. https://www.playstation.com/en-us/games/eyetoy-play-2-with-camera-ps2/

  2. http://www.wii.com

  3. https://www.xbox.com/

  4. https://www.leapmotion.com

  5. https://www.parallax.com

Acknowledgements

The authors thank CAPES and CNPq for supporting this research and especially thank the participating institutions, the volunteers, teachers, and students who participated in the experiments.

Author information

Corresponding author

Correspondence to Rúbia Eliza de Oliveira Schultz Ascari.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

de Oliveira Schultz Ascari, R.E., Silva, L. & Pereira, R. MyPGI - a methodology to yield personalized gestural interaction. Univ Access Inf Soc 23, 795–820 (2024). https://doi.org/10.1007/s10209-022-00965-w

