Research Article | Public Access
DOI: 10.1145/3462244.3479941

Characterizing Children's Motion Qualities: Implications for the Design of Motion Applications for Children

Published: 18 October 2021

Abstract

The goal of this paper is to understand differences between children's and adults' motions in order to improve future motion recognition algorithms for children. Motion-based applications (e.g., games) are becoming increasingly popular among children, and they often rely on accurate recognition of users' motions to create meaningful interactive experiences. Motion recognition systems are usually trained on adults' motions; however, prior work has shown that children move differently from adults, so these systems will likely perform poorly on children's motions, negatively impacting children's interactive experiences. Although prior work has established that there are perceivable differences between child and adult motion, these differences have yet to be quantified. Quantifying them would yield new insights into how children perform motions (i.e., their motion qualities). We present 24 articulation features (11 of which we newly developed) that describe motions quantitatively, and we evaluate them on a subset of child and adult motions from the publicly available Kinder-Gator dataset to reveal differences. Motions in this dataset are represented as sequences of postures, each defined by the 3D positions of the 20 joints tracked by a Kinect at a specific time instance. Our results show that children's motions are quantifiably faster, more intense, less smooth, and less coordinated than adults' motions. Based on these results, we propose guidelines for improving motion recognition algorithms and designing motion applications for children.
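
To make this representation concrete, the sketch below (not from the paper) shows how speed- and smoothness-style features could be computed over such a posture sequence, using Python with NumPy. The function names, the 30 fps frame rate, and the exact feature definitions are illustrative assumptions only; the paper's actual 24 articulation features are defined in its supplementary equations. Jerk, the third time derivative of position, is a common smoothness proxy: lower jerk generally corresponds to smoother movement.

```python
import numpy as np

# A motion is a sequence of postures: an array of shape (T, 20, 3),
# i.e., T time instances of 3D positions for the 20 Kinect-tracked joints.

def average_joint_speed(motion: np.ndarray, fps: float = 30.0) -> np.ndarray:
    """Mean speed of each joint: a simple speed-style feature."""
    velocity = np.diff(motion, axis=0) * fps      # (T-1, 20, 3)
    speed = np.linalg.norm(velocity, axis=2)      # (T-1, 20)
    return speed.mean(axis=0)                     # one value per joint

def mean_squared_jerk(motion: np.ndarray, fps: float = 30.0) -> np.ndarray:
    """Mean squared jerk per joint; larger values suggest less smooth motion."""
    jerk = np.diff(motion, n=3, axis=0) * fps**3  # third finite difference
    return (np.linalg.norm(jerk, axis=2) ** 2).mean(axis=0)

# Example with a random stand-in for one parsed motion (120 frames at 30 fps):
motion = np.random.rand(120, 20, 3)
print(average_joint_speed(motion).shape)  # (20,)
print(mean_squared_jerk(motion).shape)    # (20,)
```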

Supplementary Material

Equations for Computing the Joint-Level Features (p229-FeatureComputations.pdf)
MP4 File (ICMI21-fp1200.mp4)
Presentation Video - This video presents our talk on characterizing children's motion qualities to propose design guidelines for improving the accuracy of motion recognition systems on children's motions.



Information

Published In

ICMI '21: Proceedings of the 2021 International Conference on Multimodal Interaction
October 2021
876 pages
ISBN: 9781450384810
DOI: 10.1145/3462244

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. articulation features
  2. children
  3. global-level features
  4. joint-level features
  5. motion
  6. motion applications
  7. motion recognition

Qualifiers

  • Research-article
  • Research
  • Refereed limited

Conference

ICMI '21: International Conference on Multimodal Interaction
October 18-22, 2021
Montréal, QC, Canada

Acceptance Rates

Overall Acceptance Rate 453 of 1,080 submissions, 42%
