Abstract
From an image of a person, we can easily guess the 3D coordinates of the body parts, because we have acquired a 3D mental model from observing humans and interacting with them. This capacity, which comes naturally to humans, is far from systematic for computers. In this paper, we describe an approach that estimates poses from video with the objective of reproducing the observed movements with a virtual avatar. We split each submitted video into a series of RGB frames that are processed individually. We pursue two main objectives. First, we extract initial 2D joint coordinates using a method that predicts joint locations from part affinity fields (PAFs), and we then infer 3D joint coordinates with a full human 3D mesh reconstruction approach supplemented by the previously estimated 2D coordinates. Second, we explore the reconstruction of a virtual avatar from the extracted 3D coordinates, with the prospect of transferring human movements onto the animated avatar. This would make it possible to capture the behavioral dynamics of a person and, for instance, to detect health problems such as Alzheimer's disease. Our approach consists of multiple successive stages and, thanks to the supplementary 2D coordinates, yields better estimation and extraction results than similar solutions. With the final extracted coordinates, we transfer the positions, frame by frame, to the skeleton of a virtual avatar in order to reproduce the movements extracted from the video.
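For concreteness, the sketch below shows how such a per-frame pipeline could be wired together in Python. It is an illustration only, not the paper's implementation: `estimator_2d`, `mesh_model_3d`, and `avatar` are hypothetical placeholders standing in for the PAF-based 2D detector, the 3D mesh-recovery model, and the avatar skeleton; only the OpenCV frame-reading calls are a real API.

```python
# Minimal sketch of the described pipeline: video -> RGB frames -> 2D joints (PAFs)
# -> 3D joints (mesh reconstruction guided by the 2D joints) -> avatar retargeting.
# The estimator/avatar objects are assumed placeholders, not a published API.
import cv2


def split_into_frames(video_path):
    """Fragment the submitted video into a series of RGB frames."""
    capture = cv2.VideoCapture(video_path)
    frames = []
    while True:
        ok, frame_bgr = capture.read()
        if not ok:
            break
        # OpenCV decodes to BGR; convert to RGB since the pipeline expects RGB frames.
        frames.append(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB))
    capture.release()
    return frames


def process_video(video_path, estimator_2d, mesh_model_3d, avatar):
    """Process each frame individually and transfer the resulting pose to the avatar."""
    for frame in split_into_frames(video_path):
        joints_2d = estimator_2d.predict(frame)              # PAF-based 2D keypoints (placeholder)
        joints_3d = mesh_model_3d.recover(frame, joints_2d)  # mesh recovery supplemented by 2D joints (placeholder)
        avatar.apply_pose(joints_3d)                         # retarget the per-frame pose onto the avatar skeleton (placeholder)
```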
Acknowledgment
We acknowledge NSERC-CRD (Natural Sciences and Engineering Research Council of Canada, Collaborative Research and Development program), Prompt, and BMU (Beam Me Up) for funding this work.