
Performance-Driven Hybrid Full-Body Character Control for Navigation and Interaction in Virtual Environments

Published: 01 June 2017

Abstract

This paper presents a hybrid character control interface that synthesizes, in real time, a variety of actions based on the user's performance capture. The proposed methodology enables three performance interaction modules: performance animation control, which maps the user's pose directly onto the character; a motion controller, which synthesizes the character's desired motion based on an activity recognition methodology; and a hybrid control that lies between the performance animation and the motion controller. With the presented methodology, the user is free to interact within the virtual environment, to manipulate the character, and to trigger a variety of actions that he/she cannot perform directly but that the system synthesizes. The user can therefore interact with the virtual environment in a more sophisticated fashion. Examples of different scenarios based on the three full-body character control methodologies are presented.
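The three modules described above can be pictured as a single dispatcher that routes each captured pose through direct mapping, activity-driven synthesis, or a blend of the two. The sketch below is a minimal, hypothetical illustration of that structure, not the paper's implementation: the pose representation (a dictionary of scalar features), the `recognize_activity` threshold, and the `synthesize` presets are all invented for clarity.

```python
from enum import Enum, auto

class ControlMode(Enum):
    PERFORMANCE = auto()   # map the captured pose directly onto the character
    MOTION = auto()        # replace it with motion synthesized from a recognized activity
    HYBRID = auto()        # blend the captured pose with the synthesized motion

def recognize_activity(pose):
    """Hypothetical activity recognizer: label a captured pose.

    Stand-in for the paper's activity recognition methodology; here we
    simply threshold a scalar 'speed' feature for illustration.
    """
    return "run" if pose["speed"] > 1.0 else "idle"

def synthesize(activity):
    """Hypothetical motion synthesizer: return a target pose for an activity."""
    presets = {"idle": {"speed": 0.0}, "run": {"speed": 2.5}}
    return presets[activity]

def control(pose, mode, blend=0.5):
    """Dispatch a captured pose through one of the three control modules."""
    if mode is ControlMode.PERFORMANCE:
        return pose                      # direct mapping of the user's pose
    target = synthesize(recognize_activity(pose))
    if mode is ControlMode.MOTION:
        return target                    # fully synthesized motion
    # HYBRID: linear blend between the captured and the synthesized pose
    return {k: (1 - blend) * pose[k] + blend * target[k] for k in pose}
```

In this toy form, `control({"speed": 2.0}, ControlMode.HYBRID)` averages the captured and synthesized speeds; in a real system the blend would operate on full joint configurations and the synthesized motion would come from a motion database.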


Cited By

  • (2021) Evaluating virtual reality locomotion interfaces on collision avoidance task with a virtual character. The Visual Computer, 37(9-11), 2823-2839. DOI: 10.1007/s00371-021-02202-6. Online publication date: 1 Sep 2021.
  • (2019) Camera localization for a human-pose in 3D space using a single 2D human-pose image with landmarks. Multimedia Tools and Applications, 78(3), 3587-3608. DOI: 10.1007/s11042-018-6789-4. Online publication date: 1 Feb 2019.
  • (2018) Step aside. Proceedings of the 24th ACM Symposium on Virtual Reality Software and Technology, 1-5. DOI: 10.1145/3281505.3281536. Online publication date: 28 Nov 2018.
  • (2018) Master of Puppets. 3D Research, 9(1), 1-14. DOI: 10.1007/s13319-018-0158-y. Online publication date: 1 Mar 2018.


    Published In

    3D Research  Volume 8, Issue 2
    June 2017
    145 pages

    Publisher

    Springer-Verlag

    Berlin, Heidelberg

    Author Tags

    1. Character animation
    2. Hybrid controller
    3. Navigation
    4. Object manipulation
    5. Virtual reality interaction

