
VideoPoseVR: Authoring Virtual Reality Character Animations with Online Videos

Published: 14 November 2022

Abstract

We present VideoPoseVR, a video-based authoring workflow that uses online videos to create character animations in VR. It leverages state-of-the-art deep learning to reconstruct 3D motions from online videos, caption the motions, and store them in a motion dataset. Creators can import videos, search the dataset, modify the motion timeline, and combine motions from multiple videos to author character animations in VR. We implemented a proof-of-concept prototype and conducted a user study to evaluate the feasibility of the video-based authoring approach and to gather initial feedback on the prototype. The results suggest that VideoPoseVR was easy for novice users to learn and enabled rapid prototyping for applications such as entertainment, skills training, and crowd simulation.
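The retrieval step the abstract describes (captioned motions stored in a dataset that creators can search) can be sketched as a simple nearest-neighbor search over caption text. This is a hypothetical, minimal illustration, not the paper's implementation: the clip IDs, captions, and the bag-of-words embedding below are stand-ins for the learned captioning and embedding models such a system would actually use.

```python
from collections import Counter
import math

# Toy "motion dataset": clip id -> auto-generated caption (hypothetical data).
MOTION_CAPTIONS = {
    "clip_001": "a person jumping in place",
    "clip_002": "a person walking forward slowly",
    "clip_003": "a person waving both hands",
}

def embed(text):
    """Bag-of-words term-frequency vector; a stand-in for a learned text encoder."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(count * b[term] for term, count in a.items() if term in b)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def search(query, k=2):
    """Return the k clip ids whose captions are most similar to the query."""
    q = embed(query)
    ranked = sorted(MOTION_CAPTIONS,
                    key=lambda cid: cosine(q, embed(MOTION_CAPTIONS[cid])),
                    reverse=True)
    return ranked[:k]

print(search("person jumping"))  # the jumping clip should rank first
```

A production system would replace `embed` with a neural text encoder and the linear scan in `search` with an approximate nearest-neighbor index, but the interface (caption in, ranked clip ids out) stays the same.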

Supplementary Material

Teaser (iss22main-id4293-p-teaser.mp4)
2 min teaser video of VideoPoseVR




Published In

Proceedings of the ACM on Human-Computer Interaction, Volume 6, Issue ISS
December 2022, 746 pages
EISSN: 2573-0142
DOI: 10.1145/3554337
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. 3D motion
  2. computer animation
  3. content creation
  4. virtual reality

Qualifiers

  • Research-article


Cited By

  • (2024) A Virtual Reality Direct-Manipulation Tool for Posing and Animation of Digital Human Bodies: An Evaluation of Creativity Support. Multimodal Technologies and Interaction 8(7), 60. https://doi.org/10.3390/mti8070060. Published: 10 July 2024.
  • (2024) TimeTunnel: Integrating Spatial and Temporal Motion Editing for Character Animation in Virtual Reality. In Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, 1-17. https://doi.org/10.1145/3613904.3641927. Published: 11 May 2024.
  • (2023) Immersive Sampling: Exploring Sampling for Future Creative Practices in Media-Rich, Immersive Spaces. In Proceedings of the 2023 ACM Designing Interactive Systems Conference, 212-229. https://doi.org/10.1145/3563657.3596131. Published: 10 July 2023.
  • (2023) Open Datasets in Human Activity Recognition Research—Issues and Challenges: A Review. IEEE Sensors Journal 23(22), 26952-26980. https://doi.org/10.1109/JSEN.2023.3317645. Published: 15 November 2023.
