
research-article
Open access

HDHumans: A Hybrid Approach for High-fidelity Digital Humans

Published: 24 August 2023

Abstract

Photo-real digital human avatars are of enormous importance in graphics, as they enable immersive communication across the globe, improve gaming and entertainment experiences, and can be particularly beneficial for AR and VR settings. However, current avatar generation approaches either fall short in high-fidelity novel view synthesis, generalization to novel motions, or reproduction of loose clothing, or they cannot render characters at the high resolution offered by modern displays. To address these limitations, we propose HDHumans, the first method for HD human character synthesis that jointly produces an accurate and temporally coherent 3D deforming surface and highly photo-realistic images for arbitrary novel views and for motions not seen at training time. At its technical core, our method tightly integrates a classical deforming character template with neural radiance fields (NeRF) and is carefully designed to achieve a synergy between the two. First, the template guides the NeRF, which allows synthesizing novel views of a highly dynamic and articulated character and even enables the synthesis of novel motions. Second, we leverage the dense point clouds resulting from the NeRF to further improve the deforming surface via 3D-to-3D supervision. We outperform the state of the art quantitatively and qualitatively in terms of synthesis quality and resolution, as well as the quality of 3D surface reconstruction.
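
The abstract describes a two-way coupling: the posed, deforming template guides where the radiance field is evaluated, and dense point clouds extracted from the NeRF in turn supervise the surface deformation in 3D. The sketch below illustrates both directions in PyTorch; it is a minimal, hypothetical illustration rather than the authors' implementation, and all function names, tensor shapes, and the surface-band sampling heuristic are assumptions made for clarity.

    # Conceptual sketch only, NOT the HDHumans implementation. All names and the
    # surface-band heuristic are hypothetical placeholders for illustration.
    import torch

    def template_guided_samples(ray_points, template_vertices, radius=0.05):
        # ray_points: (N, 3) sample locations along camera rays.
        # template_vertices: (V, 3) vertices of the posed, deforming template.
        # Keep only samples within a thin band around the template surface, so the
        # radiance field is queried where the articulated character actually is.
        dists = torch.cdist(ray_points, template_vertices)   # (N, V) pairwise distances
        near_surface = dists.min(dim=1).values < radius      # (N,) boolean mask
        return ray_points[near_surface]

    def chamfer_3d_loss(nerf_points, template_vertices):
        # Symmetric Chamfer distance between a point cloud extracted from the
        # radiance field (e.g. by thresholding density) and the template vertices,
        # illustrating 3D-to-3D supervision of the deforming surface.
        dists = torch.cdist(nerf_points, template_vertices)  # (P, V)
        return dists.min(dim=1).values.mean() + dists.min(dim=0).values.mean()

In a full pipeline, the first function would restrict volume rendering to the vicinity of the deforming mesh, while the second would be added to the losses driving the template deformation; how HDHumans actually parameterizes and couples the two components is detailed in the paper itself.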

Supplemental Material

ZIP File - habermann
Supplemental movie, appendix, image, and software files for HDHumans: A Hybrid Approach for High-fidelity Digital Humans

      Published In

      Proceedings of the ACM on Computer Graphics and Interactive Techniques, Volume 6, Issue 3
      August 2023
      403 pages
      EISSN: 2577-6193
      DOI: 10.1145/3617582
      This work is licensed under a Creative Commons Attribution 4.0 International License.

      Publisher

      Association for Computing Machinery

      New York, NY, United States

      Publication History

      Published: 24 August 2023
      Published in PACMCGIT Volume 6, Issue 3

      Badges

      • Best Paper

      Author Tags

      1. human modeling
      2. human performance capture
      3. human synthesis
      4. neural synthesis

      Qualifiers

      • Research-article
      • Research
      • Refereed

      Article Metrics

      • Downloads (Last 12 months): 465
      • Downloads (Last 6 weeks): 26
      Reflects downloads up to 13 Feb 2025

      Cited By

      • (2024) TriHuman: A Real-time and Controllable Tri-plane Representation for Detailed Human Geometry and Appearance Synthesis. ACM Transactions on Graphics 44(1), 1-17. https://doi.org/10.1145/3697140. Online publication date: 24-Sep-2024.
      • (2024) Robust Dual Gaussian Splatting for Immersive Human-centric Volumetric Videos. ACM Transactions on Graphics 43(6), 1-15. https://doi.org/10.1145/3687926. Online publication date: 19-Dec-2024.
      • (2024) Millimetric Human Surface Capture in Minutes. SIGGRAPH Asia 2024 Conference Papers, 1-12. https://doi.org/10.1145/3680528.3687690. Online publication date: 3-Dec-2024.
      • (2024) Creating a 3D Mesh in A-Pose from a Single Image for Character Rigging. Proceedings of the ACM SIGGRAPH/Eurographics Symposium on Computer Animation, 1-11. https://doi.org/10.1111/cgf.15177. Online publication date: 21-Aug-2024.
      • (2024) Animatable Virtual Humans: Learning Pose-Dependent Human Representations in UV Space for Interactive Performance Synthesis. IEEE Transactions on Visualization and Computer Graphics 30(5), 2644-2650. https://doi.org/10.1109/TVCG.2024.3372117. Online publication date: 11-Mar-2024.
      • (2024) Neural Novel Actor: Learning a Generalized Animatable Neural Representation for Human Actors. IEEE Transactions on Visualization and Computer Graphics 30(8), 5719-5732. https://doi.org/10.1109/TVCG.2023.3305433. Online publication date: 1-Aug-2024.
      • (2024) GART: Gaussian Articulated Template Models. 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 19876-19887. https://doi.org/10.1109/CVPR52733.2024.01879. Online publication date: 16-Jun-2024.
      • (2024) Animatable Gaussians: Learning Pose-Dependent Gaussian Maps for High-Fidelity Human Avatar Modeling. 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 19711-19722. https://doi.org/10.1109/CVPR52733.2024.01864. Online publication date: 16-Jun-2024.
      • (2024) VINECS: Video-based Neural Character Skinning. 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 1377-1387. https://doi.org/10.1109/CVPR52733.2024.00137. Online publication date: 16-Jun-2024.
      • (2024) MaskRecon: High-quality human reconstruction via masked autoencoders using a single RGB-D image. Neurocomputing, 128487. https://doi.org/10.1016/j.neucom.2024.128487. Online publication date: Aug-2024.
