DOI: 10.1145/3641519.3657448

VR-GS: A Physical Dynamics-Aware Interactive Gaussian Splatting System in Virtual Reality

Published: 13 July 2024

Abstract

As 3D content becomes increasingly prevalent, there is a growing focus on how people create and engage with 3D virtual content. Unfortunately, traditional techniques for creating, editing, and interacting with this content are fraught with difficulties: they are engineering-intensive and demand extensive expertise, making virtual object manipulation frustrating and inefficient. Our proposed VR-GS system represents a leap forward in human-centered 3D content interaction, offering a seamless and intuitive user experience. By developing physical dynamics-aware interactive Gaussian Splatting (GS) in a Virtual Reality (VR) setting, and by coupling a highly efficient two-level embedding strategy with deformable body simulation, VR-GS delivers real-time execution with highly realistic dynamic responses. The components of our system are designed for high efficiency and effectiveness, starting from detailed scene reconstruction and object segmentation, advancing through multi-view image in-painting, and extending to interactive physics-based editing. The system also incorporates real-time deformation embedding and dynamic shadow casting, ensuring a comprehensive and engaging virtual experience.
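
To make the deformation-embedding idea above concrete, the following is a minimal sketch of one common way such an embedding can be realized: each Gaussian kernel is bound to an element of a coarse simulation cage (here assumed to be a tetrahedral mesh, the kind of cage driven by deformable-body solvers such as XPBD), and when the solver moves the cage's vertices the kernel centers follow via barycentric interpolation. This is an illustrative Python/NumPy sketch, not the authors' implementation: the helper names (barycentric_coords, embed_gaussians, deform_gaussians) are hypothetical, only kernel centers are updated (the full system also handles covariances, segmentation, in-painting, and shadowing), and the paper's actual two-level strategy is richer than the single-level binding shown here.

```python
# Hypothetical sketch: Gaussian centers embedded in a tetrahedral simulation cage
# and advected by barycentric interpolation. Not the VR-GS codebase.
import numpy as np

def barycentric_coords(p, tet_verts):
    """Barycentric coordinates of point p w.r.t. a tetrahedron (4x3 array)."""
    v0, v1, v2, v3 = tet_verts
    T = np.column_stack((v1 - v0, v2 - v0, v3 - v0))   # 3x3 edge matrix
    w = np.linalg.solve(T, p - v0)                      # weights for v1..v3
    return np.concatenate(([1.0 - w.sum()], w))         # weight for v0 first

def embed_gaussians(centers, tets, rest_verts):
    """Precompute (tet index, barycentric weights) for each Gaussian center."""
    binding = []
    for c in centers:
        # A real system would use a spatial lookup; here we take the first tet
        # whose barycentric weights are all non-negative (i.e., it contains c).
        for ti, tet in enumerate(tets):
            w = barycentric_coords(c, rest_verts[tet])
            if np.all(w >= -1e-8):
                binding.append((ti, w))
                break
    return binding

def deform_gaussians(binding, tets, deformed_verts):
    """Map Gaussian centers to their new positions after the cage deforms."""
    return np.array([w @ deformed_verts[tets[ti]] for ti, w in binding])

# Toy example: one tetrahedron, one Gaussian at its centroid.
rest = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
tets = np.array([[0, 1, 2, 3]])
centers = np.array([rest.mean(axis=0)])
binding = embed_gaussians(centers, tets, rest)

deformed = rest.copy()
deformed[3] += [0.0, 0.0, 0.5]   # stand-in for one simulator step moving a vertex
print(deform_gaussians(binding, tets, deformed))   # center follows the cage
```

In practice the binding step would use a spatial acceleration structure over the cage, and the deformed vertex positions would come from the real-time simulator each frame; here a single vertex is moved by hand purely to show the interpolation.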

Supplemental Material

MP4 File: Presentation video

    Information

    Published In

    SIGGRAPH '24: ACM SIGGRAPH 2024 Conference Papers
    July 2024
    1106 pages
    ISBN: 9798400705250
    DOI: 10.1145/3641519
    Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.

    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Publication History

    Published: 13 July 2024

    Author Tags

    1. Gaussian Splatting
    2. Neural Radiance Fields
    3. Real-Time Interactions

    Qualifiers

    • Research-article
    • Research
    • Refereed limited

    Conference

    SIGGRAPH '24

    Acceptance Rates

    Overall Acceptance Rate: 1,822 of 8,601 submissions (21%)

    Article Metrics

    • Downloads (Last 12 months): 889
    • Downloads (Last 6 weeks): 134
    Reflects downloads up to 09 Jan 2025

    Cited By

    • (2024) An XR Framework Proposal to Assist Designers in Minimizing Cybersickness. Anais Estendidos do XXVI Simpósio de Realidade Virtual e Aumentada (SVR Estendido 2024), 21-22. DOI: 10.5753/svr_estendido.2024.244711. Online publication date: 30-Sep-2024
    • (2024) Integration of 3D Gaussian Splatting and Neural Radiance Fields in Virtual Reality Fire Fighting. Remote Sensing 16:13 (2448). DOI: 10.3390/rs16132448. Online publication date: 3-Jul-2024
    • (2024) Implementing Immersive Worlds for Metaverse-Based Participatory Design through Photogrammetry and Blockchain. ISPRS International Journal of Geo-Information 13:6 (211). DOI: 10.3390/ijgi13060211. Online publication date: 18-Jun-2024
    • (2024) Multi-level Partition of Unity on Differentiable Moving Particles. ACM Transactions on Graphics 43:6 (1-21). DOI: 10.1145/3687989. Online publication date: 19-Nov-2024
    • (2024) Real-time Large-scale Deformation of Gaussian Splatting. ACM Transactions on Graphics 43:6 (1-17). DOI: 10.1145/3687756. Online publication date: 19-Dec-2024
    • (2024) XPBI: Position-Based Dynamics with Smoothing Kernels Handles Continuum Inelasticity. SIGGRAPH Asia 2024 Conference Papers, 1-12. DOI: 10.1145/3680528.3687577. Online publication date: 3-Dec-2024
    • (2024) GauSPU: 3D Gaussian Splatting Processor for Real-Time SLAM Systems. 2024 57th IEEE/ACM International Symposium on Microarchitecture (MICRO), 1562-1573. DOI: 10.1109/MICRO61859.2024.00114. Online publication date: 2-Nov-2024
    • (2024) SAD-GS: Shape-aligned Depth-supervised Gaussian Splatting. 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 2842-2851. DOI: 10.1109/CVPRW63382.2024.00290. Online publication date: 17-Jun-2024
    • (2024) Recent advances in 3D Gaussian splatting. Computational Visual Media 10:4 (613-642). DOI: 10.1007/s41095-024-0436-y. Online publication date: 8-Jul-2024
    • (2024) Enhancing 3D Gaussian splatting for low-quality images: semantically guided training and unsupervised quality assessment. The Visual Computer. DOI: 10.1007/s00371-024-03754-z. Online publication date: 24-Dec-2024
