DOI: 10.1145/3641519.3657412
Research article · Open access

CNS-Edit: 3D Shape Editing via Coupled Neural Shape Optimization

Published: 13 July 2024

Abstract

This paper introduces a new approach that combines a coupled representation with a neural volume optimization to perform 3D shape editing implicitly in latent space. This work has three innovations. First, we design the coupled neural shape (CNS) representation to support 3D shape editing. This representation includes a latent code, which captures the high-level global semantics of the shape, and a 3D neural feature volume, which provides the spatial context to associate with the local shape changes given by the editing. Second, we formulate the coupled neural shape optimization procedure to co-optimize the two coupled components of the representation subject to the editing operation. Third, we offer various 3D shape editing operators, i.e., copy, resize, delete, and drag, and derive each into an objective for guiding the CNS optimization, so that we can iteratively co-optimize the latent code and neural feature volume to match the editing target. With our approach, we can achieve a rich variety of editing results that are aware of the shape semantics and not easily achievable by existing approaches. Both quantitative and qualitative evaluations demonstrate the strong capabilities of our approach over state-of-the-art solutions.
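The co-optimization described above can be sketched as plain gradient descent on a joint objective over the two coupled components. Everything below is a hypothetical toy stand-in: the quadratic "editing objective" over a small array, the coupling term tying the latent code to the volume, and all names are illustrative assumptions, not the paper's actual networks or losses.

```python
import numpy as np

def edit_loss(z, F, target_F, lam=0.5):
    """Toy joint objective: an editing term on the feature volume F plus a
    term coupling the global latent code z to a summary of F (assumed form)."""
    edit = 0.5 * np.sum((F - target_F) ** 2)    # pull the volume toward the edit target
    couple = 0.5 * lam * (z - F.mean()) ** 2    # keep z consistent with the volume
    return edit + couple

def cns_edit(z, F, target_F, lam=0.5, lr=0.1, steps=200):
    """Iteratively co-optimize the latent code and feature volume together."""
    n = F.size
    for _ in range(steps):
        grad_F = (F - target_F) + lam * (F.mean() - z) / n   # d(loss)/dF
        grad_z = lam * (z - F.mean())                        # d(loss)/dz
        F = F - lr * grad_F
        z = z - lr * grad_z
    return z, F

rng = np.random.default_rng(0)
F0 = rng.normal(size=(4, 4, 4))     # stand-in for the 3D neural feature volume
target = np.zeros_like(F0)          # stand-in edit target (e.g., "delete" a bump)
z0 = 1.0                            # stand-in for the latent code
z, F = cns_edit(z0, F0, target)
print(edit_loss(z0, F0, target) > edit_loss(z, F, target))  # prints True
```

Note the coupling term is what makes this "coupled": the latent code is not frozen during editing but is dragged along with the local change, which is how the method keeps global semantics and local edits consistent.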

Supplemental Material

MP4 File: presentation video
ZIP File: the supplementary materials for CNS-Edit
ZIP File: the supplementary material for the paper CNS-Edit
ZIP File: LaTeX source; compile supp.tex to obtain the final PDF


Cited By

  • (2024) iShapEditing: Intelligent Shape Editing with Diffusion Models. Computer Graphics Forum 43:7. DOI: 10.1111/cgf.15253. Online publication date: 8-Nov-2024.
  • (2024) DragVideo: Interactive Drag-Style Video Editing. Computer Vision – ECCV 2024, 183–199. DOI: 10.1007/978-3-031-72992-8_11. Online publication date: 30-Oct-2024.


Published In

SIGGRAPH '24: ACM SIGGRAPH 2024 Conference Papers
July 2024, 1106 pages
ISBN: 9798400705250
DOI: 10.1145/3641519
This work is licensed under a Creative Commons Attribution 4.0 International License.

Publisher

Association for Computing Machinery
New York, NY, United States

Author Tags

  1. 3D shape editing
  2. shape representation

Qualifiers

  • Research-article
  • Research
  • Refereed limited

Conference

SIGGRAPH '24

Acceptance Rates

Overall Acceptance Rate 1,822 of 8,601 submissions, 21%

Article Metrics

  • Downloads (last 12 months): 454
  • Downloads (last 6 weeks): 164
Reflects downloads up to 14 Nov 2024

