
DOI: 10.1145/3581783.3612306

Digging into Depth Priors for Outdoor Neural Radiance Fields

Published: 27 October 2023

Abstract

Neural Radiance Fields (NeRFs) have demonstrated impressive performance in vision and graphics tasks, such as novel view synthesis and immersive reality. However, the shape-radiance ambiguity of radiance fields remains a challenge, especially with sparse viewpoints. Recent work alleviates the issue by integrating depth priors into outdoor NeRF training. However, the criteria for selecting depth priors, the relative merits of different priors, and the best ways to apply them have not been thoroughly investigated. In this paper, we provide a comprehensive study and evaluation of applying depth priors to outdoor neural radiance fields, covering common depth sensing technologies and the most common ways of using them. Specifically, we conduct extensive experiments with two representative NeRF methods equipped with four commonly used depth priors and different depth usages on two widely used outdoor datasets. Our experimental results reveal several interesting findings that can benefit practitioners and researchers in training their NeRF models with depth priors. Project page: https://cwchenwang.github.io/outdoor-nerf-depth
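As background for the kind of depth supervision the abstract describes, one common way to apply a depth prior is to penalize the gap between the depth a NeRF renders along a ray and the prior depth for that pixel. Below is a minimal NumPy sketch of that idea using the standard volume-rendering quadrature weights; it is illustrative only (function names are ours, and the paper's actual losses and implementations may differ).

```python
import numpy as np

def expected_depth(sigmas, ts):
    """Render per-ray expected depth from volume densities.

    sigmas: (N,) densities at the sample points along one ray.
    ts:     (N,) distances of those samples from the camera.
    Uses the standard quadrature weights w_i = T_i * (1 - exp(-sigma_i * delta_i)).
    """
    deltas = np.diff(ts, append=ts[-1] + 1e10)      # last interval treated as unbounded
    alphas = 1.0 - np.exp(-sigmas * deltas)          # per-sample opacity
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))  # accumulated transmittance T_i
    weights = trans * alphas
    return np.sum(weights * ts), weights

def depth_loss(sigmas, ts, depth_prior):
    """L1 penalty pulling the rendered depth toward a (possibly noisy) depth prior."""
    d_hat, _ = expected_depth(sigmas, ts)
    return abs(d_hat - depth_prior)
```

For a ray whose density is concentrated at one sample, the rendered depth lands at that sample's distance, so a matching prior yields a near-zero loss; in training, this term would be added to the photometric loss with a weighting coefficient.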


Cited By

  • AlignMiF: Geometry-Aligned Multimodal Implicit Field for LiDAR-Camera Joint Synthesis. 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 21230-21240. DOI: 10.1109/CVPR52733.2024.02006. Online publication date: 16-Jun-2024.
  • LidaRF: Delving into Lidar for Neural Radiance Field on Street Scenes. 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 19563-19572. DOI: 10.1109/CVPR52733.2024.01850. Online publication date: 16-Jun-2024.


Published In

MM '23: Proceedings of the 31st ACM International Conference on Multimedia
October 2023
9913 pages
ISBN: 9798400701085
DOI: 10.1145/3581783

Publisher

Association for Computing Machinery, New York, NY, United States


      Author Tags

      1. depth completion
      2. depth estimation
      3. neural radiance field

      Qualifiers

      • Research-article

      Conference

MM '23: The 31st ACM International Conference on Multimedia
October 29 - November 3, 2023
Ottawa, ON, Canada

      Acceptance Rates

      Overall Acceptance Rate 995 of 4,171 submissions, 24%

