Degradation-Guided Multi-Modal Fusion Network for Depth Map Super-Resolution
Figure 1. Visual comparison example on the NYU-v2 dataset [32]. (a) Color image input, (b) low-resolution depth input, (c) ground truth (GT) depth, (d) DKN [3], (e) DCTNet [6], and (f) our proposed DMFNet. The visual and error comparisons demonstrate the superior performance of our DMFNet in restoring clear and accurate depth.
Figure 2. Overview of the proposed DMFNet, which consists of a degradation learning branch and a depth restoration branch. The former employs the Deep Degradation Regularization Module (DDRM) to gradually learn an explicit degradation representation from the LR depth, while the latter restores fine-grained depth via the Multi-modal Fusion Block (MFB) under the degradation constraint (a simplified sketch of this layout follows the figure captions).
Figure 3. Scheme of the proposed Multi-modal Fusion Block (MFB).
Figure 4. Visual results on the synthetic NYU-v2 dataset (×16).
Figure 5. Visual results on the synthetic RGB-D-D dataset (×16).
Figure 6. Visual results on the synthetic Lu dataset (×16).
Figure 7. Visual results on the synthetic Middlebury dataset (×16).
Figure 8. Visual results on the real-world RGB-D-D dataset.
Figure 9. Denoising visual results on the synthetic NYU-v2 dataset.
Figure 10. Visual comparison of the intermediate depth features on the RGB-D-D dataset (×4).
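To make the two-branch layout described for Figure 2 concrete, below is a minimal PyTorch-style sketch of how a degradation learning branch (a DDRM producing a degradation representation) and a depth restoration branch (a stack of fusion blocks operating on RGB and depth features) could be wired together, along with one possible form of the degradation constraint. The channel width, block count, placeholder fusion block, bicubic pre-upsampling, and the re-degradation-style constraint are illustrative assumptions, not the authors' exact design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class DDRM(nn.Module):
    """Toy stand-in for the Deep Degradation Regularization Module:
    predicts a degradation representation from LR depth features."""

    def __init__(self, ch=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1))

    def forward(self, depth_feat):
        return self.body(depth_feat)


class FusionBlock(nn.Module):
    """Placeholder fusion block (a frequency-aware variant is sketched after the contribution list below)."""

    def __init__(self, ch=32):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Conv2d(2 * ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1))

    def forward(self, depth_feat, rgb_feat):
        return depth_feat + self.fuse(torch.cat([depth_feat, rgb_feat], dim=1))


class DMFNetSketch(nn.Module):
    """Two-branch layout: degradation learning branch + depth restoration branch."""

    def __init__(self, ch=32, num_blocks=4, scale=16):
        super().__init__()
        self.scale = scale
        self.depth_head = nn.Conv2d(1, ch, 3, padding=1)
        self.rgb_head = nn.Conv2d(3, ch, 3, padding=1)
        self.ddrm = DDRM(ch)                                   # degradation learning branch
        self.blocks = nn.ModuleList([FusionBlock(ch) for _ in range(num_blocks)])
        self.tail = nn.Conv2d(ch, 1, 3, padding=1)             # restoration branch output

    def forward(self, lr_depth, rgb):
        # Bicubic pre-upsampling of the LR depth to the guidance (RGB) resolution.
        up = F.interpolate(lr_depth, scale_factor=self.scale,
                           mode='bicubic', align_corners=False)
        d_feat = self.depth_head(up)
        r_feat = self.rgb_head(rgb)
        degradation = self.ddrm(d_feat)
        for blk in self.blocks:
            d_feat = blk(d_feat, r_feat)
        hr_depth = self.tail(d_feat) + up                      # residual depth prediction
        return hr_depth, degradation


def degradation_constraint(hr_pred, lr_depth, degradation, scale=16):
    """Illustrative degradation constraint: modulating the prediction with the
    learned degradation and downsampling it should reproduce the observed LR depth."""
    weight = torch.sigmoid(degradation.mean(dim=1, keepdim=True))
    re_lr = F.avg_pool2d(hr_pred * weight, kernel_size=scale)
    return F.l1_loss(re_lr, lr_depth)
```

In training, a term like `degradation_constraint` would be added to the usual reconstruction loss, which mirrors the role of the deep degradation regularization loss described in Sections 3.2 and 3.4.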
Abstract
1. Introduction
- We propose a novel degradation-guided framework to enhance depth recovery, in which a deep degradation regularization loss explicitly models the intricate degradation of the LR depth.
- We design a multi-modal fusion block (MFB) that facilitates multi-domain and multi-modal interaction via spectrum transforms and continuous difference operations (a minimal sketch follows this list).
- Extensive experiments show that the proposed DMFNet achieves state-of-the-art performance on four DSR benchmark datasets.
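As a rough illustration of the second contribution, the sketch below pairs a spatial difference-style guidance path with a spectral path built on a 2D FFT. The `SpectrumFusion` module, the channel widths, and the way real and imaginary components are mixed are our own assumptions made to keep the example runnable; the actual MFB in Section 3.3 may differ in its exact operations.

```python
import torch
import torch.nn as nn


class SpectrumFusion(nn.Module):
    """Illustrative spectrum transform: mixes depth and RGB features on the
    real/imaginary parts of their 2D FFTs, then maps back to the spatial domain."""

    def __init__(self, ch):
        super().__init__()
        self.mix = nn.Conv2d(4 * ch, 2 * ch, kernel_size=1)

    def forward(self, depth_feat, rgb_feat):
        fd = torch.fft.rfft2(depth_feat, norm='ortho')
        fr = torch.fft.rfft2(rgb_feat, norm='ortho')
        stacked = torch.cat([fd.real, fd.imag, fr.real, fr.imag], dim=1)
        real, imag = torch.chunk(self.mix(stacked), 2, dim=1)
        fused = torch.complex(real, imag)
        return torch.fft.irfft2(fused, s=depth_feat.shape[-2:], norm='ortho')


class FusionBlockSketch(nn.Module):
    """Toy fusion block: a difference-style spatial guidance path plus a spectral path."""

    def __init__(self, ch):
        super().__init__()
        self.spectral = SpectrumFusion(ch)
        self.spatial = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1))

    def forward(self, depth_feat, rgb_feat):
        # Difference-style guidance: inject RGB structure absent from the depth feature.
        diff = self.spatial(rgb_feat - depth_feat)
        spec = self.spectral(depth_feat, rgb_feat)
        return depth_feat + diff + spec


# Usage example with dummy feature maps of matching shape.
if __name__ == "__main__":
    block = FusionBlockSketch(ch=32)
    depth_feat = torch.randn(1, 32, 64, 64)
    rgb_feat = torch.randn(1, 32, 64, 64)
    print(block(depth_feat, rgb_feat).shape)  # torch.Size([1, 32, 64, 64])
```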
2. Related Work
2.1. Depth Map Super-Resolution
2.2. Frequency Learning
2.3. Degradation Learning
3. Method
3.1. Network Architecture
3.2. Depth Degradation Learning
3.3. Multi-Modal Fusion Block
3.4. Loss Function
4. Experiment
4.1. Dataset
4.2. Metric and Implementation Detail
4.3. Comparison with SoTA Methods
4.4. Ablation Study
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Hui, T.W.; Loy, C.C.; Tang, X. Depth map super-resolution by deep multi-scale guidance. In Proceedings of the ECCV, Amsterdam, The Netherlands, 11–14 October 2016; Springer: Berlin/Heidelberg, Germany, 2016; pp. 353–369. [Google Scholar]
- Li, Y.; Huang, J.B.; Ahuja, N.; Yang, M.H. Deep joint image filtering. In Proceedings of the ECCV, Amsterdam, The Netherlands, 11–14 October 2016; Springer: Berlin/Heidelberg, Germany, 2016; pp. 154–169. [Google Scholar]
- Kim, B.; Ponce, J.; Ham, B. Deformable kernel networks for joint image filtering. Int. J. Comput. Vis. 2021, 129, 579–600. [Google Scholar] [CrossRef]
- He, L.; Zhu, H.; Li, F.; Bai, H.; Cong, R.; Zhang, C.; Lin, C.; Liu, M.; Zhao, Y. Towards fast and accurate real-world depth super-resolution: Benchmark dataset and baseline. In Proceedings of the CVPR, Nashville, TN, USA, 20–25 June 2021; pp. 9229–9238. [Google Scholar]
- Yan, Z.; Wang, K.; Li, X.; Zhang, Z.; Li, G.; Li, J.; Yang, J. Learning complementary correlations for depth super-resolution with incomplete data in real world. IEEE Trans. Neural Netw. Learn. Syst. 2022, 35, 5616–5626. [Google Scholar] [CrossRef] [PubMed]
- Zhao, Z.; Zhang, J.; Xu, S.; Lin, Z.; Pfister, H. Discrete cosine transform network for guided depth map super-resolution. In Proceedings of the CVPR, New Orleans, LA, USA, 18–24 June 2022; pp. 5697–5707. [Google Scholar]
- Metzger, N.; Daudt, R.C.; Schindler, K. Guided Depth Super-Resolution by Deep Anisotropic Diffusion. In Proceedings of the CVPR, Vancouver, BC, Canada, 17–24 June 2023; pp. 18237–18246. [Google Scholar]
- Wang, Z.; Yan, Z.; Yang, J. SGNet: Structure guided network via gradient-frequency awareness for depth map super-resolution. In Proceedings of the AAAI, Vancouver, BC, Canada, 20–27 February 2024; pp. 5823–5831. [Google Scholar]
- Im, S.; Ha, H.; Choe, G.; Jeon, H.G.; Joo, K.; Kweon, I.S. Accurate 3d reconstruction from small motion clip for rolling shutter cameras. IEEE Trans. Pattern Anal. Mach. Intell. 2018, 41, 775–787. [Google Scholar] [CrossRef] [PubMed]
- Song, X.; Dai, Y.; Zhou, D.; Liu, L.; Li, W.; Li, H.; Yang, R. Channel attention based iterative residual learning for depth map super-resolution. In Proceedings of the CVPR, Seattle, WA, USA, 14–19 June 2020; pp. 5631–5640. [Google Scholar]
- Yang, Y.; Cao, Q.; Zhang, J.; Tao, D. CODON: On orchestrating cross-domain attentions for depth super-resolution. Int. J. Comput. Vis. 2022, 130, 267–284. [Google Scholar] [CrossRef]
- Yan, Z.; Li, X.; Wang, K.; Zhang, Z.; Li, J.; Yang, J. Multi-modal masked pre-training for monocular panoramic depth completion. In Proceedings of the European Conference on Computer Vision, Tel Aviv, Israel, 23–27 October 2022; Springer: Berlin/Heidelberg, Germany, 2022; pp. 378–395. [Google Scholar]
- Yuan, J.; Jiang, H.; Li, X.; Qian, J.; Li, J.; Yang, J. Structure Flow-Guided Network for Real Depth Super-Resolution. arXiv 2023, arXiv:2301.13416. [Google Scholar] [CrossRef]
- Yan, Z.; Wang, K.; Li, X.; Zhang, Z.; Li, J.; Yang, J. DesNet: Decomposed scale-consistent network for unsupervised depth completion. In Proceedings of the AAAI, Singapore, 17–19 July 2023; pp. 3109–3117. [Google Scholar]
- Yan, Z.; Li, X.; Wang, K.; Chen, S.; Li, J.; Yang, J. Distortion and uncertainty aware loss for panoramic depth completion. In Proceedings of the ICML, Honolulu, HI, USA, 23–29 July 2023; pp. 39099–39109. [Google Scholar]
- Siemonsma, S.; Bell, T. N-DEPTH: Neural Depth Encoding for Compression-Resilient 3D Video Streaming. Electronics 2024, 13, 2557. [Google Scholar] [CrossRef]
- Li, L.; Li, X.; Yang, S.; Ding, S.; Jolfaei, A.; Zheng, X. Unsupervised-learning-based continuous depth and motion estimation with monocular endoscopy for virtual reality minimally invasive surgery. IEEE Trans. Ind. Inform. 2020, 17, 3920–3928. [Google Scholar] [CrossRef]
- Yuan, J.; Jiang, H.; Li, X.; Qian, J.; Li, J.; Yang, J. Recurrent Structure Attention Guidance for Depth Super-Resolution. arXiv 2023, arXiv:2301.13419. [Google Scholar] [CrossRef]
- Sun, B.; Ye, X.; Li, B.; Li, H.; Wang, Z.; Xu, R. Learning scene structure guidance via cross-task knowledge transfer for single depth super-resolution. In Proceedings of the CVPR, Nashville, TN, USA, 19–25 June 2021; pp. 7792–7801. [Google Scholar]
- Min, H.; Cao, J.; Zhou, T.; Meng, Q. IPSA: A Multi-View Perception Model for Information Propagation in Online Social Networks. Big Data Min. Anal. 2024. [Google Scholar]
- Zhou, M.; Yan, K.; Pan, J.; Ren, W.; Xie, Q.; Cao, X. Memory-augmented deep unfolding network for guided image super-resolution. Int. J. Comput. Vis. 2023, 131, 215–242. [Google Scholar] [CrossRef]
- Qin, S.; Xiao, J.; Ge, J. Dip-NeRF: Depth-Based Anti-Aliased Neural Radiance Fields. Electronics 2024, 13, 1527. [Google Scholar] [CrossRef]
- Caesar, H.; Bankiti, V.; Lang, A.H.; Vora, S.; Liong, V.E.; Xu, Q.; Krishnan, A.; Pan, Y.; Baldan, G.; Beijbom, O. nuScenes: A multimodal dataset for autonomous driving. In Proceedings of the CVPR, Seattle, WA, USA, 14–19 June 2020; pp. 11621–11631. [Google Scholar]
- Yan, Z.; Wang, K.; Li, X.; Zhang, Z.; Li, J.; Yang, J. RigNet: Repetitive image guided network for depth completion. In Proceedings of the ECCV, Tel Aviv, Israel, 23–27 October 2022; Springer: Berlin/Heidelberg, Germany, 2022; pp. 214–230. [Google Scholar]
- Zhang, N.; Nex, F.; Vosselman, G.; Kerle, N. Lite-mono: A lightweight cnn and transformer architecture for self-supervised monocular depth estimation. In Proceedings of the CVPR, Vancouver, BC, Canada, 17–24 June 2023; pp. 18537–18546. [Google Scholar]
- Yan, X.; Xu, S.; Zhang, Y.; Li, B. ELEvent: An Abnormal Event Detection System in Elevator Cars. In Proceedings of the CSCWD, Tianjin, China, 8–10 May 2024; IEEE: Piscataway, NJ, USA, 2024; pp. 675–680. [Google Scholar]
- Liu, Z.; Wang, Q. Edge-Enhanced Dual-Stream Perception Network for Monocular Depth Estimation. Electronics 2024, 13, 1652. [Google Scholar] [CrossRef]
- Zeng, J.; Zhu, Q. NSVDNet: Normalized Spatial-Variant Diffusion Network for Robust Image-Guided Depth Completion. Electronics 2024, 13, 2418. [Google Scholar] [CrossRef]
- Tang, J.; Tian, F.P.; An, B.; Li, J.; Tan, P. Bilateral Propagation Network for Depth Completion. In Proceedings of the CVPR, Seattle, WA, USA, 17–21 June 2024; pp. 9763–9772. [Google Scholar]
- Yan, Z.; Lin, Y.; Wang, K.; Zheng, Y.; Wang, Y.; Zhang, Z.; Li, J.; Yang, J. Tri-Perspective View Decomposition for Geometry-Aware Depth Completion. In Proceedings of the CVPR, Seattle, WA, USA, 17–21 June 2024; pp. 4874–4884. [Google Scholar]
- Li, Y.; Huang, J.B.; Ahuja, N.; Yang, M.H. Joint image filtering with deep convolutional networks. IEEE Trans. Pattern Anal. Mach. Intell. 2019, 41, 1909–1923. [Google Scholar] [CrossRef] [PubMed]
- Silberman, N.; Hoiem, D.; Kohli, P.; Fergus, R. Indoor segmentation and support inference from RGBD images. In Proceedings of the ECCV, Florence, Italy, 7–13 October 2012; Springer: Berlin/Heidelberg, Germany, 2012; pp. 746–760. [Google Scholar]
- Zhong, Z.; Liu, X.; Jiang, J.; Zhao, D.; Chen, Z.; Ji, X. High-resolution depth maps imaging via attention-based hierarchical multi-modal fusion. IEEE Trans. Image Process. 2021, 31, 648–663. [Google Scholar] [CrossRef]
- Shi, W.; Ye, M.; Du, B. Symmetric Uncertainty-Aware Feature Transmission for Depth Super-Resolution. In Proceedings of the ACMMM, Lisbon, Portugal, 10–14 October 2022; pp. 3867–3876. [Google Scholar]
- Deng, X.; Dragotti, P.L. Deep convolutional neural network for multi-modal image restoration and fusion. IEEE Trans. Pattern Anal. Mach. Intell. 2020, 43, 3333–3348. [Google Scholar] [CrossRef] [PubMed]
- Tang, Q.; Cong, R.; Sheng, R.; He, L.; Zhang, D.; Zhao, Y.; Kwong, S. Bridgenet: A joint learning network of depth map super-resolution and monocular depth estimation. In Proceedings of the ACM MM, Virtual Event, 20–24 October 2021; pp. 2148–2157. [Google Scholar]
- De Lutio, R.; Becker, A.; D’Aronco, S.; Russo, S.; Wegner, J.D.; Schindler, K. Learning graph regularisation for guided super-resolution. In Proceedings of the CVPR, New Orleans, LA, USA, 18–24 June 2022; pp. 1979–1988. [Google Scholar]
- Zhou, M.; Huang, J.; Li, C.; Yu, H.; Yan, K.; Zheng, N.; Zhao, F. Adaptively learning low-high frequency information integration for pan-sharpening. In Proceedings of the ACM MM, Lisboa, Portugal, 10–14 October 2022; pp. 3375–3384. [Google Scholar]
- Zhou, M.; Huang, J.; Yan, K.; Yu, H.; Fu, X.; Liu, A.; Wei, X.; Zhao, F. Spatial-frequency domain information integration for pan-sharpening. In Proceedings of the ECCV, Tel Aviv, Israel, 23–27 October 2022; pp. 274–291. [Google Scholar]
- Jiang, L.; Dai, B.; Wu, W.; Loy, C.C. Focal frequency loss for image reconstruction and synthesis. In Proceedings of the ICCV, Montreal, QC, Canada, 10–17 October 2021; pp. 13919–13929. [Google Scholar]
- Mao, X.; Liu, Y.; Liu, F.; Li, Q.; Shen, W.; Wang, Y. Intriguing findings of frequency selection for image deblurring. In Proceedings of the AAAI, Singapore, 17–19 July 2023; pp. 1905–1913. [Google Scholar]
- Lin, X.; Li, Y.; Hsiao, J.; Ho, C.; Kong, Y. Catch Missing Details: Image Reconstruction with Frequency Augmented Variational Autoencoder. In Proceedings of the CVPR, Vancouver, BC, Canada, 17–24 June 2023; pp. 1736–1745. [Google Scholar]
- Zhang, Y.; Li, K.; Li, K.; Wang, L.; Zhong, B.; Fu, Y. Image super-resolution using very deep residual channel attention networks. In Proceedings of the ECCV, Munich, Germany, 8–14 September 2018; pp. 286–301. [Google Scholar]
- Zhang, Y.; Tian, Y.; Kong, Y.; Zhong, B.; Fu, Y. Residual dense network for image super-resolution. In Proceedings of the CVPR, Salt Lake City, UT, USA, 18–22 June 2018; pp. 2472–2481. [Google Scholar]
- Ahn, N.; Kang, B.; So Kweon, I. Fast, accurate, and lightweight super-resolution with cascading residual network. In Proceedings of the ECCV, Munich, Germany, 8–14 September 2018; pp. 252–268. [Google Scholar]
- Kim, J.; Kwon Lee, J.; Mu Lee, K. Deeply-recursive convolutional network for image super-resolution. In Proceedings of the CVPR, Las Vegas, NV, USA, 27–30 June 2016; pp. 1637–1645. [Google Scholar]
- Liu, W.; Wang, Z.; Liu, X.; Zeng, N.; Liu, Y. Deep information-preserving network for image super-resolution. IEEE Trans. Image Process. 2020, 29, 1031–1042. [Google Scholar]
- Tai, Y.; Yang, J.; Liu, X. Image super-resolution via deep recursive residual network. In Proceedings of the CVPR, Honolulu, HI, USA, 21–26 July 2017; pp. 2790–2798. [Google Scholar]
- Huang, Y.; He, R.; Sun, Z.; Tan, T. Deep edge-guided network for single image super-resolution. In Proceedings of the CVPR, Seattle, WA, USA, 14–19 June 2020; pp. 1378–1387. [Google Scholar]
- Liu, X.; Liu, W.; Li, M.; Zeng, N.; Liu, Y. Deep attention-aware network for color image super-resolution. In Proceedings of the ICIP, Anchorage, AK, USA, 19–22 September 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 464–468. [Google Scholar]
- Wang, L.; Wang, Y.; Dong, X.; Xu, Q.; Yang, J.; An, W.; Guo, Y. Unsupervised degradation representation learning for blind super-resolution. In Proceedings of the CVPR, Nashville, TN, USA, 20–25 June 2021; pp. 10581–10590. [Google Scholar]
- Liang, J.; Zeng, H.; Zhang, L. Efficient and degradation-adaptive network for real-world image super-resolution. In Proceedings of the ECCV, Tel Aviv, Israel, 23–27 October 2022; Springer: Berlin/Heidelberg, Germany, 2022; pp. 574–591. [Google Scholar]
- Zhang, K.; Zuo, W.; Zhang, L. Learning a single convolutional super-resolution network for multiple degradations. In Proceedings of the CVPR, Salt Lake City, UT, USA, 18–22 June 2018; pp. 3262–3271. [Google Scholar]
- Gu, J.; Lu, H.; Zuo, W.; Dong, C. Blind super-resolution with iterative kernel correction. In Proceedings of the CVPR, Long Beach, CA, USA, 15–20 June 2019; pp. 1604–1613. [Google Scholar]
- Hirschmuller, H.; Scharstein, D. Evaluation of cost functions for stereo matching. In Proceedings of the CVPR, Minneapolis, MN, USA, 18–23 June 2007; pp. 1–8. [Google Scholar]
- Scharstein, D.; Pal, C. Learning conditional random fields for stereo. In Proceedings of the CVPR, Minneapolis, MN, USA, 18–23 June 2007; pp. 1–8. [Google Scholar]
- Lu, S.; Ren, X.; Liu, F. Depth enhancement via low-rank matrix completion. In Proceedings of the CVPR, Columbus, OH, USA, 23–28 June 2014; pp. 3390–3397. [Google Scholar]
- Kingma, D.P.; Ba, J. Adam: A method for stochastic optimization. arXiv 2014, arXiv:1412.6980. [Google Scholar]
- Su, H.; Jampani, V.; Sun, D.; Gallo, O.; Learned-Miller, E.; Kautz, J. Pixel-adaptive convolutional neural networks. In Proceedings of the CVPR, Long Beach, CA, USA, 15–20 June 2019; pp. 11166–11175. [Google Scholar]
- Guo, C.; Li, C.; Guo, J.; Cong, R.; Fu, H.; Han, P. Hierarchical features driven residual learning for depth map super-resolution. IEEE Trans. Image Process. 2018, 28, 2545–2557. [Google Scholar] [CrossRef]
- Zhong, Z.; Liu, X.; Jiang, J.; Zhao, D.; Ji, X. Deep attentional guided image filtering. IEEE Trans. Neural Netw. Learn. Syst. 2023, 35, 12236–12250. [Google Scholar] [CrossRef]
- Zhao, Z.; Zhang, J.; Gu, X.; Tan, C.; Xu, S.; Zhang, Y.; Timofte, R.; Van Gool, L. Spherical space feature decomposition for guided depth map super-resolution. arXiv 2023, arXiv:2303.08942. [Google Scholar]
- Wang, Z.; Yan, Z.; Yang, M.H.; Pan, J.; Yang, J.; Tai, Y.; Gao, G. Scene Prior Filtering for Depth Map Super-Resolution. arXiv 2024, arXiv:2402.13876. [Google Scholar]
Method | NYU-v2 ×4 | ×8 | ×16 | RGB-D-D ×4 | ×8 | ×16 | Lu ×4 | ×8 | ×16 | Middlebury ×4 | ×8 | ×16 | #P (M)
---|---|---|---|---|---|---|---|---|---|---|---|---|---
DJF [2] | 2.80 | 5.33 | 9.46 | 3.41 | 5.57 | 8.15 | 1.65 | 3.96 | 6.75 | 1.68 | 3.24 | 5.62 | 0.08 |
DJFR [31] | 2.38 | 4.94 | 9.18 | 3.35 | 5.57 | 7.99 | 1.15 | 3.57 | 6.77 | 1.32 | 3.19 | 5.57 | 0.08 |
PAC [59] | 1.89 | 3.33 | 6.78 | 1.25 | 1.98 | 3.49 | 1.20 | 2.33 | 5.19 | 1.32 | 2.62 | 4.58 | - |
CUNet [35] | 1.92 | 3.70 | 6.78 | 1.18 | 1.95 | 3.45 | 0.91 | 2.23 | 4.99 | 1.10 | 2.17 | 4.33 | 0.21 |
DSRNet [60] | 3.00 | 5.16 | 8.41 | - | - | - | 1.77 | 3.10 | 5.11 | 1.77 | 3.05 | 4.96 | 45.49 |
DKN [3] | 1.62 | 3.26 | 6.51 | 1.30 | 1.96 | 3.42 | 0.96 | 2.16 | 5.11 | 1.23 | 2.12 | 4.24 | 1.16 |
FDKN [3] | 1.86 | 3.58 | 6.96 | 1.18 | 1.91 | 3.41 | 0.82 | 2.10 | 5.05 | 1.08 | 2.17 | 4.50 | 0.69 |
FDSR [4] | 1.61 | 3.18 | 5.86 | 1.16 | 1.82 | 3.06 | 1.29 | 2.19 | 5.00 | 1.13 | 2.08 | 4.39 | 0.60 |
DAGF [61] | 1.36 | 2.87 | 6.06 | - | - | - | 0.83 | 1.93 | 4.80 | 1.15 | 1.80 | 3.70 | 2.44 |
GraphSR [37] | 1.79 | 3.17 | 6.02 | 1.30 | 1.83 | 3.12 | 0.92 | 2.05 | 5.15 | 1.11 | 2.12 | 4.43 | 32.53 |
DMFNet | 1.17 | 2.43 | 4.88 | 1.16 | 1.75 | 2.62 | 0.91 | 1.77 | 3.93 | 1.07 | 1.74 | 3.15 | 0.64 |
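The comparison above reports error at ×4, ×8, and ×16 upsampling (lower is better), together with parameter counts in millions (#P). For these benchmarks the figure of merit is conventionally the average RMSE (in centimeters on NYU-v2); a minimal sketch of such an evaluation is given below, where the masking and averaging choices are assumptions rather than the authors' exact protocol.

```python
import numpy as np


def rmse(pred, gt, valid_mask=None):
    """Root-mean-square error between predicted and ground-truth depth maps.

    pred, gt: float arrays of identical shape (H, W), in the same unit
    (e.g., centimeters for NYU-v2). valid_mask optionally excludes
    invalid or missing ground-truth pixels.
    """
    pred = np.asarray(pred, dtype=np.float64)
    gt = np.asarray(gt, dtype=np.float64)
    if valid_mask is None:
        valid_mask = np.ones_like(gt, dtype=bool)
    diff = (pred - gt)[valid_mask]
    return float(np.sqrt(np.mean(diff ** 2)))


# Example: average RMSE over a test set (hypothetical loader and model).
# scores = [rmse(model(lr_depth, rgb), gt) for lr_depth, rgb, gt in test_set]
# print(np.mean(scores))
```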
Training Set | FDSR [4] | DCTNet [6] | SUFT [34] | SSDNet [62] | SGNet [8] | SPFNet [63] | DMFNet |
---|---|---|---|---|---|---|---|
NYU-v2 | 7.50 | 7.37 | 7.22 | 7.32 | 7.22 | 7.23 | 7.15 |
RGB-D-D | 5.49 | 5.43 | 5.41 | 5.38 | 5.32 | 4.63 | 4.13 |
Scale | DJF [2] | DJFR [31] | DSRNet [60] | PAC [59] | DKN [3] | DAGF [61] | SPFNet [63] | DMFNet
---|---|---|---|---|---|---|---|---
NYU-v2 | | | | | | | | 
×4 | 3.74 | 4.01 | 4.36 | 4.23 | 3.39 | 3.25 | 3.45 | 2.92
×8 | 5.95 | 6.21 | 6.31 | 6.24 | 5.24 | 5.01 | 5.15 | 4.63
×16 | 9.61 | 9.90 | 9.75 | 9.54 | 8.41 | 7.54 | 7.94 | 7.12
Middlebury | | | | | | | | 
×4 | 1.80 | 1.86 | 1.84 | 1.81 | 1.76 | 1.72 | 1.67 | 1.64
×8 | 2.99 | 3.07 | 2.99 | 2.94 | 2.68 | 2.61 | 2.61 | 2.48
×16 | 5.16 | 5.27 | 4.70 | 5.08 | 4.55 | 4.24 | 4.24 | 3.90
DMFNet | MFB | Degradation Representation d | Degradation Loss | NYU-v2 | Lu
---|---|---|---|---|---
i | | | | 5.26 | 4.25
ii | ✓ | | | 5.10 | 4.14
iii | ✓ | ✓ | | 5.04 | 4.07
iv | ✓ | ✓ | ✓ | 4.88 | 3.93
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Han, L.; Wang, X.; Zhou, F.; Wu, D. Degradation-Guided Multi-Modal Fusion Network for Depth Map Super-Resolution. Electronics 2024, 13, 4020. https://doi.org/10.3390/electronics13204020