STs-NeRF: Novel View Synthesis of Space Targets Based on Improved Neural Radiance Fields
"> Figure 1
<p>Illustration showing that the NeRF [<a href="#B3-remotesensing-16-02327" class="html-bibr">3</a>] and its variations are still lacking in terms of image quality, and that the reconstruction of some small structures is not good enough [<a href="#B13-remotesensing-16-02327" class="html-bibr">13</a>].</p> "> Figure 2
<p>Some typical algorithms in chronological order.</p> "> Figure 3
<p>The six images from our dataset to illustrate how satellites are photographed.</p> "> Figure 4
<p>Some details of generating sampling points. In the left figure, point <span class="html-italic">o</span> is the center of the object. In the right figure, the sector formed by the arc and the two line segments starting from the camera is the range of sampling points generated from this perspective.</p> "> Figure 5
<p>The structure of the dynamic encoder.</p> "> Figure 6
<p>The structure of the neural network.</p> "> Figure 7
<p>It can be seen from this perspective that some small structures were not reconstructed until <math display="inline"><semantics> <mrow> <mi>l</mi> <mi>l</mi> <mi>n</mi> </mrow> </semantics></math> was added to the network [<a href="#B3-remotesensing-16-02327" class="html-bibr">3</a>].</p> "> Figure 8
<p>When inputting points and dirs into the network at the same time for novel view rendering, we can see that the images in the middle are missing; however, in Figure (<b>e</b>), they are complete.</p> "> Figure 9
<p>Effect of channel number on reconstruction effect.</p> "> Figure 10
<p>Effect of scale of dropout on reconstruction effect.</p> "> Figure 11
<p>Comparison between NeRF [<a href="#B3-remotesensing-16-02327" class="html-bibr">3</a>], MipNeRF [<a href="#B13-remotesensing-16-02327" class="html-bibr">13</a>], NeuS [<a href="#B15-remotesensing-16-02327" class="html-bibr">15</a>] and NeRF2Mesh [<a href="#B16-remotesensing-16-02327" class="html-bibr">16</a>] on the Ciomp_SAT_89 dataset.</p> ">
Abstract
1. Introduction
2. Related Works
2.1. NeRF and NeRF Variants
2.2. Remote Sensing Novel View Synthesis
3. Methodology
3.1. Preliminaries on NeRF
3.2. Overall Architecture
3.3. Our NeRF
3.3.1. Segment the Images
3.3.2. Process Internal and External Parameters
3.3.3. Generate Rays and Samples and Encode
3.3.4. Network Structure
4. Experiments and Results
4.1. Experimental Details
4.2. Datasets and Metrics
4.2.1. Datasets
4.2.2. Quality Assessment Metrics
4.3. Ablation Studies
4.3.1. Effect of Improvements
4.3.2. Results of Hyperparameters
4.4. Comparisons with the State-of-the-Art (SOTA) Methods
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Schonberger, J.L.; Frahm, J.M. Structure-from-Motion Revisited. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 26 June–1 July 2016; pp. 4104–4113. [Google Scholar]
- Moulon, P.; Monasse, P.; Perrot, R.; Marlet, R. OpenMVG: Open Multiple View Geometry. In Proceedings of the International Workshop on Reproducible Research in Pattern Recognition, Cancun, Mexico, 4 December 2016; pp. 60–74. [Google Scholar]
- Mildenhall, B.; Srinivasan, P.P.; Tancik, M.; Barron, J.T.; Ramamoorthi, R.; Ng, R. NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis. In Proceedings of the European Conference on Computer Vision, Online, 23–28 August 2020. [Google Scholar]
- Yao, Y.; Luo, Z.X.; Li, S.W.; Fang, T.; Quan, L. MVSNet: Depth Inference for Unstructured Multi-view Stereo. In Proceedings of the European Conference on Computer Vision, Munich, Germany, 8–14 September 2018; pp. 785–801. [Google Scholar]
- Yu, Z.H.; Gao, S.H. Fast-MVSNet: Sparse-to-Dense Multi-View Stereo with Learned Propagation and Gauss–Newton Refinement. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 14–19 June 2020; pp. 1946–1955. [Google Scholar]
- Yang, J.Y.; Alvarez, J.M.; Liu, M.M. Self-supervised Learning of Depth Inference for Multi-view Stereo. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Online, 19–25 June 2021; pp. 7522–7530. [Google Scholar]
- Park, J.J.; Florence, P.; Straub, J.; Newcombe, R.; Lovegrove, S. DeepSDF: Learning Continuous Signed Distance Functions for Shape Representation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 16–20 June 2019; pp. 165–174. [Google Scholar]
- Mescheder, L.; Oechsle, M.; Niemeyer, M.; Nowozin, S.; Geiger, A. Occupancy Networks: Learning 3D Reconstruction in Function Space. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 16–20 June 2019; pp. 4455–4465. [Google Scholar]
- Kajiya, J.T.; Von Herzen, B.P. Ray tracing volume densities. ACM SIGGRAPH Comput. Graph. 1984, 18, 165–174. [Google Scholar] [CrossRef]
- Mari, R.; Facciolo, G.; Ehret, T. Sat-NeRF: Learning Multi-View Satellite Photogrammetry with Transient Objects and Shadow Modeling Using RPC Cameras. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 19–24 June 2022; pp. 1311–1321. [Google Scholar]
- Hsu, C.-H.; Lin, C.-H. Dual Reconstruction with Densely Connected Residual Network for Single Image Super-Resolution. In Proceedings of the International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019. [Google Scholar]
- Li, X.; Zhang, B.; Sander, P.V.; Liao, J. Blind Geometric Distortion Correction on Images Through Deep Learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 16–20 June 2019. [Google Scholar]
- Barron, J.T.; Mildenhall, B.; Tancik, M.; Hedman, P.; Martin-Brualla, R.; Srinivasan, P.P. Mip-NeRF: A Multiscale Representation for Anti-Aliasing Neural Radiance Fields. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, QC, Canada, 10–17 October 2021; pp. 5835–5844. [Google Scholar]
- Martin-Brualla, R.; Radwan, N.; Sajjadi, M.S.M.; Barron, J.T.; Dosovitskiy, A.; Duckworth, D. NeRF in the Wild: Neural Radiance Fields for Unconstrained Photo Collections. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Online, 19–25 June 2021; pp. 7206–7215. [Google Scholar]
- Wang, P.; Liu, L.J.; Liu, Y.; Theobalt, C.; Komura, T.; Wang, W.P. NeuS: Learning Neural Implicit Surfaces by Volume Rendering for Multi-view Reconstruction. In Proceedings of the Neural Information Processing Systems, Online, 6–14 December 2021. [Google Scholar]
- Tang, J.; Zhou, H.; Chen, X.; Hu, T.; Ding, E.; Wang, J.; Zeng, G. Delicate Textured Mesh Recovery from NeRF via Adaptive Surface Refinement. arXiv 2023, arXiv:2303.02091. [Google Scholar]
- Barron, J.T.; Mildenhall, B.; Verbin, D.; Srinivasan, P.P.; Hedman, P. Mip-NeRF 360: Unbounded Anti-Aliased Neural Radiance Fields. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 19–24 June 2022; pp. 5460–5469. [Google Scholar]
- Mildenhall, B.; Hedman, P.; Martin-Brualla, R.; Srinivasan, P.P.; Barron, J.T. NeRF in the Dark: High Dynamic Range View Synthesis from Noisy Raw Images. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 19–24 June 2022; pp. 16169–16178. [Google Scholar]
- Verbin, D.; Hedman, P.; Mildenhall, B.; Zickler, T.; Barron, J.T.; Srinivasan, P.P. Ref-NeRF: Structured View-Dependent Appearance for Neural Radiance Fields. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA, 19–24 June 2022; pp. 5481–5490. [Google Scholar]
- Zhang, X.M.; Srinivasan, P.P.; Deng, B.Y.; Debevec, P.; Freeman, W.T.; Barron, J.T. NeRFactor: Neural Factorization of Shape and Reflectance Under an Unknown Illumination. ACM Trans. Graph. 2021, 40, 1–18. [Google Scholar] [CrossRef]
- Dai, S.; Cao, Y.; Duan, P.; Chen, X. SRes-NeRF: Improved Neural Radiance Fields for Realism and Accuracy of Specular Reflections. In Proceedings of the International Conference on MultiMedia Modeling, Bergen, Norway, 9–12 January 2023; pp. 306–317. [Google Scholar]
- Hwang, I.; Kim, J.; Kim, Y.M. Ev-NeRF: Event Based Neural Radiance Field. In Proceedings of the Winter Conference on Applications of Computer Vision, Waikoloa, HI, USA, 3–7 January 2023; pp. 837–847. [Google Scholar]
- Yu, A.; Ye, V.; Tancik, M.; Kanazawa, A. pixelNeRF: Neural Radiance Fields from One or Few Images. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Online, 19–25 June 2021; pp. 4576–4585. [Google Scholar]
- Chen, A.P.; Xu, Z.X.; Geiger, A.; Yu, J.Y.; Su, H. TensoRF: Tensorial Radiance Fields. In Proceedings of the European Conference on Computer Vision, Tel-Aviv, Israel, 23–27 October 2022; pp. 333–350. [Google Scholar]
- Garbin, S.J.; Kowalski, M.; Johnson, M.; Shotton, J.; Valentin, J. FastNeRF: High-Fidelity Neural Rendering at 200FPS. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, QC, Canada, 10–17 October 2021; pp. 14326–14335. [Google Scholar]
- Muller, T.; Evans, A.; Schied, C.; Keller, A. Instant Neural Graphics Primitives with a Multiresolution Hash Encoding. ACM Trans. Graph. 2022, 41, 1–15. [Google Scholar] [CrossRef]
- Chen, A.P.; Xu, Z.X.; Zhao, F.Q.; Zhang, X.S.; Xiang, F.B.; Yu, J.Y.; Su, H. MVSNeRF: Fast Generalizable Radiance Field Reconstruction from Multi-View Stereo. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, QC, Canada, 10–17 October 2021; pp. 14104–14113. [Google Scholar]
- Bian, W.; Wang, Z.; Li, K.; Bian, J.-W.; Prisacariu, V.A. NoPe-NeRF: Optimising Neural Radiance Field with No Pose Prior. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Vancouver, BC, Canada, 18–22 June 2023. [Google Scholar]
- Liu, S.; Zhang, X.M.; Zhang, Z.T.; Zhang, R.; Zhu, J.Y.; Russell, B. Editing Conditional Radiance Fields. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Montreal, QC, Canada, 10–17 October 2021; pp. 5753–5763. [Google Scholar]
- Yuan, Y.J.; Sun, Y.T.; Lai, Y.K.; Ma, Y.W.; Jia, R.F.; Gao, L. NeRF-Editing: Geometry Editing of Neural Radiance Fields. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 19–24 June 2022; pp. 18332–18343. [Google Scholar]
- Lazova, V.; Guzov, V.; Olszewski, K.; Tulyakov, S.; Pons-Moll, G. Control-NeRF: Editable Feature Volumes for Scene Rendering and Manipulation. In Proceedings of the Winter Conference on Applications of Computer Vision, Waikoloa, HI, USA, 3–7 January 2023; pp. 4329–4339. [Google Scholar]
- Xu, Q.G.; Xu, Z.X.; Philip, J.; Bi, S.; Shu, Z.X.; Sunkavalli, K.; Neumann, U. Point-NeRF: Point-based Neural Radiance Fields. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 19–24 June 2022; pp. 5428–5438. [Google Scholar]
- Zhang, K.; Kolkin, N.; Bi, S.; Luan, F.J.; Xu, Z.X.; Shechtman, E.; Snavely, N. ARF: Artistic Radiance Fields. In Proceedings of the European Conference on Computer Vision, Tel-Aviv, Israel, 23–27 October 2022; pp. 717–733. [Google Scholar]
- Sunderhauf, N.; Abou-Chakra, J.; Miller, D. Density-aware NeRF Ensembles: Quantifying Predictive Uncertainty in Neural Radiance Fields. In Proceedings of the International Conference on Robotics and Automation, London, UK, 29 May–2 June 2023; pp. 9370–9376. [Google Scholar]
- Liu, J.; Nie, Q.; Liu, Y.; Wang, C. NeRF-Loc: Visual Localization with Conditional Neural Radiance Field. In Proceedings of the International Conference on Robotics and Automation, London, UK, 29 May–2 June 2023; pp. 9385–9392. [Google Scholar]
- Zhu, H. X-NeRF: Explicit Neural Radiance Field for Multi-Scene 360° Insufficient RGB-D Views. In Proceedings of the Winter Conference on Applications of Computer Vision, Waikoloa, HI, USA, 3–7 January 2023; pp. 5755–5764. [Google Scholar]
- Hao, F.; Shang, X.; Li, W.; Zhang, L.; Lu, B. VT-NeRF: Neural radiance field with a vertex-texture latent code for high-fidelity dynamic human-body rendering. IET Comput. Vis. 2023; early view. [Google Scholar] [CrossRef]
- Kim, H.; Lee, D.; Kang, S.; Kim, P. Complex-Motion NeRF: Joint Reconstruction and Pose Optimization with Motion and Depth Priors. IEEE Access 2023, 11, 97425–97434. [Google Scholar] [CrossRef]
- Qiu, J.; Zhu, Y.; Jiang, P.-T.; Cheng, M.-M.; Ren, B. RDNeRF: Relative depth guided NeRF for dense free view synthesis. Vis. Comput. 2023, 40, 1485–1497. [Google Scholar] [CrossRef]
- Klenk, S.; Koestler, L.; Scaramuzza, D.; Cremers, D. E-NeRF: Neural Radiance Fields from a Moving Event Camera. IEEE Robot. Autom. Lett. 2023, 8, 1587–1594. [Google Scholar] [CrossRef]
- Xie, S.L.; Zhang, L.; Jeon, G.; Yang, X.M. Remote Sensing Neural Radiance Fields for Multi-View Satellite Photogrammetry. Remote Sens. 2023, 15, 3808. [Google Scholar] [CrossRef]
- Lv, J.W.; Guo, J.Y.; Zhang, Y.T.; Zhao, X.; Lei, B. Neural Radiance Fields for High-Resolution Remote Sensing Novel View Synthesis. Remote Sens. 2023, 15, 3920. [Google Scholar] [CrossRef]
- Zhang, K.; Luan, F.; Wang, Q.; Bala, K.; Snavely, N. PhySG: Inverse rendering with spherical gaussians for physics-based material editing and relighting. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Online, 19–25 June 2021. [Google Scholar]
- Yariv, L.; Gu, J.; Kasten, Y.; Lipman, Y. Volume rendering of neural implicit surfaces. In Proceedings of the Thirty-fifth Annual Conference on Neural Information Processing Systems, Online, 6–14 December 2021. [Google Scholar]
| Data | Resolution | Texture | Lighting Conditions | Object | Camera |
|---|---|---|---|---|---|
| Blender [3] | High | Rich | Unchanged | Fixed | Moved |
| Ours | Low | Poor | Changed | Rotating | Fixed |
PSNR

|  | SAT01 | SAT02 | SAT03 | SAT04 | SAT05 | SAT06 | SAT07 | SAT08 | avg |
|---|---|---|---|---|---|---|---|---|---|
| [3] | 32.35 | 33.16 | 32.81 | 32.31 | 32.62 | 33.82 | 33.38 | 32.47 | 32.87 |
|  | 33.66 | 34.37 | 33.56 | 33.60 | 32.94 | 34.45 | 35.15 | 33.42 | 33.89 |
|  | 33.69 | 34.52 | 33.80 | 34.84 | 33.80 | 35.82 | 35.25 | 33.60 | 34.41 |
|  | 35.40 | 37.07 | 32.05 | 34.69 | 34.87 | 34.00 | 39.07 | 34.08 | 35.15 |

SSIM

|  | SAT01 | SAT02 | SAT03 | SAT04 | SAT05 | SAT06 | SAT07 | SAT08 | avg |
|---|---|---|---|---|---|---|---|---|---|
| [3] | 0.955 | 0.959 | 0.958 | 0.954 | 0.962 | 0.968 | 0.966 | 0.961 | 0.960 |
|  | 0.965 | 0.967 | 0.965 | 0.961 | 0.965 | 0.973 | 0.973 | 0.967 | 0.967 |
|  | 0.965 | 0.967 | 0.965 | 0.966 | 0.970 | 0.977 | 0.974 | 0.969 | 0.969 |
|  | 0.968 | 0.981 | 0.950 | 0.968 | 0.981 | 0.979 | 0.983 | 0.982 | 0.974 |

LPIPS

|  | SAT01 | SAT02 | SAT03 | SAT04 | SAT05 | SAT06 | SAT07 | SAT08 | avg |
|---|---|---|---|---|---|---|---|---|---|
| [3] | 0.066 | 0.067 | 0.070 | 0.076 | 0.066 | 0.056 | 0.058 | 0.064 | 0.065 |
|  | 0.048 | 0.048 | 0.051 | 0.058 | 0.066 | 0.043 | 0.041 | 0.050 | 0.051 |
|  | 0.054 | 0.056 | 0.058 | 0.057 | 0.056 | 0.046 | 0.046 | 0.053 | 0.053 |
|  | 0.044 | 0.032 | 0.085 | 0.064 | 0.030 | 0.030 | 0.031 | 0.033 | 0.044 |
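For reference, PSNR for images scaled to [0, 1] is 10·log10(1/MSE), while SSIM and LPIPS are usually taken from off-the-shelf implementations. The snippet below is a minimal evaluation sketch under our own assumptions (scikit-image for PSNR/SSIM and the lpips package with an AlexNet backbone), not the authors' evaluation script.

```python
import torch
import lpips  # pip install lpips
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

# Built once and reused; the AlexNet backbone is a common choice in NeRF papers.
lpips_fn = lpips.LPIPS(net='alex')

def evaluate(pred, gt):
    """pred, gt: float32 NumPy RGB images in [0, 1] with shape (H, W, 3)."""
    psnr = peak_signal_noise_ratio(gt, pred, data_range=1.0)
    ssim = structural_similarity(gt, pred, channel_axis=-1, data_range=1.0)
    # LPIPS expects NCHW torch tensors scaled to [-1, 1].
    to_tensor = lambda x: torch.from_numpy(x).permute(2, 0, 1)[None] * 2.0 - 1.0
    lp = lpips_fn(to_tensor(pred), to_tensor(gt)).item()
    return psnr, ssim, lp
```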
PSNR

|  | chair | lego | materials | mic | hotdog | ficus | drums | ship | avg |
|---|---|---|---|---|---|---|---|---|---|
| PhySG [43] | 24.00 | 20.19 | 18.86 | 22.33 | 24.08 | 19.02 | 20.99 | 15.35 | 20.60 |
| VolSDF [44] | 30.57 | 29.64 | 29.13 | 30.53 | 35.11 | 22.91 | 20.43 | 25.51 | 27.98 |
| NeRF [3] | 33.00 | 32.54 | 29.62 | 32.91 | 36.18 | 30.13 | 25.01 | 28.65 | 30.36 |
| MipNeRF [13] | 35.12 | 35.92 | 30.64 | 36.76 | 37.34 | 33.19 | 25.36 | 30.52 | 33.10 |
| ours | 31.57 | 32.04 | 28.30 | 33.19 | 36.36 | 28.92 | 26.46 | 30.87 | 30.96 |

SSIM

|  | chair | lego | materials | mic | hotdog | ficus | drums | ship | avg |
|---|---|---|---|---|---|---|---|---|---|
| PhySG [43] | 0.898 | 0.821 | 0.838 | 0.933 | 0.912 | 0.873 | 0.884 | 0.727 | 0.861 |
| VolSDF [44] | 0.949 | 0.951 | 0.954 | 0.969 | 0.972 | 0.929 | 0.893 | 0.842 | 0.932 |
| NeRF [3] | 0.967 | 0.961 | 0.949 | 0.980 | 0.974 | 0.964 | 0.925 | 0.856 | 0.947 |
| MipNeRF [13] | 0.981 | 0.980 | 0.959 | 0.992 | 0.982 | 0.980 | 0.933 | 0.885 | 0.961 |
| ours | 0.971 | 0.966 | 0.930 | 0.971 | 0.980 | 0.954 | 0.927 | 0.875 | 0.946 |

LPIPS

|  | chair | lego | materials | mic | hotdog | ficus | drums | ship | avg |
|---|---|---|---|---|---|---|---|---|---|
| PhySG [43] | 0.093 | 0.172 | 0.142 | 0.082 | 0.117 | 0.112 | 0.113 | 0.322 | 0.144 |
| VolSDF [44] | 0.056 | 0.054 | 0.048 | 0.191 | 0.043 | 0.068 | 0.119 | 0.191 | 0.096 |
| NeRF [3] | 0.046 | 0.050 | 0.063 | 0.028 | 0.121 | 0.044 | 0.091 | 0.206 | 0.081 |
| MipNeRF [13] | 0.020 | 0.018 | 0.040 | 0.008 | 0.026 | 0.021 | 0.064 | 0.135 | 0.041 |
| ours | 0.022 | 0.019 | 0.073 | 0.040 | 0.019 | 0.029 | 0.053 | 0.097 | 0.044 |
PSNR

|  | SAT01 | SAT02 | SAT03 | SAT04 | SAT05 | SAT06 | SAT07 | SAT08 | avg |
|---|---|---|---|---|---|---|---|---|---|
| NeRF [3] | 28.60 | 29.99 | 29.79 | 28.82 | 29.95 | 31.35 | 31.09 | 29.91 | 29.94 |
| MipNeRF [13] | 26.09 | 23.96 | 23.48 | 22.76 | 24.29 | 25.94 | 23.57 | 25.23 | 24.41 |
| NeuS [15] | 28.54 | 29.89 | 31.02 | 29.54 | 28.67 | 29.99 | 28.93 | 32.04 | 29.83 |
| NeRF2Mesh [16] | 18.21 | 19.30 | 18.77 | 15.66 | 19.64 | 15.78 | 18.37 | 21.00 | 18.34 |
| Ours | 35.40 | 37.07 | 33.80 | 34.84 | 34.87 | 35.82 | 39.07 | 34.08 | 35.62 |

SSIM

|  | SAT01 | SAT02 | SAT03 | SAT04 | SAT05 | SAT06 | SAT07 | SAT08 | avg |
|---|---|---|---|---|---|---|---|---|---|
| NeRF [3] | 0.930 | 0.943 | 0.942 | 0.930 | 0.932 | 0.953 | 0.951 | 0.939 | 0.940 |
| MipNeRF [13] | 0.898 | 0.922 | 0.866 | 0.878 | 0.927 | 0.935 | 0.924 | 0.933 | 0.910 |
| NeuS [15] | 0.951 | 0.957 | 0.975 | 0.952 | 0.968 | 0.975 | 0.970 | 0.977 | 0.965 |
| NeRF2Mesh [16] | 0.706 | 0.799 | 0.807 | 0.662 | 0.809 | 0.746 | 0.806 | 0.833 | 0.771 |
| Ours | 0.968 | 0.981 | 0.965 | 0.966 | 0.981 | 0.977 | 0.983 | 0.982 | 0.976 |

LPIPS

|  | SAT01 | SAT02 | SAT03 | SAT04 | SAT05 | SAT06 | SAT07 | SAT08 | avg |
|---|---|---|---|---|---|---|---|---|---|
| NeRF [3] | 0.093 | 0.090 | 0.090 | 0.110 | 0.100 | 0.072 | 0.070 | 0.086 | 0.089 |
| MipNeRF [13] | 0.144 | 0.130 | 0.222 | 0.205 | 0.133 | 0.132 | 0.114 | 0.148 | 0.153 |
| NeuS [15] | 0.021 | 0.015 | 0.018 | 0.018 | 0.019 | 0.019 | 0.021 | 0.011 | 0.018 |
| NeRF2Mesh [16] | 0.383 | 0.338 | 0.282 | 0.434 | 0.353 | 0.421 | 0.343 | 0.255 | 0.350 |
| Ours | 0.044 | 0.032 | 0.058 | 0.057 | 0.030 | 0.046 | 0.031 | 0.033 | 0.041 |