TomoSAR 3D Reconstruction for Buildings Using Very Few Tracks of Observation: A Conditional Generative Adversarial Network Approach
"> Figure 1
<p>Diagram of TomoSAR result using very few tracks for building. The blue points represent the estimated scatterers using three tracks, and the orange line indicates the ideal surface. There will inevitably be large errors in elevation inversion, resulting in the fuzzy structure of the building and inaccuracy in estimation of height.</p> "> Figure 2
Figure 2. Flowchart of the proposed method, which is composed of two main modules. The data generation module explains the generation of the super-resolution dataset containing the paired low-quality and high-quality slice sets. The CGAN module illustrates the main components of the CGAN model and the data flow.
Figure 3. Diagram of the TomoSAR imaging geometry. TomoSAR obtains spatial resolution in the elevation direction by synthesizing an aperture from coherent antenna phase centers (APCs), represented as a series of black points.
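The elevation resolution achievable with the geometry sketched above follows the standard Rayleigh relation: with wavelength $\lambda$, slant range $r$, and maximal elevation aperture $\Delta b$,

$$\rho_s = \frac{\lambda r}{2\,\Delta b}$$

Plugging in the airborne parameters reported below ($\lambda = 2.1$ cm, $r = 1308$ m) gives $\rho_s \approx 23.4$ m for the all-track aperture ($\Delta b = 0.588$ m) versus $\rho_s \approx 81.8$ m for the three-track aperture ($\Delta b = 0.168$ m), a roughly 3.5-fold loss of resolution, which is why super-resolution sparse recovery and the proposed refinement matter for the few-track case.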
Figure 4. Flowchart of the TomoSAR imaging procedure. Firstly, image registration ensures that the azimuth-range cells in the different coherent SAR images correspond to the same scatterers. Secondly, channel imbalance calibration compensates for the phase errors among channels. Thirdly, sparse recovery methods invert the elevation positions of the scatterers. Finally, the coordinates are transformed from the radar system to the geodetic system.
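As a toy illustration of the elevation-inversion step, the numpy sketch below simulates one azimuth-range cell observed from eight tracks and recovers the scatterer elevation by matched-filter beamforming. The geometry values are loosely inspired by the airborne parameters listed later but are otherwise assumptions, and beamforming is a simpler stand-in for the sparse recovery methods the procedure actually uses.

```python
import numpy as np

# Hypothetical geometry, loosely inspired by the airborne parameters
# (wavelength 2.1 cm, slant range 1308 m, 8 tracks over a 0.588 m aperture).
wavelength = 0.021                         # m
r = 1308.0                                 # slant range, m
baselines = np.linspace(0.0, 0.588, 8)     # APC positions, m
s_grid = np.linspace(-40.0, 40.0, 401)     # elevation search grid, m

# Elevation spatial frequency of track n: xi_n = 2 * b_n / (lambda * r)
xi = 2.0 * baselines / (wavelength * r)

# Steering matrix A[n, k] = exp(j * 2*pi * xi_n * s_k)
A = np.exp(2j * np.pi * np.outer(xi, s_grid))

# Simulate one azimuth-range cell containing a single scatterer at s = 10 m
s_true = 10.0
g = np.exp(2j * np.pi * xi * s_true)

# Matched-filter beamforming: correlate the data with every steering vector
spectrum = np.abs(A.conj().T @ g)
s_hat = float(s_grid[np.argmax(spectrum)])  # estimated elevation
```

With the full aperture the spectrum peaks at the true elevation; shrinking `baselines` to three tracks widens the mainlobe, which is exactly the resolution loss the paper addresses.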
Figure 5. Flowchart of data generation. The final output is the paired super-resolution dataset composed of low-quality and high-quality slice sets. The low-quality set, generated from three tracks, contains low-SNR, low-resolution range-elevation 2D binary slices. In contrast, the high-quality set uses all tracks and has high SNR and resolution. The binary operation sets a cell to 1 if a scatterer is estimated at that position. All slices are in the radar coordinate system.
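The binary-slice step can be sketched as follows. The function name, the grid handling, and the edge clipping are illustrative assumptions, not the paper's code; the point is only that each estimated scatterer marks one cell of a range-elevation grid.

```python
import numpy as np

def binarize_slice(ranges, elevations, r_grid, s_grid):
    """Map estimated scatterer coordinates onto a range-elevation 2D
    binary slice: a cell is set to 1 if any scatterer falls into it."""
    slice_2d = np.zeros((len(r_grid), len(s_grid)), dtype=np.uint8)
    # Bin each coordinate into its grid cell (clipped at the grid edges).
    r_idx = np.clip(np.searchsorted(r_grid, ranges, side="right") - 1,
                    0, len(r_grid) - 1)
    s_idx = np.clip(np.searchsorted(s_grid, elevations, side="right") - 1,
                    0, len(s_grid) - 1)
    slice_2d[r_idx, s_idx] = 1
    return slice_2d

# Two scatterers on a 10 x 5 grid; the second is clipped to the last range bin.
m = binarize_slice(np.array([5.0, 12.0]), np.array([2.0, 3.0]),
                   np.arange(0.0, 10.0), np.arange(0.0, 5.0))
```

Running the same binarization on the three-track and all-track scatterer clouds yields the paired low-quality/high-quality slices the caption describes.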
Figure 6. Flowchart of the CGAN module. The CGAN consists of two models, a generator (G) and a discriminator (D). The generator produces a fake result as similar as possible to the ground truth, so that the discriminator believes the generation is real; the discriminator, in turn, tries to distinguish the fake result from the ground truth. After iterations, the generator is able to produce a refined result that is hard to tell apart from the corresponding ground truth. In addition, a content loss between the generated result and the ground truth is included to avoid position bias.
Figure 7. Network structure of the generator, which is composed of three main parts: downsampling compression, feature extraction, and upsampling reconstruction. The downsampling part compresses the data dimension and expands the feature dimension from 64 to 256; the feature dimension is indicated by the number on the left side of each block, such as n128. The feature extraction part consists of nine stacked ResNet-style blocks, which extract high-dimensional features from the data. The upsampling part decreases the feature dimension and reconstructs the data dimension with deconvolution (TransposedConv) layers.
Figure 8. Network structure of the discriminator. The network contains four convolutional layers that increase the feature dimension from 64 to 512, followed by a final layer that reduces it to 1, indicating the probability that a small area within the receptive field is real. In the first four layers, BatchNorm is inserted to accelerate convergence and LeakyReLU is used as the activation function.
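The size of the "small area within the receptive field" judged by such a PatchGAN-style discriminator can be computed by walking backward through the layers. The kernel sizes and strides below are the pix2pix defaults (4×4 kernels, stride 2 in the first three layers, stride 1 afterwards) and are an assumption; the paper's exact strides may differ.

```python
def receptive_field(layers):
    """Walk backward from a single output unit; each (kernel, stride)
    layer expands the field as r_prev = (r - 1) * stride + kernel."""
    r = 1
    for kernel, stride in reversed(layers):
        r = (r - 1) * stride + kernel
    return r

# C64-C128-C256-C512 backbone plus the final 1-channel verdict layer.
patch = receptive_field([(4, 2), (4, 2), (4, 2), (4, 1), (4, 1)])
```

Under these assumed strides each output unit judges a 70 × 70 patch of the input slice, which is what makes the discriminator sensitive to local structure rather than the whole image.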
Figure 9. Optical and intensity SAR images of the YunCheng data. Panel (a) is the optical image of the target scene, which includes nine buildings in total; each building is marked with a rectangle and numbered in its top-left corner. Panel (b) is the SAR image covering the same scene, with the buildings marked and numbered correspondingly. The two buildings marked with red rectangles at the bottom-right of the images are selected as the training set; the buildings marked with white rectangles form the testing set. Buildings #3, #4, and #5 strongly overlap in the SAR image, whereas buildings #6, #7, #8, and #9 do not. Slice 1 and slice 2 are selected at two azimuth positions to illustrate the results of the different methods on overlapped and nonoverlapped buildings.
Figure 10. 3D reconstruction results of all-track and three-track TomoSAR. Panel (a) is the 3D height distribution of the all-track TomoSAR scatterers; the buildings are easily distinguished and their structures are refined. Panel (b) is the normalized strength distribution of all-track TomoSAR; the dominant scatterers are mainly located on the building surfaces. Panel (c) is the height distribution of three-track TomoSAR; the building structures are fuzzy, and there are many artifacts and outliers. Panel (d) is the normalized strength distribution of three-track TomoSAR; compared with the all-track result, the distribution is worse, with many powerful artifacts and outliers that blur the height distribution of the buildings and degrade the reconstruction quality.
Figure 11. Results of nonoverlapped buildings #6, #7, #8, and #9 reconstructed by all-track and three-track TomoSAR at the slice 1 position. Panel (a) is the height map of all-track TomoSAR, with building identity numbers placed near the corresponding buildings. Panel (b) is the normalized strength map of all-track TomoSAR. Panel (c) is the height map and (d) the normalized strength map of three-track TomoSAR. The heights of the four buildings are estimated from the strength maps and indicated by orange lines at the tops of the buildings. Compared with the all-track results, the three-track results have more artifacts and outliers with stronger power; the structures become blurry and are affected by powerful multipath scattering (marked with red circles).
Figure 12. Results of nonoverlapped buildings #6, #7, #8, and #9 reconstructed by the CGAN and nonlocal methods using three tracks at slice 1. Panel (a) is the height map and (b) the normalized strength map of the CGAN; panel (c) is the height map and (d) the normalized strength map of the nonlocal algorithm. The nonlocal algorithm removes artifacts and outliers by increasing the SNR but is still affected by multipath scattering (marked with red circles). In contrast, the proposed CGAN method generates a higher-quality result by suppressing more artifacts and outliers; the multipath scattering is also well suppressed, so the structures are much clearer. The height estimates are labeled with orange lines.
Figure 13. Results of overlapped buildings #3 and #4 reconstructed using three and all tracks at the slice 2 position. Panel (a) is the height map of all-track TomoSAR, with building identity numbers placed near the corresponding buildings. Panel (b) is the normalized strength map of all-track TomoSAR. Panel (c) is the height map and (d) the normalized strength map of three-track TomoSAR. Buildings #3 and #4 overlap in the SAR image. In the three-track results, the two buildings cannot be distinguished from each other: the structure of building #3 is too blurry to tell apart from building #4, and its top is so hard to determine that its height cannot be estimated. The height estimates are labeled with orange lines.
Figure 14. Results of overlapped buildings reconstructed by the nonlocal algorithm and the proposed CGAN method. Panel (a) is the height map and (b) the normalized strength map of the nonlocal algorithm; panel (c) is the height map and (d) the normalized strength map of the proposed CGAN method. The nonlocal algorithm separates the two overlapped buildings with an obvious interval, but some artifacts and outliers remain. In contrast, the proposed method clearly separates the two buildings with a large interval, and the roofs of both buildings are distinct. The height of building #3 estimated from the height map of the proposed method is closer to the ground truth, whereas the nonlocal estimate deviates severely from the all-track result, probably due to multipath scattering. The height estimates are labeled with orange lines.
Figure 15. Comparison of the entire-scene 3D reconstruction between the nonlocal algorithm and the proposed CGAN method. Panel (a) is the reconstruction result of the nonlocal algorithm and panel (b) that of the proposed CGAN method. The CGAN result is of higher quality, with fewer artifacts and outliers; the building structures are much clearer and closer to the all-track TomoSAR results.
Figure 16. Optical and SAR intensity images of the spaceborne data. Panel (a) is the SAR intensity image of the building; the red line marks the slice selected to show details. Panel (b) is the corresponding optical image.
Figure 17. 3D views of the reconstruction by the CS method using all tracks and three tracks. Panel (a) is the 3D view of the reconstruction using three tracks, which contains many outliers. Panel (b) is the 3D view using all tracks; there are few outliers, and the surface of the building is clean and refined.
Figure 18. Reconstruction results of the CS method using three and all tracks at the position indicated by the red line in the SAR intensity image. Panel (a) is the height map and (b) the normalized strength map of three-track TomoSAR; the orange circles indicate outliers. Panel (c) is the height map and (d) the normalized strength map of all-track TomoSAR. The orange lines mark the height of the building. The three-track reconstruction contains many outliers.
Figure 19. Reconstruction results of the nonlocal algorithm and the proposed CGAN method. Panel (a) is the height map and (b) the normalized strength map of the nonlocal algorithm; panel (c) is the height map and (d) the normalized strength map of the proposed CGAN method. The nonlocal algorithm suppresses the outliers; the proposed CGAN method suppresses them further and generates a more refined surface. The orange lines mark the height of the building.
Figure 20. 3D views of the reconstruction results of the nonlocal algorithm and the proposed CGAN method, both using three tracks. Panel (a) is the 3D result of the nonlocal algorithm; many outliers remain, and the reconstructed surface is not smooth. Panel (b) is the 3D result of the proposed CGAN method; there are few outliers, and the building surfaces are clean and refined. The CGAN, trained only on the airborne dataset, processes the spaceborne dataset effectively and robustly without any tuning.
Abstract
1. Introduction
- (1) The conditional generative adversarial network (CGAN) is applied, for the first time, to generate high-quality TomoSAR 3D reconstructions of buildings from very few tracks by learning the high-dimensional features of architectural structures.
- (2) Instead of directly processing large 3D data, range-elevation 2D slices are processed, which reduces the network parameters and computational complexity and makes large-scale scenes tractable. To solve the problem of possible misalignment among generations, a content loss between the input and the generation is introduced, so that architectural structures are reconstructed at the correct positions.
- (3) Overlapping buildings in TomoSAR images appear fused together and are hard to tell apart. The proposed method distinguishes overlapped buildings correctly and estimates their heights. Compared with the widely used nonlocal algorithm, our method estimates building heights more accurately and with higher time efficiency.
2. Materials and Methods
- Low-quality slice set: the low-resolution, low-SNR range-elevation slices generated from three tracks by the TomoSAR procedure.
- High-quality slice set: the high-resolution, high-SNR range-elevation slices generated using all tracks.
2.1. Data Generation Module
2.1.1. TomoSAR Principle
2.1.2. TomoSAR Procedure
2.1.3. Data Generation
2.2. CGAN Module
2.2.1. Generator
2.2.2. Discriminator
2.2.3. Loss Function
3. Results and Discussion
3.1. Network Training
3.2. Airborne Dataset
3.2.1. Nonoverlapped Buildings
3.2.2. Overlapped Buildings
3.3. Spaceborne Dataset
4. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
Parameter | All-Track | Three-Track
---|---|---
Number of tracks | 8 | 3
Maximal elevation aperture | 0.588 m | 0.168 m
Distance from the scene center | 1308 m | 1308 m
Wavelength | 2.1 cm | 2.1 cm
Incidence angle at scene center | 58° | 58°
Parameter | Value
---|---
Search window size |
Patch size |
Iterations | 10
Posterior similarity coefficient (h) | 5.3
Sparse prior KL similarity |
Minimum number of similar blocks | 10
Building Index | All Tracks | 3 Tracks\Error | Nonlocal\Error | Proposed\Error |
---|---|---|---|---|
#3 | 75 | 89\14 | 90\15 | 75\0 |
#4 | 40 | Hard to recognize | 42\2 | 42\2 |
#6 | 81 | 82\1 | 82\1 | 81\0 |
#7 | 74 | 87\13 | 87\13 | 73\1 |
#8 | 90 | 90\0 | 91\1 | 90\0 |
#9 | 93 | 93\0 | 93\0 | 93\0 |
Method | Time Consumption (s) |
---|---|
Nonlocal (10 iterations) | 14,492 |
Proposed CGAN | 10 |
Parameter | All-Track | Three-Track
---|---|---
Number of tracks | 19 | 3
Maximal elevation aperture | 215 m | 42 m
Distance from the scene center | 617 km | 617 km
Wavelength | 3.1 cm | 3.1 cm
Incidence angle at scene center | 66° | 66°
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Wang, S.; Guo, J.; Zhang, Y.; Hu, Y.; Ding, C.; Wu, Y. TomoSAR 3D Reconstruction for Buildings Using Very Few Tracks of Observation: A Conditional Generative Adversarial Network Approach. Remote Sens. 2021, 13, 5055. https://doi.org/10.3390/rs13245055