PolSAR Image Building Extraction with G0 Statistical Texture Using Convolutional Neural Network and Superpixel
Figure 1. Flow chart of building extraction based on CNN and superpixels in the PolSAR image.
Figure 2. The SAR and optical images and the ground-truth maps. (a1,b1,c1) are the PauliRGB images from E-SAR, GF-3, and RADARSAT-2, respectively; (a2,b2,c2) are the corresponding optical images; (a3,b3,c3) are the ground-truth maps, in which red denotes building and green denotes non-building.
Figure 3. The convolutional neural network structure for building extraction.
Figure 4. E-SAR image building extraction results. (a) Quan's threshold extraction method; (b) SVM with PauliRGB and G0 statistical texture parameters; (c) RVCNN; (d) PFDCNN; (e) CNN with PauliRGB and G0 statistical texture parameters; (f) the result of (e) after introducing superpixel constraints.
Figure 5. GF-3 image building extraction results. (a) Quan's threshold extraction method; (b) SVM with PauliRGB and G0 statistical texture parameters; (c) RVCNN; (d) PFDCNN; (e) CNN with PauliRGB and G0 statistical texture parameters; (f) the result of (e) after introducing superpixel constraints.
Figure 6. RADARSAT-2 image building extraction results. (a) Quan's threshold extraction method; (b) SVM with PauliRGB and G0 statistical texture parameters; (c) RVCNN; (d) PFDCNN; (e) CNN with PauliRGB and G0 statistical texture parameters; (f) the result of (e) after introducing superpixel constraints.
Figure 7. Buildings and building-like features in the PolSAR images. Groups (A–C) are from E-SAR, Group (D) is from GF-3, and Group (E) is from RADARSAT-2. (a) Optical image; (b) PauliRGB; (c) mask of the real buildings overlaid on PauliRGB (red area); (d) G0 statistical texture parameter; (e) classification obtained with only PauliRGB as CNN input; (f) classification obtained by adding the G0 statistical texture parameter to PauliRGB as CNN input. In (e,f), red denotes building and green denotes non-building.
Figure 8. Comparison of feature sets before and after adding the G0 texture parameter for the three data sets. (a,b) E-SAR; (c,d) GF-3; (e,f) RADARSAT-2. (a,c,e) are the results with only PauliRGB as input; (b,d,f) are the results with PauliRGB plus the G0 texture parameter as input.
Figure 9. Comparison of the MLP and CNN methods. (a,b) E-SAR; (c,d) GF-3; (e,f) RADARSAT-2. (a,c,e) are the MLP results; (b,d,f) are the CNN results.
Figure 10. Comparison of results before and after superpixel constraints for the three data sets. (a,b) E-SAR; (c,d) GF-3; (e,f) RADARSAT-2. (a,c,e) are results without superpixel constraints; (b,d,f) are results with superpixel constraints.
Figure 11. Sample selection and building extraction results. (a) Samples taken from the ground-truth map; (b) building extraction result using the samples in (a); (c) samples including both positive and negative samples (dark green); (d) building extraction result using the samples in (c).
Figure 12. Accuracy of building extraction results under different image block sizes.
Figure 13. Accuracy of building extraction results when the building pixels used for training account for different percentages of all building pixels.
Abstract
1. Introduction
2. Method
2.1. Building Feature Set Extraction from SAR Image
2.1.1. PauliRGB
2.1.2. G0 Statistical Texture Parameter
2.2. Preliminary Building Extraction by CNN
2.2.1. Convolution Layer
2.2.2. Pooling Layer
2.2.3. Fully Connected Layer
2.3. SLIC Superpixel Generation and Superpixel Constraint
2.3.1. SLIC Superpixel Generation
- Generate center seed points: First, we generate a PauliRGB gradient image. Second, seed points are selected as the initial superpixel centers by sampling at a step of S pixels. Finally, each seed point is moved to the lowest-gradient position within its local S ∗ S neighborhood;
- Local K-means: First, for each superpixel center, the distance from every pixel within the 2S ∗ 2S neighborhood of that center is calculated, and each pixel is assigned to its nearest center. Restricting the search scope to 2S ∗ 2S, rather than the whole image, is what makes SLIC converge quickly. Assume that pixel i at (x_i, y_i) and pixel j at (x_j, y_j) have Pauli decomposition feature vectors (R_i, G_i, B_i) and (R_j, G_j, B_j). The spatial distance d_s, the Pauli distance d_p, and the combined distance d are defined (following the standard SLIC formulation, with m a compactness weight balancing the two terms) as: d_s = √[(x_i − x_j)² + (y_i − y_j)²], d_p = √[(R_i − R_j)² + (G_i − G_j)² + (B_i − B_j)²], and d = √[d_p² + (d_s/S)² ∗ m²]. After the assignment, the center of each superpixel is updated, and these steps are repeated until convergence or until the maximum number of iterations is reached; superpixels of approximately S ∗ S pixels result;
- Post-cluster processing: The superpixels with less than a certain number of pixels are merged into the nearest superpixel to obtain the final PolSAR superpixel image.
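The three steps above can be sketched as a minimal SLIC implementation in Python/NumPy. This is a simplified illustration, not the authors' code: the image is assumed to be an H ∗ W ∗ 3 PauliRGB array, the compactness weight m and iteration count are illustrative defaults, and the gradient-based seed adjustment and post-cluster merging of small superpixels are omitted for brevity.

```python
import numpy as np

def slic_superpixels(img, step, m=10.0, n_iter=10):
    """Minimal SLIC sketch for a PauliRGB image (H x W x 3 float array).

    The pixel-to-center distance combines the Pauli (feature) distance d_p
    and the spatial distance d_s as d = sqrt(d_p^2 + (d_s / step)^2 * m^2).
    """
    h, w, _ = img.shape

    # Step 1: seed cluster centers on a regular grid with spacing `step`
    # (the gradient-based seed adjustment is omitted in this sketch).
    ys = np.arange(step // 2, h, step)
    xs = np.arange(step // 2, w, step)
    centers = np.array([[y, x, *img[y, x]] for y in ys for x in xs], dtype=float)

    labels = -np.ones((h, w), dtype=int)
    dists = np.full((h, w), np.inf)

    for _ in range(n_iter):
        # Step 2: local k-means — search only a 2S x 2S window per center.
        for k, (cy, cx, *cc) in enumerate(centers):
            y0, y1 = max(int(cy) - step, 0), min(int(cy) + step + 1, h)
            x0, x1 = max(int(cx) - step, 0), min(int(cx) + step + 1, w)
            patch = img[y0:y1, x0:x1]
            yy, xx = np.mgrid[y0:y1, x0:x1]
            d_p = np.sqrt(((patch - np.array(cc)) ** 2).sum(axis=2))  # Pauli distance
            d_s = np.sqrt((yy - cy) ** 2 + (xx - cx) ** 2)            # spatial distance
            d = np.sqrt(d_p ** 2 + (d_s / step) ** 2 * m ** 2)
            better = d < dists[y0:y1, x0:x1]
            dists[y0:y1, x0:x1][better] = d[better]
            labels[y0:y1, x0:x1][better] = k
        # Update each center to the mean position and mean feature
        # of the pixels currently assigned to it.
        for k in range(len(centers)):
            yk, xk = np.nonzero(labels == k)
            if len(yk):
                centers[k, :2] = yk.mean(), xk.mean()
                centers[k, 2:] = img[yk, xk].mean(axis=0)
        dists.fill(np.inf)  # reassign from scratch in the next iteration

    return labels
```

On a small synthetic image split into two homogeneous halves, the labels separate the halves because the Pauli term dominates the combined distance when the feature contrast is large relative to m.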
2.3.2. Superpixel Constraint
3. Experiment and Results
3.1. Study Area and Data Set
3.2. Sample Construction and Network Parameters
3.3. Building Extraction Results and Analysis
3.3.1. ESAR
3.3.2. GF-3
3.3.3. RADARSAT-2
4. Discussion
4.1. Method Characteristic Analysis
4.1.1. Combination of Polarization and Statistical Features
4.1.2. CNN’s Use of Spatial Information
4.1.3. Effect Analysis of Superpixel
4.2. Parameter Impact Analysis
4.2.1. Discussion of Sample Selection
4.2.2. Different Image Block Sizes on Building Extraction
4.2.3. The Size of Different Training Samples on Building Extraction
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Du, P.; Samat, A.; Waske, B.; Liu, S.; Li, Z. Random forest and rotation forest for fully polarized SAR image classification using polarimetric and spatial features. ISPRS J. Photogramm. Remote Sens. 2015, 105, 38–53. [Google Scholar] [CrossRef]
- Li, X.; Guo, H.; Zhang, L.; Chen, X.; Liang, L. A new approach to collapsed building extraction using RADARSAT-2 polarimetric SAR imagery. IEEE Geosci. Remote Sens. Lett. 2012, 9, 677–681. [Google Scholar]
- Xiang, D.; Tang, T.; Ban, Y.; Su, Y.; Kuang, G. Unsupervised polarimetric SAR urban area classification based on model-based decomposition with cross scattering. ISPRS J. Photogramm. Remote Sens. 2016, 116, 86–100. [Google Scholar] [CrossRef]
- Niu, X.; Ban, Y. Multi-temporal RADARSAT-2 polarimetric SAR data for urban land-cover classification using an object-based support vector machine and a rule-based approach. Int. J. Remote Sens. 2013, 34, 1–26. [Google Scholar] [CrossRef]
- Xiang, D.; Ban, Y.; Su, Y. Model-Based Decomposition With Cross Scattering for Polarimetric SAR Urban Areas. IEEE Geosci. Remote Sens. Lett. 2015, 12, 2496–2500. [Google Scholar] [CrossRef]
- Lee, J.S.; Grunes, M.R.; Kwok, R. Classification of multi-look polarimetric SAR imagery based on complex Wishart distribution. Int. J. Remote Sens. 1994, 15, 2299–2311. [Google Scholar] [CrossRef]
- Deng, L.; Wang, C. Improved building extraction with integrated decomposition of time-frequency and entropy-alpha using polarimetric SAR data. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2013, 7, 4058–4068. [Google Scholar] [CrossRef]
- Xu, Q.; Chen, Q.; Yang, S.; Liu, X. Superpixel-Based Classification Using K Distribution and Spatial Context for Polarimetric SAR Images. Remote Sens. 2016, 8, 619. [Google Scholar] [CrossRef] [Green Version]
- Chen, Q.; Yang, H.; Li, L.; Liu, X. A Novel Statistical Texture Feature for SAR Building Damage Assessment in Different Polarization Modes. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2019, 13, 154–165. [Google Scholar] [CrossRef]
- Ping, J.; Liu, X.; Chen, Q.; Shao, F. A Multi-scale SVM-CRF Model for Buildings Extraction from Polarimetric SAR Images. Remote Sens. Technol. Appl. 2017, 32, 475–482. [Google Scholar]
- Zhai, W.; Shen, H.; Huang, C.; Pei, W. Fusion of polarimetric and texture information for urban building extraction from fully polarimetric SAR imagery. Remote Sens. Lett. 2016, 7, 31–40. [Google Scholar] [CrossRef]
- Quan, S.; Xiong, B.; Xiang, D.; Zhao, L.; Zhang, S.; Kuang, G. Eigenvalue-Based Urban Area Extraction Using Polarimetric SAR Data. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 458–471. [Google Scholar] [CrossRef]
- Pellizzeri, T.M. Classification of polarimetric SAR images of suburban areas using joint annealed segmentation and “H/A/α” polarimetric decomposition. ISPRS-J. Photogramm. Remote Sens. 2003, 58, 55–70. [Google Scholar] [CrossRef]
- Yan, L.; Zhang, J.; Huang, G.; Zhao, Z. Building Footprints Extraction from PolSAR Image Using Multi-Features and Edge Information. In Proceedings of the 2011 International Symposium on Image and Data Fusion, Tengchong, China, 9–11 August 2011. [Google Scholar]
- Deng, L.; Yan, Y.; Sun, C. Use of Sub-Aperture Decomposition for Supervised PolSAR Classification in Urban Area. Remote Sens. 2015, 7, 1380–1396. [Google Scholar] [CrossRef] [Green Version]
- Wurm, M.; Taubenböck, H.; Weigand, M.; Schmitt, A. Slum mapping in polarimetric SAR data using spatial features. Remote Sens. Environ. 2017, 194, 190–204. [Google Scholar] [CrossRef]
- De, S.; Bruzzone, L.; Bhattacharya, A.; Bovolo, F.; Chaudhuri, S. A Novel Technique Based on Deep Learning and a Synthetic Target Database for Classification of Urban Areas in PolSAR Data. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 154–170. [Google Scholar] [CrossRef]
- Bi, H.; Xu, F.; Wei, Z.; Xue, Y.; Xu, Z. An Active Deep Learning Approach for Minimally Supervised PolSAR Image Classification. IEEE Trans. Geosci. Remote Sens. 2019, 57, 9378–9395. [Google Scholar] [CrossRef]
- Lecun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436. [Google Scholar] [CrossRef]
- Zhou, Y.; Wang, H.; Xu, F.; Jin, Y. Polarimetric SAR Image Classification Using Deep Convolutional Neural Networks. IEEE Geosci. Remote Sens. Lett. 2016, 13, 1935–1939. [Google Scholar] [CrossRef]
- Chen, S.W.; Tao, C.S. PolSAR Image Classification Using Polarimetric-Feature-Driven Deep Convolutional Neural Network. IEEE Geosci. Remote Sens. Lett. 2018, 15, 627–631. [Google Scholar] [CrossRef]
- Wang, M.; Liu, X.; Gao, Y.; Ma, X.; Soomro, N.Q. Superpixel segmentation: A benchmark. Signal Process. Image Commun. 2017, 56, 28–39. [Google Scholar] [CrossRef]
- Chen, Q.; Cao, W.; Shang, J.; Liu, J.; Liu, X. Superpixel-Based Cropland Classification of SAR Image With Statistical Texture and Polarization Features. IEEE Geosci. Remote Sens. Lett. 2022, 19, 1–5. [Google Scholar] [CrossRef]
- Lin, X.; Wang, W.; Yang, E. Urban construction area extraction using circular polarimetric correlation coefficient. In Proceedings of the 2013 IEEE International Conference on Imaging Systems and Techniques (IST), Beijing, China, 22–23 October 2013; pp. 359–362. [Google Scholar]
- Gadhiya, T.; Roy, A.K. Superpixel-Driven Optimized Wishart Network for Fast PolSAR Image Classification Using Global k-Means Algorithm. IEEE Trans. Geosci. Remote Sens. 2020, 58, 97–109. [Google Scholar] [CrossRef]
- Zhang, X.; Xia, J.; Tan, X.; Zhou, X.; Wang, T. PolSAR Image Classification via Learned Superpixels and QCNN Integrating Color Features. Remote Sens. 2019, 11, 1831. [Google Scholar] [CrossRef] [Green Version]
- Krogager, E.; Boerner, W.M.; Madsen, S. Feature-motivated Sinclair matrix (sphere/diplane/helix) decomposition and its application to target sorting for land feature classification. In Proceedings of the SPIE Conference on Wideband Interferometric Sensing and Imaging Polarimetry, San Diego, CA, USA, 28–29 July 1997. [Google Scholar]
- Freitas, C.C.; Frery, A.C.; Correia, A.H. The polarimetric G distribution for SAR data analysis. Environmetrics 2005, 16, 13–31. [Google Scholar] [CrossRef]
- Cloude, S.R.; Pottier, E. A review of target decomposition theorems in radar polarimetry. IEEE Trans. Geosci. Remote Sens. 1996, 34, 498–518. [Google Scholar] [CrossRef]
- Miller, R. Probability, Random Variables, and Stochastic Processes by Athanasios Papoulis. Technometrics 1966, 8, 378–380. [Google Scholar] [CrossRef]
- Beaulieu, J.M.; Touzi, R. Segmentation of textured polarimetric SAR scenes by likelihood approximation. IEEE Trans. Geosci. Remote Sens. 2004, 42, 2063–2072. [Google Scholar] [CrossRef]
- Khan, S.; Guida, R. On fractional moments of multilook polarimetric whitening filter for polarimetric SAR data. IEEE Trans. Geosci. Remote Sens. 2013, 52, 3502–3512. [Google Scholar] [CrossRef] [Green Version]
- Doulgeris, A.P.; Anfinsen, S.N.; Eltoft, T. Classification with a Non-Gaussian Model for PolSAR Data. IEEE Trans. Geosci. Remote Sens. 2008, 46, 2999–3009. [Google Scholar] [CrossRef] [Green Version]
- Schmidhuber, J. Deep Learning in Neural Networks: An Overview. Neural Netw. 2015, 61, 85–117. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Arel, I.; Rose, D.C.; Karnowski, T.P. Deep Machine Learning—A New Frontier in Artificial Intelligence Research [Research Frontier]. IEEE Comput. Intell. Mag. 2010, 5, 13–18. [Google Scholar] [CrossRef]
- Strigl, D.; Kofler, K.; Podlipnig, S. Performance and scalability of GPU-based convolutional neural networks. In Proceedings of the 2010 18th Euromicro Conference on Parallel, Distributed and Network-Based Processing, Pisa, Italy, 17–19 February 2010. [Google Scholar]
- Mnih, V.; Hinton, G.E. Learning to detect roads in high-resolution aerial images. In Proceedings of the 11th European Conference on Computer Vision, Heraklion, Greece, 5–11 September 2010; pp. 210–223. [Google Scholar]
- Zhao, W.; Du, S. Learning multiscale and deep representations for classifying remotely sensed imagery. ISPRS-J. Photogramm. Remote Sens. 2016, 113, 155–165. [Google Scholar] [CrossRef]
- Bengio, Y. Learning Deep Architectures for AI. Found. Trends Mach. Learn. 2009, 2, 1–127. [Google Scholar] [CrossRef]
- Kavzoglu, T.; Colkesen, I. A kernel functions analysis for support vector machines for land cover classification. Int. J. Appl. Earth Obs. Geoinf. 2009, 11, 352–359. [Google Scholar] [CrossRef]
- Powers, D.M. Evaluation: From precision, recall and F-measure to ROC, informedness, markedness and correlation. arXiv 2020, arXiv:2010.16061. [Google Scholar]
- Zhang, C.; Pan, X.; Li, H.; Gardiner, A.; Sargent, I.; Hare, J.; Atkinson, P.M. A hybrid MLP-CNN classifier for very fine resolution remotely sensed image classification. ISPRS-J. Photogramm. Remote Sens. 2018, 140, 133–144. [Google Scholar] [CrossRef] [Green Version]
E-SAR image building extraction results:

| Feature | Method | AR (%) | FAR (%) | F1-Score (%) |
|---|---|---|---|---|
| Eigenvalue | Threshold | 22.20 | 39.23 | 32.52 |
| PauliRGB + G0 | SVM | 61.85 | 58.15 | 49.92 |
| 6D-Vector [38] | CNN | 85.18 | 27.89 | 78.10 |
| Polarimetric Features [39] | CNN | 85.64 | 45.54 | 66.58 |
| PauliRGB + G0 | CNN | 88.05 | 25.01 | 80.99 |
| PauliRGB + G0 | CNN + Superpixel | 86.14 | 17.61 | 84.22 |
GF-3 image building extraction results:

| Feature | Method | AR (%) | FAR (%) | F1-Score (%) |
|---|---|---|---|---|
| Eigenvalue | Threshold | 67.02 | 41.35 | 62.55 |
| PauliRGB + G0 | SVM | 78.95 | 41.63 | 67.11 |
| 6D-Vector [38] | CNN | 94.69 | 24.91 | 83.75 |
| Polarimetric Features [39] | CNN | 95.33 | 24.09 | 84.51 |
| PauliRGB + G0 | CNN | 95.56 | 15.45 | 89.71 |
| PauliRGB + G0 | CNN + Superpixel | 94.97 | 12.20 | 91.24 |
RADARSAT-2 image building extraction results:

| Feature | Method | AR (%) | FAR (%) | F1-Score (%) |
|---|---|---|---|---|
| Eigenvalue | Threshold | 64.11 | 25.13 | 69.07 |
| PauliRGB + G0 | SVM | 84.03 | 45.06 | 66.44 |
| 6D-Vector [38] | CNN | 93.62 | 29.99 | 80.11 |
| Polarimetric Features [39] | CNN | 94.29 | 30.82 | 79.80 |
| PauliRGB + G0 | CNN | 94.37 | 21.76 | 85.55 |
| PauliRGB + G0 | CNN + Superpixel | 93.64 | 17.89 | 87.49 |
Comparison of feature sets before and after adding the G0 texture parameter:

| Data | Feature | AR (%) | FAR (%) | F1-Score (%) |
|---|---|---|---|---|
| E-SAR | PauliRGB | 80.05 | 30.49 | 74.41 |
| E-SAR | PauliRGB + G0 | 88.05 | 25.01 | 80.99 |
| GF-3 | PauliRGB | 93.29 | 20.54 | 85.82 |
| GF-3 | PauliRGB + G0 | 95.56 | 15.45 | 89.71 |
| RADARSAT-2 | PauliRGB | 93.99 | 29.40 | 80.63 |
| RADARSAT-2 | PauliRGB + G0 | 94.37 | 21.76 | 85.55 |
Comparison of the MLP and CNN methods:

| Data | Method | AR (%) | FAR (%) | F1-Score (%) |
|---|---|---|---|---|
| E-SAR | MLP | 75.36 | 57.67 | 54.21 |
| E-SAR | CNN | 88.05 | 25.01 | 80.99 |
| GF-3 | MLP | 81.37 | 40.94 | 68.44 |
| GF-3 | CNN | 95.56 | 15.45 | 89.71 |
| RADARSAT-2 | MLP | 79.93 | 39.63 | 68.78 |
| RADARSAT-2 | CNN | 94.37 | 21.76 | 85.55 |
Comparison of results before and after superpixel constraints (values aligned with the per-dataset result tables above):

| Data | Constraint | AR (%) | FAR (%) | F1-Score (%) |
|---|---|---|---|---|
| E-SAR | Without superpixel | 88.05 | 25.01 | 80.99 |
| E-SAR | With superpixel | 86.14 | 17.61 | 84.22 |
| GF-3 | Without superpixel | 95.56 | 15.45 | 89.71 |
| GF-3 | With superpixel | 94.97 | 12.20 | 91.24 |
| RADARSAT-2 | Without superpixel | 94.37 | 21.76 | 85.55 |
| RADARSAT-2 | With superpixel | 93.64 | 17.89 | 87.49 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Li, M.; Shen, Q.; Xiao, Y.; Liu, X.; Chen, Q. PolSAR Image Building Extraction with G0 Statistical Texture Using Convolutional Neural Network and Superpixel. Remote Sens. 2023, 15, 1451. https://doi.org/10.3390/rs15051451