PCBA-Net: Pyramidal Convolutional Block Attention Network for Synthetic Aperture Radar Image Change Detection
Figure 1. Generation of pseudo-labels of multi-temporal SAR images.
Figure 2. The architecture of the proposed pyramidal convolutional block attention network (PCBA-Net).
Figure 3. Sketch of the PCBA architecture.
Figure 4. Grouped convolution. C_in, C_out, and g denote the number of input feature maps, the number of output feature maps, and the number of groups, respectively; g = 2 in the example.
Figure 5. Ottawa dataset. (a) Image acquired in July 1997. (b) Image acquired in August 1997. (c) Ground-truth map.
Figure 6. San Francisco dataset. (a) Image acquired in August 2003. (b) Image acquired in May 2004. (c) Ground-truth map.
Figure 7. Sulzberger dataset. (a) Image acquired on 11 March 2011. (b) Image acquired on 16 March 2011. (c) Ground-truth map.
Figure 8. Yellow River A dataset. (a) Image acquired in June 2008. (b) Image acquired in June 2009. (c) Ground-truth map.
Figure 9. Yellow River B dataset. (a) Image acquired in June 2008. (b) Image acquired in June 2009. (c) Ground-truth map.
Figure 10. Yellow River C dataset. (a) Image acquired in June 2008. (b) Image acquired in June 2009. (c) Ground-truth map.
Figure 11. Visualization of comparative experimental results for the Ottawa dataset. (a) Ground-truth image. (b) Result of PCANet. (c) Result of CNN. (d) Result of CWNN. (e) Result of DDNet. (f) Result of SAFNet. (g) Result of the presented PCBA-Net.
Figure 12. Visualization of comparative experiments for the San Francisco dataset. (a) Ground-truth image. (b) Result of PCANet. (c) Result of CNN. (d) Result of CWNN. (e) Result of DDNet. (f) Result of SAFNet. (g) Result of the presented PCBA-Net.
Figure 13. Visualization of comparative experiments for the Sulzberger dataset. (a) Ground-truth image. (b) Result of PCANet. (c) Result of CNN. (d) Result of CWNN. (e) Result of DDNet. (f) Result of SAFNet. (g) Result of the presented PCBA-Net.
Figure 14. Visualization of comparative experiments for the Yellow River A dataset. (a) Ground-truth image. (b) Result of PCANet. (c) Result of CNN. (d) Result of CWNN. (e) Result of DDNet. (f) Result of SAFNet. (g) Result of the presented PCBA-Net.
Figure 15. Visualization of comparative experiments for the Yellow River B dataset. (a) Ground-truth image. (b) Result of PCANet. (c) Result of CNN. (d) Result of CWNN. (e) Result of DDNet. (f) Result of SAFNet. (g) Result of the presented PCBA-Net.
Figure 16. Visualization of comparative experiments for the Yellow River C dataset. (a) Ground-truth image. (b) Result of PCANet. (c) Result of CNN. (d) Result of CWNN. (e) Result of DDNet. (f) Result of SAFNet. (g) Result of the presented PCBA-Net.
Figure 17. PCC values with different numbers of PCBA blocks.
Figure 18. PCC values with different values of r.
Abstract
1. Introduction
2. Methodology
2.1. Reliable Samples Generation
- (1) Use the FCM algorithm to cluster the difference image into two clusters: a changed cluster and an unchanged cluster. The number of pixels in the changed cluster is recorded and serves as the upper bound of the changed class.
- (2) Use the FCM algorithm to cluster the difference image into five clusters, sorted in descending order by their mean values; a cluster with a larger mean value has a higher probability of being changed, and vice versa. The numbers of pixels in the five clusters are recorded, and the pixels in the cluster with the largest mean value are assigned to the changed class.
- (3) Set the cluster index to the next cluster in descending order of mean value (starting from the second cluster).
- (4) If the accumulated number of assigned pixels does not exceed the upper bound of the changed class, assign the pixels of the current cluster to the intermediate class; otherwise, assign them to the unchanged class. Return to step (3) and repeat until all five clusters have been processed. A minimal sketch of this pre-classification procedure is given after this list.
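The following Python sketch illustrates this pre-classification procedure. It is only a minimal illustration: the FCM routine is a bare-bones NumPy implementation, the helper names (`fcm`, `preclassify`) are ours, and the rule that admits a cluster to the intermediate class by comparing the accumulated pixel count with the upper bound is our reading of the steps above, not the exact parameterization used in the paper.

```python
import numpy as np

def fcm(values, n_clusters, m=2.0, n_iter=100, seed=0):
    """Bare-bones fuzzy c-means on a 1-D array of difference-image values."""
    rng = np.random.default_rng(seed)
    x = values.reshape(-1, 1).astype(np.float64)           # (N, 1)
    u = rng.random((len(x), n_clusters))
    u /= u.sum(axis=1, keepdims=True)                      # random initial memberships
    for _ in range(n_iter):
        um = u ** m
        centers = (um.T @ x) / um.sum(axis=0)[:, None]     # (C, 1) cluster means
        d = np.abs(x - centers.T) + 1e-12                  # (N, C) distances
        u = 1.0 / d ** (2.0 / (m - 1.0))
        u /= u.sum(axis=1, keepdims=True)                  # membership update
    return u.argmax(axis=1), centers.ravel()

def preclassify(di):
    """Hierarchical FCM pre-classification of a difference image `di`.

    Returns a map with 1 = changed, 0.5 = intermediate, 0 = unchanged.
    """
    flat = di.ravel()

    # Step (1): two clusters; the changed-cluster size bounds the changed class.
    labels2, centers2 = fcm(flat, 2)
    changed_id = int(np.argmax(centers2))                  # larger mean -> changed
    upper_bound = int(np.sum(labels2 == changed_id))

    # Step (2): five clusters, ordered by descending mean value.
    labels5, centers5 = fcm(flat, 5)
    order = np.argsort(centers5)[::-1]

    out = np.zeros(flat.shape)                             # default: unchanged
    out[labels5 == order[0]] = 1.0                         # largest mean -> changed
    assigned = int(np.sum(labels5 == order[0]))

    # Steps (3)-(4): assumed rule - keep adding clusters to the intermediate
    # class while the accumulated pixel count stays within the upper bound.
    for cid in order[1:]:
        size = int(np.sum(labels5 == cid))
        if assigned + size <= upper_bound:
            out[labels5 == cid] = 0.5                      # intermediate class
            assigned += size

    return out.reshape(di.shape)

# Usage: labels = preclassify(di), where di is a 2-D (e.g., log-ratio) difference image.
```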
2.2. Overview of Pyramidal Convolutional Block Attention Network
2.3. Pyramidal Convolutional Block Attention Module
2.3.1. The PyConv Block
2.3.2. Model Parameters and Floating-Point Operations (FLOPs) of PyConv
2.3.3. Convolutional Block Attention Module
3. Experimental Results
3.1. Dataset
3.2. Evaluation Criteria
3.3. Experimental Setup
3.4. Experimental Results and Comparison
3.4.1. Results for the Ottawa Dataset
3.4.2. Results for the San Francisco Dataset
3.4.3. Results for the Sulzberger Dataset
3.4.4. Results for the Yellow River Datasets
4. Discussion
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Radke, R.J.; Andra, S.; Al-Kofahi, O.; Roysam, B. Image change detection algorithms: A systematic survey. IEEE Trans. Image Process. 2005, 14, 294–307.
- Gong, M.; Yang, H.; Zhang, P. Feature learning and change feature classification based on deep learning for ternary change detection in SAR images. ISPRS J. Photogramm. Remote Sens. 2017, 129, 212–225.
- Zhang, H.; Liu, W.; Shi, J.; Fei, T.; Zong, B. Joint detection threshold optimization and illumination time allocation strategy for cognitive tracking in a networked radar system. IEEE Trans. Signal Process. 2022.
- Liang, X.; Chen, B.; Chen, W.; Wang, P.; Liu, H. Unsupervised radar target detection under complex clutter background based on mixture variational autoencoder. Remote Sens. 2022, 14, 4449.
- Brunner, D.; Lemoine, G.; Bruzzone, L. Earthquake damage assessment of buildings using VHR optical and SAR imagery. IEEE Trans. Geosci. Remote Sens. 2010, 48, 2403–2420.
- Yousif, O.; Ban, Y. Improving SAR-based urban change detection by combining MAP-MRF classifier and nonlocal means similarity weights. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 7, 4288–4300.
- Lunetta, R.S.; Knight, J.F.; Ediriwickrema, J.; Lyon, J.G.; Worthy, L.D. Land-cover change detection using multi-temporal MODIS NDVI data. Remote Sens. Environ. 2006, 105, 142–154.
- Pantze, A.; Santoro, M.; Fransson, J.E. Change detection of boreal forest using bi-temporal ALOS PALSAR backscatter data. Remote Sens. Environ. 2014, 155, 120–128.
- Li, Y.; Gong, M.; Jiao, L.; Li, L.; Stolkin, R. Change-detection map learning using matching pursuit. IEEE Trans. Geosci. Remote Sens. 2015, 53, 4712–4723.
- Yuan, Y.; Lv, H.; Lu, X. Semi-supervised change detection method for multi-temporal hyperspectral images. Neurocomputing 2015, 148, 363–375.
- Moser, G.; Serpico, S.B. Unsupervised change detection from multichannel SAR data by Markovian data fusion. IEEE Trans. Geosci. Remote Sens. 2009, 47, 2114–2128.
- Bruzzone, L.; Prieto, D.F. An adaptive semiparametric and context-based approach to unsupervised change detection in multitemporal remote-sensing images. IEEE Trans. Image Process. 2002, 11, 452–466.
- Li, Y.; Peng, C.; Chen, Y.; Jiao, L.; Zhou, L.; Shang, R. A deep learning method for change detection in synthetic aperture radar images. IEEE Trans. Geosci. Remote Sens. 2019, 57, 5751–5763.
- Qu, X.; Gao, F.; Dong, J.; Du, Q.; Li, H.-C. Change detection in synthetic aperture radar images using a dual-domain network. IEEE Geosci. Remote Sens. Lett. 2021, 19, 1–5.
- Moser, G.; Serpico, S.B. Generalized minimum-error thresholding for unsupervised change detection from SAR amplitude imagery. IEEE Trans. Geosci. Remote Sens. 2006, 44, 2972–2982.
- Bazi, Y.; Bruzzone, L.; Melgani, F. Automatic identification of the number and values of decision thresholds in the log-ratio image for change detection in SAR images. IEEE Geosci. Remote Sens. Lett. 2006, 3, 349–353.
- Inglada, J.; Mercier, G. A new statistical similarity measure for change detection in multitemporal SAR images and its extension to multiscale change analysis. IEEE Trans. Geosci. Remote Sens. 2007, 45, 1432–1445.
- Hou, B.; Wei, Q.; Zheng, Y.; Wang, S. Unsupervised change detection in SAR image based on Gauss-log ratio image fusion and compressed projection. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 7, 3297–3317.
- Ma, J.; Gong, M.; Zhou, Z. Wavelet fusion on ratio images for change detection in SAR images. IEEE Geosci. Remote Sens. Lett. 2012, 9, 1122–1126.
- Zheng, Y.; Jiao, L.; Liu, H.; Zhang, X.; Hou, B.; Wang, S. Unsupervised saliency-guided SAR image change detection. Pattern Recognit. 2017, 61, 309–326.
- Zhuang, H.; Hao, M.; Deng, K.; Zhang, K.; Wang, X.; Yao, G. Change detection in SAR images via ratio-based Gaussian kernel and nonlocal theory. IEEE Trans. Geosci. Remote Sens. 2021, 60, 1–15.
- Sun, Y.; Lei, L.; Li, X.; Tan, X.; Kuang, G. Structure consistency-based graph for unsupervised change detection with homogeneous and heterogeneous remote sensing images. IEEE Trans. Geosci. Remote Sens. 2021, 60, 1–21.
- Kittler, J.; Illingworth, J. Minimum error thresholding. Pattern Recognit. 1986, 19, 41–47.
- Dempster, A.P.; Laird, N.M.; Rubin, D.B. Maximum likelihood from incomplete data via the EM algorithm. J. R. Stat. Soc. Ser. B 1977, 39, 1–22.
- Su, L.; Gong, M.; Sun, B.; Jiao, L. Unsupervised change detection in SAR images based on locally fitting model and semi-EM algorithm. Int. J. Remote Sens. 2014, 35, 621–650.
- Celik, T. Unsupervised change detection in satellite images using principal component analysis and k-means clustering. IEEE Geosci. Remote Sens. Lett. 2009, 6, 772–776.
- Gong, M.; Zhou, Z.; Ma, J. Change detection in synthetic aperture radar images based on image fusion and fuzzy clustering. IEEE Trans. Image Process. 2011, 21, 2141–2151.
- Krinidis, S.; Chatzis, V. A robust fuzzy local information C-means clustering algorithm. IEEE Trans. Image Process. 2010, 19, 1328–1337.
- Tian, D.; Gong, M. A novel edge-weight based fuzzy clustering method for change detection in SAR images. Inf. Sci. 2018, 467, 415–430.
- Gao, F.; Dong, J.; Li, B.; Xu, Q. Automatic change detection in synthetic aperture radar images based on PCANet. IEEE Geosci. Remote Sens. Lett. 2016, 13, 1792–1796.
- Gao, F.; Wang, X.; Gao, Y.; Dong, J.; Wang, S. Sea ice change detection in SAR images based on convolutional-wavelet neural networks. IEEE Geosci. Remote Sens. Lett. 2019, 16, 1240–1244.
- Gao, Y.; Gao, F.; Dong, J.; Du, Q.; Li, H.-C. Synthetic aperture radar image change detection via Siamese adaptive fusion network. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 10748–10760.
- Liu, F.; Jiao, L.; Tang, X.; Yang, S.; Ma, W.; Hou, B. Local restricted convolutional neural network for change detection in polarimetric SAR images. IEEE Trans. Neural Netw. Learn. Syst. 2018, 30, 818–833.
- Zhao, G.; Peng, Y. Semisupervised SAR image change detection based on a Siamese variational autoencoder. Inf. Process. Manag. 2022, 59, 102726.
- Zhou, B.; Khosla, A.; Lapedriza, A.; Oliva, A.; Torralba, A. Object detectors emerge in deep scene CNNs. arXiv 2014, arXiv:1412.6856.
- Duta, I.C.; Liu, L.; Zhu, F.; Shao, L. Pyramidal convolution: Rethinking convolutional neural networks for visual recognition. arXiv 2020, arXiv:2006.11538.
- Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, Ł.; Polosukhin, I. Attention is all you need. In Proceedings of the Neural Information Processing Systems (NeurIPS), Long Beach, CA, USA, 4–9 December 2017; Volume 30.
- Woo, S.; Park, J.; Lee, J.-Y.; Kweon, I.S. CBAM: Convolutional block attention module. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 3–19.
- Yu, Y.; Acton, S.T. Speckle reducing anisotropic diffusion. IEEE Trans. Image Process. 2002, 11, 1260–1270.
- Rosenfield, G.H.; Fitzpatrick-Lins, K. A coefficient of agreement as a measure of thematic classification accuracy. Photogramm. Eng. Remote Sens. 1986, 52, 223–227.
Architecture of the proposed PCBA-Net.

Layer | Type | Activation Function | Output Shape | Kernel Size | Number of Filters | Stride | Padding | Groups
---|---|---|---|---|---|---|---|---
0 | Input | - | 7 × 7 × 2 | - | - | - | - | - |
1 | Conv1 × 1 | - | 7 × 7 × 64 | 1 × 1 | 64 | 1 | 0 | - |
2 | PyConv1 | ReLU | 7 × 7 × 128 | [3 × 3, 5 × 5, 7 × 7] | [32, 32, 64] | 1 | [1, 2, 3] | [1, 4, 8] |
3 | CBAM1 | - | 7 × 7 × 128 | - | - | - | - | - |
4 | PyConv2 | ReLU | 7 × 7 × 256 | [3 × 3, 5 × 5, 7 × 7] | [64, 64, 128] | 1 | [1, 2, 3] | [1, 4, 8] |
5 | CBAM2 | - | 7 × 7 × 256 | - | - | - | - | - |
6 | PyConv3 | ReLU | 7 × 7 × 256 | [3 × 3, 5 × 5, 7 × 7] | [64, 64, 128] | 1 | [1, 2, 3] | [1, 4, 8] |
7 | CBAM3 | - | 7 × 7 × 256 | - | - | - | - | - |
8 | PyConv4 | ReLU | 7 × 7 × 256 | [3 × 3, 5 × 5, 7 × 7] | [64, 64, 128] | 1 | [1, 2, 3] | [1, 4, 8] |
9 | CBAM4 | - | 7 × 7 × 256 | - | - | - | - | - |
10 | FC | - | 7 × 7 × 5 | 1 × 1 | 5 | 1 | 0 | - |
11 | Linear1 | - | 245 | - | - | - | - | - |
12 | Linear2 | - | 10 | - | - | - | - | - |
13 | Softmax | - | 2 | - | - | - | - | - |
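To make the composition of one PCBA block concrete, the following PyTorch sketch reproduces the configuration of the first block in the table above (64 input channels; PyConv branches with 3 × 3, 5 × 5, and 7 × 7 kernels, 32/32/64 output channels, and 1/4/8 groups, followed by CBAM). The class names, the placement of the ReLU after the branch concatenation, and the reduction ratio of 16 are our assumptions; normalization and other implementation details of the paper are omitted.

```python
import torch
import torch.nn as nn

class PyConv(nn.Module):
    """Pyramidal convolution: parallel branches with growing kernels and groups."""
    def __init__(self, in_ch, out_chs=(32, 32, 64), kernels=(3, 5, 7), groups=(1, 4, 8)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, oc, k, padding=k // 2, groups=g, bias=False)
            for oc, k, g in zip(out_chs, kernels, groups)
        ])
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(torch.cat([branch(x) for branch in self.branches], dim=1))

class ChannelAttention(nn.Module):
    """CBAM channel attention: shared MLP over average- and max-pooled descriptors."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1, bias=False),
        )

    def forward(self, x):
        avg = self.mlp(torch.mean(x, dim=(2, 3), keepdim=True))
        mx = self.mlp(torch.amax(x, dim=(2, 3), keepdim=True))
        return torch.sigmoid(avg + mx)

class SpatialAttention(nn.Module):
    """CBAM spatial attention: 7x7 conv over channel-wise average and max maps."""
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2, bias=False)

    def forward(self, x):
        avg = torch.mean(x, dim=1, keepdim=True)
        mx, _ = torch.max(x, dim=1, keepdim=True)
        return torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))

class PCBABlock(nn.Module):
    """One PCBA block: PyConv followed by CBAM (channel then spatial attention)."""
    def __init__(self, in_ch, out_chs=(32, 32, 64), reduction=16):
        super().__init__()
        self.pyconv = PyConv(in_ch, out_chs)
        self.ca = ChannelAttention(sum(out_chs), reduction)
        self.sa = SpatialAttention()

    def forward(self, x):
        x = self.pyconv(x)
        x = x * self.ca(x)
        return x * self.sa(x)

# Example: the first block maps 7x7x64 features to 7x7x128.
block = PCBABlock(in_ch=64, out_chs=(32, 32, 64))
print(block(torch.randn(1, 64, 7, 7)).shape)   # torch.Size([1, 128, 7, 7])
```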
Change detection results on the Ottawa dataset (FP: false positives; FN: false negatives; OE: overall error; PCC: percentage of correct classification; KC: kappa coefficient).

Method | FP | FN | OE | PCC (%) | KC (%) | F1 (%)
---|---|---|---|---|---|---
PCANet | 932 | 1076 | 2008 | 98.02 | 92.54 | 93.72 |
CNN | 1546 | 693 | 2239 | 97.79 | 91.89 | 93.21 |
CWNN | 1291 | 434 | 1725 | 98.30 | 93.75 | 94.77 |
DDNet | 1353 | 618 | 1971 | 98.06 | 92.84 | 94.00 |
SAFNet | 1010 | 973 | 1983 | 98.05 | 92.67 | 93.83 |
PCBA-Net | 416 | 854 | 1270 | 98.75 | 95.25 | 95.99 |
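The metrics reported in this and the following tables follow their standard definitions. The sketch below (the helper name is ours; OE is taken as FP + FN and F1 is assumed to be computed on the changed class) shows how PCC, KC, and F1 are derived from the confusion-matrix counts.

```python
def change_detection_metrics(tp, tn, fp, fn):
    """Hypothetical helper: OE, PCC, KC, and F1 from confusion-matrix counts."""
    n = tp + tn + fp + fn
    oe = fp + fn                                    # overall error
    pcc = (tp + tn) / n                             # percentage of correct classification
    # Cohen's kappa: observed agreement corrected for chance agreement.
    pe = ((tp + fp) * (tp + fn) + (tn + fn) * (tn + fp)) / (n * n)
    kc = (pcc - pe) / (1.0 - pe)
    f1 = 2 * tp / (2 * tp + fp + fn)                # F1 score on the changed class
    return oe, 100.0 * pcc, 100.0 * kc, 100.0 * f1
```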
Change detection results on the San Francisco dataset.

Method | FP | FN | OE | PCC (%) | KC (%) | F1 (%)
---|---|---|---|---|---|---
PCANet | 175 | 559 | 734 | 98.88 | 91.23 | 91.83 |
CNN | 221 | 565 | 786 | 98.80 | 90.65 | 91.29 |
CWNN | 437 | 295 | 732 | 98.88 | 91.70 | 92.30 |
DDNet | 255 | 441 | 696 | 98.94 | 91.85 | 92.42 |
SAFNet | 129 | 509 | 638 | 99.03 | 92.38 | 92.90 |
PCBA-Net | 169 | 345 | 514 | 99.22 | 93.99 | 94.41 |
Change detection results on the Sulzberger dataset.

Method | FP | FN | OE | PCC (%) | KC (%) | F1 (%)
---|---|---|---|---|---|---
PCANet | 282 | 547 | 829 | 98.74 | 95.90 | 96.68 |
CNN | 929 | 515 | 1444 | 97.80 | 93.00 | 94.37 |
CWNN | 750 | 257 | 1007 | 98.46 | 95.13 | 96.08 |
DDNet | 329 | 734 | 1063 | 98.38 | 94.72 | 95.72 |
SAFNet | 150 | 898 | 1048 | 98.40 | 94.74 | 95.72 |
PCBA-Net | 280 | 387 | 667 | 98.98 | 96.71 | 97.34 |
Change detection results on the Yellow River A dataset.

Method | FP | FN | OE | PCC (%) | KC (%) | F1 (%)
---|---|---|---|---|---|---
PCANet | 58 | 1063 | 1121 | 98.74 | 87.59 | 88.24 |
CNN | 509 | 1272 | 1781 | 98.00 | 80.73 | 81.78 |
CWNN | 500 | 672 | 1172 | 98.68 | 88.00 | 88.70 |
DDNet | 204 | 813 | 1017 | 98.86 | 89.16 | 89.76 |
SAFNet | 229 | 844 | 1073 | 98.80 | 88.55 | 89.19 |
PCBA-Net | 277 | 550 | 827 | 99.07 | 91.45 | 91.95 |
Change detection results on the Yellow River B dataset.

Method | FP | FN | OE | PCC (%) | KC (%) | F1 (%)
---|---|---|---|---|---|---
PCANet | 604 | 1171 | 1775 | 98.63 | 76.95 | 77.65 |
CNN | 375 | 1253 | 1628 | 98.74 | 78.03 | 78.67 |
CWNN | 1507 | 483 | 1990 | 98.46 | 78.34 | 79.13 |
DDNet | 1049 | 459 | 1508 | 98.83 | 82.83 | 83.43 |
SAFNet | 365 | 1210 | 1575 | 98.78 | 78.83 | 79.45 |
PCBA-Net | 581 | 907 | 1488 | 98.85 | 81.22 | 81.82 |
Change detection results on the Yellow River C dataset.

Method | FP | FN | OE | PCC (%) | KC (%) | F1 (%)
---|---|---|---|---|---|---
PCANet | 17,345 | 3 | 17,348 | 86.23 | 11.66 | 13.42 |
CNN | 279 | 349 | 628 | 99.50 | 75.83 | 76.09 |
CWNN | 13,223 | 51 | 13,274 | 89.47 | 14.68 | 16.35 |
DDNet | 100 | 474 | 574 | 99.54 | 75.06 | 75.28 |
SAFNet | 155 | 190 | 345 | 99.73 | 86.90 | 87.03 |
PCBA-Net | 53 | 252 | 305 | 99.76 | 87.66 | 87.79 |
Comparison of model complexity and runtime among the methods.

Method | Parameters | FLOPs | Training Time | Testing Time
---|---|---|---|---
CNN | 1.406 K | 10.224 K | 3 ms | 2 ms |
CWNN | 1.49 K | 95.04 K | 7 ms | 5 ms |
DDNet | 4.457 K | 116.545 K | 20 ms | 15 ms |
SAFNet | 211.798 K | 3.988 M | 10 ms | 7 ms |
PCBA-Net | 1.873 M | 70.249 M | 70 ms | 65 ms |
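As a back-of-the-envelope illustration of why PyConv keeps the parameter budget modest despite its large kernels, grouped convolution divides each branch's weight count by its number of groups. The sketch below uses the standard formula k × k × (C_in / g) × C_out with the bias ignored; the helper name and the example configuration (taken from the first PCBA block in the architecture table) are ours.

```python
def conv_params(c_in, c_out, k, groups=1):
    """Weights of a 2-D convolution, bias ignored: k * k * (c_in / groups) * c_out."""
    return k * k * (c_in // groups) * c_out

# A single standard 3 x 3 convolution mapping 64 -> 128 channels.
standard = conv_params(64, 128, 3)                    # 73,728 parameters

# The first PCBA block's PyConv: kernels 3/5/7, outputs 32/32/64, groups 1/4/8.
pyconv = (conv_params(64, 32, 3, groups=1)            # 18,432
          + conv_params(64, 32, 5, groups=4)          # 12,800
          + conv_params(64, 64, 7, groups=8))         # 25,088

print(standard, pyconv)                               # 73728 56320
```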
PCC (%) of PCBA-Net with different kernel combinations in PyConv.

Kernels of PyConv | Ottawa | San Francisco | Sulzberger | Yellow River A | Yellow River B | Yellow River C
---|---|---|---|---|---|---
3 × 3, 5 × 5 | 98.65 | 99.14 | 98.83 | 98.93 | 98.80 | 99.67 |
3 × 3, 7 × 7 | 98.66 | 99.11 | 98.92 | 98.97 | 98.82 | 99.71 |
5 × 5, 7 × 7 | 98.70 | 99.19 | 98.92 | 99.02 | 98.79 | 99.74 |
3 × 3, 5 × 5, 7 × 7 | 98.75 | 99.22 | 98.98 | 99.07 | 98.85 | 99.76 |
Ablation study: PCC (%) for different combinations of standard convolution (CNN), PyConv, and CBAM.

CNN | PyConv | CBAM | Ottawa | San Francisco | Sulzberger | Yellow River A | Yellow River B | Yellow River C
---|---|---|---|---|---|---|---|---
✓ | ✗ | ✗ | 98.57 | 99.06 | 98.83 | 98.14 | 98.67 | 99.53 |
✓ | ✗ | ✓ | 98.62 | 99.09 | 98.88 | 98.32 | 98.76 | 99.61 |
✗ | ✓ | ✗ | 98.68 | 99.20 | 98.87 | 98.53 | 98.77 | 99.67 |
✗ | ✓ | ✓ | 98.75 | 99.22 | 98.98 | 99.07 | 98.85 | 99.76 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Xia, Y.; Xu, X.; Pu, F. PCBA-Net: Pyramidal Convolutional Block Attention Network for Synthetic Aperture Radar Image Change Detection. Remote Sens. 2022, 14, 5762. https://doi.org/10.3390/rs14225762