An Integrated Counterfactual Sample Generation and Filtering Approach for SAR Automatic Target Recognition with a Small Sample Set
Figure 1. Schematic diagram of counterfactual target samples. Some generated samples look no different from the real samples, yet they may cause a significant decrease in the performance of the ATR model; the effect of a counterfactual target sample on the ATR model can be the opposite of what the naked eye perceives.
Figure 2. The implementation motivation of the proposed approach.
Figure 3. Architecture overview of the proposed generation model, which consists of a deconvolution network and a convolution network. The condition information y refers to the target label condition, z is the noise used to initialize the deconvolution network, and G(z) denotes the generated counterfactual target samples. (An illustrative code sketch of such a conditional generator follows this caption list.)
Figure 4. Architecture of the proposed approach, which consists of a generation model and a batch of SVMs with differences among them. The yellow dashed line indicates the proposed filtering process that uses and labels the generated samples. (A filtering sketch also follows the caption list.)
Figure 5. (Top) SAR images and (Bottom) optical images of ten categories of target samples, in order: 2S1, BMP2, BRDM2, BTR60, BTR70, D7, T62, T72, ZIL131, and ZSU23/4.
Figure 6. SAR images of the ten target categories, in order: 2S1, BMP2, BRDM2, BTR60, BTR70, D7, T62, T72, ZIL131, and ZSU23/4. Row 1: real SAR images. Row 2: images generated by the proposed model. Row 3: images generated by the conditional GAN. Row 4: images generated by the conditional deep convolutional GAN.
Figure 7. Recognition performance of the ATR model trained with different sample sets. Only the target samples generated by the proposed generation model can be filtered.
Figure 8. Filtered counterfactual target samples of the ten categories, in order: 2S1, BMP2, BRDM2, BTR60, BTR70, D7, T62, T72, ZIL131, and ZSU23/4. Row 1: samples filtered by the proposed filtering process. Row 2: samples filtered by an SVM.
Figure 9. Comparison of ATR model recognition performance obtained with different filtering methods. All recognition models use the same architecture as the ATR model described above.
Figure 10. Recognition performance of different ATR models. The dotted lines represent the performance of the two ATR models on the small sample set; the solid lines indicate their recognition rates when the number of SAR target samples is sufficient.
Figure 11. Recognition performance comparison of different recognition models. The solid lines indicate the recognition rates when the amount of SAR target samples is sufficient; the dotted lines indicate the rates after the ATR model resolves the small-sample-set dilemma.
Figure 12. Comparison of ship recognition performance between different recognition models. The solid lines indicate the recognition rates when the amount of SAR target samples is sufficient; the dotted lines indicate the rates after the ATR model resolves the small-sample-set dilemma.
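To make the two components summarized in the Figure 3 and Figure 4 captions concrete, here is a minimal PyTorch sketch of a label-conditioned deconvolutional generator G(z). It is an illustration only, not the authors' published network: the 100-dimensional noise z, 10 target classes, 64×64 single-channel output, and layer widths are all assumptions.

```python
# Illustrative sketch only: a label-conditioned deconvolutional generator.
# Assumed (not taken from the paper): 100-dim noise z, 10 classes, 64x64 output.
import torch
import torch.nn as nn

class ConditionalGenerator(nn.Module):
    def __init__(self, z_dim=100, n_classes=10, feat=64):
        super().__init__()
        self.net = nn.Sequential(
            # project the concatenated (z, one-hot y) vector to a 4x4 feature map
            nn.ConvTranspose2d(z_dim + n_classes, feat * 8, 4, 1, 0, bias=False),
            nn.BatchNorm2d(feat * 8), nn.ReLU(True),
            nn.ConvTranspose2d(feat * 8, feat * 4, 4, 2, 1, bias=False),  # 8x8
            nn.BatchNorm2d(feat * 4), nn.ReLU(True),
            nn.ConvTranspose2d(feat * 4, feat * 2, 4, 2, 1, bias=False),  # 16x16
            nn.BatchNorm2d(feat * 2), nn.ReLU(True),
            nn.ConvTranspose2d(feat * 2, feat, 4, 2, 1, bias=False),      # 32x32
            nn.BatchNorm2d(feat), nn.ReLU(True),
            nn.ConvTranspose2d(feat, 1, 4, 2, 1, bias=False),             # 64x64
            nn.Tanh(),
        )

    def forward(self, z, y_onehot):
        # condition the generator by concatenating the label with the noise vector
        zy = torch.cat([z, y_onehot], dim=1).unsqueeze(-1).unsqueeze(-1)
        return self.net(zy)

# usage: generate one candidate counterfactual sample for class index 3
z = torch.randn(1, 100)
y = torch.zeros(1, 10); y[0, 3] = 1.0
fake = ConditionalGenerator()(z, y)   # shape (1, 1, 64, 64)
```

The filtering component in Figure 4 relies on a batch of SVMs that differ from one another. The sketch below assumes the SVMs differ by being trained on different bootstrap resamples of the real small sample set, and that a generated sample is kept only when every SVM predicts the label it was generated under; the paper's own way of constructing differing SVMs may not match this.

```python
# Illustrative sketch (assumptions, not the authors' exact procedure) of filtering
# generated samples with a batch of differing SVMs.
import numpy as np
from sklearn.svm import SVC

def train_svm_batch(X_real, y_real, n_svms=5, seed=0):
    rng = np.random.default_rng(seed)
    svms = []
    for _ in range(n_svms):
        idx = rng.choice(len(X_real), size=len(X_real), replace=True)  # bootstrap resample
        svms.append(SVC(kernel="rbf", gamma="scale").fit(X_real[idx], y_real[idx]))
    return svms

def filter_generated(svms, X_gen, y_intended):
    preds = np.stack([clf.predict(X_gen) for clf in svms])   # (n_svms, n_gen)
    keep = np.all(preds == y_intended, axis=0)                # unanimous agreement only
    return X_gen[keep], y_intended[keep]
```

A unanimous-vote criterion of this kind is deliberately strict, which is consistent with the low filtering pass rate reported in the results tables below.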
Abstract
1. Introduction
2. Related Work
2.1. The Generation of Counterfactual Target Samples
2.2. The Filtering of Counterfactual Target Samples
3. The Proposed Approach
3.1. Some Important Motivations for the Proposed Approach
3.2. The Generation of Counterfactual Target Samples
3.3. The Filtering of Counterfactual Target Samples
4. Experimental Results
4.1. Experiment Arrangement and Experiment Requirements
4.2. The Ablation Experiments of the Proposed Generation Component
4.3. The Ablation Experiments of the Proposed Filtering Component
4.4. The Recognition Performance of the Proposed Approach
5. Discussion
5.1. The Analysis of the Proposed Generation Component
5.2. The Analysis of the Proposed Filtering Component
5.3. The Analysis of Recognition Performance
6. Conclusions
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
Efficiency Indicator | Conditional GAN | Conditional Deep Convolutional GAN | Proposed Generation Component |
---|---|---|---|
Total number of generated samples | 1000 | 1000 | 1819 |
Filterability of generated samples | cannot be filtered | cannot be filtered | can be filtered |
Filtering pass rate | 0% | 0% | 16.49% |
Total number of filtered samples | 0 | 0 | 300 |
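The pass rate in the last column follows directly from the two counts above; a one-line check:

```python
# Sanity check of the reported pass rate: 300 of the 1819 generated samples survive filtering.
generated, kept = 1819, 300
print(f"filtering pass rate = {kept / generated:.2%}")  # 16.49%
```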
Target Type | Number of Training Samples | Number of Test Samples |
---|---|---|
bulkcarrier | 200 | 200 |
cargo | 200 | 200 |
containership | 80 | 80 |
generalcargo | 205 | 204 |
tanker | 76 | 76 |
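For reference, the ship split above totals 761 training and 760 test samples, with roughly a 2.7:1 imbalance between the largest and smallest classes. A quick tally, assuming the five listed categories make up the complete split used in the experiments:

```python
# Tally of the ship-recognition split in the table above (assumed to be the complete split).
train = {"bulkcarrier": 200, "cargo": 200, "containership": 80,
         "generalcargo": 205, "tanker": 76}
test  = {"bulkcarrier": 200, "cargo": 200, "containership": 80,
         "generalcargo": 204, "tanker": 76}
print(sum(train.values()), sum(test.values()))        # 761 training, 760 test samples
print(max(train.values()) / min(train.values()))      # ~2.70x class imbalance
```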
© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Cao, C.; Cui, Z.; Cao, Z.; Wang, L.; Yang, J. An Integrated Counterfactual Sample Generation and Filtering Approach for SAR Automatic Target Recognition with a Small Sample Set. Remote Sens. 2021, 13, 3864. https://doi.org/10.3390/rs13193864