Radar Emitter Signal Recognition Based on One-Dimensional Convolutional Neural Network with Attention Mechanism
Figure 1. The structure of a one-dimensional attention unit (AU-1D).
Figure 2. The structure of the one-dimensional convolutional neural network with an attention mechanism (CNN-1D-AM).
Figure 3. The average recognition rates of CNN-1D-AM on the training and validation datasets for different numbers of training epochs.
Figure 4. The recognition rates of CNN-1D-AM at 11 values of signal-to-noise ratio (SNR) on the validation dataset.
Figure 5. The recognition rates of CNN-1D-AM at 11 values of SNR on the testing dataset.
Figure 6. The confusion matrices of CNN-1D-AM, based on average recognition rates.
Figure 7. The features filtered by the layer before the attention unit.
Figure 8. The features weighted by the attention unit.
Figure 9. The weights of the attention unit.
Figure 10. Recognition accuracy of the compared methods and models (CNN-1D-AM, CNN-1D-Normal, ResNet18, VGG13, SSAE, SVM, DNN1, DNN2, DNN3, DNN4, SAE1, SAE2, SAE3) at each value of SNR on the testing dataset.
Abstract
1. Introduction
2. One-Dimensional Convolutional Neural Network with Attention Mechanism (CNN-1D-AM)
2.1. One-Dimensional Convolution
2.2. Attention Unit
2.3. CNN-1D-AM
3. Experiments and Discussions
3.1. Dataset
3.2. Experiments of CNN-1D-AM
3.3. Learned Features
3.4. Comparison of Other Methods
4. Conclusions
Author Contributions
Funding
Conflicts of Interest
References
Item | Specification |
---|---|
CPU | Intel Silver 4110 |
GPU | P400 + P40 |
RAM | 64 GB |
Operating System | CentOS 7 |
Simulation Software | MATLAB 2020a, Python 3.7, Keras 2.2.4 |
Signal Type | Carrier Frequency | Modulation Parameters |
---|---|---|
CW | 200 MHz~220 MHz | None |
LFM | 200 MHz~220 MHz | Frequency bandwidth: 50 MHz~60 MHz |
NLFM | 200 MHz~220 MHz | Frequency of modulation signal: 10 MHz~12 MHz |
BPSK | 200 MHz~220 MHz | 13-bit Barker code; symbol width 0.038 μs |
QPSK | 200 MHz~220 MHz | 16-bit Frank code; symbol width 0.03 μs |
BFSK | 200 MHz~220 MHz and 300 MHz~320 MHz | 13-bit Barker code; symbol width 0.038 μs |
QFSK | 100 MHz~110 MHz, 150 MHz~160 MHz, 200 MHz~210 MHz, 250 MHz~260 MHz | 16-bit Frank code; symbol width 0.03 μs |
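For context, each pulse class in the table above can be simulated directly in Python/NumPy. The sketch below generates a noisy LFM pulse and a Barker-coded BPSK pulse; the sampling rate, pulse width, and additive-white-Gaussian-noise model are illustrative assumptions and are not taken from the table.

```python
import numpy as np

def lfm_pulse(fs=1e9, width=1e-6, f0=200e6, bw=50e6, snr_db=10):
    """Generate one noisy linear-frequency-modulated (LFM) pulse.

    fs, width and the AWGN noise model are illustrative assumptions; the
    paper's exact simulation settings are not listed in the table.
    """
    t = np.arange(0, width, 1 / fs)
    # Instantaneous frequency sweeps from f0 to f0 + bw over the pulse.
    signal = np.cos(2 * np.pi * (f0 * t + 0.5 * (bw / width) * t ** 2))
    noise_power = np.mean(signal ** 2) / (10 ** (snr_db / 10))
    noise = np.random.normal(0.0, np.sqrt(noise_power), signal.shape)
    return signal + noise

def bpsk_pulse(fs=1e9, fc=210e6, symbol_width=0.038e-6, snr_db=10):
    """Generate one noisy BPSK pulse phase-coded with the 13-bit Barker code."""
    barker13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1])
    samples_per_symbol = int(round(symbol_width * fs))
    phase_code = np.repeat(barker13, samples_per_symbol)
    t = np.arange(phase_code.size) / fs
    signal = phase_code * np.cos(2 * np.pi * fc * t)
    noise_power = np.mean(signal ** 2) / (10 ** (snr_db / 10))
    return signal + np.random.normal(0.0, np.sqrt(noise_power), signal.shape)
```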
Model | CNN-1D-AM |
---|---|
Number of parameters | 3,554,504 |
Time per epoch | 55 s |
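The tables here give only the parameter count and training time of CNN-1D-AM, not its layer configuration, so the following Keras sketch is only a minimal illustration of the general idea: 1-D convolutional layers extract features from a 1024-sample input sequence, a squeeze-and-excitation-style attention unit re-weights the feature channels, and a softmax layer classifies the seven signal types. All filter counts, kernel sizes, and the reduction ratio are illustrative assumptions; this sketch does not reproduce the 3,554,504-parameter model.

```python
from keras import layers, models

def attention_unit_1d(x, reduction=16):
    """A squeeze-and-excitation-style channel attention unit for 1-D features.
    This is an assumed form of AU-1D, not the authors' exact definition."""
    channels = int(x.shape[-1])
    w = layers.GlobalAveragePooling1D()(x)                # squeeze: (batch, channels)
    w = layers.Dense(channels // reduction, activation='relu')(w)
    w = layers.Dense(channels, activation='sigmoid')(w)   # per-channel weights
    w = layers.Reshape((1, channels))(w)
    return layers.Multiply()([x, w])                      # re-weight the feature maps

def build_cnn_1d_am(input_len=1024, num_classes=7):
    inputs = layers.Input(shape=(input_len, 1))
    x = layers.Conv1D(32, 16, padding='same', activation='relu')(inputs)
    x = layers.MaxPooling1D(2)(x)
    x = layers.Conv1D(64, 8, padding='same', activation='relu')(x)
    x = layers.MaxPooling1D(2)(x)
    x = attention_unit_1d(x)          # weight the learned feature channels
    x = layers.Flatten()(x)
    x = layers.Dense(128, activation='relu')(x)
    outputs = layers.Dense(num_classes, activation='softmax')(x)
    model = models.Model(inputs, outputs)
    model.compile(optimizer='adam', loss='categorical_crossentropy',
                  metrics=['accuracy'])
    return model
```

The attention unit here learns one weight per feature channel and multiplies it back onto the convolutional feature maps, which matches the behavior suggested by Figures 7-9 (features before weighting, weighted features, and the learned weights).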
Neurons per Layer | DNN1 | DNN2 | DNN3 | DNN4 |
---|---|---|---|---|
Input layer | 1024 | 1024 | 1024 | 1024 |
First hidden layer | 512 | 512 | 256 | 512 |
Second hidden layer | 256 | 256 | 64 | 256 |
Third hidden layer | 128 | N/A | N/A | 128 |
Fourth hidden layer | N/A | N/A | N/A | 64 |
Output layer | 7 | 7 | 7 | 7 |
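Since Keras 2.2.4 is listed in the simulation environment, the DNN baselines can be written down almost directly from the table; below is a minimal sketch of DNN1 (1024→512→256→128→7). The ReLU activations, softmax output, and Adam optimizer are assumptions, as the table only specifies layer widths.

```python
from keras.models import Sequential
from keras.layers import Dense

# DNN1 from the table: 1024 -> 512 -> 256 -> 128 -> 7.
dnn1 = Sequential([
    Dense(512, activation='relu', input_shape=(1024,)),
    Dense(256, activation='relu'),
    Dense(128, activation='relu'),
    Dense(7, activation='softmax'),
])
dnn1.compile(optimizer='adam', loss='categorical_crossentropy',
             metrics=['accuracy'])
```

DNN2, DNN3, and DNN4 differ only in the number and width of the hidden Dense layers.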
SAE Model | Parts of SAE | First Auto-Encoder | Second Auto-Encoder | Third Auto-Encoder | Classifier |
---|---|---|---|---|---|
SAE1 | Input layer | 1024 | 512 | 256 | 128 |
SAE1 | Hidden layer | 512 | 256 | 128 | N/A |
SAE1 | Output layer | 1024 | 512 | 256 | 7 |
SAE2 | Input layer | 1024 | 512 | N/A | 256 |
SAE2 | Hidden layer | 512 | 256 | N/A | N/A |
SAE2 | Output layer | 1024 | 512 | N/A | 7 |
SAE3 | Input layer | 1024 | N/A | N/A | 512 |
SAE3 | Hidden layer | 512 | N/A | N/A | N/A |
SAE3 | Output layer | 1024 | N/A | N/A | 7 |
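Read column-wise, SAE1 stacks three auto-encoders (1024→512→1024, 512→256→512, 256→128→256) and feeds the final 128-dimensional encoding to a 7-class softmax classifier. A minimal greedy layer-wise pretraining sketch in Keras is shown below; the activations, loss, optimizer, and training schedule are assumptions, and the random array only stands in for the real signal samples.

```python
import numpy as np
from keras.models import Model, Sequential
from keras.layers import Dense, Input

def pretrain_autoencoder(data, hidden_units, epochs=10):
    """Train one auto-encoder on `data`; return its encoder and encoded data."""
    input_dim = data.shape[1]
    inputs = Input(shape=(input_dim,))
    encoded = Dense(hidden_units, activation='relu')(inputs)
    decoded = Dense(input_dim, activation='linear')(encoded)
    autoencoder = Model(inputs, decoded)
    autoencoder.compile(optimizer='adam', loss='mse')
    autoencoder.fit(data, data, epochs=epochs, batch_size=128, verbose=0)
    encoder = Model(inputs, encoded)
    return encoder, encoder.predict(data)

# Placeholder standing in for the real 1024-sample signal dataset.
x_train = np.random.rand(1000, 1024)

# SAE1: greedy layer-wise pretraining 1024 -> 512 -> 256 -> 128.
encoders, features = [], x_train
for units in (512, 256, 128):
    encoder, features = pretrain_autoencoder(features, units)
    encoders.append(encoder)

# 7-class softmax classifier on the deepest (128-dimensional) encoding.
classifier = Sequential([Dense(7, activation='softmax', input_shape=(128,))])
classifier.compile(optimizer='adam', loss='categorical_crossentropy',
                   metrics=['accuracy'])
```

SAE2 and SAE3 follow the same procedure with two and one pretrained auto-encoders, respectively.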
Model | CNN-1D-AM | CNN-1D-Normal | ResNet18 | VGG13 |
---|---|---|---|---|
Number of parameters | 3,554,504 | 3,520,903 | 4,465,543 | 5,761,863 |
Time per epoch | 55 s | 50 s | 101 s | 80 s |
Publisher's Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Share and Cite
Wu, B.; Yuan, S.; Li, P.; Jing, Z.; Huang, S.; Zhao, Y. Radar Emitter Signal Recognition Based on One-Dimensional Convolutional Neural Network with Attention Mechanism. Sensors 2020, 20, 6350. https://doi.org/10.3390/s20216350