Radar Emitter Signal Recognition Based on One-Dimensional Convolutional Neural Network with Attention Mechanism

Bin Wu, Shibo Yuan, Peng Li, Zehuan Jing, Shao Huang and Yaodong Zhao

1 School of Electronic Engineering, Xidian University, Xi’an 710071, China
2 Science and Technology on Electronic Information Control Laboratory, Chengdu 610036, China
* Author to whom correspondence should be addressed.
Sensors 2020, 20(21), 6350; https://doi.org/10.3390/s20216350
Submission received: 10 October 2020 / Revised: 2 November 2020 / Accepted: 4 November 2020 / Published: 7 November 2020
(This article belongs to the Section Remote Sensors)

Abstract

As the electromagnetic environment becomes more complex and the number of radar signals grows massive, traditional methods, which require a large amount of prior knowledge, are time-consuming and ineffective for radar emitter signal recognition. In recent years, the convolutional neural network (CNN) has shown its superiority in recognition tasks, so researchers have applied it to radar signal recognition. However, in the field of radar emitter signal recognition, the data are usually one-dimensional (1-D), so using the original two-dimensional (2-D) CNN models directly costs extra time and storage space. Moreover, the features extracted from the convolutional layers are redundant, which keeps the recognition accuracy low. To solve these problems, this paper proposes a novel one-dimensional convolutional neural network with an attention mechanism (CNN-1D-AM) to extract more discriminative features and recognize radar emitter signals. In this method, features of the given 1-D signal sequences are extracted directly by the 1-D convolutional layers and are weighted, according to their importance to recognition, by the attention unit. Experiments on seven different radar emitter signals indicate that the proposed CNN-1D-AM achieves high accuracy and superior performance in radar emitter signal recognition.

1. Introduction

Radar emitter signal recognition is a technology used to obtain information about radar systems by intercepting and analyzing their signals. In traditional methods, the features of radar signals are extracted manually, and much research has been done on feature extraction. Bouchou et al. [1] calculated eight key features, including higher-order cumulants (HOC), and used a stacked sparse autoencoder (SSAE) to recognize seven different digital modulation signals. Park et al. [2] used wavelet features and support vector machines (SVM) to recognize eight different digital modulation signals. However, as the electromagnetic environment becomes more complex and the number of radar signals grows massive, the performance of traditional methods, which require a great deal of prior knowledge and time, degrades when the radar emitter signals have a low signal-to-noise ratio (SNR).
It is therefore desirable to develop a generic and effective method that can automatically extract features from radar signals. Deep learning [3] has attracted great attention in the field of artificial intelligence, and the convolutional neural network (CNN) [4,5] performs well in recognition. A large amount of research on radar emitter signal recognition has been carried out using CNNs. Qu et al. [6] trained a CNN model and a deep Q-learning network that take time-frequency images extracted by the Cohen class time-frequency distribution as the input. Shao et al. [7] proposed a deep fusion method based on CNN, which provides competitive results in terms of classification accuracy. Wang et al. [8] combined the time-frequency maps and instantaneous autocorrelation maps of radar signals and used the joint feature maps as the input of a CNN, which overcomes the weakness of a single feature map for classification. Liu et al. [9] proposed a radar emitter signal recognition algorithm that uses time-frequency images as the input of a CNN. Cain et al. [10] combined radar frequency, pulse width and pulse repetition interval and used a CNN for individual radar identification. Xiao et al. [11] proposed a CNN-based method that uses the frequency features of the automatic dependent surveillance broadcast (ADS-B) signal. Akyon et al. [12] classified the intra-pulse modulation of radar signals based on feature fusion and CNN.
However, in the field of radar emitter signal recognition, most of the sampled radar signals are one-dimensional (1-D) time-domain sequences. If the original two-dimensional (2-D) CNN models are used directly, transforming the sequences from 1-D form to 2-D form takes additional time and storage space. Moreover, the dimensional transformation results in poor real-time performance when 2-D CNN models are used in practical applications. Although CNN models focus on global information and are able to extract features, the extracted features are not equally important, which means that redundant and useless features can suppress recognition accuracy. Considering these limitations, this paper proposes a novel one-dimensional convolutional neural network with an attention mechanism (CNN-1D-AM) that extracts features directly from the original time-domain radar signal sequences and focuses on the key information in the extracted features for radar emitter signal recognition.
The contributions of this paper can be summarized as follows:
(1) The 1-D convolutional layers can directly extract features from the time-domain sequences of radar signals. Moreover, compared with 2-D structures, the 1-D convolutional layers avoid the time cost of dimensional transformation, which gives the model better real-time performance in practical applications.
(2) A unit that employs an attention mechanism [13,14] is added to automatically weight the feature maps produced by the 1-D convolutional layers, so that important features obtain larger weights and features that have a negative impact on recognition are inhibited. The experimental results show that the proposed CNN-1D-AM achieves high accuracy and superior performance in radar emitter signal recognition.
This paper is organized as follows: In Section 2, the proposed CNN-1D-AM, which uses 1-D convolution and an attention mechanism, is introduced in detail. The experiments and discussions of the proposed methods and other compared methods are shown in Section 3. The conclusion is presented in Section 4.

2. One-Dimensional Convolutional Neural Network with Attention Mechanism (CNN-1D-AM)

2.1. One-Dimensional Convolution

CNNs are usually designed to process 2-D data, especially images. As radar emitter signals are mainly in 1-D form and dimensional transformation is time-consuming, this paper adopts 1-D convolutional layers for feature extraction. The 1-D convolutional layers reduce the number of parameters compared with traditional 2-D convolutional layers. Moreover, the 1-D signals in the time domain no longer need to be converted into 2-D feature maps, which saves time and storage space.
Given the 1-D signal sequences $\{x_i\}_{i=1}^{N}$, where $x_i$ is the $i$th sample and $N$ is the number of sequences, assume that there are $K$ filters in the first 1-D convolutional layer and that $L$ is the length of one signal sequence, which is the same as the input length of the layer. Then the output of a filter in the 1-D convolutional layer can be written as follows:
$$y_i^k = f(w_k \ast x_i + b_k)$$
where $y_i^k$ denotes the output of the $k$th filter, $f(\cdot)$ is the activation function, $w_k$ and $b_k$ are the weight and bias of the $k$th filter, and $\ast$ denotes the convolution operation. When the edges of the output are zero-padded, the output of the 1-D convolutional layer can be written as $Y \in \mathbb{R}^{L \times K}$.
Similar to 2-D CNNs, a pooling layer is connected after the convolutional layer in a 1-D CNN. The output of the 1-D pooling layer can be written as $\tilde{Y} \in \mathbb{R}^{\frac{L}{r} \times K}$, where $r$ is the downsampling rate. A typical CNN structure can then be written as follows:
$$x_i \rightarrow Y_1 \rightarrow \tilde{Y}_1 \rightarrow Y_2 \rightarrow \tilde{Y}_2 \rightarrow \cdots \rightarrow Y_i \rightarrow \tilde{Y}_i$$
where $Y_i$ denotes the output matrix of the $i$th convolutional layer and $\tilde{Y}_i$ is the output matrix of the $i$th pooling layer.
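To make this stacking concrete, a minimal Keras sketch of one convolution–pooling stage is given below. The paper's experiments use Keras; the sequence length, filter count and kernel size here, and the use of tf.keras rather than standalone Keras 2.2.4, are illustrative assumptions rather than the paper's exact configuration.

```python
from tensorflow.keras import layers, models

L = 1024   # length of one time-domain signal sequence (assumed)
K = 32     # number of filters in the first 1-D convolutional layer (assumed)

model = models.Sequential([
    # 'same' zero-padding keeps the output length equal to L, so the output lies in R^{L x K}
    layers.Conv1D(filters=K, kernel_size=33, padding='same', activation='relu',
                  input_shape=(L, 1)),
    # pooling with downsampling rate r = 2 gives an output in R^{(L/r) x K}
    layers.MaxPooling1D(pool_size=2),
])
model.summary()
```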

2.2. Attention Unit

In recent years, Woo et al. [15] proposed the convolutional block attention module (CBAM) for 2-D CNNs and showed that applying channel attention first and spatial attention second performs better. This paper proposes a one-dimensional attention unit (AU-1D) that follows the same ordering as the original CBAM. The AU-1D is added between the last pooling layer and the first fully connected layer, where it helps to capture the essential features and suppress less important information. The structure of the proposed AU-1D is shown in Figure 1.
Given a feature map $F_{in} \in \mathbb{R}^{W \times C}$, where $W$ is the length of the map and $C$ is the number of channels, AU-1D first extracts the channel features by two kinds of pooling. The max-pooling and average-pooling functions in the channel domain can be written as follows:
$$c_1 = \mathrm{MaxPool}(F_{in}) = \max\big(F_{in}(1 \le i \le W,\, C)\big)$$
$$c_2 = \mathrm{AveragePool}(F_{in}) = \frac{1}{W}\sum_{i=1}^{W} F_{in}(i, C)$$
where $c_1 \in \mathbb{R}^{1 \times C}$ and $c_2 \in \mathbb{R}^{1 \times C}$ are two different vectors calculated by the two kinds of pooling. Then, a multilayer perceptron (MLP) is used to further extract features from $c_1$ and $c_2$. By activating the vector obtained by merging the two output feature vectors of the MLP, the map of channel attention $Out\_c \in \mathbb{R}^{1 \times C}$ is produced. This process is shown as follows:
$$Out\_c = \mathrm{Activate}\big(\mathrm{MLP}(c_1) + \mathrm{MLP}(c_2)\big)$$
The map of channel attention can be considered a feature detector [16]: it assigns a weight to each channel of the feature map. Since different convolutional kernels extract different information in the channel domain, the more useful the information a channel carries, the larger the weight that channel obtains.
Then, the intermediate re-weighted feature map $F_{mid}$ is obtained by multiplying $Out\_c$ with the original feature map $F_{in}$. This process is shown as follows:
$$F_{mid} = F_{in} \otimes Out\_c = F_{in} \otimes \sigma\big(W_{MLP}(c_1) + W_{MLP}(c_2)\big)$$
where $\otimes$ denotes element-wise multiplication, $\sigma$ denotes the sigmoid function, and $W_{MLP}$ denotes the weights of the MLP.
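As a concrete illustration of the channel-attention step, the following NumPy sketch pools over the length axis, passes both pooled vectors through a shared two-layer MLP and re-weights the channels. The hidden-layer size, the ReLU hidden activation and the weight names are assumptions for illustration, not the paper's exact configuration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(F_in, W1, b1, W2, b2):
    """Channel-attention step of AU-1D (sketch).
    F_in: (W, C) feature map.
    W1: (C_hidden, C), b1: (C_hidden,), W2: (C, C_hidden), b2: (C,) -- shared MLP weights."""
    c1 = F_in.max(axis=0)      # max-pooling over the length axis      -> shape (C,)
    c2 = F_in.mean(axis=0)     # average-pooling over the length axis  -> shape (C,)
    mlp = lambda c: W2 @ np.maximum(0.0, W1 @ c + b1) + b2  # shared MLP with ReLU hidden layer
    out_c = sigmoid(mlp(c1) + mlp(c2))                      # channel weights Out_c in (0, 1)
    return F_in * out_c        # F_mid = F_in (x) Out_c, broadcast over the length axis
```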
In spatial feature extraction, two kinds of pooling are applied whose pooling axes [17] differ from those used in channel feature extraction. The max-pooling and average-pooling functions in the spatial domain can be written as follows:
$$s_1 = \mathrm{MaxPool}(F_{mid}) = \max\big(F_{mid}(W,\, 1 \le j \le C)\big)$$
$$s_2 = \mathrm{AveragePool}(F_{mid}) = \frac{1}{C}\sum_{j=1}^{C} F_{mid}(W, j)$$
where $s_1 \in \mathbb{R}^{W \times 1}$ and $s_2 \in \mathbb{R}^{W \times 1}$ are two different vectors calculated by the two kinds of pooling. $s_1$ and $s_2$ are concatenated into a fusion vector $s \in \mathbb{R}^{W \times 2}$. A Conv1d unit then extracts information from $s$. By activating the output of the Conv1d unit, the map of spatial attention $Out\_s \in \mathbb{R}^{W \times 1}$ is produced. This process is shown as follows:
$$s = [s_1; s_2]$$
$$Out\_s = \mathrm{Activate}\big(\mathrm{conv1d}(s)\big)$$
where $\mathrm{conv1d}(\cdot)$ denotes the 1-D convolution operation.
The map of spatial attention reflects the importance of features in different areas. Not all areas of the feature map are equally important to recognition; the areas relevant to the recognition task should receive more attention.
Finally, the re-weighted feature map $F_{out}$ is obtained by multiplying $Out\_s$ with the feature map $F_{mid}$. This process is written as follows:
$$F_{out} = F_{mid} \otimes Out\_s = F_{mid} \otimes \sigma\big(W_{conv1d}([s_1; s_2])\big)$$
where $W_{conv1d}$ denotes the weights of the convolutional layer.
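The spatial-attention step can be sketched in the same style; here the 1-D convolution over the two pooled channels is written out explicitly with 'same' zero-padding, and the kernel size and weight names are assumptions for illustration.

```python
import numpy as np

def spatial_attention(F_mid, conv_kernel, conv_bias=0.0):
    """Spatial-attention step of AU-1D (sketch).
    F_mid: (W, C) feature map; conv_kernel: (k, 2) kernel of a 1-D convolution
    that maps the two pooled channels to a single attention channel."""
    sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
    s1 = F_mid.max(axis=1, keepdims=True)    # max over the channel axis      -> (W, 1)
    s2 = F_mid.mean(axis=1, keepdims=True)   # average over the channel axis  -> (W, 1)
    s = np.concatenate([s1, s2], axis=1)     # fusion vector s = [s1; s2]     -> (W, 2)
    k = conv_kernel.shape[0]
    pad = k // 2
    s_pad = np.pad(s, ((pad, pad), (0, 0)))  # 'same' zero-padding along the length axis
    out_s = np.array([np.sum(s_pad[i:i + k] * conv_kernel) + conv_bias
                      for i in range(F_mid.shape[0])])
    out_s = sigmoid(out_s)[:, None]          # spatial weights Out_s          -> (W, 1)
    return F_mid * out_s                     # F_out = F_mid (x) Out_s
```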
Through AU-1D, the feature maps extracted by the 1-D convolutional layers are weighted: the most useful information in the feature maps receives higher weights, and useless information is suppressed. In this way, the network can extract more effective features and improve recognition performance.

2.3. CNN-1D-AM

Based on the analysis of the 1-D convolution and the attention unit, the structure of the proposed CNN-1D model with attention mechanism (CNN-1D-AM) is shown in Figure 2.
In Figure 2, ‘Input’ is the layer that takes the time-domain sequence of radar emitter signals. ‘Output’ is the layer whose number of neurons equals the number of signal types. Each ‘Conv1d Unit’ contains one convolutional layer, one max-pooling layer and one batch-normalization layer. The size of the convolutional kernels is 33 in all four ‘Conv1d Units’, and the numbers of filters are 32, 64, 128 and 256 in turn. The ‘Dense Unit’ contains one fully connected layer.
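A hedged Keras sketch of this layout is given below. The AU-1D is passed in as a placeholder callable (its internals follow Section 2.2), and the input length, the width of the ‘Dense Unit’ and the exact position of batch normalization inside each unit are assumptions, since they are not stated in this section.

```python
from tensorflow.keras import layers, Input, Model

def conv1d_unit(x, filters):
    """'Conv1d Unit': one convolutional layer, one max-pooling layer, one batch-normalization layer."""
    x = layers.Conv1D(filters, kernel_size=33, padding='same', activation='relu')(x)
    x = layers.MaxPooling1D(pool_size=2)(x)
    return layers.BatchNormalization()(x)

def build_cnn_1d_am(input_length=1024, n_classes=7, au_1d=lambda x: x):
    inputs = Input(shape=(input_length, 1))               # 'Input': time-domain sequence
    x = inputs
    for filters in (32, 64, 128, 256):                    # four 'Conv1d Units'
        x = conv1d_unit(x, filters)
    x = au_1d(x)                                          # AU-1D between the last pooling and the first dense layer
    x = layers.Flatten()(x)
    x = layers.Dense(128, activation='relu')(x)           # 'Dense Unit' (width assumed)
    outputs = layers.Dense(n_classes, activation='softmax')(x)  # 'Output': one neuron per signal type
    return Model(inputs, outputs)
```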
To reduce the influence of different amplitudes on recognition, amplitude normalization of the original data is required. The original data are the radar emitter signals in the time domain. The expression of amplitude normalization is as follows:
$$d(i, j) = \frac{r(i,\, 1 \le j \le H)}{\max\big(\mathrm{abs}(r(i,\, 1 \le j \le H))\big)}, \quad 1 \le i \le N$$
where $r \in \mathbb{R}^{N \times H}$ are the original time-domain data sequences, $d \in \mathbb{R}^{N \times H}$ are the normalized time-domain data sequences, $N$ is the number of samples, and $H$ is the length of each sample. The result of amplitude normalization is the input of the CNN-1D-AM model for recognition.
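A one-line NumPy implementation of this normalization, assuming the original sequences are stored as an N x H array, might look as follows:

```python
import numpy as np

def amplitude_normalize(r):
    """Divide each time-domain sample by its maximum absolute amplitude.
    r: (N, H) array of original sequences; returns the normalized (N, H) array d."""
    return r / np.max(np.abs(r), axis=1, keepdims=True)
```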
The activation function in the last layer is the ‘SoftMax’ function, so that the probability for each type of signal can be obtained. The final probability for each signal type is as follows:
$$\hat{y}_i = P(y = i \mid out) = \frac{e^{out_i}}{\sum_{j=1}^{T} e^{out_j}}$$
where $\hat{y} = [\hat{y}_1, \hat{y}_2, \ldots, \hat{y}_T]$ and $out = [out_1, out_2, \ldots, out_T]$. $\hat{y}_i$ is the probability that the input data are recognized as class $i$, and $out_i$ is the output of the $i$th neuron in the final output layer, which contains $T$ neurons in total. The category corresponding to the maximum of $\hat{y}$ is the classification result of CNN-1D-AM.
The cross-entropy (CE) function is selected as the cost function. The CE function is written as follows:
$$L(\theta) = -\sum_{i=1}^{T} y_i \ln(\hat{y}_i) = -\sum_{i=1}^{T} y_i \ln\big(g(\theta, x)_i\big)$$
where $y$ is the one-hot encoded label, $g(\theta, x)$ denotes the output of CNN-1D-AM with $x$ as the input, $\theta$ denotes the weights of the model, and $L(\theta)$ is the value of the CE function.
Adaptive moment estimation (ADAM) [18] is chosen as the optimization algorithm. With the CE cost function above as the objective, the update rules of this algorithm can be written as follows:
$$g \leftarrow \nabla_{\theta} L(\theta)$$
$$m \leftarrow \beta_1 m + (1 - \beta_1) g$$
$$v \leftarrow \beta_2 v + (1 - \beta_2) g^2$$
$$m \leftarrow m / (1 - \beta_1^{t})$$
$$\theta \leftarrow \theta - \alpha\, m / (\sqrt{v} + \epsilon)$$
where $g$ is the gradient of $L(\theta)$ computed by the gradient operator $\nabla_{\theta}$, $t$ is the current training step, $m$ and $v$ are the moment vectors with 0 as their initial value, $\beta_1$ and $\beta_2$ are constants usually set to 0.9 and 0.999, $\alpha$ is the learning rate, and $\epsilon$ is a smoothing parameter typically set to $10^{-8}$.
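For reference, one update step of these rules can be written out directly in NumPy. Note that the full ADAM algorithm of [18] also applies a bias correction to $v$; that correction is omitted here to stay close to the equations above.

```python
import numpy as np

def adam_step(theta, grad, m, v, t, alpha=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One ADAM update following the rules above (bias correction applied to m only)."""
    m = beta1 * m + (1 - beta1) * grad          # first-moment estimate
    v = beta2 * v + (1 - beta2) * grad ** 2     # second-moment estimate
    m_hat = m / (1 - beta1 ** t)                # bias-corrected first moment (t = current step)
    theta = theta - alpha * m_hat / (np.sqrt(v) + eps)
    return theta, m, v
```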

3. Experiments and Discussions

The experiment platform parameters for algorithm implementation are shown in Table 1.

3.1. Dataset

Seven different types of radar emitter signals were used to validate the effectiveness of the proposed algorithm, namely, continuous wave (CW), linear frequency modulation (LFM), nonlinear frequency modulation (NLFM), binary phase-shift keying (BPSK), quadrature phase-shift keying (QPSK), binary frequency-shift keying (BFSK) and quadrature frequency-shift keying (QFSK). These seven types of modulation are commonly used in radar systems. The specific parameters of the signals are shown in Table 2. The carrier frequency and frequency bandwidth change within a certain range, which reflects the changing characteristics of the electromagnetic environment.
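As an illustration of how one training sample could be generated under the parameters in Table 2, the sketch below produces a single noisy LFM pulse at a target SNR (2 GHz sampling, 0.5 µs pulse width). The default carrier and bandwidth values are examples inside the stated ranges, and the band-limiting of the noise described in the dataset-generation steps below is omitted for brevity.

```python
import numpy as np

def lfm_pulse_with_noise(snr_db, fs=2e9, pw=0.5e-6, fc=210e6, bw=55e6, rng=np.random):
    """Generate one noisy LFM pulse (sketch). fs: sampling frequency, pw: pulse width,
    fc: starting carrier frequency, bw: swept bandwidth, snr_db: target SNR in dB."""
    t = np.arange(int(fs * pw)) / fs                                    # 1000 time samples
    signal = np.cos(2 * np.pi * (fc * t + 0.5 * (bw / pw) * t ** 2))    # linear frequency sweep
    p_signal = np.mean(signal ** 2)
    p_noise = p_signal / (10 ** (snr_db / 10))                          # noise power for the target SNR
    noise = rng.normal(0.0, np.sqrt(p_noise), size=signal.shape)        # Gaussian noise (not band-limited here)
    return signal + noise
```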
The datasets in the experiment were produced as follows:
(1) First, we generated seven types of radar emitter signals with different values of SNR. The type of noise was Gaussian white noise, and the passband ranged from 90 MHz to 340 MHz. The SNR for each type of signal ranged from −10 dB to 0 dB with 1 dB step, totaling 11 values. The number of samples for each type of signal with each value of SNR was 7000.
(2) Second, we divided the samples into three different datasets. The 7000 samples for each type of signal at each value of SNR were divided into a training set with 1600 samples, a validation set with 400 samples and a testing set with 5000 samples (a per-condition split is sketched after this list).
(3) Third, we built the final datasets. The final training dataset with 123,200 samples, the final validation dataset with 30,800 samples and the final testing dataset with 385,000 samples were formed by combining the per-condition subsets from (2).
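A per-condition split consistent with the numbers above could be sketched as follows; the shuffling and the array layout are assumptions.

```python
import numpy as np

def split_per_condition(samples, n_train=1600, n_val=400, n_test=5000, rng=np.random):
    """Split the 7000 samples of one (signal type, SNR) condition into train/val/test subsets.
    samples: array of shape (7000, H); the three returned subsets are disjoint."""
    idx = rng.permutation(len(samples))
    train = samples[idx[:n_train]]
    val = samples[idx[n_train:n_train + n_val]]
    test = samples[idx[n_train + n_val:n_train + n_val + n_test]]
    return train, val, test
```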

3.2. Experiments of CNN-1D-AM

The model CNN-1D-AM was trained based on the preprocessed data in Section 3.1. The number of parameters and training time per epoch for CNN-1D-AM is shown in Table 3.
As shown in Table 3, the training time of CNN-1D-AM for each epoch with 123,200 samples was less than one minute, which indicates that the model is lightweight and its incremental resource consumption is low.
The average recognition rates for the training dataset and validation dataset during the training session are shown in Figure 3.
Figure 3 shows that after 50 training epochs, the recognition accuracy of CNN-1D-AM on the training dataset reached nearly 100%. Moreover, the recognition accuracy of the model on the validation dataset was over 96%, which indicates that the model converged.
The weights of the neural network with the highest recognition rate on the validation dataset were saved. Under this circumstance, the recognition rate of CNN-1D-AM with 11 values of SNR on the validation dataset is shown in Figure 4.
Figure 4 indicates that the model achieved nearly 100% accuracy when the SNR was above −6 dB. Moreover, the accuracy fell below 90% only when the SNR was lower than −9 dB.
In real applications, the number of samples to be tested is usually much larger than the size of the validation dataset. Therefore, the large-scale testing dataset was used to evaluate the real performance of the model. The recognition rate of CNN-1D-AM for the 11 values of SNR on the testing dataset is shown in Figure 5.
As shown in Figure 5, the average recognition rate of CNN-1D-AM decreased compared with Figure 4. This is because the number of samples in the testing dataset was about 12.5 times that of the validation dataset and 3.125 times that of the training dataset, which corresponds to a model trained on relatively few samples and tested on a much larger number. When the SNR was above −5 dB, the recognition accuracy on the testing dataset was still close to 100%. Interestingly, the recognition rate fell by nearly 1% when the SNR rose from −5 dB to −4 dB.
To examine the specific recognition results of CNN-1D-AM, the confusion matrix for the average recognition performance on the testing dataset is shown in Figure 6. Most of the low recognition rates could be attributed to the BFSK signals, a portion of which were misidentified mainly as CW and BPSK signals. Apart from this, the computed average recognition rates of the other six signal types were all over 93.5%.

3.3. Learned Features

In this section, the features extracted from the signals by the proposed CNN-1D-AM are investigated. Specifically, a sample from the testing dataset was fed into the CNN-1D-AM model. Some features output by the layer before the attention unit and the same features after weighting by the attention unit are plotted in Figure 7 and Figure 8. The weights of the attention unit are shown in Figure 9.
Figure 7 and Figure 8 indicate that the features in different channels and at different spatial positions were weighted by the attention unit; the relative values of the features in certain channels and at certain spatial positions were driven to zero. Moreover, Figure 9 shows that the attention unit assigned different weights to features at different spatial positions and channels.

3.4. Comparison with Other Methods

To further evaluate the effectiveness of the proposed method, several traditional methods and state-of-the-art deep learning-based models were used for comparison.
The traditional methods include SVM [19], which uses seven HOC features as the input, and SSAE [1], which uses the spectral power feature, the time-domain amplitude feature and six HOC features as input. The deep learning-based models include CNNs, deep neural networks (DNN) [20] and stacked autoencoders (SAE) [21].
For the CNN part, the VGG network [22] and ResNet [23] were chosen as comparison models. As the structure of the proposed CNN-1D-AM is not complicated, we chose the VGG network with 13 weight layers (VGG13) and the ResNet with 18 layers (ResNet18). To make the comparison as fair as possible, both VGG13 and ResNet18 were converted from their 2-D forms to 1-D, and their parameters were reset properly according to the literature. Moreover, to investigate the impact of the attention mechanism, a CNN-1D model obtained by removing the attention unit from the proposed model was also used for comparison (CNN-1D-Normal).
For the DNN part, four different models were chosen; their details are shown in Table 4. The adjacent layers were fully connected. The four DNN models differ in the number of layers and the number of neurons per layer.
In addition, three SAE models were chosen, and their structures are shown in Table 5. Each SAE model includes at least one autoencoder and one classifier, and the adjacent layers of the autoencoders and the classifier were fully connected.
The datasets used in this section were the same as before. The inputs of the compared CNN, DNN and SAE models were the time-domain sequences of radar emitter signals, while the input features of SVM and SSAE were calculated from the same datasets.
Figure 10 shows the recognition accuracy of the different methods and models at each value of SNR on the testing dataset. The accuracy of the convolutional neural network models was higher than that of the other methods, and the proposed CNN-1D-AM outperformed all of the other models mentioned above. Moreover, the comparison between CNN-1D-AM and CNN-1D-Normal shows that AU-1D improves the recognition accuracy of the network.
Table 6 shows the number of parameters and the training time per epoch for the convolutional neural network models, which indicates that the CNN-1D-AM model is more efficient and consumes less computation.

4. Conclusions

This paper proposes a novel CNN-1D-AM for radar emitter signal recognition. The designed 1-D convolutional layers can directly extract features from the time-domain sequences of radar emitter signals. The attention unit is integrated into the CNN-1D model so that the recognition accuracy of the network can be further improved. The experimental results indicate that CNN-1D-AM achieves high recognition accuracy on seven different radar signals, and the comparison with several traditional methods and deep learning-based models shows its superior performance. In future work, we hope to develop a CNN-1D model with a new attention mechanism that can further increase recognition accuracy.

Author Contributions

Guidance of theoretical analysis, B.W.; operation of the experiments, analysis and writing of the paper: S.Y.; supervision, P.L.; operation of the experiments, Z.J.; software, S.H., Y.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Fundamental Research Funds for the Central Universities, the Innovation Fund of Xidian University and the National Natural Science Foundation of China (No. 61805189).

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Bouchou, M.; Wang, H.; Lakhdari, M.E.H. Automatic digital modulation recognition based on stacked sparse autoencoder. In Proceedings of the 2017 IEEE 17th International Conference on Communication Technology (ICCT), Chengdu, China, 27–30 October 2017; Institute of Electrical and Electronics Engineers (IEEE): Piscataway, NJ, USA, 2017; pp. 28–32.
2. Park, C.-S.; Choi, J.-H.; Nah, S.-P.; Jang, W.; Kim, D.Y. Automatic Modulation Recognition of Digital Signals using Wavelet Features and SVM. In Proceedings of the 10th International Conference on Advanced Communication Technology, Gangwon-Do, Korea, 17–20 February 2008; Institute of Electrical and Electronics Engineers (IEEE): Piscataway, NJ, USA, 2008; Volume 1, pp. 387–390.
3. Schmidhuber, J. Deep learning in neural networks: An overview. Neural Netw. 2015, 61, 85–117.
4. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. Imagenet classification with deep convolutional neural networks. Adv. Neural Inf. Process. Syst. 2012.
5. Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going deeper with convolutions. In Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015; pp. 1–9.
6. Qu, Z.; Hou, C.; Hou, C.; Wang, W. Radar Signal Intra-Pulse Modulation Recognition Based on Convolutional Neural Network and Deep Q-Learning Network. IEEE Access 2020, 8, 49125–49136.
7. Shao, G.; Chen, Y.; Wei, Y. Deep Fusion for Radar Jamming Signal Classification Based on CNN. IEEE Access 2020, 8, 117236–117244.
8. Wang, F.; Yang, C.; Huang, S.; Wang, H. Automatic modulation classification based on joint feature map and convolutional neural network. IET Radar Sonar Navig. 2019, 13, 998–1003.
9. Liu, Z.; Shi, Y.; Zeng, Y.; Gong, Y. Radar Emitter Signal Detection with Convolutional Neural Network. In Proceedings of the 2019 IEEE 11th International Conference on Advanced Infocomm Technology (ICAIT), Jinan, China, 18–20 October 2019; Institute of Electrical and Electronics Engineers (IEEE): Piscataway, NJ, USA, 2019; pp. 48–51.
10. Cain, L.; Clark, J.; Pauls, E.; Ausdenmoore, B.; Clouse, R.; Josue, T. Convolutional neural networks for radar emitter classification. In Proceedings of the 2018 IEEE 8th Annual Computing and Communication Workshop and Conference (CCWC), Las Vegas, NV, USA, 8–10 January 2018; pp. 79–83.
11. Xiao, Y.; Wei, X.Z. Specific emitter identification of radar based on one dimensional convolution neural network. J. Phys. Conf. Ser. 2020, 1550.
12. Akyon, F.C.; Alp, Y.K.; Gok, G.; Arikan, O. Classification of Intra-Pulse Modulation of Radar Signals by Feature Fusion Based Convolutional Neural Networks. In Proceedings of the 2018 26th European Signal Processing Conference (EUSIPCO), Rome, Italy, 3–7 September 2018; Institute of Electrical and Electronics Engineers (IEEE): Piscataway, NJ, USA, 2018.
13. Chen, L.; Zhang, H.; Xiao, J.; Nie, L.; Shao, J.; Liu, W.; Chua, T.-S. SCA-CNN: Spatial and Channel-Wise Attention in Convolutional Networks for Image Captioning. arXiv 2016, arXiv:1611.05594.
14. Hu, J.; Shen, L.; Albanie, S.; Sun, G.; Wu, E. Squeeze-and-Excitation Networks. IEEE Trans. Pattern Anal. Mach. Intell. 2019, 42, 2011–2023.
15. Woo, S.; Park, J.; Lee, J.-Y.; Kweon, I.S. CBAM: Convolutional Block Attention Module. In Proceedings of the European Conference on Computer Vision, Munich, Germany, 8–14 September 2018.
16. Zeiler, M.D.; Fergus, R. Visualizing and Understanding Convolutional Networks. In Proceedings of the European Conference on Computer Vision, Zurich, Switzerland, 6–12 September 2014; Springer: Cham, Switzerland, 2014.
17. Zagoruyko, S.; Komodakis, N. Paying more attention to attention: Improving the performance of convolutional neural networks via attention transfer. In Proceedings of the International Conference on Learning Representations, Toulon, France, 24–26 April 2017.
18. Kingma, D.P.; Ba, J. Adam: A Method for Stochastic Optimization. arXiv 2014, arXiv:1412.6980.
19. Yang, C.; He, Z.; Peng, Y.; Wang, Y.; Yang, J. Deep Learning Aided Method for Automatic Modulation Recognition. IEEE Access 2019, 7, 109063–109068.
20. Lim, H.-S.; Jung, J.; Lee, J.-E.; Park, H.-M.; Lee, S. DNN-Based Human Face Classification Using 61 GHz FMCW Radar Sensor. IEEE Sens. J. 2020, 20, 12217–12224.
21. Yuan, X.; Huang, B.; Wang, Y.; Yang, C.; Gui, W.-H. Deep Learning-Based Feature Representation and Its Application for Soft Sensor Modeling with Variable-Wise Weighted SAE. IEEE Trans. Ind. Inform. 2018, 14, 3235–3243.
22. Simonyan, K.; Zisserman, A. Very Deep Convolutional Networks for Large-Scale Image Recognition. arXiv 2014, arXiv:1409.1556.
23. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
Figure 1. The structure of a one-dimensional attention unit (AU-1D).
Figure 2. The structure of one-dimensional convolutional neural network with an attention mechanism (CNN-1D-AM).
Figure 3. The average recognition rates of CNN-1D-AM on the training dataset and validation dataset with different quantity of training epochs.
Figure 4. The recognition rates of CNN-1D-AM with 11 values of signal-to-noise ratio (SNR) on the validation dataset.
Figure 5. The recognition rates of CNN-1D-AM with 11 values of SNR on the testing dataset.
Figure 6. The confusion matrices of CNN-1D-AM, based on average recognition rates.
Figure 7. The features filtered by the layer before the attention unit.
Figure 8. The features weighted by the attention unit.
Figure 9. The weights of the attention unit.
Figure 10. Recognition accuracy of different methods and models (CNN-1D-AM, CNN-1D-Normal, ResNet18, VGG13, SSAE, SVM, DNN1, DNN2, DNN3, DNN4, SAE1, SAE2, SAE3) with each value of SNR on the testing dataset.
Table 1. Experiment platform parameters.

Project | Parameter
CPU | Intel Silver 4110
GPU | P400 + P40
RAM | 64 GB
System Version | CentOS 7
Simulation Software | MATLAB 2020a, Python 3.7, Keras 2.2.4
Table 2. Specific parameters of seven types of radar emitter signals.

Signal Type | Carrier Frequency | Parameter
CW | 200 MHz~220 MHz | None
LFM | 200 MHz~220 MHz | Frequency bandwidth: 50 MHz to 60 MHz
NLFM | 200 MHz~220 MHz | Frequency of modulation signal ranges from 10 MHz to 12 MHz
BPSK | 200 MHz~220 MHz | 13-bit Barker code; width of each symbol is 0.038 us
QPSK | 200 MHz~220 MHz | 16-bit Frank code; width of each symbol is 0.03 us
BFSK | 200 MHz~220 MHz, 300 MHz~320 MHz | 13-bit Barker code; width of each symbol is 0.038 us
QFSK | 100 MHz~110 MHz, 150 MHz~160 MHz, 200 MHz~210 MHz, 250 MHz~260 MHz | 16-bit Frank code; width of each symbol is 0.03 us

Note 1: The pulse width for each type of signal is 0.5 us. Note 2: The sampling frequency is 2 GHz.
Table 3. Quantity of parameters and training time per epoch for CNN-1D-AM.

Model | CNN-1D-AM
Quantity of parameters | 3,554,504
Time per epoch | 55 s
Table 4. The details of the four deep neural network (DNN) models for radar emitter signal recognition.

Neurons of the Layers | DNN1 | DNN2 | DNN3 | DNN4
Input layer | 1024 | 1024 | 1024 | 1024
First hidden layer | 512 | 512 | 256 | 512
Second hidden layer | 256 | 256 | 64 | 256
Third hidden layer | 128 | N/A | N/A | 128
Fourth hidden layer | N/A | N/A | N/A | 64
Output layer | 7 | 7 | 7 | 7
Table 5. The structures of the stacked autoencoder (SAE) models for radar emitter signal recognition.

SAE Model | Parts of SAE | First Auto-Encoder | Second Auto-Encoder | Third Auto-Encoder | Classifier
SAE1 | Input layer | 1024 | 512 | 256 | 128
SAE1 | Hidden layer | 512 | 256 | 128 | N/A
SAE1 | Output layer | 1024 | 512 | 256 | 7
SAE2 | Input layer | 1024 | 512 | N/A | 256
SAE2 | Hidden layer | 512 | 256 | N/A | N/A
SAE2 | Output layer | 1024 | 512 | N/A | 7
SAE3 | Input layer | 1024 | N/A | N/A | 512
SAE3 | Hidden layer | 512 | N/A | N/A | N/A
SAE3 | Output layer | 1024 | N/A | N/A | 7
Table 6. The number of parameters and training time per epoch for the convolutional neural network models.

Model | CNN-1D-AM | CNN-1D-Normal | ResNet18 | VGG13
Quantity of parameters | 3,554,504 | 3,520,903 | 4,465,543 | 5,761,863
Time per epoch | 55 s | 50 s | 101 s | 80 s

