Article

Specific Emitter Identification Based on Ensemble Neural Network and Signal Graph

1 Southwest Institute of Electronics Technology, Chengdu 610036, China
2 The School of Electrical and Information Engineering, Tianjin University, Tianjin 300072, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2022, 12(11), 5496; https://doi.org/10.3390/app12115496
Submission received: 26 April 2022 / Revised: 19 May 2022 / Accepted: 24 May 2022 / Published: 28 May 2022
(This article belongs to the Topic Machine and Deep Learning)

Abstract

Specific emitter identification (SEI) is a technology for extracting fingerprint features from a signal and identifying the emitter. In this paper, the authors propose an SEI method based on an ensemble neural network (ENN) and signal graphs, with the following innovations: First, a signal graph is used to represent signal data in a non-Euclidean space. Namely, sequence signal data are constructed into a signal graph, transforming the sequence signal from a Euclidean space to a non-Euclidean space. Hence, the graph feature (the feature of the non-Euclidean space) of the signal can be extracted from the signal graph. Second, the ensemble neural network integrates a graph feature extractor and a sequence feature extractor, allowing it to extract graph and sequence features simultaneously. The ensemble neural network also fuses the graph features with the sequence features, obtaining an ensemble feature that carries information from both Euclidean and non-Euclidean space. Therefore, the ensemble feature contains more effective information for identifying the emitter. The study results demonstrate that this SEI method has higher accuracy and robustness than traditional machine learning methods and common deep learning methods.

1. Introduction

Specific emitter identification (SEI) is a process to extract individual features from the signals of communication emitters of the same model and batch and identify the specific emitter [1]. The manufacturing process of emitters is random, so even emitters of the same model and batch do not have completely identical electrical characteristics [2]. With different electrical parameters, each emitter has unique characteristics, which are thus called a fingerprint. The fingerprint characteristics [3] of the signal arise from the effects of in-phase and quadrature-phase imbalance (IQ imbalance), phase noise, harmonic distortion, and nonlinear distortion [4]. As the fingerprint of an emitter is unique, stable, and difficult to imitate, SEI based on the fingerprint feature extracted from an emitter is an effective way to verify the identity of a communication emitter.
SEI methods can be divided into two categories: manual-feature-based methods and deep learning-based methods. Manual-feature-based methods extract manual features from the emitter signal and then use a machine learning classifier to identify the specific emitter. In contrast, deep learning-based methods automatically extract the features of the specific emitter signal to perform identification.
At present, various methods have already been proposed for extracting the features of an emitter signal [5]. Specifically, the features obtained by spectral analysis of the signal include power spectrum features [6], frequency spectrum features [7], Hilbert spectrum features [8], and variational mode decomposition spectrum features [9]. The features generated in modulation–demodulation include non-linear features of the power amplifier [10], phase error, IQ offset [11], and carrier frequency offset [12,13]. Moreover, there are also many other emitter signal feature extraction and identification methods. For example, in [6], SEI was realized by extracting the signal’s fractal feature. In [14], the authors put forward an FID model for mathematical modeling of an emitter based on the emitter type and used this model to identify the undetermined signal. Wong et al. clustered IQ signals directly to identify specific emitters [15].
However, the manual feature extraction-based method has many shortcomings. First, the SEI system’s identification performance is limited by the effectiveness of the manual feature; additionally, extracting manual features requires that researchers have sufficient prior knowledge of communication theory on emitters [4]. Second, this method has poor generalizability: a feature extraction method valid for identifying one emitter is often invalid for identifying others, so researchers usually need to find a new feature extraction method for each new emitter. Third, many traditional SEI methods perform poorly and cannot meet practical demands. Many recent studies have shown that deep learning outperforms traditional methods in SEI [16], and these shortcomings are gradually being resolved by deep learning.
Deep learning is undergoing rapid development and has been widely used in image processing, natural language processing, and speech recognition, obtaining excellent results. Currently, researchers are applying deep learning technology to SEI, and several studies have shown that deep learning is feasible and has great potential in this area. For example, T. O’Shea et al. [17] systematically analyzed studies on deep neural networks for radio modulation identification. Robyns et al. [7] identified 22 LoRa devices with multi-layer perceptron (MLP) and convolutional neural network (CNN) supervised learning methods. Sankhe et al. [18] identified five 802.11 protocol emitters using a CNN. Following that, Guanxiong Shen et al. identified 25 LoRa emitters using a CNN, an MLP, and Long Short-Term Memory (LSTM), respectively. However, in existing deep learning methods, signal features are generally extracted directly from Euclidean space, and a simple neural network structure is then used to identify the emitter. There is still considerable room for improving the data representation and the network structure.
To effectively extract the features of the signal, we propose signal graph (a new data representation method) and ensemble neural network (ENN). The main innovations in this paper are as follows:
(1)
By constructing a signal graph, the sequence signal is transformed from a Euclidean space to a non-Euclidean space. As a result, the graph convolution method can be used to extract a signal’s non-Euclidean feature from its signal graph. The signal graph provides a new representation of the emitter signal, distinct from the signal sequence.
(2)
The ENN is designed with a sequence feature extractor and a graph feature extractor. Hence, it can extract sequence features from Euclidean space and graph features from non-Euclidean space and fuse the two features together to enrich the feature information extracted from the signal and better identify the emitter.
We conduct extensive experiments on the ESP20 and RML2016a datasets. Experimental results show that the proposed ENN outperforms state-of-the-art methods on SEI tasks.

2. Ensemble Neural Network and Signal Graph-Based SEI

2.1. Signal Graph and Improved Graph Convolution

2.1.1. Signal Graph

An emitter signal can be input into a neural network directly or after being converted into another form. In this paper, the raw signal without any processing is called a signal sequence; it has two channels (i.e., components I and Q of the emitter signal) and a length of N. A signal sequence is data in Euclidean space: its elements are arranged according to their order in time, and the geometric position relationships between the elements are fixed. To represent the relationships between the nodes of a signal sequence, adjacent edges are added between the nodes to constitute a signal graph, transforming the signal sequence from a Euclidean space to a non-Euclidean space. A non-Euclidean feature of the signal can then be extracted from the signal graph by graph convolution.
Figure 1 shows a signal sequence with a length of 10. The signal graph is constructed by adding adjacent edges between the nodes. After that, the nodes in the signal graph no longer carry the positional relationships of the signal sequence, only the connection relationships.
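As an illustration, the chain construction described above can be sketched in a few lines of NumPy; the helper name `signal_graph_adjacency` is our own, not from the paper:

```python
import numpy as np

def signal_graph_adjacency(length):
    """Adjacency matrix of a signal graph: each sample node is joined
    to its immediate neighbours in the sequence by an adjacent edge."""
    A = np.zeros((length, length))
    idx = np.arange(length - 1)
    A[idx, idx + 1] = 1.0  # edge to the next node
    A[idx + 1, idx] = 1.0  # undirected graph: edge back as well
    return A

# The length-10 example from Figure 1
A = signal_graph_adjacency(10)
```

Only the connection relationships survive in `A`; the original positions of the nodes are discarded, exactly as the text notes.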

2.1.2. Improved Graph Convolution for the Signal Graph

The formula for the original graph convolution can be defined as:
$$H^{l+1} = \mathrm{GConv}(H^{l}, W^{l}) = \sigma\left(\hat{L} H^{l} W^{l}\right) = \sigma\left(D^{-\frac{1}{2}} A D^{-\frac{1}{2}} H^{l} W^{l}\right)$$
where $H^{l}$ is the output of the $l$-th graph convolution layer, $W^{l}$ is the parameter matrix of the $l$-th layer, $A$ is the adjacency matrix of the graph, and $D$ is the degree matrix of $A$. In the original signal graph, all adjacent nodes have the same impact on the central node. However, different adjacent nodes should have different impacts on the central node because of their different positions in the signal sequence [19].
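As a reference point, this propagation rule can be sketched in NumPy, following the formula as written (note that no self-loops are added here, unlike the common Kipf–Welling variant); the function name and activation choice are ours:

```python
import numpy as np

def gconv(H, W, A, act=np.tanh):
    """One layer of the standard graph convolution:
    H^{l+1} = act(D^{-1/2} A D^{-1/2} H^l W^l)."""
    deg = np.maximum(A.sum(axis=1), 1e-12)   # degree of each node
    D_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
    L_hat = D_inv_sqrt @ A @ D_inv_sqrt      # normalized adjacency
    return act(L_hat @ H @ W)
```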
In this regard, the standard graph convolution formula was improved in this study. Each adjacent edge of the adjacent matrix was given a weight parameter to adjust the impact of different adjacent nodes on the central node in the summation and updating. The weight parameter of the adjacent edge is determined by the following formula:
$$k(d) = \frac{1}{\sqrt{2\pi}} \exp\left(-\frac{d^{2}}{2\sigma^{2}}\right)$$
In the formula, $k$ is the weight parameter of the adjacent edge, $d$ is the distance between an adjacent node and the central node in the signal sequence, and $\sigma$ is the shape parameter. The larger the distance between an adjacent node and the central node, the smaller the impact of the former on the latter, and vice versa. The set of weight parameters is written as a weight matrix $K$ ($length \times length$). The following formula shows the relation between the weight matrix $K$ and its elements $k_{ij}$:
$$K = \sum_{d \in [-160,\,160]} \mathrm{diag}(length,\, d,\, k_{|d|}) = \begin{bmatrix} k_0 & k_1 & k_2 & \cdots & k_{m-1} & k_m \\ k_1 & k_0 & k_1 & \cdots & k_{m-2} & k_{m-1} \\ k_2 & k_1 & k_0 & \cdots & k_{m-3} & k_{m-2} \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ k_{m-1} & k_{m-2} & k_{m-3} & \cdots & k_0 & k_1 \\ k_m & k_{m-1} & k_{m-2} & \cdots & k_1 & k_0 \end{bmatrix}$$
In the formula, $\mathrm{diag}$ is a diagonal matrix generator, where $length$ controls the dimensions of the generated matrix, $d$ controls the position of the diagonal relative to the principal diagonal, and $k_{|d|}$ controls the value of the diagonal elements. Finally, the Hadamard product of the weight matrix $K$ and the adjacency matrix $A$ assigns the corresponding weight to each adjacent edge, so as to control the impact of the different adjacent nodes on the central node during its update. The improved graph convolution for extracting features from the signal graph is therefore written as:
$$H^{l+1} = \mathrm{GConv}(H^{l}, W^{l}) = \sigma\left(D^{-\frac{1}{2}} (K \circ A) D^{-\frac{1}{2}} H^{l} W^{l}\right)$$
where $H^{l}$ is the output of the $l$-th graph convolution layer, $W^{l}$ is the parameter matrix of the $l$-th layer, $A$ is the adjacency matrix of the graph, $D$ is the degree matrix of $A$, and $K$ is the proposed weight matrix.
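The weight construction and the improved layer can be sketched together. Here `sigma` defaults to an arbitrary shape-parameter value, the degree matrix is taken from $A$ as the text states, and all function names are our own:

```python
import numpy as np

def gaussian_weights(length, sigma=2.0):
    """Toeplitz weight matrix K with entries
    k(d) = exp(-d^2 / (2 sigma^2)) / sqrt(2 pi),
    where d = |i - j| is the node distance in the sequence."""
    d = np.abs(np.subtract.outer(np.arange(length), np.arange(length)))
    return np.exp(-d ** 2 / (2 * sigma ** 2)) / np.sqrt(2 * np.pi)

def improved_gconv(H, W, A, K, act=np.tanh):
    """Improved layer: act(D^{-1/2} (K ∘ A) D^{-1/2} H W),
    with D the degree matrix of A."""
    KA = K * A                               # Hadamard product re-weights each edge
    deg = np.maximum(A.sum(axis=1), 1e-12)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
    return act(D_inv_sqrt @ KA @ D_inv_sqrt @ H @ W)
```

Because `K` is symmetric Toeplitz, each edge weight depends only on how far apart the two nodes were in the original sequence.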

2.2. The Proposed Ensemble Neural Network

In this study, an ensemble neural network (ENN) (Figure 2) was designed to make full use of sequence features and graph features, which lie in two different spaces. It integrates a sequence feature extractor with a graph feature extractor and has a sequence classifier, a graph classifier, and an ensemble classifier.
Based on the two feature extractors, the ENN can extract sequence features in Euclidean space and graph features in non-Euclidean space from a signal sequence and a signal graph simultaneously and then fuse the two features together, obtaining the ensemble feature. As the ensemble feature contains the features in both Euclidean space and non-Euclidean space, it has more sufficient feature information for SEI than a single sequence feature or a graph feature.
The sequence classifier can identify a specific emitter based on the sequence feature extracted by the sequence feature extractor, and the graph classifier can identify a specific emitter based on the graph feature extracted by the graph feature extractor. The ensemble classifier, in turn, uses the combination of sequence features and graph features to identify a specific emitter. The ENN treats the result obtained by the ensemble classifier as the final result, while the sequence classifier and graph classifier support the training of the ENN. Section 2.2.3 shows how they help the ENN obtain better SEI performance.

2.2.1. Components of the Modules

Sequence feature extractor: The sequence feature extractor extracts the features of the signal from Euclidean space. To this end, we use three cascaded convolutional layers with a kernel size of 3 × 3 to form the feature extractor. In selecting the number of convolutional layers, we considered the overfitting and underfitting of the model: when the model capacity is too large, the model overfits, and when the model capacity is insufficient, it is prone to underfitting. In our model, three is a suitable value, obtained through cross-validation. In addition, the number of output channels of the three convolutional layers is set to 32, which is also the result of multiple attempts. Defining the sequence feature extractor as $S(\cdot)$ and the sequence signal as $X_S$, the output feature $z_S$ can be expressed as:
$$z_S = S(X_S)$$
Graph feature extractor: The graph feature extractor extracts the features of the signal from non-Euclidean space. It consists of one graph convolution layer (the input layer) connected with two convolution layers with a kernel size of 3 × 3. Corresponding to the sequence feature extractor, we also set the number of output channels of the graph convolutional layer and the convolutional layers to 32. Defining the graph feature extractor as $G(\cdot)$, the output feature $z_G$ can be expressed as:
$$z_G = G(X_S)$$
For the graph feature extractor, the first layer is the graph convolution layer, and the extracted feature is a graph, so the second layer cannot process graph-structured data directly. To handle this, the features extracted by the first layer are transformed before convolution. Specifically, all node connections in the graph feature are removed while the node information is kept; then all the nodes are reconstructed into a sequence based on their order in the signal. Finally, the two remaining convolution layers extract further features and complete the extraction of the graph feature.
Feature fusion module: Feature fusion module is used to fuse z S and z G , which can be written as:
$$z_E = [z_S, z_G]$$
where $[\cdot]$ denotes feature concatenation and $z_E$ denotes the fused feature.
Classifier: The classifiers predict the classification results. To train the model efficiently, we classify and supervise the sequence feature, the graph feature, and the fused feature. The classification results of the three classifiers can be expressed as:
$$p_S = C_S(z_S)$$
$$p_G = C_G(z_G)$$
$$p_E = C_E([z_S, z_G])$$
where $C_S(\cdot)$ denotes the sequence feature classifier, $C_G(\cdot)$ the graph feature classifier, and $C_E(\cdot)$ the fused feature classifier; $p_S$, $p_G$, and $p_E$ denote the corresponding classification results. The three classifiers all adopt a three-layer MLP structure, with 256, 80, and 20 neurons in the respective layers.
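A minimal PyTorch sketch of the overall structure, assuming 1D convolutions over the two I/Q channels and a plain convolution standing in for the graph convolution layer; the class name, pooling, and layer details are illustrative, not the authors' exact implementation:

```python
import torch
import torch.nn as nn

class ENN(nn.Module):
    """Sketch of the ensemble network: a sequence branch, a graph branch,
    feature fusion by concatenation, and three MLP classifiers
    (256-80-num_classes, as described in the text)."""
    def __init__(self, num_classes=20):
        super().__init__()
        # Sequence branch: three conv layers, 32 output channels each.
        self.seq_extractor = nn.Sequential(
            nn.Conv1d(2, 32, 3, padding=1), nn.ReLU(),
            nn.Conv1d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv1d(32, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten())
        # Graph branch: the first layer here is a placeholder for the
        # improved graph convolution, followed by two conv layers.
        self.graph_extractor = nn.Sequential(
            nn.Conv1d(2, 32, 3, padding=1), nn.ReLU(),  # stands in for GraphConv
            nn.Conv1d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv1d(32, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten())
        def mlp(in_dim):
            return nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                                 nn.Linear(256, 80), nn.ReLU(),
                                 nn.Linear(80, num_classes))
        self.cls_S, self.cls_G, self.cls_E = mlp(32), mlp(32), mlp(64)

    def forward(self, x):
        z_S = self.seq_extractor(x)
        z_G = self.graph_extractor(x)
        z_E = torch.cat([z_S, z_G], dim=1)  # feature fusion by concatenation
        return self.cls_S(z_S), self.cls_G(z_G), self.cls_E(z_E)
```

A forward pass on a batch of ESP20-shaped samples (2 channels, 160 samples) yields the three classifier outputs $p_S$, $p_G$, $p_E$.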

2.2.2. Loss Function

The ENN has three outputs, from the sequence classifier, graph classifier, and ensemble classifier, respectively. In this paper, three sub-loss functions $L_S$, $L_G$, and $L_E$ are set for supervising the three classifiers, respectively. They are all cross-entropy loss functions:
$$L_S = -\frac{1}{N}\sum_{i=1}^{N}\sum_{j=1}^{k} y_{S_i}^{(j)} \log\left[p_{S_i}^{(j)}\right]$$
$$L_G = -\frac{1}{N}\sum_{i=1}^{N}\sum_{j=1}^{k} y_{G_i}^{(j)} \log\left[p_{G_i}^{(j)}\right]$$
$$L_E = -\frac{1}{N}\sum_{i=1}^{N}\sum_{j=1}^{k} y_{E_i}^{(j)} \log\left[p_{E_i}^{(j)}\right]$$
where $N$ is the number of samples, $k$ is the number of classes, $y$ is the sample label, and $p$ is the probability predicted by the model.

2.2.3. Training

The ENN was trained in two steps: local classifier supervision and global classifier supervision. In the first step, the outputs from the sequence classifier and the graph classifier were supervised by the loss functions at the same time. Hence, the total loss function L s t e p 1 in this step was the sum of L S and L G . The purpose was to enable the sequence/graph feature extractor and classifier to extract and classify corresponding features.
$$L_{step1} = L_S + L_G$$
The second step started after 90% of the total training epochs had passed. In this step, the total loss function $L_{step2}$ included $L_S$, $L_G$, and $L_E$. After the first training step, the sequence and graph feature extractors of the ENN were already able to extract features; on this basis, a good ensemble classifier was obtained with only fine-tuning.
$$L_{step2} = L_S + L_G + L_E$$
To summarize, the ENN first used a sequence feature extractor and a graph feature extractor to extract sequence and graph features, respectively, and then fused the two features into an ensemble feature. During training, the first step was to supervise the outputs from the sequence classifier and the graph classifier to make the sequence feature extractor and the graph feature extractor able to extract corresponding features effectively. In the second step, the output from the ensemble classifier was also supervised, and the network was fine-tuned on the basis of the first step, obtaining the optimum SEI performance.
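The two-step loss schedule can be sketched as follows (the function name is ours; the 90% switch point follows the text):

```python
import torch
import torch.nn.functional as F

def enn_loss(p_S, p_G, p_E, y, epoch, num_epochs):
    """Two-step supervision: for the first 90% of epochs only the local
    classifiers are supervised (L_step1 = L_S + L_G); after that the
    ensemble term is added (L_step2 = L_S + L_G + L_E)."""
    L_S = F.cross_entropy(p_S, y)
    L_G = F.cross_entropy(p_G, y)
    if epoch < int(0.9 * num_epochs):
        return L_S + L_G          # step 1: train both extractors
    return L_S + L_G + F.cross_entropy(p_E, y)  # step 2: fine-tune ensemble
```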

3. Experimental Data

ESP20 Dataset. ESP20 is a WIFI emitter signal dataset acquired by the authors. It contains the signal data acquired from 20 WIFI emitters of the same model (ESP8266) and batch. The acquisition process is presented in Figure 3. The dimension of each sample signal is 160 × 2. Specifically, 160 corresponds to the time dimension of the signal, and 2 corresponds to the in-phase and quadrature-phase of the signal.
RML2016a Dataset. The open-source RML2016a dataset was used to test the performance of the ENN on emitter modulation recognition. This modulation signal dataset was developed by O’Shea et al. using the GNU Radio signal simulation software [20], with which they generated emitter signals with 11 different modulation methods. The information of the ESP20 and RML2016a datasets is summarized in Table 1.

4. Experiment and Result Analysis

An SEI experiment was conducted on the self-acquired ESP20 dataset, and an emitter modulation recognition experiment was carried out on the open-source RML2016a dataset. The evaluation indicators of the experiments include accuracy, precision, recall, F1-score, and confusion matrix. The first experiment proceeded as follows: first, noiseless raw data were used to train the ENN; second, the above evaluation indicators were used to evaluate the basic performance of the network; third, additive white Gaussian noise was used to adjust the SNR of the data and test the robustness of the network at different SNRs. In the second experiment, as there was no noiseless raw signal, the modulation signals with different noise levels provided in the dataset were used directly to train the ENN, which was then tested on the test-set signals of the corresponding SNR.
Comparison methods. To demonstrate the superiority of the proposed method, the ENN was compared with two deep learning methods and two traditional machine learning methods in the experiments. For the deep learning methods, we compare our model with CNN and GCN. The structure of CNN is consistent with the network structure constituted by the sequence feature extractor and sequence classifier of the ENN, while the structure of GCN accords with the network structure constituted by the graph feature extractor and graph classifier of the ENN. Therefore, GCN and CNN are not only state-of-the-art deep-learning-based classifiers but also parts of our model, so the comparison with CNN and GCN also serves as an ablation study of the ENN structure. In addition, the ENN was compared with the traditional decision tree and k-nearest neighbor (KNN) algorithms. Both KNN and the decision tree are implemented using scikit-learn. For the KNN algorithm, we use all samples in the training set as known classes, all samples in the test set as test samples, and set the value of K to 3. For the decision tree method, we adopted the default values from scikit-learn. For CNN and GCN, the number of network layers and the number of neurons use the configuration described in Section 2.2.1. All hyperparameters were obtained by cross-validation. The results showed that the method proposed in this paper has better SEI performance than traditional methods and existing deep learning methods.
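The two traditional baselines can be reproduced in a few lines of scikit-learn; the data below is a random stand-in shaped like flattened ESP20 samples (160 × 2 → 320 features), not the actual dataset:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

# Hypothetical stand-in data shaped like flattened ESP20 samples.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(100, 320))
y_train = rng.integers(0, 20, 100)   # 20 emitter classes
X_test = rng.normal(size=(20, 320))

# K = 3, as in the text; decision tree uses scikit-learn defaults.
knn = KNeighborsClassifier(n_neighbors=3).fit(X_train, y_train)
tree = DecisionTreeClassifier().fit(X_train, y_train)
pred_knn = knn.predict(X_test)
pred_tree = tree.predict(X_test)
```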
Training details. The ENN was trained in two steps. It was trained and tested on an NVIDIA GeForce GTX 950M GPU with the PyTorch deep learning framework under Windows 10. The entire network was optimized by the Adam optimizer with an initial learning rate of 0.001 for 50 epochs. Note that in the experiments on the ESP20 dataset, the ENN was trained only with noiseless raw signals and tested with signals of various SNRs.

4.1. Results of SEI Experiment on the ESP20 Dataset

(1) Analysis of comprehensive performance indicators. This section analyzes the comprehensive indicators and robustness of the proposed method on the ESP20 dataset. Table 2 lists the results of the proposed method and the comparison methods on the different evaluation indicators at various SNRs; the optimum results are bolded. As shown in Table 2, ENN achieved the best performance in all four evaluation indicators. GCN ranked second, outperforming CNN; this comparison between GCN and CNN indicates that the signal graph proposed in this paper is an effective representation of signal data. Moreover, ENN performed better than both CNN and GCN, which indicates that ENN can effectively extract the fingerprint feature of a signal and identify the specific emitter. Figure 4 shows the confusion matrices of the methods: the pixels of ENN concentrate clearly on the diagonal, while those of the comparison methods are spread off the diagonal. This intuitively shows that the ENN has the best SEI performance.
(2) Robustness analysis at various SNRs. As shown in Table 2, the classification indicators of the ENN were clearly superior to those of CNN and GCN at different SNRs. Figure 5 illustrates that the proposed method has higher accuracy than the comparison methods at all SNRs, which demonstrates its robustness. The results in poorer signal environments are especially telling: when the SNR was 20 dB, the identification accuracy of CNN was 76.38%, that of GCN was 84.51%, and that of the proposed method was 85.23%. The accuracy of GCN, based on the signal graph and the improved graph convolution, was about 8% higher than that of CNN, which implies that the signal graph is effective for representing signal data. Meanwhile, the accuracy of ENN was about 1% higher than that of GCN, further indicating the excellent robustness of ENN.

4.2. Results of Emitter Modulation Method Recognition Experiment on the RML2016a Dataset

This section analyzes the effectiveness of the proposed method for emitter modulation recognition on the open-source RML2016a dataset. Table 3 displays the modulation recognition results of ENN and the comparison methods at various SNRs; the optimum results are bolded. Figure 6 shows the accuracies of the methods at various SNRs. As the signals of the RML2016a dataset have low SNR overall, the traditional methods became almost unable to identify the emitter on this dataset, while the three deep learning methods (ENN, CNN, and GCN) still showed high robustness, with significantly higher accuracy than the traditional machine learning methods. Figure 7 presents the confusion matrices of the methods, where the pixels of ENN are the most concentrated on the diagonal, indicating that the proposed SEI method performs best. The specific results are presented in Table 3.
As shown, the proposed method obtained the best accuracy, precision, recall, and F1-score. Both CNN and GCN have advantages. CNN performed better than GCN at high SNR, but was less robust than GCN at low SNR. In summary, the signal graph and ENN combined method proposed in this paper has excellent identification ability and robustness at various SNRs.

5. Conclusions

Given that the signals used in some existing studies were sourced from simulation software and the resulting datasets were less representative of practice, the authors designed a WIFI emitter communication platform, acquired WIFI signals in practice, and built a dataset. To fully extract fingerprint features from a signal, a signal graph was designed to transform the signal sequence from a Euclidean space to a non-Euclidean space; furthermore, an ensemble neural network (ENN) was designed that extracts features from the signal sequence and the signal graph simultaneously. The extracted sequence and graph features were then fused, forming an ensemble feature with richer information than a single sequence or graph feature. Finally, the ENN was trained in two steps. The experiments used four comparison methods: two traditional SEI methods (decision tree and k-nearest neighbor) and two deep learning SEI methods (CNN and GCN). Among them, CNN has the same network structure as the one formed by the sequence feature extractor and sequence classifier of the ENN, while GCN has a structure identical to the one formed by the graph feature extractor and graph classifier of the ENN. Therefore, the experiments on CNN and GCN constitute an ablation study of the network modules of ENN, indicating the effectiveness of each module. SEI was conducted on the ESP20 dataset, and emitter modulation recognition was carried out on the open-source RML2016a dataset. The experimental results demonstrated that the proposed method performed best in accuracy, precision, recall, and F1-score at various SNRs, indicating that it effectively improves SEI performance over CNN and GCN.
Furthermore, the best performance of the proposed method at various SNRs proves that this method has the highest robustness, and the optimum performance of the method in different SEIs reveals that our method has high generalization.

Author Contributions

Methodology, C.X.; Writing—original draft, Y.P.; Writing—review & editing, Y.Z., J.H. and S.L. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China (62171320 and U2006211) and the National Key Research and Development Program of China (2020YFC1523200).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are contained within the article.

Acknowledgments

The authors acknowledge the financial supports of the National Natural Science Foundation of China (62171320 and U2006211) and the National Key Research and Development Program of China (2020YFC1523200).

Conflicts of Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

References

  1. Talbot, K.I.; Duley, P.R.; Hyatt, M.H. Specific emitter identification and verification. Technol. Rev. 2003, 113, 133.
  2. Chen, Y.; Chen, X.; Lei, Y. Emitter Identification of Digital Modulation Transmitter Based on Nonlinearity and Modulation Distortion of Power Amplifier. Sensors 2021, 21, 4362.
  3. Kang, J.; Shin, Y.; Lee, H.; Park, J.; Lee, H. Radio Frequency Fingerprinting for Frequency Hopping Emitter Identification. Appl. Sci. 2021, 11, 10812.
  4. Sankhe, K.; Belgiovine, M.; Zhou, F.; Riyaz, S.; Ioannidis, S.; Chowdhury, K. ORACLE: Optimized radio classification through convolutional neural networks. In Proceedings of the IEEE INFOCOM 2019-IEEE Conference on Computer Communications, Paris, France, 29 April–2 May 2019; pp. 370–378.
  5. Jiang, W.; Cao, Y.; Yang, L.; He, Z. A time-space domain information fusion method for specific emitter identification based on Dempster–Shafer evidence theory. Sensors 2017, 17, 1972.
  6. Zhu, M.; Zhang, X.; Qi, Y.; Ji, H. Compressed sensing mask feature in time-frequency domain for civil flight radar emitter recognition. In Proceedings of the 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Calgary, AB, Canada, 15–20 April 2018; pp. 2146–2150.
  7. Robyns, P.; Marin, E.; Lamotte, W.; Quax, P.; Singelée, D.; Preneel, B. Physical-layer fingerprinting of LoRa devices using supervised and zero-shot learning. In Proceedings of the 10th ACM Conference on Security and Privacy in Wireless and Mobile Networks, Boston, MA, USA, 18–20 July 2017; pp. 58–63.
  8. Zhang, J.; Wang, F.; Dobre, O.A.; Zhong, Z. Specific emitter identification via Hilbert–Huang transform in single-hop and relaying scenarios. IEEE Trans. Inf. Forensics Secur. 2016, 11, 1192–1205.
  9. Satija, U.; Trivedi, N.; Biswal, G.; Ramkumar, B. Specific emitter identification based on variational mode decomposition and spectral features in single hop and relaying scenarios. IEEE Trans. Inf. Forensics Secur. 2018, 14, 581–591.
  10. Polak, A.C.; Dolatshahi, S.; Goeckel, D.L. Identifying wireless users via transmitter imperfections. IEEE J. Sel. Areas Commun. 2011, 29, 1469–1479.
  11. Brik, V.; Banerjee, S.; Gruteser, M.; Oh, S. Wireless device identification with radiometric signatures. In Proceedings of the 14th ACM International Conference on Mobile Computing and Networking, San Francisco, CA, USA, 14–19 September 2008; pp. 116–127.
  12. Nguyen, N.T.; Zheng, G.; Han, Z.; Zheng, R. Device fingerprinting to enhance wireless security using nonparametric Bayesian method. In Proceedings of the 2011 Proceedings IEEE INFOCOM, Shanghai, China, 10–15 April 2011; pp. 1404–1412.
  13. Liu, P.; Yang, P.; Song, W.Z.; Yan, Y.; Li, X.Y. Real-time identification of rogue WiFi connections using environment-independent physical features. In Proceedings of the IEEE INFOCOM 2019-IEEE Conference on Computer Communications, Paris, France, 29 April–2 May 2019; pp. 190–198.
  14. Zheng, T.; Sun, Z.; Ren, K. FID: Function modeling-based data-independent and channel-robust physical-layer identification. In Proceedings of the IEEE INFOCOM 2019-IEEE Conference on Computer Communications, Paris, France, 29 April–2 May 2019; pp. 199–207.
  15. Wong, L.J.; Headley, W.C.; Andrews, S.; Gerdes, R.M.; Michaels, A.J. Clustering learned CNN features from raw I/Q data for emitter identification. In Proceedings of the MILCOM 2018—2018 IEEE Military Communications Conference (MILCOM), Los Angeles, CA, USA, 29–31 October 2018; pp. 26–33.
  16. Wu, L.; Zhao, Y.; Wang, Z.; Abdalla, F.Y.; Ren, G. Specific emitter identification using fractal features based on box-counting dimension and variance dimension. In Proceedings of the 2017 IEEE International Symposium on Signal Processing and Information Technology (ISSPIT), Bilbao, Spain, 18–20 December 2017; pp. 226–231.
  17. West, N.E.; O’Shea, T. Deep architectures for modulation recognition. In Proceedings of the 2017 IEEE International Symposium on Dynamic Spectrum Access Networks (DySPAN), Baltimore, MD, USA, 6–9 March 2017; pp. 1–6.
  18. Riyaz, S.; Sankhe, K.; Ioannidis, S.; Chowdhury, K. Deep learning convolutional neural networks for radio identification. IEEE Commun. Mag. 2018, 56, 146–152.
  19. Kipf, T.N.; Welling, M. Semi-supervised classification with graph convolutional networks. In Proceedings of the 5th International Conference on Learning Representations (ICLR), Toulon, France, 24–26 April 2017.
  20. O’Shea, T.J.; West, N. Radio machine learning dataset generation with GNU Radio. In Proceedings of the GNU Radio Conference, Charlotte, NC, USA, 20–24 September 2016; Volume 1.
Figure 1. A signal graph constructed from a signal sequence.
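The exact construction rule for the signal graph is defined in the body of the paper; as an illustrative sketch only, assuming each signal sample becomes a graph node linked to its k nearest temporal neighbors, the mapping from a sequence to an adjacency matrix might look like:

```python
import numpy as np

def sequence_to_signal_graph(signal, k=2):
    """Illustrative sketch: turn a 1-D signal sequence into a graph.

    Each sample becomes a node; here a node is connected to its k
    temporal neighbors on each side. The paper's actual construction
    rule may differ -- this only demonstrates the re-representation of
    Euclidean sequence data in a non-Euclidean (graph) space.
    """
    n = len(signal)
    adj = np.zeros((n, n), dtype=np.float32)
    for i in range(n):
        for j in range(max(0, i - k), min(n, i + k + 1)):
            if i != j:
                adj[i, j] = 1.0
    # Node features: the raw sample value at each node.
    features = np.asarray(signal, dtype=np.float32).reshape(n, 1)
    return adj, features

adj, feat = sequence_to_signal_graph(np.sin(np.linspace(0.0, 6.28, 160)))
print(adj.shape, feat.shape)  # (160, 160) (160, 1)
```

The function names and the k-neighbor rule above are assumptions for illustration, not the paper's method.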
Figure 2. Structure of the ensemble neural network proposed in this paper.
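Figure 2 shows the two-branch design: a graph feature extractor and a sequence feature extractor run in parallel, and their outputs are fused into one ensemble feature containing both non-Euclidean and Euclidean information. A minimal NumPy sketch of the fusion step follows; simple concatenation is assumed here, while the ensemble network in the paper defines its own fusion layer:

```python
import numpy as np

def fuse_features(graph_feat, seq_feat):
    """Fuse non-Euclidean (graph-branch) and Euclidean (sequence-branch)
    feature vectors into one ensemble feature.

    Concatenation is an illustrative assumption; the actual fusion
    operator is specified in the paper body.
    """
    return np.concatenate([graph_feat, seq_feat], axis=-1)

graph_feat = np.random.rand(32, 64)   # batch of 32 graph-branch features
seq_feat = np.random.rand(32, 128)    # batch of 32 sequence-branch features
ensemble = fuse_features(graph_feat, seq_feat)
print(ensemble.shape)  # (32, 192)
```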
Figure 3. The WIFI emitter communication platform and the production process of the signal dataset.
Figure 4. Confusion matrices of ENN and the comparison methods. The confusion matrix of our model is highlighted with a red square; the deeper the blue, the higher the classification accuracy.
Figure 5. Accuracies of ENN and the comparison methods at various SNRs.
Figure 6. Accuracies of the proposed method and the comparison methods at various SNRs.
Figure 7. Confusion matrices of the proposed method and the comparison methods at an SNR of 6 dB. The confusion matrix of our model is highlighted with a red square; the deeper the blue, the higher the classification accuracy.
Table 1. Information summary of the ESP20 dataset and the RML2016a dataset.

| Item | ESP20 | RML2016a |
|---|---|---|
| Emitter model | WIFI device signal | Software-simulated signals of different modulation methods |
| Number of classes | 20 | 11 |
| Samples per class | 1300 | 1000 |
| Sample size | Length = 160; number of channels = 2 | Length = 128; number of channels = 2 |
| Training set division | Samples 1–1000 of each class | Samples 1–700 of each class |
| Test set division | Samples 1001–1300 of each class | Samples 701–1000 of each class |
| Remarks | The 20 WIFI devices have the same model (ESP8266) and batch; the retained signal is the LLTF part of the complete WIFI signal. | The 11 modulation methods are 8PSK, AM-DSB, AM-SSB, BPSK, CPFSK, GFSK, PAM4, QAM16, QAM64, QPSK, and WBFM. |
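The index-based split in Table 1 can be sketched as follows. The array layout and names are illustrative assumptions (samples stored per class as an array of shape (classes, samples_per_class, channels, length)), not the paper's actual loader:

```python
import numpy as np

def split_dataset(data, n_train):
    """Split per-class samples into train/test by index, as in Table 1.

    data: array of shape (num_classes, samples_per_class, channels, length).
    The first n_train samples of each class form the training set; the
    remainder form the test set.
    """
    train = data[:, :n_train]
    test = data[:, n_train:]
    return train, test

# ESP20: 20 classes x 1300 samples, 2 channels, length 160
esp20 = np.zeros((20, 1300, 2, 160), dtype=np.float32)
train, test = split_dataset(esp20, n_train=1000)
print(train.shape, test.shape)  # (20, 1000, 2, 160) (20, 300, 2, 160)
```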
Table 2. A summary of the evaluated indicators of different methods at various SNRs. Experimental results of our model (ENN) are bolded.

| SNR | Index | ENN | CNN | GCN | KNN | TREE |
|---|---|---|---|---|---|---|
| raw | Accuracy | **97.38** | 94.4 | 95.78 | 79.91 | 62.9 |
| raw | Precision | **97.33** | 93.76 | 94 | 80.61 | 63.17 |
| raw | Recall | **97.41** | 93.56 | 94.45 | 79.79 | 62.77 |
| raw | F1-Score | **97.14** | 93.08 | 93.47 | 79.68 | 62.71 |
| 26 dB | Accuracy | **96.73** | 92.31 | 92.38 | 84.52 | 47.23 |
| 26 dB | Precision | **96.07** | 93.76 | 94.33 | 85.14 | 47.65 |
| 26 dB | Recall | **98.53** | 95.61 | 92.41 | 84.73 | 47.19 |
| 26 dB | F1-Score | **96.7** | 93.98 | 91.84 | 84.48 | 47.37 |
| 24 dB | Accuracy | **94.51** | 89.3 | 92.87 | 85.18 | 41.67 |
| 24 dB | Precision | **96.2** | 89.96 | 94.85 | 86.06 | 41.6 |
| 24 dB | Recall | **95.69** | 88.56 | 95.7 | 85.02 | 41.57 |
| 24 dB | F1-Score | **95.22** | 88.13 | 93.93 | 85.09 | 41.53 |
| 22 dB | Accuracy | **91.56** | 85.1 | 89.65 | 84.12 | 34.47 |
| 22 dB | Precision | **93.55** | 81.03 | 84.29 | 84.72 | 34.31 |
| 22 dB | Recall | **93.37** | 83.94 | 80.68 | 83.95 | 34.53 |
| 22 dB | F1-Score | **92.21** | 80.44 | 80.68 | 83.86 | 34.36 |
| 20 dB | Accuracy | **85.23** | 76.38 | 84.51 | 83.23 | 25.55 |
| 20 dB | Precision | **90.62** | 74.44 | 84.64 | 84.07 | 25.42 |
| 20 dB | Recall | **89.27** | 71 | 82.06 | 83.2 | 25.56 |
| 20 dB | F1-Score | **88.97** | 70.02 | 80.03 | 83.16 | 25.33 |
Table 3. Experimental results of the indicators at various SNRs. Experimental results of our model (ENN) are bolded.

| SNR | Index | ENN | CNN | GCN | KNN | TREE |
|---|---|---|---|---|---|---|
| 6 dB | Accuracy | **76.41** | 72.82 | 72.27 | 19.15 | 30.21 |
| 6 dB | Precision | **77.61** | 72.55 | 72.09 | 25.09 | 30.84 |
| 6 dB | Recall | **77.14** | 72.37 | 71.86 | 19.3 | 29.95 |
| 6 dB | F1-Score | **77.24** | 72.41 | 71.87 | 16.28 | 30.31 |
| 4 dB | Accuracy | **76.05** | 74.18 | 70.27 | 17.15 | 27.39 |
| 4 dB | Precision | **76.63** | 73.81 | 69.98 | 26 | 27.96 |
| 4 dB | Recall | **75.95** | 73.67 | 69.92 | 17.64 | 27.31 |
| 4 dB | F1-Score | **76.07** | 73.71 | 69.86 | 15.06 | 27.54 |
| 2 dB | Accuracy | **73.55** | 70.68 | 69.5 | 16.82 | 22.48 |
| 2 dB | Precision | **73.63** | 71.48 | 70.28 | 25.76 | 22.96 |
| 2 dB | Recall | **73.49** | 71.32 | 70.04 | 16.98 | 22.46 |
| 2 dB | F1-Score | **73.49** | 71.29 | 70.1 | 14.46 | 22.59 |
| 0 dB | Accuracy | **69.73** | 67.64 | 63.95 | 17.03 | 17.82 |
| 0 dB | Precision | **70.43** | 67.77 | 63.98 | 23.51 | 18.26 |
| 0 dB | Recall | **70.54** | 67.93 | 64.3 | 17.51 | 17.94 |
| 0 dB | F1-Score | **70.38** | 67.79 | 64.07 | 12.29 | 18 |
| −2 dB | Accuracy | **64.09** | 55.55 | 60.64 | 9.33 | 15.48 |
| −2 dB | Precision | **64.93** | 56 | 60.41 | 15.42 | 15.65 |
| −2 dB | Recall | **64.68** | 55.57 | 60.59 | 9.17 | 15.55 |
| −2 dB | F1-Score | **64.67** | 55.69 | 60.47 | 3.23 | 15.57 |
| −4 dB | Accuracy | **53.73** | 48.91 | 53 | 9.18 | 13.91 |
| −4 dB | Precision | **53.07** | 48.78 | 53.1 | 10.09 | 14 |
| −4 dB | Recall | **52.66** | 48.81 | 53.01 | 9.09 | 13.85 |
| −4 dB | F1-Score | **52.84** | 48.77 | 53.04 | 1.82 | 13.88 |
| −6 dB | Accuracy | **42.55** | 40.55 | 39 | 8.7 | 11.45 |
| −6 dB | Precision | **42.91** | 40.67 | 38.37 | 8.09 | 11.47 |
| −6 dB | Recall | **42.94** | 40.27 | 38.68 | 9.09 | 11.56 |
| −6 dB | F1-Score | **42.84** | 40.33 | 38.47 | 1.48 | 11.5 |
Xing, C.; Zhou, Y.; Peng, Y.; Hao, J.; Li, S. Specific Emitter Identification Based on Ensemble Neural Network and Signal Graph. Appl. Sci. 2022, 12, 5496. https://doi.org/10.3390/app12115496
