Article

Robust Automatic Modulation Classification Using Convolutional Deep Neural Network Based on Scalogram Information

by
Ahmed Mohammed Abdulkarem
1,*,
Firas Abedi
2,
Hayder M. A. Ghanimi
3,
Sachin Kumar
4,
Waleed Khalid Al-Azzawi
5,
Ali Hashim Abbas
6,
Ali S. Abosinnee
7,
Ihab Mahdi Almaameri
8 and
Ahmed Alkhayyat
9
1
Ministry of Migration and Displaced, Baghdad 10011, Iraq
2
Department of Mathematics, College of Education, Al-Zahraa University for Women, Karbala 56001, Iraq
3
Biomedical Engineering Department, College of Engineering, University of Warith Al-Anbiyaa, Karbala 56001, Iraq
4
Big Data and Machine Learning Lab, South Ural State University, 454080 Chelyabinsk, Russia
5
Department of Medical Instruments Engineering Techniques, Al-Farahidi University, Baghdad 10011, Iraq
6
College of Information Technology, Imam Ja’afar Al-Sadiq University, Al-Muthanna 66002, Iraq
7
Altoosi University College, Najaf 54001, Iraq
8
Department of Automation and Applied Informatics, Budapest University of Technology and Economics, 1111 Budapest, Hungary
9
Faculty of Engineering, The Islamic University, Najaf 54001, Iraq
*
Author to whom correspondence should be addressed.
Computers 2022, 11(11), 162; https://doi.org/10.3390/computers11110162
Submission received: 19 September 2022 / Revised: 7 November 2022 / Accepted: 10 November 2022 / Published: 15 November 2022

Abstract
This study proposes a two-stage method that combines a convolutional neural network (CNN) with the continuous wavelet transform (CWT) for multiclass modulation classification. First, the time-frequency information of the modulated signals is extracted with the CWT and rendered as 2D scalogram images. Second, these 2D time-frequency images are fed to the proposed CNN, which classifies the modulation type. The model automatically recognizes six modulation types, amplitude-shift keying (ASK), phase-shift keying (PSK), frequency-shift keying (FSK), quadrature amplitude-shift keying (QASK), quadrature phase-shift keying (QPSK), and quadrature frequency-shift keying (QFSK), at SNRs between 0 and 25 dB. These modulation types are used in satellite, underwater, and military communication. Compared with earlier research, the proposed CNN model performs better in the presence of varying noise levels.

1. Introduction

Wireless communication technology is constantly developing, and the variety of modulation schemes employed is diversifying, resulting in a more complex communication environment. As a consequence, the ability to swiftly and automatically evaluate and identify communication signals is becoming increasingly important. Automatic modulation classification (AMC) is a middle step in blind signal processing in several fields, including cognitive radios [1] and software-defined radios (SDRs) [2]. AMC approaches fall into two categories [3]: likelihood-based (LB) and feature-based (FB). In LB approaches, the likelihood function of the received signal is calculated under various modulation hypotheses, and the results are compared with a predetermined threshold to identify the most probable category. Bayesian LB techniques are often highly sensitive to unknown channel conditions, require substantial prior information, or are computationally demanding [4,5,6]. FB approaches, which can be applied with lower computational complexity, find suboptimal solutions: analyzing the properties of the received signal allows the modulation type to be determined. High-quality features can provide reliable performance at a reasonable cost. Recent research [7] has examined several such properties, including wavelet transform (WT) features [8], cyclostationary features [9], and higher-order cumulants [10]. Support vector machines (SVMs) [11], decision trees [12], random forests (RFs) [13], and k-nearest neighbors (KNNs) [14] were frequently used for AMC in earlier studies. Unfortunately, these approaches are time-consuming, since the extraction of handcrafted features requires in-depth technical knowledge and domain experience.
Deep learning (DL) has recently received a lot of attention due to its success in a number of applications, including speech recognition [15], emotion analysis [16], and computer vision [17]. In contrast to conventional data-analysis and processing techniques, the DL’s capacity to automatically represent complex, high-dimensional data without the need for manual features is of critical relevance [18]. Because of this, we have noticed it encroaching on other domains, such as communications [19]. It is significant to note that some engineers have completed AMC assignments using DL with success. So far, several methods have been presented using several different techniques based on deep learning, including LSTM networks [20,21,22,23], deep convolutional networks [21,24,25], RNN [26,27], etc. To clearly define the complicated relationships of time-correlated signals across Rayleigh fading channels, along with different additive noise circumstances, an RNN-based AMC technique is examined in [28]. A sophisticated RNN architecture, known as the long short-term memory (LSTM) network [29], has been made particularly for learning the long-term dependencies in the time domain of signals with varying-length modulation. CNNs are capable of extracting more significant discriminating features from multiscale feature representations for the multiclass classification challenge when compared to RNNs and LSTM. In the newest 24-modulation DeepSig dataset, a compact CNN combining numerous residual convolutional stacks, used to gather more relevant information from multilevel representational feature maps, greatly increases the classification rate. Without feature engineering expertise, DL outperforms ML in recognition performance [30]. For network training and classification tasks, the authors in [31] employed constellation diagrams and the AlexNet CNN model. Additionally, the Caffe framework was used throughout the whole modulation classification process. 
Data augmentation based on ACGAN was used in [32]. In [33], images with grid-like topologies were created and fed to two pretrained models (AlexNet and GoogLeNet) for AMC. Feature fusion to enhance performance is useful only under the additive white Gaussian noise signal model, which includes several time-frequency modifications [34]. Although each of these methods recasts AMC as a well-researched image-recognition problem in some way, they all require an intricate image-processing stage.
The literature demonstrates that the AMC problem has been addressed with a broad range of feature extraction and selection techniques, as well as machine learning models. Without any prior knowledge of feature engineering, DL outperforms ML in recognition performance [35,36]. However, most current AMC methods perform poorly when several standard convolutional layer structures are stacked into a heavily deep network architecture: the extracted feature maps are represented ineffectively, and the trainable parameters are expensive. These techniques also need a lot of training time. This paper explicitly presents an AMC method based on scalogram images and a deep convolutional neural network (ConvNet). The CWT is used to extract 2D time-frequency data from the modulated signals as scalogram images, and the CNN architecture then automatically identifies the modulation type. The result is a well-performing, cost-efficient AMC system that is robust to channel impairments.
The innovation of this work is the combination of feature visualization via the CWT with deep convolutional neural networks. This approach was chosen because visualizing the features as scalograms and passing them to the deep convolutional network increases the network's ability to recognize the patterns associated with each modulation.
The contributions of the presented study are as follows:
  • 2D-scalogram images are used in order to automatically detect modulation type.
  • CWT-based scalogram images are used for the visualization of modulation features in order to increase the performance of the proposed method.
  • The CNN architecture is proposed to automatically classify scalogram images.
  • Simulation results indicate that the presented model showed better accuracy compared to other methods.
This article is divided into five parts. Section 2 addresses both the materials and the process. The model under discussion is covered in further depth in Section 3. The research results are presented in Section 4. Section 5 summarizes the results and makes recommendations for further study directions.

2. Materials and Methods

2.1. The Formulation of ASK, FSK, and PSK

A type of digital transmission in which the digital baseband signal is changed into a band-restricted high-frequency passband signal is known as a digital passband modulation. Transition bands may be modulated using ASK, FSK, or PSK.
  • Amplitude-shift keying: Bits 0 and 1 of the baseband signal are carried by carriers of identical frequency but distinct amplitudes A1 and A2.
  • Frequency-shift keying: Bits 0 and 1 of the baseband signal are modulated using the frequency-shift keying (FSK) technique. Two frequencies with the same amplitude are used in FSK.
  • Phase-shift keying: PSK modulation uses phase variations of the same amplitude and frequency to modify baseband signal bits 0 and 1.
The basic modulation formulae for ASK, FSK, and PSK are listed here:
X_m^{ASK}(t) = A_m \cos(\omega_c t)
X_m^{PSK}(t) = A \cos(\omega_c t + \theta_m)
X_m^{FSK}(t) = A \cos(\omega_{c,m} t)
Figure 1 illustrates these digital modulations for the binary sequence 10110100 (decimal value 180); the modulated signals are then distorted with noise according to the SNR.
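The three formulas above can be sketched in code for the paper's example bit string 10110100. This is a minimal pure-Python sketch; the carrier frequency, amplitudes, sample rate, and bit duration are illustrative assumptions, not the paper's MATLAB settings:

```python
import math

def modulate(bits, scheme, fs=1000, f_c=50, bit_dur=0.1):
    """Generate an ASK/PSK/FSK passband waveform for a bit sequence.

    Illustrative parameters: fs = sample rate (Hz), f_c = carrier (Hz),
    bit_dur = seconds per bit. Per-bit amplitude, phase, or frequency
    follow X_ASK = A_m cos(w_c t), X_PSK = A cos(w_c t + theta_m),
    X_FSK = A cos(w_{c,m} t).
    """
    samples = []
    n_per_bit = int(fs * bit_dur)
    for k, b in enumerate(bits):
        for n in range(n_per_bit):
            t = (k * n_per_bit + n) / fs
            if scheme == "ASK":      # amplitude A_m in {0.5, 1.0}
                samples.append((1.0 if b else 0.5) * math.cos(2 * math.pi * f_c * t))
            elif scheme == "PSK":    # phase theta_m in {0, pi}
                samples.append(math.cos(2 * math.pi * f_c * t + (0.0 if b else math.pi)))
            elif scheme == "FSK":    # frequency: f_c for bit 1, 2*f_c for bit 0
                samples.append(math.cos(2 * math.pi * (f_c if b else 2 * f_c) * t))
    return samples

bits = [1, 0, 1, 1, 0, 1, 0, 0]      # the paper's example string 10110100
ask = modulate(bits, "ASK")
psk = modulate(bits, "PSK")
```

Each waveform has one carrier segment per bit, matching the panels of Figure 1.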

2.2. The Formulation of QASK, QFSK, and QPSK

Multilevel transmission may be achieved by varying the amplitude, frequency, or phase of the carrier between more than two distinct values. The throughput of data transmission may be increased by having numerous carrier states, as opposed to only two. Depending on the kind of modulation utilized, the carrier for an M-ary symbol may take multiple amplitude (MASK), frequency (MFSK), or phase values [37]. The bits of the binary information signal are typically separated into groups, each of which is assigned a distinct carrier state; this carrier state conveys the bit group's information. Multilevel modulation often uses QASK, QFSK, or QPSK. These modulations, which send two bits at once, double the transmission speed. The data rate is boosted further with 8, 16, and 32 states; however, as the number of states increases, the complexity of the demodulator circuits must also increase. Figure 2 illustrates ASK, FSK, and PSK modulation with AWGN at 5 dB SNR.
Figure 3 illustrates binary data ranging from 0 to 255 for the quadrature-type modulations, while Figure 4 illustrates the signals corrupted with noise at SNR = 5 dB.
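The two-bits-per-symbol idea behind the quadrature schemes can be sketched as a QPSK symbol mapper. The Gray-coded bit-to-phase assignment below is a hypothetical choice for illustration; the paper does not specify its exact mapping:

```python
import math

# Hypothetical Gray-coded QPSK: each 2-bit group selects one of four phases.
PHASES = {(0, 0): math.pi / 4, (0, 1): 3 * math.pi / 4,
          (1, 1): 5 * math.pi / 4, (1, 0): 7 * math.pi / 4}

def qpsk_symbols(bits):
    """Map a bit sequence (even length) to a list of QPSK phase values.

    Sending 2 bits per symbol halves the number of transmitted symbols
    compared with binary PSK, doubling the bit rate at a fixed symbol rate.
    """
    assert len(bits) % 2 == 0
    return [PHASES[(bits[i], bits[i + 1])] for i in range(0, len(bits), 2)]

syms = qpsk_symbols([1, 0, 1, 1, 0, 1, 0, 0])   # 8 bits -> 4 symbols
```

The carrier would then be A cos(ω_c t + θ_m) with θ_m drawn from these four phases.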

3. The Proposed Method

This study presents a novel method for the classification of modulated signals with noise levels ranging from 0 to 25 dB. The modulated signals are first subjected to the CWT to extract their time-frequency information. The extracted time-frequency images are then fed to the convolutional and fully connected layers of the deep neural network. Each stage of the suggested scheme is presented in the sections that follow; the flowchart of the suggested technique is shown in Figure 5.

3.1. The Continuous Wavelet Transform (CWT)

When the continuous wavelet transform is used to analyze signals whose frequency changes over time, a time-frequency diagram is produced. The technique used to convert to the time-frequency domain is crucial in pattern-recognition techniques. For this transformation, the wavelet transform is well suited because nonstationary signals, such as EEG, ECG, and EMG, may be effectively transformed using this technique [38,39]. The signal is changed using wavelet functions, such as Daubechies, Morlet, Symlet, and Gaussian in a wavelet transform.
A signal’s CWT may be expressed as:
Z(a, b) = \frac{1}{\sqrt{a}} \int s(t) \, \psi^{*}\!\left(\frac{t - b}{a}\right) dt
where s(t) is a signal with finite energy, ψ* is the complex conjugate of the mother wavelet, and a and b are parameters that control the wavelet's scaling and translation, respectively. Smaller scale values contract the wavelet and disclose high-frequency content in the signal, whereas larger scale values expand the wavelet in time and reveal low-frequency information [40]. The continuous wavelet transform is computed by continuously varying a and b over the range of scales and the length of the signal, respectively.
Similar to a spectrogram made with the short-time Fourier transform (STFT), a scalogram is a visual depiction of a signal's CWT. By permitting variable-size analysis windows at different frequencies, the CWT effectively outperforms the STFT in terms of joint time and frequency resolution. The frequencies present at various points in a signal are evident in its scalogram.
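The CWT equation above can be discretized directly: for each scale a, correlate the signal with the scaled, shifted wavelet and apply the 1/√a normalization. The sketch below uses a real Morlet-style wavelet and truncated support as illustrative assumptions (the paper does not state which mother wavelet or scale grid it used):

```python
import math

def morlet(t, w0=5.0):
    """Real part of a Morlet-style mother wavelet (illustrative choice)."""
    return math.cos(w0 * t) * math.exp(-t * t / 2.0)

def cwt_scalogram(signal, scales, dt=1.0):
    """Discretized |Z(a, b)|: correlate the signal with the scaled,
    shifted wavelet for each scale a and shift b, with 1/sqrt(a)
    normalization. Returns a scales x time matrix (the scalogram)."""
    n = len(signal)
    out = []
    for a in scales:
        half = int(4 * a)                      # truncate the wavelet support
        row = []
        for b in range(n):
            acc = 0.0
            for t in range(max(0, b - half), min(n, b + half + 1)):
                acc += signal[t] * morlet((t - b) * dt / a)
            row.append(abs(acc) / math.sqrt(a))
        out.append(row)
    return out

# A 64-sample cosine; its energy concentrates in one band of scales.
sig = [math.cos(2 * math.pi * 0.1 * t) for t in range(64)]
scalogram = cwt_scalogram(sig, scales=[1, 2, 4, 8])
```

Rendering this matrix as an image (one row per scale) yields the scalogram pictures fed to the CNN.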

3.2. Convolutional Neural Network Architecture

The second kind of neural architecture that is often used is the CNN. The two main distinctions between a CNN and an ANN are the architecture and the input data: an ANN takes numerical feature values, whereas a CNN takes images. An image I is a collection of pixels with dimensions w, h, and d: width, height, and depth. The depth is determined by the color model; for example, d equals three for the RGB (Red–Green–Blue) scheme.
The architecture of the neural model consists of convolutional, pooling, and fully connected layers. Convolutional layers transform the image by emphasizing and extracting certain features, accomplished by the convolution operation (the star operation *) between a filter k (a matrix of size p × p) and the image I(x, y) (for each individual pixel at position (x, y)):
(k * I)(x, y) = \sum_{i=1}^{p} \sum_{j=1}^{p} k(i, j) \, I(x + i - 1, \; y + j - 1) + b_1
where b_1 is a bias. The pooling layer is used to reduce the image size. A function ω(·) (such as minimum, maximum, or average) is applied to each pixel and its neighborhood, and the resulting value is placed in the reduced image. For max pooling over a 3 × 3 neighborhood, it can be expressed as:
\omega(I)(x, y) = \max_{i, j \in \{-1, 0, 1\}} I(x + i, \; y + j)
The output size may be calculated as ((w − k)/s + 1) × ((h − k)/s + 1), where s is the stride (the kernel shift). These two layer types may be used repeatedly before the fully connected (FC) layer, which is the last kind of layer; like an ANN, it comprises a number of hidden layers and a single output layer.
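The convolution, pooling, and output-size formulas above can be checked with a small pure-Python sketch (single channel, valid padding; the kernel values and test image are arbitrary):

```python
def conv2d(image, kernel, bias=0.0):
    """Valid 2D convolution (the star operation): slide a p x p kernel
    over the image and sum elementwise products, per (k * I)(x, y)."""
    p = len(kernel)
    h, w = len(image), len(image[0])
    return [[sum(kernel[i][j] * image[x + i][y + j]
                 for i in range(p) for j in range(p)) + bias
             for y in range(w - p + 1)]
            for x in range(h - p + 1)]

def max_pool(image, k=2, s=2):
    """Max pooling with kernel k and stride s; output size follows
    ((w - k)/s + 1) x ((h - k)/s + 1)."""
    h, w = len(image), len(image[0])
    return [[max(image[x * s + i][y * s + j] for i in range(k) for j in range(k))
             for y in range((w - k) // s + 1)]
            for x in range((h - k) // s + 1)]

img = [[float(r * 4 + c) for c in range(4)] for r in range(4)]  # 4x4 test image
feat = conv2d(img, [[1.0, 0.0], [0.0, -1.0]])                   # (4-2)/1+1 = 3x3
pooled = max_pool(img)                                          # (4-2)/2+1 = 2x2
```

With a 2 × 2 kernel and stride 1 the 4 × 4 image shrinks to 3 × 3, and 2 × 2 pooling with stride 2 halves each dimension, exactly as the output-size formula predicts.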
Our CNN model’s structure is shown in Figure 6. As can be observed, this structure is made up of convolutional layers, two pooling layers, and a fully connected layer, where classification is performed using the output from the second pooling layer. After analyzing a scalogram image, our CNN produces six probability values, one for each of the six modulation classes.

4. Experimental Results

We evaluated the digital modulation classification method using the CWT and a deep convolutional neural network (CNN). Various modulation techniques were used to encode decimal data in the range of 1 to 255. MATLAB was used to generate six distinct SNR levels, from 0 dB to 25 dB in 5 dB steps.
Two different experimental conditions were created for the digital modulation classification task. In the first experiment, the classification accuracy of the proposed model was evaluated at each SNR level separately: the experiment was repeated for SNR levels from 0 dB to 25 dB in 5 dB increments, with 255 samples per class, and the results at each level were then compared. The second experiment produced 1530 samples per class, with SNRs ranging from 0 to 25 dB; thus, 9180 scalogram images were fed to the suggested model. We also compared the outcomes of the two experiments with one another.
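Generating the noisy signals at a target SNR amounts to scaling white Gaussian noise against the measured signal power. A pure-Python sketch (in place of the paper's MATLAB; the seed and test signal are illustrative):

```python
import math
import random

def add_awgn(signal, snr_db, seed=0):
    """Add white Gaussian noise so the result has the requested SNR in dB.

    The noise power is derived from the measured signal power via
    SNR_dB = 10 * log10(P_signal / P_noise).
    """
    rng = random.Random(seed)
    p_signal = sum(s * s for s in signal) / len(signal)
    p_noise = p_signal / (10 ** (snr_db / 10.0))
    sigma = math.sqrt(p_noise)
    return [s + rng.gauss(0.0, sigma) for s in signal]

clean = [math.cos(2 * math.pi * 0.05 * t) for t in range(1000)]
noisy = add_awgn(clean, snr_db=5)   # one of the 0-25 dB levels
```

Repeating this at 0, 5, 10, 15, 20, and 25 dB yields the six noise conditions used in the experiments.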
Several preprocessing steps were applied to the scalogram images. To eliminate any undesirable white areas, the acquired standard CWT images were first automatically cropped. Next, the resolution was decreased from 657 × 535 × 3 to 227 × 227 × 3 pixels. This made it possible to focus on the areas of interest in the scalogram images.
All training was carried out using MATLAB (2021a) and an NVIDIA graphics card (8 GB onboard RAM). We first examined the model’s classification ability at individual SNR levels between 0 and 25 dB. For each class, only the 255 scalogram images at a single SNR level were taken into account; in other words, a total of 1530 scalogram images were fed to the CNN model. Experiments on the modulation classification problem were carried out by randomly dividing the dataset into training (70%) and testing (30%) groups.
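The 70/30 random split can be sketched as follows (the seed is an assumption; the paper does not state whether the split was stratified by class):

```python
import random

def train_test_split(items, test_frac=0.30, seed=42):
    """Randomly shuffle indices and split items into train/test lists."""
    idx = list(range(len(items)))
    random.Random(seed).shuffle(idx)
    n_test = int(round(len(items) * test_frac))
    test_idx = set(idx[:n_test])
    train = [x for i, x in enumerate(items) if i not in test_idx]
    test = [x for i, x in enumerate(items) if i in test_idx]
    return train, test

train, test = train_test_split(list(range(1530)))  # 1530 scalograms, as in the text
```

For 1530 images this yields 1071 training and 459 test samples.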
The confusion matrix of the suggested model for modulation-type recognition has a strongly diagonal pattern, which highlights the model’s high classification accuracy. The results of the experiment are reported in Table 1 as a function of the SNR level. The total accuracy across all SNRs is over 99.9%, indicating that the suggested approach is noise resistant.
To assess the model’s effectiveness, the accuracy, precision, recall, and F1-score are computed using Equations (7)–(10), respectively. These measures are computed from the true positive (TP), false negative (FN), false positive (FP), and true negative (TN) counts: TP and TN are the numbers of positive and negative samples that were correctly classified, while FP and FN are the numbers of negative and positive samples, respectively, that were incorrectly classified.
\mathrm{Accuracy} = \frac{\text{number of correctly classified samples}}{\text{total number of samples}} \quad \text{for the } i\text{th class}
\mathrm{Precision} = \frac{TP}{TP + FP} \quad \text{for the } i\text{th class}
\mathrm{Recall} = \frac{TP}{TP + FN} \quad \text{for the } i\text{th class}
\mathrm{F1\text{-}score} = \frac{2 \times \mathrm{Precision} \times \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}} \quad \text{for the } i\text{th class}
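Equations (7)–(10) can be implemented directly from a confusion matrix. The matrix values below are a toy three-class illustration, not the paper's results:

```python
def per_class_metrics(confusion, cls):
    """Precision, recall, and F1 for class `cls` from a confusion matrix
    laid out as confusion[true][predicted]."""
    n = len(confusion)
    tp = confusion[cls][cls]
    fp = sum(confusion[t][cls] for t in range(n)) - tp   # column sum minus TP
    fn = sum(confusion[cls][p] for p in range(n)) - tp   # row sum minus TP
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

def accuracy(confusion):
    """Overall accuracy: correctly classified samples over all samples."""
    correct = sum(confusion[i][i] for i in range(len(confusion)))
    total = sum(sum(row) for row in confusion)
    return correct / total

# Toy 3-class confusion matrix for illustration only.
cm = [[98, 1, 1], [2, 97, 1], [0, 2, 98]]
```

Applying `per_class_metrics` to each class and averaging reproduces the macro-averaged figures reported in Table 2.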
A total of 2000 scalograms were input to the suggested model. To ensure that the findings generalize, the model was also trained and evaluated using 10-fold cross-validation: the dataset was divided into 10 groups, one fold was used for testing while the remaining nine were used for training, and this process was repeated 10 times.
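The 10-fold procedure can be sketched as index bookkeeping (the seed and fold assignment are illustrative assumptions):

```python
import random

def kfold_indices(n, k=10, seed=7):
    """Shuffle n sample indices into k roughly equal folds; yield one
    (train_idx, test_idx) pair per fold, as in 10-fold cross-validation."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    for i in range(k):
        test_idx = folds[i]
        train_idx = [j for f in range(k) if f != i for j in folds[f]]
        yield train_idx, test_idx

splits = list(kfold_indices(2000))   # 2000 scalograms, as in the text
```

Each of the 2000 samples appears in exactly one test fold, so every sample is tested once across the 10 repetitions.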
We utilized a confusion matrix to measure the model’s performance for the digital modulation classification problem. The resultant confusion matrix is shown in Figure 7. Table 2 displays the classification results for each class of the confusion matrix. In other words, the classification performance of the suggested model was good.
Table 2 lists the outcomes of Experiment 2. The overall accuracy, precision, recall, and F1-score for the proposed model are 99.938%, 99.87%, 99.865%, and 99.867%, respectively. With a classification accuracy of 99.99%, PSK is the most accurately identified modulation type, while FSK has the lowest classification accuracy at 99.77%. The experimental results show that the recommended model can successfully carry out the modulation categorization task.
As can be seen in Table 1, the accuracy of the proposed method increased with increasing SNR. By increasing the SNR, the signal power rises relative to the noise, so the effect of noise decreases; the modulation type therefore becomes easier to detect, and the detection accuracy increases.
We have created a novel model for classifying different forms of digital modulations based on CWT and deep convolutional neural networks (CNN). Our data source was the CWT, which creates scalograms.
We also trained VGG-16, VGG-19, and GoogLeNet on the same task; each required well over an hour of training. Figure 8 and Table 3 compare the proposed approach with these CNN architectures on the digital modulation classification problem, and it is clear that the suggested approach is, on average, both quicker and more accurate than all the other methods.
We compared the proposed method against a number of previously trained CNN-based models for digital modulation classification. Table 4 demonstrates that the proposed CNN model achieved a high degree of classification accuracy (over 99%).

5. Conclusions

In this study, a deep convolutional neural network (CNN) model for classifying different forms of digital modulation is presented. To assess the model’s robustness, we evaluated it at SNR levels spanning from 0 dB to 25 dB. The results allow us to conclude that the model can categorize the various modulation types appropriately: across all six SNR levels, the model’s classification accuracy exceeds 99%. We also contrasted the performance of GoogLeNet and other popular CNN models on the same task; when the different CNN models were compared, the suggested strategy surpassed all others in accuracy, efficiency, and training time. Using the proposed hybrid methodology, we have had considerable success categorizing digital modulation signals. In the future, this approach may be used in conjunction with newly developed communication hardware to recognize modulation signals, a capacity necessary for both real-time demodulation and the automated detection of such modulated signals. To improve the performance of the proposed method, future work could apply feature-selection methods, such as multivariate ridge regression (MRR) and neighborhood component analysis (NCA), to select an optimal subset of features; these methods can increase detection accuracy and speed by removing redundancies between feature vectors.

Author Contributions

Data curation, A.M.A.; Formal analysis, A.M.A., H.M.A.G. and A.A.; Funding acquisition and Visualization, F.A.; Methodology, W.K.A.-A.; Resources, A.H.A.; Supervision, S.K.; Visualization, I.M.A.; Writing—original draft, A.M.A., A.S.A. and A.A.; Writing—review & editing, S.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Kaleem, Z.; Ali, M.; Ahmad, I.; Khalid, W.; Alkhayyat, A.; Jamalipour, A. Artificial Intelligence-Driven Real-Time Automatic Modulation Classification Scheme for Next-Generation Cellular Networks. IEEE Access 2021, 9, 155584–155597. [Google Scholar] [CrossRef]
  2. Marey, M.; Mostafa, H. Turbo Modulation Identification Algorithm for OFDM Software-Defined Radios. IEEE Commun. Lett. 2021, 25, 1707–1711. [Google Scholar] [CrossRef]
  3. Jdid, B.; Hassan, K.; Dayoub, I.; Lim, W.H.; Mokayef, M. Machine Learning Based Automatic Modulation Recognition for Wireless Communications: A Comprehensive Survey. IEEE Access 2021, 9, 57851–57873. [Google Scholar] [CrossRef]
  4. Chen, W.; Xie, Z.; Ma, L.; Liu, J.; Liang, X. A Faster Maximum-Likelihood Modulation Classification in Flat Fading Non-Gaussian Channels. IEEE Commun. Lett. 2019, 23, 454–457. [Google Scholar] [CrossRef]
  5. Abu-Romoh, M.; Aboutaleb, A.; Rezki, Z. Automatic Modulation Classification Using Moments and Likelihood Maximization. IEEE Commun. Lett. 2018, 22, 938–941. [Google Scholar] [CrossRef]
  6. Han, L.; Gao, F.; Li, Z.; Dobre, O.A. Low Complexity Automatic Modulation Classification Based on Order-Statistics. IEEE Trans. Wirel. Commun. 2016, 16, 400–411. [Google Scholar] [CrossRef]
  7. Peng, S.; Sun, S.; Yao, Y.-D. A survey of modulation classification using deep learning: Signal representation and data preprocessing. IEEE Trans. Neural Netw. Learn. Syst. 2021. [Google Scholar] [CrossRef]
  8. Wang, H.; Ding, W.; Zhang, D.; Zhang, B. Deep Convolutional Neural Network with Wavelet Decomposition for Automatic Modulation Classification. In Proceedings of the 2020 15th IEEE Conference on Industrial Electronics and Applications (ICIEA), Kristiansand, Norway, 9–13 November 2020. [Google Scholar]
  9. Ma, J.; Qiu, T. Automatic Modulation Classification Using Cyclic Correntropy Spectrum in Impulsive Noise. IEEE Wirel. Commun. Lett. 2018, 8, 440–443. [Google Scholar] [CrossRef]
  10. Lee, S.H.; Kim, K.-Y.; Shin, Y. Effective Feature Selection Method for Deep Learning-Based Automatic Modulation Classification Scheme Using Higher-Order Statistics. Appl. Sci. 2020, 10, 588. [Google Scholar] [CrossRef] [Green Version]
  11. Lv, J.; Zhang, L.; Teng, X. A modulation classification based on SVM. In Proceedings of the 2016 15th International Conference on Optical Communications and Networks (ICOCN), Hangzhou, China, 24–27 September 2016. [Google Scholar]
  12. Subbarao, M.V.; Punniakodi, S. Automatic modulation recognition in cognitive radio receivers using multi-order cumulants and decision trees. Int. J. Rec. Technol. Eng 2018, 7, 61–69. [Google Scholar]
  13. Zhao, Y.; Shi, C.; Wang, D.; Chen, X.; Wang, L.; Yang, T.; Du, J. Low-Complexity and Nonlinearity-Tolerant Modulation Format Identification Using Random Forest. IEEE Photon. Technol. Lett. 2019, 31, 853–856. [Google Scholar] [CrossRef]
  14. Ghauri, S.A.; Khan, S. Knn based classification of digital modulated signals. IIUM Eng. J. 2016, 17, 71–82. [Google Scholar] [CrossRef]
  15. Lee, W.; Seong, J.; Ozlu, B.; Shim, B.; Marakhimov, A.; Lee, S. Biosignal Sensors and Deep Learning-Based Speech Recognition: A Review. Sensors 2021, 21, 1399. [Google Scholar] [CrossRef]
  16. Kottursamy, K. A Review on Finding Efficient Approach to Detect Customer Emotion Analysis using Deep Learning Analysis. J. Trends Comput. Sci. Smart Technol. 2021, 3, 95–113. [Google Scholar] [CrossRef]
  17. Zhou, R.; Liu, F.; Gravelle, C.W. Deep Learning for Modulation Recognition: A Survey with a Demonstration. IEEE Access 2020, 8, 67366–67376. [Google Scholar] [CrossRef]
  18. Kim, S.-H.; Kim, J.-W.; Nwadiugwu, W.-P.; Kim, D.-S. Deep Learning-Based Robust Automatic Modulation Classification for Cognitive Radio Networks. IEEE Access 2021, 9, 92386–92393. [Google Scholar] [CrossRef]
  19. Bu, K.; He, Y.; Jing, X.; Han, J. Adversarial Transfer Learning for Deep Learning Based Automatic Modulation Classification. IEEE Signal Process. Lett. 2020, 27, 880–884. [Google Scholar] [CrossRef]
  20. Chen, Y.; Shao, W.; Liu, J.; Yu, L.; Qian, Z. Automatic modulation classification scheme based on LSTM with random erasing and attention mechanism. IEEE Access 2020, 8, 154290–154300. [Google Scholar] [CrossRef]
  21. Zhang, Z.; Luo, H.; Wang, C.; Gan, C.; Xiang, Y. Automatic Modulation Classification Using CNN-LSTM Based Dual-Stream Structure. IEEE Trans. Veh. Technol. 2020, 69, 13521–13531. [Google Scholar] [CrossRef]
  22. Xu, Q.; Yao, Z.; Tu, Y.; Chen, Y. Attention-Based Multi-component LSTM for Internet Traffic Prediction. In Proceedings of the International Conference on Neural Information Processing, Bangkok, Thailand, 18–22 November 2020. [Google Scholar]
  23. Yang, Z.; Chen, L.; Zhang, H.; Yao, Z. Residual Connection based TPA-LSTM Networks for Cluster Node CPU Load Prediction. In Proceedings of the 2021 IEEE International Conference on Big Data (Big Data), Orlando, FL, USA, 15–18 December 2021. [Google Scholar]
  24. Amorim, A.; Morehouse, T.; Kasilingam, D.; Zhou, R.; Magotra, N. CNN-based AMC for Internet of Underwater Things. In Proceedings of the 2021 IEEE International Midwest Symposium on Circuits and Systems (MWSCAS), Lansing, MI, USA, 8–11 August 2021. [Google Scholar]
  25. Kojima, S.; Maruta, K.; Ahn, C.J. High-precision SNR estimation by CNN using PSD image for adaptive modulation and coding. In Proceedings of the 2020 IEEE 91st Vehicular Technology Conference (VTC2020-Spring), Antwerp, Belgium, 25–28 May 2020. [Google Scholar]
  26. Ghasemzadeh, P.; Hempel, M.; Sharif, H. GS-QRNN: A High-Efficiency Automatic Modulation Classifier for Cognitive Radio IoT. IEEE Internet Things J. 2022, 9, 9467–9477. [Google Scholar] [CrossRef]
  27. Moore, M.O.; Buehrer, R.M.; Headley, W.C. Decoupling RNN Training and Testing Observation Intervals for Spectrum Sensing Applications. Sensors 2022, 22, 4706. [Google Scholar] [CrossRef] [PubMed]
  28. Rajendran, S.; Meert, W.; Giustiniano, D.; Lenders, V.; Pollin, S. Deep Learning Models for Wireless Signal Classification with Distributed Low-Cost Spectrum Sensors. IEEE Trans. Cogn. Commun. Netw. 2018, 4, 433–445. [Google Scholar] [CrossRef] [Green Version]
  29. Hu, S.; Pei, Y.; Liang, P.P.; Liang, Y.C. Robust modulation classification under uncertain noise condition using recurrent neural network. In Proceedings of the 2018 IEEE Global Communications Conference (GLOBECOM), Abu Dhabi, United Arab Emirates, 9–13 December 2018. [Google Scholar]
  30. Huynh-The, T.; Hua, C.-H.; Pham, Q.-V.; Kim, D.-S. MCNet: An Efficient CNN Architecture for Robust Automatic Modulation Classification. IEEE Commun. Lett. 2020, 24, 811–815. [Google Scholar] [CrossRef]
  31. Zhang, Z.; Wang, C.; Gan, C.; Sun, S.; Wang, M. Automatic Modulation Classification Using Convolutional Neural Network with Features Fusion of SPWVD and BJD. IEEE Trans. Signal Inf. Process. Netw. 2019, 5, 469–478. [Google Scholar] [CrossRef]
  32. Tang, B.; Tu, Y.; Zhang, Z.; Lin, Y. Digital signal modulation classification with data augmentation using generative adversarial nets in cognitive radio networks. IEEE Access 2018, 6, 15713–15722. [Google Scholar] [CrossRef]
  33. Kumar, Y.; Sheoran, M.; Jajoo, G.; Yadav, S.K. Automatic Modulation Classification Based on Constellation Density Using Deep Learning. IEEE Commun. Lett. 2020, 24, 1275–1278. [Google Scholar] [CrossRef]
  34. Han, H.; Ren, Z.; Li, L.; Zhu, Z. Automatic Modulation Classification Based on Deep Feature Fusion for High Noise Level and Large Dynamic Input. Sensors 2021, 21, 2117. [Google Scholar] [CrossRef]
  35. Wang, Y.; Yang, J.; Liu, M.; Gui, G. LightAMC: Lightweight Automatic Modulation Classification via Deep Learning and Compressive Sensing. IEEE Trans. Veh. Technol. 2020, 69, 3491–3495. [Google Scholar] [CrossRef]
  36. Lin, Y.; Tu, Y.; Dou, Z. An Improved Neural Network Pruning Technology for Automatic Modulation Classification in Edge Devices. IEEE Trans. Veh. Technol. 2020, 69, 5703–5706. [Google Scholar] [CrossRef]
  37. Grewal, D.; Herhausen, D.; Ludwig, S.; Ordenes, F.V. The Future of Digital Communication Research: Considering Dynamics and Multimodality. J. Retail. 2021, 98, 224–240. [Google Scholar] [CrossRef]
  38. Narin, A. Detection of Focal and Non-focal Epileptic Seizure Using Continuous Wavelet Transform-Based Scalogram Images and Pre-trained Deep Neural Networks. IRBM 2020, 43, 22–31. [Google Scholar] [CrossRef]
  39. Wang, T.; Lu, C.; Sun, Y.; Yang, M.; Liu, C.; Ou, C. Automatic ECG Classification Using Continuous Wavelet Transform and Convolutional Neural Network. Entropy 2021, 23, 119. [Google Scholar] [CrossRef]
  40. Meintjes, A.; Lowe, A.; Legget, M. Fundamental heart sound classification using the continuous wavelet transform and convolutional neural networks. In Proceedings of the 2018 40th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Honolulu, HI, USA, 18–21 July 2018. [Google Scholar]
Figure 1. Examples of ASK, FSK, and PSK modulation for the bit string 10110100.
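The binary waveforms in Figure 1 can be reproduced with a short sketch. The carrier frequency, sampling rate, and bit duration below are illustrative assumptions, not parameters taken from the paper:

```python
import numpy as np

def modulate(bits, scheme, fc=4.0, fs=100, bit_dur=1.0):
    """Generate a passband waveform for a bit string (illustrative parameters)."""
    t_bit = np.arange(0, bit_dur, 1.0 / fs)        # samples within one bit period
    wave = []
    for b in bits:
        if scheme == "ASK":                        # amplitude carries the bit: 1 -> on, 0 -> off
            wave.append(int(b) * np.sin(2 * np.pi * fc * t_bit))
        elif scheme == "FSK":                      # frequency carries the bit: 1 -> fc, 0 -> fc/2
            f = fc if b == "1" else fc / 2
            wave.append(np.sin(2 * np.pi * f * t_bit))
        elif scheme == "PSK":                      # phase carries the bit: 1 -> 0 rad, 0 -> pi rad
            phase = 0.0 if b == "1" else np.pi
            wave.append(np.sin(2 * np.pi * fc * t_bit + phase))
    return np.concatenate(wave)

bits = "10110100"                                  # the bit string used in Figure 1
ask = modulate(bits, "ASK")
fsk = modulate(bits, "FSK")
psk = modulate(bits, "PSK")
```

Plotting the three arrays against time reproduces the qualitative shapes in Figure 1: ASK gates the carrier on and off, FSK switches between two tones, and PSK flips the carrier phase at bit boundaries.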
Figure 2. Examples of ASK, FSK, and PSK modulation with AWGN at 5 dB SNR for the bit string 10110100.
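The noisy waveforms in Figure 2 follow from adding white Gaussian noise scaled so that the result has a target SNR. The sketch below illustrates the standard construction; the test signal and random seed are arbitrary choices, not the paper's:

```python
import numpy as np

def add_awgn(signal, snr_db, seed=0):
    """Add white Gaussian noise so the result has the requested SNR in dB."""
    rng = np.random.default_rng(seed)
    sig_power = np.mean(signal ** 2)                    # average signal power
    noise_power = sig_power / (10 ** (snr_db / 10.0))   # SNR = P_signal / P_noise
    noise = rng.normal(0.0, np.sqrt(noise_power), size=signal.shape)
    return signal + noise

clean = np.sin(2 * np.pi * 4 * np.linspace(0, 1, 1000, endpoint=False))
noisy = add_awgn(clean, snr_db=5)                       # 5 dB SNR, as in Figure 2
```

The same routine, swept from 0 to 25 dB, generates the noise conditions used throughout Tables 1 and 2.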
Figure 3. Examples of QASK, QFSK, and QPSK modulation for the bit string 10110100.
Figure 4. Examples of QASK, QFSK, and QPSK modulation with AWGN at 5 dB SNR for the bit string 10110100.
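The quaternary schemes in Figures 3 and 4 carry two bits per symbol: each bit pair selects one of four amplitudes, frequencies, or phases. A minimal sketch, in which the bit-pair mapping and carrier parameters are illustrative assumptions rather than the paper's:

```python
import numpy as np

def qmod(bits, scheme, fc=4.0, fs=100, sym_dur=1.0):
    """Quaternary modulation: each 2-bit pair selects one of four symbol values."""
    t = np.arange(0, sym_dur, 1.0 / fs)
    pairs = [bits[i:i + 2] for i in range(0, len(bits), 2)]
    level = {"00": 0, "01": 1, "10": 2, "11": 3}   # assumed natural-binary mapping
    wave = []
    for p in pairs:
        k = level[p]
        if scheme == "QASK":        # four amplitude levels: 0, 1/3, 2/3, 1
            wave.append((k / 3.0) * np.sin(2 * np.pi * fc * t))
        elif scheme == "QFSK":      # four tones: fc, fc+1, fc+2, fc+3
            wave.append(np.sin(2 * np.pi * (fc + k) * t))
        elif scheme == "QPSK":      # four phases, pi/2 apart
            wave.append(np.sin(2 * np.pi * fc * t + k * np.pi / 2))
    return np.concatenate(wave)

wave_qpsk = qmod("10110100", "QPSK")   # 4 symbols of 100 samples each
```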
Figure 5. Diagram of the proposed method.
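The first stage of the proposed method (Figure 5) converts each modulated signal into a 2-D time-frequency scalogram via the CWT, which is then fed to the CNN as an image. The sketch below is a minimal Morlet-based scalogram, not the authors' exact implementation; the scale range and wavelet parameters are assumptions:

```python
import numpy as np

def morlet(n, scale, w0=6.0):
    """Complex Morlet wavelet sampled at n points for a given scale."""
    t = (np.arange(n) - n // 2) / scale
    return np.exp(1j * w0 * t) * np.exp(-t ** 2 / 2) / np.sqrt(scale)

def scalogram(signal, scales):
    """|CWT| image: one row per scale, via convolution with scaled wavelets."""
    rows = [np.abs(np.convolve(signal, morlet(len(signal), s), mode="same"))
            for s in scales]
    return np.array(rows)            # shape: (len(scales), len(signal))

# an 8-cycle tone over 256 samples as a stand-in for a modulated segment
sig = np.sin(2 * np.pi * 8 * np.linspace(0, 1, 256, endpoint=False))
img = scalogram(sig, scales=np.arange(1, 33))   # 2-D time-frequency "image" for the CNN
```

For a pure tone the energy concentrates in the rows whose scale matches the tone's frequency; for a modulated signal the ridge moves with the instantaneous amplitude, frequency, or phase, which is the discriminative pattern the CNN learns.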
Figure 6. Structure of the CNN model used in this work.
Figure 7. Confusion matrix of the classification results, aggregated over all SNR levels in each class.
Figure 8. The performance of the pre-trained models and the proposed model.
Table 1. The proposed model’s accuracy (%) at each SNR level, considered separately.

| Class | 0 dB | 5 dB | 10 dB | 15 dB | 20 dB | 25 dB | Avg. Acc. |
|---|---|---|---|---|---|---|---|
| ASK | 99.84 | 99.88 | 99.90 | 99.91 | 99.93 | 99.97 | 99.90 |
| FSK | 99.58 | 99.60 | 99.70 | 99.91 | 99.94 | 99.94 | 99.77 |
| PSK | 99.99 | 99.99 | 99.99 | 100 | 100 | 100 | 99.99 |
| QASK | 99.99 | 99.99 | 99.99 | 100 | 100 | 100 | 99.99 |
| QFSK | 99.99 | 99.99 | 99.99 | 100 | 100 | 100 | 99.99 |
| QPSK | 99.99 | 99.99 | 99.99 | 100 | 100 | 100 | 99.99 |
| Avg. Acc. | 99.90 | 99.91 | 99.92 | 99.97 | 99.978 | 99.98 | 99.938 |
Table 2. The proposed model’s per-class results (%), aggregated over all SNR levels.

| Class | Accuracy | Precision | Recall | F-Score |
|---|---|---|---|---|
| ASK | 99.90 | 99.80 | 99.90 | 99.93 |
| FSK | 99.77 | 99.49 | 99.90 | 99.69 |
| PSK | 99.99 | 99.99 | 99.99 | 99.99 |
| QASK | 99.99 | 99.99 | 99.99 | 99.99 |
| QFSK | 99.99 | 99.90 | 99.61 | 99.75 |
| QPSK | 99.99 | 99.99 | 99.80 | 99.88 |
| Avg. | 99.938 | 99.87 | 99.865 | 99.86 |
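The per-class precision, recall, and F-score reported in Table 2 follow from the confusion matrix (Figure 7) in the standard way. A sketch with a small illustrative 3-class matrix (not the paper's data):

```python
import numpy as np

def per_class_metrics(cm):
    """Precision, recall, F-score per class from a confusion matrix (rows = true class)."""
    tp = np.diag(cm).astype(float)
    precision = tp / cm.sum(axis=0)   # column sums: how often each predicted label is right
    recall = tp / cm.sum(axis=1)      # row sums: how often each true class is recovered
    f_score = 2 * precision * recall / (precision + recall)
    accuracy = tp.sum() / cm.sum()    # overall fraction of correct predictions
    return accuracy, precision, recall, f_score

# toy 3-class confusion matrix, 100 test samples per class (illustrative only)
cm = np.array([[98, 1, 1],
               [0, 99, 1],
               [2, 0, 98]])
acc, prec, rec, f1 = per_class_metrics(cm)
```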
Table 3. Comparison of the proposed method with different CNN architectures on the digital modulation classification problem.

| Metric | AlexNet | VGG-16 | VGG-19 | GoogLeNet | Proposed |
|---|---|---|---|---|---|
| Accuracy | 99.40 | 99.90 | 99.90 | 99.56 | 99.93 |
| Precision | 99.40 | 99.89 | 100 | 99.78 | 99.87 |
| Recall | 99.40 | 99.81 | 99.90 | 99.67 | 99.86 |
| F1-Score | 99.40 | 99.94 | 99.95 | 99.73 | 99.86 |
| Time (min) | 27.2 | 206.1 | 240.5 | 217.5 | 6.3 |

Time: the training time of each network for a single training epoch.
Table 4. Comparison of the proposed method with previously published modulation classification methods.

| No. | Title | Year | Ref. | Accuracy |
|---|---|---|---|---|
| 1 | Artificial Intelligence-Driven Real-Time Automatic Modulation Classification Scheme for Next-Generation Cellular Networks | 2021 | [1] | 97 |
| 2 | Machine Learning Based Automatic Modulation Recognition for Wireless Communications: A Comprehensive Survey | 2021 | [3] | 99 |
| 3 | Faster Maximum-Likelihood Modulation Classification in Flat Fading Non-Gaussian Channels | 2019 | [4] | 95 |
| 4 | Deep Convolutional Neural Network with Wavelet Decomposition for Automatic Modulation Classification | 2020 | [9] | 96 |
| 5 | Deep Learning-Based Robust Automatic Modulation Classification for Cognitive Radio Networks | 2021 | [19] | 98.7 |
| 6 | An Efficient CNN Architecture for Robust Automatic Modulation Classification | 2020 | [30] | 93 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.