Special Issue "Advanced Technologies of Artificial Intelligence in Signal Processing"

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Artificial Intelligence".

Deadline for manuscript submissions: 15 August 2023

Special Issue Editors

Dr. Mingqian Liu
Communication Engineering College, Xidian University, Xi'an 710071, China
Interests: communication signal processing; statistical signal processing; artificial intelligence

Dr. Yunfei Chen
School of Engineering, University of Warwick, Coventry CV4 7AL, UK
Interests: wireless communications; performance analysis; joint radar-communications designs; cognitive radios; wireless relaying; energy harvesting

Special Issue Information

Dear Colleagues,

As new science and technology are deployed, both industry and academia have turned their attention to wireless communication, radar, computing, and control, especially systems supported by artificial intelligence (AI) such as intelligent signal processing. The performance of such applications depends on the trade-off between information transmission, storage, and processing, so the convergence of intelligent wireless communication, computing, radar, and control is of paramount importance. Moreover, intelligent signal processing techniques should be flexible enough to meet the requirements of different verticals in terms of, e.g., connectivity, latency, security, energy efficiency, and reliability. AI has gradually been applied in radar, communication, and statistical signal processing to improve efficiency, speed, and intelligence, and it may offer novel design approaches for traditionally difficult information and signal processing tasks in communication, radar, and control. Potential topics include, but are not limited to, the following:

  • Deep learning/machine learning on signal processing;
  • Artificial intelligence in wireless communications and satellite communications;
  • Artificial intelligence in radar signal processing;
  • Artificial intelligence application in wireless caching and computing;
  • Artificial intelligence application in computer network;
  • Advances in AI and its applications in information security and control.

Dr. Mingqian Liu
Dr. Yunfei Chen
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2000 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • artificial intelligence
  • radar signal processing
  • communication signal processing
  • statistical signal processing
  • intelligent control
  • intelligent information security

Published Papers (7 papers)

Research

Article
An Improved Modulation Recognition Algorithm Based on Fine-Tuning and Feature Re-Extraction
Electronics 2023, 12(9), 2134; https://doi.org/10.3390/electronics12092134 - 06 May 2023
Abstract
Modulation recognition is an important technology in wireless communication systems. In recent years, deep learning-based modulation recognition algorithms have emerged that can autonomously learn deep features and achieve better recognition performance than traditional algorithms, yet certain limitations remain. In this paper, to address the poor recognition performance at low signal-to-noise ratios (SNRs) and the inability of deep features to effectively distinguish among all modulation types, we propose an optimization scheme for modulation recognition based on fine-tuning and feature re-extraction. In the proposed scheme, the network is first trained with signals at high SNRs; the trained network is then fine-tuned with signals at low SNRs. Finally, on the basis of the features learned by the network, deeper features with enhanced discriminability for easily confused modulation types are obtained through feature re-extraction. The simulation results demonstrate that the proposed optimization scheme maximizes the performance of the neural network in recognizing easily confused signals and signals at low SNRs. Notably, the average recognition accuracy of the proposed scheme was 91.28% over an SNR range of −8 dB to 18 dB, an improvement of 8% to 17% over four existing schemes.
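
A rough sketch of the two-stage training idea (pre-train at high SNR, then fine-tune at low SNR with a smaller learning rate) is given below in PyTorch. The small CNN, the random placeholder data, and the learning rates are illustrative assumptions, not the network or dataset used in the paper; the feature re-extraction step is omitted.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

class ModClassifier(nn.Module):
    """Small 1-D CNN over I/Q samples; a stand-in, not the paper's architecture."""
    def __init__(self, num_classes=11):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(2, 64, kernel_size=7, padding=3), nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x):                        # x: (batch, 2, num_samples)
        return self.classifier(self.features(x).squeeze(-1))

def make_loader(n=256):                          # random placeholder data for the sketch
    x, y = torch.randn(n, 2, 128), torch.randint(0, 11, (n,))
    return DataLoader(TensorDataset(x, y), batch_size=64, shuffle=True)

def train(model, loader, epochs, lr):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()

model = ModClassifier()
train(model, make_loader(), epochs=3, lr=1e-3)   # stage 1: pre-train on high-SNR signals
train(model, make_loader(), epochs=2, lr=1e-4)   # stage 2: fine-tune on low-SNR signals
```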

Article
Joint Resource Allocation in a Two-Way Relaying Simultaneous Wireless Information and Power Transfer System
Electronics 2023, 12(8), 1941; https://doi.org/10.3390/electronics12081941 - 20 Apr 2023
Abstract
Simultaneous wireless information and power transfer (SWIPT) technology provides an efficient solution for energy-limited communication terminals in a wireless network. However, in current applications of SWIPT technology, relays use all of the collected energy to forward information between source nodes, ignoring the fact that relays can accumulate and store energy, which is of great practical significance in collaborative communication. By using a time-division broadcast transmission scheme for the direct transmission link, the system diversity gain can be improved through combining technology at the receiver. Based on the above, we propose a two-way relay energy accumulation communication system, which is extended from a single-relay system to a multi-relay system. The instantaneous transmission rate of the system link and the system's equivalent profit are optimized by jointly optimizing the relay selection, time slot allocation factor, power splitting factor, and relay transmit power. Compared with the baseline algorithms, the proposed algorithm significantly improves system performance.
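
The kind of joint optimization described above can be illustrated, very loosely, by a brute-force search over a time-slot allocation factor and a power-splitting factor for a generic one-way decode-and-forward SWIPT link. The rate expression and all link parameters below are textbook-style placeholders, not the paper's two-way multi-relay model or its algorithm.

```python
import numpy as np

P_s, h_sr, h_rd, N0, eta = 1.0, 0.8, 0.6, 1e-2, 0.7   # assumed link parameters

def achievable_rate(alpha, rho):
    # First phase (fraction alpha): relay splits received power, harvesting a rho share.
    snr_sr = (1 - rho) * P_s * abs(h_sr) ** 2 / N0
    # Second phase (fraction 1 - alpha): relay transmits with the harvested energy.
    P_r = eta * rho * P_s * abs(h_sr) ** 2 * alpha / max(1 - alpha, 1e-9)
    snr_rd = P_r * abs(h_rd) ** 2 / N0
    # End-to-end decode-and-forward rate is limited by the weaker hop.
    return min(alpha * np.log2(1 + snr_sr), (1 - alpha) * np.log2(1 + snr_rd))

grid = np.linspace(0.05, 0.95, 19)
best = max((achievable_rate(a, r), a, r) for a in grid for r in grid)
print(f"rate={best[0]:.3f} bit/s/Hz at alpha={best[1]:.2f}, rho={best[2]:.2f}")
```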

Article
Pruning- and Quantization-Based Compression Algorithm for Number of Mixed Signals Identification Network
Electronics 2023, 12(7), 1694; https://doi.org/10.3390/electronics12071694 - 03 Apr 2023
Abstract
Source number estimation plays an important role in successful blind signal separation. The application of machine learning now allows signals to be processed without the time-consuming and complex work of manual feature extraction. However, convolutional neural networks (CNNs) for processing complex signals have some problems, such as incomplete feature extraction and high resource consumption. In this paper, a lightweight source number estimation network (LSNEN), which can achieve a robust estimation of the number of mixed complex signals at a low signal-to-noise ratio (SNR), is studied. Unlike other estimation methods, which require manual feature extraction, our network extracts deep features directly from the raw signal data, realizing a complex mapping of the modulated signals through a cascade of three-dimensional convolutional modules. To deploy the network on mobile terminals with limited resources, we further propose a compression method for the network. First, a sparse network structure is obtained by weight pruning to accelerate network inference. Then, the weights and activation values of the network are quantized to a fixed-point representation. The resulting lightweight source number estimation network is compressed from 12.92 MB to 3.78 MB, a compression rate of 70.74%, while achieving an accuracy of 94.4%. Compared with other estimation methods, the proposed network offers higher accuracy and a smaller model size and can be deployed on mobile terminals.
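
The two compression steps named in the abstract, weight pruning followed by fixed-point quantization, can be sketched with standard PyTorch utilities as follows. The stand-in CNN, the 60% pruning ratio, and dynamic 8-bit quantization of the linear layer are assumptions for illustration and do not reproduce the LSNEN architecture or its exact fixed-point scheme.

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(                     # stand-in for the source-number-estimation CNN
    nn.Conv1d(2, 32, kernel_size=5, padding=2), nn.ReLU(),
    nn.Conv1d(32, 32, kernel_size=5, padding=2), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(),
    nn.Linear(32, 6),                      # e.g. predict 1..6 sources
)

# Step 1: magnitude-based weight pruning -> sparse network structure.
for module in model.modules():
    if isinstance(module, (nn.Conv1d, nn.Linear)):
        prune.l1_unstructured(module, name="weight", amount=0.6)   # zero out 60% of weights
        prune.remove(module, "weight")                             # make the sparsity permanent

# Step 2: quantize the linear layer's weights to 8-bit integers
# (activations are quantized dynamically at runtime).
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

x = torch.randn(1, 2, 1024)                # one batch of I/Q samples
print(quantized(x).argmax(dim=1))          # estimated source count (class index)
```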

Article
ECG Signal Denoising Method Based on Disentangled Autoencoder
Electronics 2023, 12(7), 1606; https://doi.org/10.3390/electronics12071606 - 29 Mar 2023
Abstract
The electrocardiogram (ECG) is widely used in medicine because it can provide basic information about different types of heart disease. However, ECG data are usually disturbed by various types of noise, which can lead to errors in diagnosis by doctors. To address this problem, this study proposes a method for denoising ECG signals based on disentangled autoencoders. A disentangled autoencoder is an improved autoencoder suited to denoising ECG data. In our proposed method, we use a disentangled autoencoder model based on a fully convolutional neural network to effectively separate the clean ECG data from the noise. Unlike conventional autoencoders, we disentangle the features of the coding hidden layer to separate the signal-coding features from the noise-coding features. We performed simulation experiments on the MIT-BIH Arrhythmia Database and found that the algorithm achieved better noise reduction when dealing with four different types of noise. In particular, using our method, the average improved signal-to-noise ratios for the three noise types in the MIT-BIH Noise Stress Test Database were 27.45 dB for baseline wander, 25.72 dB for muscle artefacts, and 29.91 dB for electrode motion artefacts. Compared to a denoising autoencoder based on a fully convolutional neural network (FCN), the signal-to-noise ratio was improved by an average of 12.57%. We conclude that the model is scientifically valid and that our noise reduction method can effectively remove noise while preserving the important information conveyed by the original signal.
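
A minimal sketch of the idea, assuming a fully convolutional autoencoder whose latent code is split into a "signal" half and a "noise" half, with only the signal half decoded into the clean ECG, is shown below. The layer sizes and the even channel split are illustrative, not the paper's model.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DisentangledDenoiser(nn.Module):
    def __init__(self, latent_ch=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=15, stride=2, padding=7), nn.ReLU(),
            nn.Conv1d(16, 2 * latent_ch, kernel_size=15, stride=2, padding=7), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose1d(latent_ch, 16, kernel_size=16, stride=2, padding=7), nn.ReLU(),
            nn.ConvTranspose1d(16, 1, kernel_size=16, stride=2, padding=7),
        )
        self.latent_ch = latent_ch

    def forward(self, noisy):                                  # noisy: (batch, 1, length)
        code = self.encoder(noisy)
        signal_code, noise_code = code.split(self.latent_ch, dim=1)   # disentangled halves
        clean_hat = self.decoder(signal_code)                  # decode only the signal code
        return clean_hat, noise_code

model = DisentangledDenoiser()
noisy = torch.randn(4, 1, 1024)                                # placeholder noisy ECG segments
clean_hat, _ = model(noisy)
loss = F.mse_loss(clean_hat, torch.randn(4, 1, 1024))          # in practice, target = clean ECG
```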

Article
An Image Object Detection Model Based on Mixed Attention Mechanism Optimized YOLOv5
Electronics 2023, 12(7), 1515; https://doi.org/10.3390/electronics12071515 - 23 Mar 2023
Abstract
Object detection in complex environments is one of the more difficult problems in the field of computer vision and draws on other key technologies such as pattern recognition, artificial intelligence, and digital image processing. Because the environment can be complex, changeable, highly varied, and easily confused with the target, and because the target is easily affected by factors such as insufficient light, partial occlusion, and background interference, detecting multiple targets is extremely difficult and the robustness of existing algorithms is low. How to make full use of the rich spatial information and deep texture information in an image to accurately identify the type and location of a target is an urgent problem, and deep neural networks provide an effective way to extract and fully exploit image features. To address these problems, this paper proposes an object detection model based on a mixed attention mechanism that optimizes YOLOv5 (MAO-YOLOv5). The proposed method fuses local and global features in an image to enrich the expressive ability of the feature map and to detect objects with large differences in size more effectively. An attention mechanism is then applied to the feature map to weight each channel, enhance the key features, remove the redundant features, and improve the ability of the feature network to distinguish target objects from the background. The results show that the proposed network model achieves higher precision and a faster running speed and performs better in object-detection tasks.
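
A mixed (channel plus spatial) attention block of the kind the abstract describes can be sketched in PyTorch as follows. This CBAM-style block and the feature-map size are assumptions for illustration; the actual MAO-YOLOv5 design may differ.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x):                                    # x: (B, C, H, W)
        avg = self.mlp(x.mean(dim=(2, 3)))                   # global average pooling per channel
        mx = self.mlp(x.amax(dim=(2, 3)))                    # global max pooling per channel
        w = torch.sigmoid(avg + mx).unsqueeze(-1).unsqueeze(-1)
        return x * w                                         # re-weight channels

class SpatialAttention(nn.Module):
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        pooled = torch.cat([x.mean(dim=1, keepdim=True),     # where to attend, per position
                            x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.conv(pooled))

class MixedAttention(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.ca, self.sa = ChannelAttention(channels), SpatialAttention()

    def forward(self, x):
        return self.sa(self.ca(x))                           # channel gate first, then spatial

feat = torch.randn(1, 256, 20, 20)                           # an assumed YOLOv5-sized feature map
out = MixedAttention(256)(feat)                              # same shape, attention-weighted
```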

Article
Deep-Learning-Based Recovery of Frequency-Hopping Sequences for Anti-Jamming Applications
Electronics 2023, 12(3), 496; https://doi.org/10.3390/electronics12030496 - 18 Jan 2023
Abstract
Frequency-hopping communication systems have been widely used in anti-jamming communication because of their anti-interception and anti-jamming performance. With the increasingly complex electromagnetic environment, frequency-hopping communication systems need more flexible frequency-hopping patterns to deal with interference, which poses great challenges to the communication receiver. In this paper, an intelligent receiving scheme for frequency-hopping sequences is proposed that combines time–frequency analysis with deep learning to realize an intelligent estimation of frequency-hopping sequences. A hybrid network module is designed by combining a convolutional neural network (CNN) with a gated recurrent unit (GRU). In the proposed module, the combination of a residual network (ResNet) and squeeze-and-excitation (SE) blocks improves the feature extraction and expression capabilities of the CNN, while the GRU handles signals with varying input lengths. A transfer learning scheme is further proposed to deal with communication systems that use different frequency-hopping sets. Simulation results show that the proposed method has strong generalization ability and robustness, and the bit error rate (BER) performance of the intelligent receiver is close to the receiving performance under ideal conditions.
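
A hedged sketch of the hybrid module described above, SE-augmented residual 1-D convolutions followed by a GRU over variable-length feature sequences, is given below. The dimensions, the time-frequency input format, and the per-frame output head are placeholders rather than the paper's exact network.

```python
import torch
import torch.nn as nn

class SEResBlock(nn.Module):
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(channels, channels, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(channels, channels, kernel_size=3, padding=1),
        )
        self.se = nn.Sequential(                      # squeeze-and-excitation gate
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):                             # x: (B, C, T)
        y = self.conv(x)
        gate = self.se(y.mean(dim=2)).unsqueeze(-1)   # squeeze over time, excite channels
        return torch.relu(x + y * gate)               # residual connection

class HopSequenceNet(nn.Module):
    def __init__(self, freq_bins=64, channels=64, num_channels_out=32):
        super().__init__()
        self.stem = nn.Conv1d(freq_bins, channels, kernel_size=3, padding=1)
        self.blocks = nn.Sequential(SEResBlock(channels), SEResBlock(channels))
        self.gru = nn.GRU(channels, 128, batch_first=True)
        self.head = nn.Linear(128, num_channels_out)  # per-frame frequency-channel decision

    def forward(self, tf):                            # tf: (B, freq_bins, T) time-frequency map
        h = self.blocks(self.stem(tf))                # (B, C, T)
        seq, _ = self.gru(h.transpose(1, 2))          # GRU copes with any sequence length T
        return self.head(seq)                         # (B, T, num_channels_out) logits per frame

logits = HopSequenceNet()(torch.randn(2, 64, 300))    # two spectrograms, 300 time frames
```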

Article
A Novel Target Tracking Scheme Based on Attention Mechanism in Complex Scenes
Electronics 2022, 11(19), 3125; https://doi.org/10.3390/electronics11193125 - 29 Sep 2022
Abstract
In recent years, target tracking algorithms based on deep learning have made significant progress, especially those built on the Siamese neural network structure, which is simple and highly scalable. Although these methods generalize well, they struggle to learn discriminative target information when affected by distractors such as background clutter, occlusion, and changes in target size. To solve this problem, in this paper we propose an improved Siamese network target tracking algorithm based on an attention mechanism. We introduce a channel attention module and a spatial attention module into the original network to address the insufficient semantic extraction ability of the tracking algorithm's convolutional layers in complex environments. The channel attention mechanism enhances feature extraction by letting the network learn the importance of each channel and the relationships between channels, while the spatial attention mechanism strengthens feature extraction by learning the importance of spatial positions, which helps locate the target even under a certain degree of deformation. The two modules are combined to improve the robustness of the tracker without sacrificing tracking speed. We conducted comprehensive experiments on the Object Tracking Benchmark dataset, and the results show that our algorithm outperforms other real-time trackers in both accuracy and robustness in most complex environments.
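
The Siamese matching step that such trackers build on can be sketched as below: a shared backbone extracts template and search-region features, a simple SE-style channel gate stands in for the paper's attention modules, and cross-correlation produces a response map whose peak indicates the target position. All layer choices here are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ChannelAttention(nn.Module):             # simple SE-style gate (stand-in for the paper's modules)
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):                      # x: (B, C, H, W)
        return x * self.fc(x.mean(dim=(2, 3))).unsqueeze(-1).unsqueeze(-1)

class SiameseTracker(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(         # shared feature extractor (placeholder)
            nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3), nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        )
        self.attn = ChannelAttention(128)

    def forward(self, template_img, search_img):
        z = self.attn(self.backbone(template_img))   # template features
        x = self.attn(self.backbone(search_img))     # search-region features
        return F.conv2d(x, z)                        # cross-correlation response map

score = SiameseTracker()(torch.randn(1, 3, 127, 127), torch.randn(1, 3, 255, 255))
peak_idx = score.flatten().argmax()                  # peak of the map ~ target location
```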
