Electronics, Volume 12, Issue 24 (December-2 2023) – 161 articles

Cover Story: A macro–micro dual-drive positioning system was developed for Scanning Beam Interference Lithography (SBIL); it uses a dual-frequency laser interferometer as the position reference and offers long travel, heavy load capacity, and high accuracy. The macro-motion system adopts a friction-driven structure with a feedforward PID control algorithm and achieves a stroke of up to 1800 mm. The micro-motion system adopts a flexible-hinge-plus-PZT driving method with a neural-network-based PID control algorithm, giving the system nanometer-level positioning accuracy. The sources and effects of errors during motion were also assessed in detail. Experimental results show that the workbench can position at the nanoscale over the full travel range, satisfying the SBIL exposure requirement.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • Papers are published in both HTML and PDF formats, with PDF as the official version. To view a paper in PDF format, click its "PDF Full-text" link and open it with the free Adobe Reader.
18 pages, 8503 KiB  
Article
Imitation Learning-Based Energy Management Algorithm: Lille Catholic University Smart Grid Demonstrator Case Study
Electronics 2023, 12(24), 5048; https://doi.org/10.3390/electronics12245048 - 18 Dec 2023
Cited by 1 | Viewed by 642
Abstract
This paper proposes a novel energy management approach (imitation-Q-learning) based on imitation learning (IL) and reinforcement learning (RL). The proposed approach trains a decision-making agent, based on a modified Q-learning algorithm, to mimic expert demonstrations when solving a microgrid (MG) energy management problem. The demonstrations are derived from solving a set of linear programming (LP) problems. The imitation-Q-learning algorithm therefore learns by interacting with the MG simulator and imitating the LP demonstrations, making real-time decisions that minimize MG energy costs without prior knowledge of the uncertainties in photovoltaic (PV) production, load consumption, and electricity prices. A real-scale MG at the Lille Catholic University in France was used as a case study. The proposed approach was compared to the expert (the LP algorithm) and to the conventional Q-learning algorithm in different test scenarios: it ran approximately 80 times faster than conventional Q-learning and matched the performance of LP. To test the robustness of the approach, a PV inverter crash and load shedding were also simulated. Preliminary results show the effectiveness of the proposed method. Full article
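As a rough illustration of the idea behind imitation-Q-learning (the paper's exact update rule is not given in the abstract), the sketch below nudges a tabular Q-learning agent toward the expert (LP) action via a reward bonus; the bonus term, all names, and the toy two-state problem are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def imitation_q_update(Q, s, a, r, s_next, expert_a, alpha=0.1, gamma=0.95, bonus=1.0):
    """One tabular Q-learning step with an imitation bonus added to the
    reward when the chosen action matches the expert demonstration."""
    shaped_r = r + (bonus if a == expert_a else 0.0)
    td_target = shaped_r + gamma * np.max(Q[s_next])   # standard TD target
    Q[s, a] += alpha * (td_target - Q[s, a])           # Q-learning update
    return Q

# toy example: 2 states, 2 actions; the agent takes the expert's action
Q = np.zeros((2, 2))
Q = imitation_q_update(Q, s=0, a=1, r=0.5, s_next=1, expert_a=1)
```

In a full system this update would run inside the MG simulator loop, with the expert actions precomputed by the LP solver.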
27 pages, 8733 KiB  
Article
Improved A-Star Path Planning Algorithm in Obstacle Avoidance for the Fixed-Wing Aircraft
Electronics 2023, 12(24), 5047; https://doi.org/10.3390/electronics12245047 - 18 Dec 2023
Viewed by 439
Abstract
The flight management system is a basic component of avionics for modern airliners, and path planning is key to it; yet current airborne flight management systems need improvement and rely on imports. Based on the classical A* algorithm, this paper proposes an improved A* path planning algorithm that addresses low planning efficiency and non-smooth planned paths. To reduce the heavy computation and long planning time of the classical A* algorithm, a new data structure called a "value table" is designed to replace its open and closed tables, improving retrieval efficiency, and heapsort is used to speed up node ordering. To address flight trajectories that are hard to follow, a trajectory smoothing optimization algorithm with a turning-angle limit is proposed. The gray value of the digital map is incorporated into the A* algorithm, and the calculations of gray cost, cumulative cost, and estimated cost are improved to better satisfy obstacle-avoidance constraints. Comparative simulations show that the improved A* algorithm reduces path planning time to about 1% of that of the classical A* algorithm; the proposed algorithm thus improves planning efficiency and yields a smoother planned path, giving it clear advantages over the classical A* algorithm. Full article
(This article belongs to the Special Issue Advances in Intelligent Data Analysis and Its Applications, Volume II)
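The "value table" structure itself is not described in the abstract; a standard way to obtain the same retrieval speed-up is to keep the open set in a binary heap ordered by f = g + h, as in this minimal sketch on a toy occupancy grid (the 4-connected unit-cost grid and all names are illustrative, not the paper's implementation):

```python
import heapq
from itertools import count

def astar(grid, start, goal):
    """A* on a 4-connected unit-cost grid. The open set is a binary heap
    ordered by f = g + h, so the best node is popped in O(log n) instead
    of scanning a linear open table."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    tie = count()                         # tiebreaker keeps heap comparisons cheap
    open_heap = [(h(start), next(tie), 0, start, None)]
    came_from, best_g = {}, {start: 0}
    while open_heap:
        _, _, g, node, parent = heapq.heappop(open_heap)
        if node in came_from:             # already expanded with a better g
            continue
        came_from[node] = parent
        if node == goal:                  # reconstruct path by walking parents
            path = []
            while node is not None:
                path.append(node)
                node = came_from[node]
            return path[::-1]
        r, c = node
        for nb in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nb[0] < rows and 0 <= nb[1] < cols and grid[nb[0]][nb[1]] == 0:
                ng = g + 1
                if ng < best_g.get(nb, float("inf")):
                    best_g[nb] = ng
                    heapq.heappush(open_heap, (ng + h(nb), next(tie), ng, nb, node))
    return None                           # no path exists

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = astar(grid, (0, 0), (2, 0))        # detours around the blocked row
```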
13 pages, 4241 KiB  
Article
Rotating Object Detection for Cranes in Transmission Line Scenarios
Electronics 2023, 12(24), 5046; https://doi.org/10.3390/electronics12245046 - 18 Dec 2023
Viewed by 433
Abstract
Cranes are pivotal heavy equipment in the construction of transmission line scenarios, and accurately identifying them and monitoring their status is a pressing need. The rapid development of computer vision brings new ideas for solving these challenges. Since cranes have a high aspect ratio, conventional horizontal bounding boxes contain a large amount of redundant content, which deteriorates the accuracy of object detection. In this study, we use a rotating object detection paradigm to detect cranes. We propose the YOLOv8-Crane model, in which YOLOv8 serves as the detection network for rotated targets and Transformers are incorporated in the backbone to improve global context modeling. The Kullback–Leibler divergence (KLD), with its excellent scale invariance, is used as the loss function measuring the distance between the predicted and true distributions. Finally, we validate the superiority of YOLOv8-Crane on 1405 real-scene images collected by ourselves. Our approach demonstrates a significant improvement in crane detection and offers a new solution for enhancing safety monitoring. Full article
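A minimal sketch of a KLD box distance of the kind used as the loss here, following the common formulation in which a rotated box (cx, cy, w, h, θ) is modeled as a 2-D Gaussian; this conversion and all names are assumptions for illustration, not code from the paper:

```python
import numpy as np

def box_to_gaussian(cx, cy, w, h, theta):
    """Model a rotated box as a 2-D Gaussian: mean = box centre,
    covariance = R @ diag(w^2/4, h^2/4) @ R.T with rotation R(theta)."""
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    S = np.diag([w ** 2 / 4.0, h ** 2 / 4.0])
    return np.array([cx, cy], dtype=float), R @ S @ R.T

def kld_2d(mu0, S0, mu1, S1):
    """KL divergence between two 2-D Gaussians; zero iff the boxes match,
    and scale-invariant in the sense motivating its use as a detection loss."""
    S1_inv = np.linalg.inv(S1)
    d = mu1 - mu0
    return 0.5 * (np.trace(S1_inv @ S0) + d @ S1_inv @ d
                  - 2.0 + np.log(np.linalg.det(S1) / np.linalg.det(S0)))

mu_p, S_p = box_to_gaussian(0, 0, 4, 2, 0.0)   # predicted box
mu_t, S_t = box_to_gaussian(0, 0, 4, 2, 0.0)   # identical target box
d_same = kld_2d(mu_p, S_p, mu_t, S_t)          # identical boxes -> 0
```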
18 pages, 2884 KiB  
Article
An Optimized Byzantine Fault Tolerance Algorithm for Medical Data Security
Electronics 2023, 12(24), 5045; https://doi.org/10.3390/electronics12245045 - 18 Dec 2023
Viewed by 492
Abstract
Medical data are an intangible asset and an important resource for the entire society. The mining and application of medical data can generate enormous value. Currently, medical data management is mostly centralized and heavily relies on central servers, which are prone to malfunctions or malicious attacks, making it difficult to form a consensus among multiple parties and achieve secure sharing. Blockchain technology offers a solution to enhance medical data security. However, in blockchain-based medical data sharing schemes, the widely adopted Practical Byzantine Fault-Tolerant (PBFT) algorithm suffers from intricate communication, limited scalability, and the inability to dynamically add or remove nodes, which makes practical requirements hard to satisfy. In this paper, we implement an efficient and scalable consensus algorithm based on PBFT, referred to as ME-PBFT, which is better suited to medical data security. First, we design a reputation evaluation model, implemented with a sigmoid function of adjustable difficulty, to select more trusted nodes to participate in system consensus. Second, we divide node roles to construct a dual consensus-layer structure. Finally, we design a dynamic node join-and-exit mechanism within the overall framework of the algorithm. Analysis shows that, compared to PBFT and RAFT, ME-PBFT reduces communication complexity, improves fault tolerance, and has good scalability. It can meet the need for consensus and secure sharing of medical data among multiple parties. Full article
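A toy sketch of a sigmoid-based reputation filter of the kind described, where a steeper "difficulty" makes high reputation harder to reach; the scoring scale, `difficulty`, and `threshold` values are illustrative assumptions:

```python
import math

def reputation(score, difficulty=1.0, midpoint=0.0):
    """Map a raw behaviour score to (0, 1); larger `difficulty` steepens
    the sigmoid, so only clearly good behaviour earns high reputation."""
    return 1.0 / (1.0 + math.exp(-difficulty * (score - midpoint)))

def consensus_nodes(scores, threshold=0.7, difficulty=2.0):
    """Select the nodes whose reputation clears the trust threshold."""
    return [n for n, s in scores.items() if reputation(s, difficulty) >= threshold]

scores = {"A": 1.2, "B": -0.5, "C": 0.6}   # hypothetical behaviour scores
trusted = consensus_nodes(scores)           # nodes admitted to consensus
```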
20 pages, 3384 KiB  
Article
MCFP-YOLO Animal Species Detector for Embedded Systems
Electronics 2023, 12(24), 5044; https://doi.org/10.3390/electronics12245044 - 18 Dec 2023
Viewed by 550
Abstract
Advances in deep learning have led to the development of various animal species detection models suited for different environments. Building on this, our research introduces a detection model that efficiently handles both batch and real-time processing. It achieves this by integrating a motion-based frame selection algorithm and a two-stage pipelining–dataflow hybrid parallel processing approach. These modifications significantly reduced the processing delay and power consumption of the proposed MCFP-YOLO detector, particularly on embedded systems with limited resources, without trading off the accuracy of our animal species detection system. For field applications, the proposed MCFP-YOLO model was deployed and tested on two embedded devices: the RP4B and the Jetson Nano. While the Jetson Nano provided faster processing, the RP4B was selected due to its lower power consumption and a balanced cost–performance ratio, making it particularly suitable for extended use in remote areas. Full article
(This article belongs to the Special Issue Embedded Systems for Neural Network Applications)
17 pages, 792 KiB  
Article
Bit-Weight Adjustment for Bridging Uniform and Non-Uniform Quantization to Build Efficient Image Classifiers
Electronics 2023, 12(24), 5043; https://doi.org/10.3390/electronics12245043 - 18 Dec 2023
Viewed by 631
Abstract
Network quantization, which strives to reduce the precision of model parameters and/or features, is one of the most efficient ways to accelerate model inference and reduce memory consumption, particularly for deep models when performing a variety of real-time vision tasks on edge platforms with constrained resources. Existing quantization approaches function well when using relatively high bit widths but suffer from a decline in accuracy at ultra-low precision. In this paper, we propose a bit-weight adjustment (BWA) module to bridge uniform and non-uniform quantization, successfully quantizing the model to ultra-low bit widths without bringing about noticeable performance degradation. Given uniformly quantized data, the BWA module adaptively transforms these data into non-uniformly quantized data by simply introducing trainable scaling factors. With the BWA module, we combine uniform and non-uniform quantization in a single network, allowing low-precision networks to benefit from both the hardware friendliness of uniform quantization and the high performance of non-uniform quantization. We optimize the proposed BWA module by directly minimizing the classification loss through end-to-end training. Numerous experiments on the ImageNet and CIFAR-10 datasets reveal that the proposed approach outperforms state-of-the-art approaches across various bit-width settings and can even produce low-precision quantized models that are competitive with their full-precision counterparts. Full article
(This article belongs to the Section Artificial Intelligence)
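A rough illustration of the bridging idea above: a standard uniform quantizer produces hardware-friendly integer levels, and per-level scale factors (trainable end-to-end in the paper, fixed here for illustration) remap those levels to non-uniform reconstruction values. All names and numbers below are assumptions, not the paper's BWA module:

```python
import numpy as np

def uniform_quantize(x, n_bits, x_max):
    """Standard uniform quantizer: clip, scale, round to integer levels."""
    n_levels = 2 ** n_bits - 1
    x = np.clip(x, 0.0, x_max)
    return np.round(x / x_max * n_levels).astype(int)   # hardware-friendly indices

def bwa_dequantize(idx, level_scales, x_max):
    """Bit-weight-adjustment-style remap: each uniform level index passes
    through a learned per-level scale, yielding non-uniform values."""
    n_levels = len(level_scales) - 1
    return level_scales[idx] * (idx / n_levels) * x_max

x = np.array([0.1, 0.4, 0.9])
idx = uniform_quantize(x, n_bits=2, x_max=1.0)    # integer levels 0..3
scales = np.array([1.0, 1.1, 0.95, 1.0])          # illustrative "learned" factors
x_hat = bwa_dequantize(idx, scales, x_max=1.0)    # non-uniform reconstruction
```

In training, `scales` would be a trainable tensor optimized by the classification loss, while `idx` stays integer for uniform-quantization hardware.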
21 pages, 3803 KiB  
Article
A Fast Mismatch Calibration Method Based on Frequency Domain Orthogonal Decomposition for Time-Interleaved Analog-to-Digital Converters
Electronics 2023, 12(24), 5042; https://doi.org/10.3390/electronics12245042 - 18 Dec 2023
Viewed by 485
Abstract
This paper proposes a fully digital background calibration method for time-interleaved analog-to-digital converter (TIADC) mismatches. The method analyzes the frequency and phase of spurious signals caused by three types of mismatches in TIADCs in the frequency domain. By utilizing the Hilbert transform and frequency shifting, orthogonal basis signals located at the mismatch frequencies can be constructed. The calibration of mismatches is achieved by linearly combining the orthogonal basis signals with the estimated coefficients and subtracting them from the original signal. The estimation of coefficients is determined by evaluating the correlation between the linear combination of orthogonal basis signals and the calibrated signal. Furthermore, an exponential moving average (EMA) and least mean square (LMS) algorithm are introduced to expedite the coefficient estimation process. The entire calibration process converges in merely 600 samples, significantly improving the convergence speed. By monitoring the amplitude of the input signal and adjusting the LMS step, the algorithm is functional under different amplitude signals, enhancing the robustness. An off-chip calibration is conducted based on a commercial 14-bit, 8-channel, 2.4 GSPS TIADC. Results indicate that all spurious signals are suppressed below 80 dB, and the convergence rate is consistent with the simulation. Full article
(This article belongs to the Special Issue Design of Mixed Analog/Digital Circuits, Volume 2)
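The coefficient-estimation loop can be sketched as an LMS update smoothed by an EMA; this toy single-coefficient, noiseless version (all parameters are illustrative, not the paper's) converges to the true spur coefficient:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
basis = rng.standard_normal(n)        # stand-in for one orthogonal basis signal
true_c = 0.3
signal = true_c * basis               # spur = coefficient * basis (noiseless toy)

c_hat, mu = 0.0, 0.01                 # LMS coefficient estimate and step size
ema, beta = 0.0, 0.99                 # EMA smooths the running estimate
for k in range(n):
    err = signal[k] - c_hat * basis[k]      # residual after spur cancellation
    c_hat += mu * err * basis[k]            # LMS update toward true_c
    ema = beta * ema + (1 - beta) * c_hat   # exponential moving average
```

In the real calibration, one such loop would run per constructed basis signal, and the step `mu` would be scaled with the monitored input amplitude.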
12 pages, 259 KiB  
Review
Application of Artificial Intelligence Techniques to Detect Fake News: A Review
Electronics 2023, 12(24), 5041; https://doi.org/10.3390/electronics12245041 - 18 Dec 2023
Viewed by 1523
Abstract
With the rapid growth of social media platforms and online news consumption, the proliferation of fake news has emerged as a pressing concern. Detecting and combating fake news has become crucial in ensuring the accuracy and reliability of information disseminated through social media. Machine learning plays a crucial role in fake news detection due to its ability to analyze large amounts of data and identify patterns and trends that are indicative of misinformation. Fake news detection involves analyzing various types of data, such as textual or media content, social context, and network structure. Machine learning techniques enable automated and scalable detection of fake news, which is essential given the vast volume of information shared on social media platforms. Overall, machine learning provides a powerful tool for detecting and preventing the spread of fake news on social media. This review article provides an extensive analysis of recent advancements in fake news detection. The chosen articles cover a wide range of approaches, including data mining, deep learning, natural language processing (NLP), ensemble learning, transfer learning, and graph-based techniques. Full article
(This article belongs to the Section Artificial Intelligence)
17 pages, 3197 KiB  
Article
DepressionGraph: A Two-Channel Graph Neural Network for the Diagnosis of Major Depressive Disorders Using rs-fMRI
Electronics 2023, 12(24), 5040; https://doi.org/10.3390/electronics12245040 - 18 Dec 2023
Viewed by 538
Abstract
Major depressive disorder (MDD) is a prevalent psychiatric condition with a complex and unknown pathological mechanism. Resting-state functional magnetic resonance imaging (rs-fMRI) has emerged as a valuable non-invasive technology for MDD diagnosis. By utilizing rs-fMRI data, a dynamic brain functional connection network (FCN) can be constructed to represent the complex interacting relationships of multiple brain sub-regions. Graph neural network (GNN) models have been widely employed to extract disease-associated information. The simple averaging or summation graph readout functions of GNNs may lead to a loss of critical information. This study introduces a two-channel graph neural network (DepressionGraph) that effectively aggregates more comprehensive graph information from the two channels based on the node feature number and node number. Our proposed DepressionGraph model leverages the transformer–encoder architecture to extract the relevant information from the time-series FCN. The rs-fMRI data were obtained from a cohort of 533 subjects, and the experimental data show that DepressionGraph outperforms both traditional GNNs and simple graph readout functions for the MDD diagnosis task. The introduced DepressionGraph framework demonstrates efficacy in extracting complex patterns from rs-fMRI data and exhibits promising capabilities for the precise diagnosis of complex neurological disorders. The current study acknowledges a potential gender bias due to an imbalanced gender distribution in the dataset. Future research should prioritize the development and utilization of gender-balanced datasets to mitigate this limitation and enhance the generalizability of the findings. Full article
(This article belongs to the Special Issue Deep Learning in Image Processing and Computer Vision)
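As a minimal illustration of why a second readout channel can retain information that plain averaging discards, the sketch below aggregates a node-embedding matrix along both the node axis and the feature axis and concatenates the two views; this is a simplification of the two-channel idea, and the names are illustrative, not DepressionGraph's code:

```python
import numpy as np

def two_channel_readout(H):
    """H: (num_nodes, num_features) node-embedding matrix.
    Channel 1 summarizes each feature across nodes; channel 2 summarizes
    each node across features; concatenating keeps both views of the graph."""
    node_view = H.mean(axis=0)       # shape (num_features,) per-feature summary
    feature_view = H.mean(axis=1)    # shape (num_nodes,)   per-node summary
    return np.concatenate([node_view, feature_view])

H = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])
g = two_channel_readout(H)           # length = num_features + num_nodes = 5
```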
24 pages, 4153 KiB  
Article
Introducing the UWF-ZeekDataFall22 Dataset to Classify Attack Tactics from Zeek Conn Logs Using Spark’s Machine Learning in a Big Data Framework
Electronics 2023, 12(24), 5039; https://doi.org/10.3390/electronics12245039 - 18 Dec 2023
Viewed by 601
Abstract
This study introduces UWF-ZeekDataFall22, a newly created dataset labeled using the MITRE ATT&CK framework. Although the focus of this research is on classifying the never-before classified resource development tactic, the reconnaissance and discovery tactics were also classified. The results were also compared to a similarly created dataset, UWF-ZeekData22, created in 2022. Both of these datasets, UWF-ZeekDataFall22 and UWF-ZeekData22, created using Zeek Conn logs, were stored in a Big Data Framework, Hadoop. For machine learning classification, Apache Spark was used in the Big Data Framework. To summarize, the uniqueness of this work is its focus on classifying attack tactics. For UWF-ZeekDataFall22, the binary as well as the multinomial classifier results were compared, and overall, the results of the binary classifier were better than those of the multinomial classifier. In the binary classification, the tree-based classifiers performed better than the other classifiers, although the decision tree and random forest algorithms performed almost equally well in the multinomial classification too. Taking training time into consideration, decision trees can be considered the most efficient classifier. Full article
(This article belongs to the Special Issue Security and Privacy Issues and Challenges in Big Data Era)
13 pages, 2245 KiB  
Article
Electrochemical Impedance Spectrum Equivalent Circuit Parameter Identification Using a Deep Learning Technique
Electronics 2023, 12(24), 5038; https://doi.org/10.3390/electronics12245038 - 18 Dec 2023
Viewed by 631
Abstract
Physical models are suitable for the development and optimization of materials and cell designs, whereas models based on experimental data and electrical equivalent circuits (EECs) are suitable for the development of operation estimators, both for cells and batteries. This research work develops an innovative unsupervised artificial neural network (ANN) training cost function for identifying equivalent circuit parameters from electrochemical impedance spectroscopy (EIS), in order to identify and monitor parameter variations associated with the different physicochemical processes that can be related to battery states or failure modes. Many techniques and algorithms are used to fit predefined EEC parameters, and many require substantial human expertise. However, once an appropriate EEC model is selected for the physicochemical processes of a given battery technology, the challenge is to implement algorithms that can automatically calculate parameter variations in real time, enabling estimators of capacity, health, safety, and other degradation modes. Building on previous studies that used data augmentation techniques, the new ANN deep learning method introduced in this study yields better results than classical training algorithms. The data used in this work come from an aging and characterization dataset for 80 Ah, 12 V lead–acid batteries. Full article
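For context, the kind of EEC whose parameters EIS fitting identifies can be as simple as a Randles-type circuit; the sketch below computes its complex impedance over a frequency sweep (the circuit topology, names, and component values are illustrative assumptions, not the paper's battery model):

```python
import numpy as np

def randles_impedance(freq_hz, R0, R1, C1):
    """Complex impedance of a simple Randles-type circuit:
    series resistance R0 plus a parallel R1 || C1 branch."""
    w = 2 * np.pi * np.asarray(freq_hz)
    z_branch = R1 / (1 + 1j * w * R1 * C1)   # parallel RC branch impedance
    return R0 + z_branch

freqs = np.logspace(-2, 4, 50)               # 10 mHz .. 10 kHz sweep
Z = randles_impedance(freqs, R0=0.01, R1=0.05, C1=10.0)
# low frequency -> R0 + R1; high frequency -> R0 (capacitor shorts the branch)
```

A fitting routine (classical least squares or, as in the paper, an ANN cost function) would search for R0, R1, C1 values whose predicted Z best matches the measured spectrum.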
13 pages, 3147 KiB  
Article
An LCD Detection Method Based on the Simultaneous Automatic Generation of Samples and Masks Using Generative Adversarial Networks
Electronics 2023, 12(24), 5037; https://doi.org/10.3390/electronics12245037 - 18 Dec 2023
Viewed by 601
Abstract
When applying deep learning methods to detect micro defects on low-contrast LCD surfaces, there are challenges related to imbalances in sample datasets and the complexity and laboriousness of annotating and acquiring target image masks. In order to solve these problems, a method based on sample and mask auto-generation for deep generative network models is proposed. We first generate an augmented dataset of negative samples using a generative adversarial network (GAN), and then highlight the defect regions in these samples using the training method constructed by the GAN to automatically generate masks for the defect images. Experimental results demonstrate the effectiveness of our proposed method, as it can simultaneously generate liquid crystal image samples and their corresponding image masks. Through a comparative experiment on the deep learning method Mask R-CNN, we demonstrate that the automatically obtained image masks have high detection accuracy. Full article
(This article belongs to the Special Issue Neural Networks and Deep Learning in Computer Vision)
22 pages, 1907 KiB  
Article
Multi-Wavelength Path Loss Model for Indoor VLC with Mobile Human Blockage
Electronics 2023, 12(24), 5036; https://doi.org/10.3390/electronics12245036 - 18 Dec 2023
Viewed by 619
Abstract
Visible light communication (VLC) is one of the candidate technologies for sixth generation (6G) networks. The path loss model is particularly important for link budget estimation and network planning in VLC. Due to the wideband nature and the extremely poor diffraction capacity of light, the path loss of the VLC channel is susceptible to wavelength dependence and the blockage effect. In this paper, we propose a novel path loss model which can characterize the impact of wavelength dependence combined with mobile human blockage for both single-LED (light-emitting diode) and multi-LED scenarios. When there is no blockage in the channel, the multi-wavelength path loss under free-space propagation is modeled with a small standard deviation of 0.262 in the single-LED scenario and a small root mean square error of 0.009 in the multi-LED scenario, which indicates the high accuracy of the model. When considering mobile human blockage, the blockage probability (BP) is modeled with full consideration of realistic human mobility and human body shadowing. The results indicate that the BP in the single-LED scenario can reach 0.08, while the BP in the multi-LED scenario is 0.022. This demonstrates that the distributed deployment of transmitters can effectively reduce the occurrence of the blockage state in VLC. Full article
(This article belongs to the Special Issue Channel Measurement, Modeling and Simulation of 6G)
22 pages, 6972 KiB  
Article
Safe Performance of an Industrial Autonomous Ground Vehicle in the Supervisory Control Framework
Electronics 2023, 12(24), 5035; https://doi.org/10.3390/electronics12245035 - 17 Dec 2023
Viewed by 449
Abstract
A cyber-physical system, an autonomous guided vehicle (AGV) with diverse applications such as theme parks and product transfer in manufacturing units, is modeled and controlled. The models of all subsystems of the AGV are given in discrete event systems (DES) form following the Ramadge–Wonham (R–W) framework. The safe performance of the AGV, the desired behavior of the system, is expressed as a set of desired rules and translated into regular languages. The regular languages are then realized as supervisory automata in the framework of Supervisory Control Theory (SCT). To ease implementation and coordination of the control architecture, the supervisors are designed as two-state automata. The controllability of the regular languages with respect to the AGV is proved using the physical realizability (PR) of the synchronous product of the automata of the system and the supervisors. The nonblocking property of all the controlled automata is also proven to be satisfied. Simulation of the controlled AGV validates the proposed method. Full article
(This article belongs to the Special Issue Advances in Robust Control for Automated Manufacturing System)
18 pages, 2651 KiB  
Article
A Service Recommendation System Based on Dynamic User Groups and Reinforcement Learning
Electronics 2023, 12(24), 5034; https://doi.org/10.3390/electronics12245034 - 17 Dec 2023
Viewed by 503
Abstract
Recently, advancements in machine-learning technology have enabled platforms such as short video applications and e-commerce websites to accurately predict user behavior and cater to their interests. However, the limited nature of user data may compromise the accuracy of these recommendation systems. To address personalized recommendation challenges and adapt to changes in user preferences, reinforcement-learning algorithms have been developed. These algorithms strike a balance between exploring new items and exploiting existing ones, thereby enhancing recommendation accuracy. Nevertheless, the cold-start problem and data sparsity continue to impede the development of these recommendation systems. Hence, we proposed a joint-training algorithm that combined deep reinforcement learning with dynamic user groups. The goal was to capture user preferences for precise recommendations while addressing the challenges of data sparsity and cold-start. We used embedding layers to capture representations and make decisions before the reinforcement-learning process, executing this approach cyclically. Through this method, we dynamically obtained more accurate user and item representations and provided precise recommendations. Additionally, to address data sparsity, we introduced a dynamic user grouping algorithm that collectively enhanced the recommendations using group parameters. We evaluated our model using movie-rating and e-commerce datasets. Compared to other baseline algorithms, our algorithm not only improved recommendation accuracy but also enhanced diversity by uncovering recommendations across more categories. Full article
(This article belongs to the Section Artificial Intelligence)
24 pages, 6198 KiB  
Article
Resource Allocation in UAV-Enabled NOMA Networks for Enhanced Six-G Communications Systems
Electronics 2023, 12(24), 5033; https://doi.org/10.3390/electronics12245033 - 17 Dec 2023
Cited by 1 | Viewed by 756
Abstract
Enhancing energy efficiency, content distribution, latency, and transmission speed is vital in communication systems. Multiple access methods hold great promise for boosting these performance indicators. This manuscript evaluates the effectiveness of Non-Orthogonal Multiple Access (NOMA) and Orthogonal Multiple Access (OMA) systems within a single cell, where users are scattered randomly and rely on relays for dependability, and presents a model for improving these metrics using NOMA and OMA within that setting. Additionally, this paper proposes a caching strategy using unmanned aerial vehicles (UAVs) as aerial base stations for ground users. These UAVs distribute cached content to minimize the overall latency of content demands from ground users while modifying their positions. We carried out simulations using various cache capacities and user counts linked to their respective UAVs. Furthermore, we evaluated OMA and NOMA in terms of the achievable rate and energy efficiency. The proposed model achieves noteworthy enhancements across various scenarios, including different sum rates, numbers of mobile users, diverse cache sizes, and amounts of power allocation. Full article
(This article belongs to the Special Issue Advances in 5G Wireless Edge Computing)

19 pages, 9182 KiB  
Article
Indoor Localization Based on Integration of Wi-Fi with Geomagnetic and Light Sensors on an Android Device Using a DFF Network
Electronics 2023, 12(24), 5032; https://doi.org/10.3390/electronics12245032 - 16 Dec 2023
Viewed by 746
Abstract
Sensor-related indoor localization has attracted considerable attention in recent years. The accuracy of conventional fingerprint solutions based on a single sensor, such as a Wi-Fi sensor, is affected by multipath interferences from other electronic devices that are produced as a result of complex indoor environments. Light sensors and magnetic (i.e., geomagnetic) field sensors can be used to enhance the accuracy of a system since they are less vulnerable to disturbances. In this paper, we propose a deep feedforward (DFF)-neural-network-based method, termed DFF-WGL, which integrates the data from the embedded Wi-Fi sensor, geomagnetic field sensor, and light sensor (WGL) in a smart device to localize the device in an indoor environment. DFF-WGL does not require complex and expensive auxiliary equipment, except for basic fluorescent lamps and low-density Wi-Fi signal coverage, conditions that are easily satisfied in modern offices or educational buildings. The proposed system was implemented on a commercial off-the-shelf Android device, and performance was evaluated through an experimental analysis conducted in two different indoor testbeds, one measuring 60.5 m² and the other measuring 38 m², with 242 and 60 reference points, respectively. The results indicate that the model prediction with an input consisting of the combination of light, magnetic field, and two Wi-Fi RSS signals achieved mean localization errors of 0.01 m and 0.04 m in the two testbeds, respectively, outperforming any subset of the sensor combination and verifying the effectiveness of the proposed DFF-WGL method. Full article
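To make the sensor-fusion idea concrete, here is a minimal sketch of a feedforward pass over a fused feature vector (two Wi-Fi RSS values, a magnetic-field magnitude, and a light level). The layer sizes, weights, and normalization constants are placeholders for illustration, not the trained DFF-WGL model.

```python
def relu(v):
    return [max(0.0, x) for x in v]

def dense(x, w, b):
    # w: one row of weights per output unit; b: one bias per unit.
    return [sum(wi * xi for wi, xi in zip(row, x)) + bi for row, bi in zip(w, b)]

def normalize(rss1, rss2, mag, lux):
    # Scale each modality to a comparable range before fusion (assumed scales).
    return [(rss1 + 100) / 100, (rss2 + 100) / 100, mag / 100, lux / 1000]

# Placeholder weights for a tiny 4 -> 3 -> 2 network.
w1 = [[0.5, -0.2, 0.1, 0.3], [0.2, 0.4, -0.1, 0.2], [-0.3, 0.1, 0.5, 0.1]]
b1 = [0.1, 0.0, -0.1]
w2 = [[0.6, 0.2, -0.4], [0.1, 0.5, 0.3]]
b2 = [0.0, 0.0]

x = normalize(rss1=-55, rss2=-70, mag=48.0, lux=320.0)
position = dense(relu(dense(x, w1, b1)), w2, b2)  # hypothetical (x, y) estimate
```

The point of the sketch is the fusion step: heterogeneous readings are normalized into one vector so a single fixed-shape network can regress a position from all three modalities at once.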
(This article belongs to the Special Issue Recent Research in Positioning and Activity Recognition Systems)

16 pages, 8096 KiB  
Article
Modeling of Cross-Coupled AC–DC Charge Pump Operating in Subthreshold Region
Electronics 2023, 12(24), 5031; https://doi.org/10.3390/electronics12245031 - 16 Dec 2023
Viewed by 663
Abstract
This paper proposes a circuit model of a cross-coupled CMOS AC–DC charge pump (XC–CP) operating in the subthreshold region. The aim is to improve the efficiency of designing XC–CPs with a variety of specifications, e.g., input and output voltages and AC input frequency. First, it is shown that the output resistance (Ro) of XC–CP is much higher than those of CPs with single diodes (SD–CP) and ultra-low-power diodes (ULPD–CP) as charge transfer switches (CTSs). Second, the reason behind the above feature of XC–CP, identified by a simple model, is that the gate-to-source voltages of CTS MOSFETs are independent of the output voltage of the CP. Third, the high but finite Ro of XC–CP is explainable with a more accurate model that includes the dependence of the saturation current of MOSFETs operating in the subthreshold region on the drain-to-source voltage, which is a function of the output voltage of CP. The model is in good agreement with measured and simulated results of XC–, SD–, and ULPD–CPs fabricated in a 250 nm CMOS. Full article
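The Vds dependence the abstract points to can be illustrated with the textbook subthreshold drain-current model. The constants below (slope factor, leakage scale) are assumed values, not the paper's extracted 250 nm parameters, and the mechanism shown is the generic one rather than the paper's refined model.

```python
import math

VT = 0.0259   # thermal voltage at room temperature, volts
N = 1.4       # subthreshold slope factor (assumed)
I0 = 1e-9     # process-dependent leakage scale, amperes (assumed)

def subthreshold_id(vgs, vds):
    """Textbook model: I_D = I0 * exp(Vgs/(n*Vt)) * (1 - exp(-Vds/Vt))."""
    return I0 * math.exp(vgs / (N * VT)) * (1.0 - math.exp(-vds / VT))

def output_resistance(vgs, vds, dv=1e-4):
    """Numerical r_o = dVds/dId around an operating point (central difference)."""
    di = subthreshold_id(vgs, vds + dv) - subthreshold_id(vgs, vds - dv)
    return 2 * dv / di

# Deep in saturation (Vds >> Vt) the current barely depends on Vds,
# so the small-signal output resistance becomes very large:
ro_sat = output_resistance(vgs=0.2, vds=0.3)
ro_triode = output_resistance(vgs=0.2, vds=0.02)
```

This reproduces the qualitative behavior the abstract relies on: the high but finite output resistance comes from the residual slope of I_D versus Vds in the subthreshold saturation region.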

12 pages, 9008 KiB  
Article
Enhancing Outdoor Moving Target Detection: Integrating Classical DSP with mmWave FMCW Radars in Dynamic Environments
Electronics 2023, 12(24), 5030; https://doi.org/10.3390/electronics12245030 - 16 Dec 2023
Viewed by 575
Abstract
This paper introduces a computationally inexpensive technique for moving target detection in challenging outdoor environments using millimeter-wave (mmWave) frequency-modulated continuous-wave (FMCW) radars, leveraging traditional signal processing methodologies. Conventional learning-based techniques for moving target detection degrade when environmental conditions vary. Hence, the work described here leverages robust digital signal processing (DSP) methods, including the wavelet transform, FIR filtering, and peak detection, to efficiently address variations in reflective data. The evaluation of this method is conducted in an outdoor environment, which includes obstructions like woods and trees, producing an accuracy score of 92.0% and a precision of 91.5%. Notably, this approach outperforms deep learning methods when operating in changing environments that produce extreme data variations. Full article
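A hedged sketch of two stages of the classical DSP chain mentioned above, FIR filtering followed by peak detection, applied to a synthetic one-dimensional profile. The wavelet stage and the actual radar data format are omitted, and the signal, filter length, and threshold are made-up values.

```python
import random

def fir_moving_average(signal, taps=5):
    """Simple FIR low-pass (moving average) to suppress clutter noise."""
    half = taps // 2
    out = []
    for i in range(len(signal)):
        window = signal[max(0, i - half): i + half + 1]
        out.append(sum(window) / len(window))
    return out

def detect_peaks(signal, threshold):
    """Flag strict local maxima above a clutter threshold as candidate targets."""
    peaks = []
    for i in range(1, len(signal) - 1):
        if signal[i] > threshold and signal[i] >= signal[i - 1] and signal[i] > signal[i + 1]:
            peaks.append(i)
    return peaks

rng = random.Random(42)
# Synthetic range profile: noise floor plus one strong reflector around bin 30.
profile = [0.3 * rng.random() for _ in range(64)]
for i, gain in ((29, 1.0), (30, 2.0), (31, 1.0)):
    profile[i] += gain

smoothed = fir_moving_average(profile)
peaks = detect_peaks(smoothed, threshold=0.5)
```

Because the filter and threshold make no assumptions about the scene, the chain behaves the same when the noise statistics drift, which is the robustness argument the abstract makes against purely learned detectors.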
(This article belongs to the Special Issue Machine Learning for Radar and Communication Signal Processing)

17 pages, 929 KiB  
Article
Vehicle Simulation Algorithm for Observations with Variable Dimensions Based on Deep Reinforcement Learning
Electronics 2023, 12(24), 5029; https://doi.org/10.3390/electronics12245029 - 16 Dec 2023
Viewed by 741
Abstract
Vehicle simulation algorithms play a crucial role in enhancing traffic efficiency and safety by predicting and evaluating vehicle behavior in various traffic scenarios. Recently, vehicle simulation algorithms based on reinforcement learning have demonstrated excellent performance in practical tasks due to their ability to exhibit superior performance with zero-shot learning. However, these algorithms face field adaptation challenges when deployed on task sets with variable-dimensional observations, primarily due to the inherent limitations of neural network models. In this paper, we propose a neural network structure accommodating variations in specific dimensions to enhance existing reinforcement learning methods. Building upon this, a scene-compatible vehicle simulation algorithm is designed. We conducted experiments on multiple tasks and scenarios using the Highway-Env traffic environment simulator. The results of our experiments demonstrate that the algorithm can successfully operate on all tasks using a fixed-shape neural network model, even with variable-dimensional observations. Our model exhibits no degradation in simulation performance when compared to the baseline algorithm. Full article
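One common way to reconcile variable-dimensional observations with a fixed-shape network is to pad per-vehicle feature rows to a fixed slot count and carry a validity mask. This is a sketch of the general technique, not necessarily the paper's actual architecture; the slot count and feature layout are assumptions.

```python
MAX_VEHICLES = 4   # fixed network capacity (assumed)
FEATURES = 3       # e.g. x, y, speed per observed vehicle (assumed layout)

def pad_observation(vehicles):
    """vehicles: list of FEATURES-length rows, any count up to MAX_VEHICLES."""
    if len(vehicles) > MAX_VEHICLES:
        vehicles = vehicles[:MAX_VEHICLES]  # truncate overflow
    mask = [1.0] * len(vehicles) + [0.0] * (MAX_VEHICLES - len(vehicles))
    padded = [list(v) for v in vehicles]
    padded += [[0.0] * FEATURES for _ in range(MAX_VEHICLES - len(vehicles))]
    return padded, mask

def masked_mean(padded, mask):
    """Permutation-invariant pooling that ignores padded slots."""
    total = [0.0] * FEATURES
    n = sum(mask) or 1.0  # guard against an empty observation
    for row, m in zip(padded, mask):
        for j in range(FEATURES):
            total[j] += m * row[j]
    return [t / n for t in total]

obs = [[1.0, 2.0, 30.0], [3.0, 4.0, 25.0]]   # two vehicles observed
padded, mask = pad_observation(obs)
pooled = masked_mean(padded, mask)
```

The pooled vector always has the same shape regardless of how many vehicles were observed, so the downstream policy network never sees a dimension change.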
(This article belongs to the Special Issue Zero-Shot Learning and Field Adaptation)

15 pages, 4775 KiB  
Article
Study of Single-Event Effects Influenced by Displacement Damage Effects under Proton Irradiation in Static Random-Access Memory
Electronics 2023, 12(24), 5028; https://doi.org/10.3390/electronics12245028 - 16 Dec 2023
Viewed by 482
Abstract
Static random-access memory (SRAM), a pivotal component in integrated circuits, finds extensive applications and remains a focal point in the global research on single-event effects (SEEs). Prolonged exposure to irradiation, particularly the displacement damage effect (DD) induced by high-energy protons, poses a substantial threat to the performance of electronic devices. Additionally, the impact of proton displacement damage effects on the performance of a six-transistor SRAM with an asymmetric structure is not well understood. In this paper, we conducted an analysis of the impact and regularities of DD on the upset cross-sections of SRAM and simulated the single-event upset (SEU) characteristics of SRAM using the Monte Carlo method. The research findings reveal an overall increasing trend in upset cross-sections with the augmentation of proton energy. Notably, the effect of proton irradiation on the SEU cross-section is related to the storage state of SRAM. Due to the asymmetry in the distribution of sensitive regions during the storage of “0” and “1”, the impact of DD in the two initial states is not uniform. These findings can be used to identify the causes of SEU in memory devices. Full article
(This article belongs to the Section Microelectronics)

19 pages, 3842 KiB  
Article
Discrepant Semantic Diffusion Boosts Transfer Learning Robustness
Electronics 2023, 12(24), 5027; https://doi.org/10.3390/electronics12245027 - 16 Dec 2023
Viewed by 543
Abstract
Transfer learning can improve the robustness and generalization of the model, reducing potential privacy and security risks. It operates by fine-tuning a pre-trained model on downstream datasets. This process not only enhances the model’s capacity to acquire generalizable features but also ensures an effective alignment between upstream and downstream knowledge domains. Transfer learning can effectively speed up the model convergence when adapting to novel tasks, thereby leading to the efficient conservation of both data and computational resources. However, existing methods often neglect the discrepant downstream–upstream connections. Instead, they rigidly preserve the upstream information without an adequate regularization of the downstream semantic discrepancy. Consequently, this results in weak generalization, issues with collapsed classification, and an overall inferior performance. The main reason lies in the collapsed downstream–upstream connection due to the mismatched semantic granularity. Therefore, we propose a discrepant semantic diffusion method for transfer learning, which adjusts the mismatched semantic granularity and alleviates the collapsed classification problem to improve the transfer learning performance. Specifically, the proposed framework consists of a Prior-Guided Diffusion for pre-training and a discrepant diffusion for fine-tuning. Firstly, the Prior-Guided Diffusion aims to empower the pre-trained model with the semantic-diffusion ability. This is achieved through a semantic prior, which consequently provides a more robust pre-trained model for downstream classification. Secondly, the discrepant diffusion focuses on encouraging semantic diffusion. Its design intends to avoid the unwanted semantic centralization, which often causes the collapsed classification. Furthermore, it is constrained by the semantic discrepancy, serving to elevate the downstream discrimination capabilities.
Extensive experiments on eight prevalent downstream classification datasets confirm that our method can outperform a number of state-of-the-art approaches, especially for fine-grained datasets or datasets dissimilar to upstream data (e.g., 3.75% improvement for Cars dataset and 1.79% improvement for SUN dataset under the few-shot setting with 15% data). Furthermore, the experiments of data sparsity caused by privacy protection successfully validate our proposed method’s effectiveness in the field of artificial intelligence security. Full article
(This article belongs to the Special Issue AI Security and Safety)

15 pages, 2667 KiB  
Article
Single-Instruction-Multiple-Data Instruction-Set-Based Heat Ranking Optimization for Massive Network Flow
Electronics 2023, 12(24), 5026; https://doi.org/10.3390/electronics12245026 - 16 Dec 2023
Viewed by 444
Abstract
To cope with the massive scale of traffic and reduce the memory overhead of traffic statistics, methods based on the Sketch algorithm have become a research hotspot. This paper studies the problem of top-k flow statistics based on the Sketch algorithm and proposes a method that estimates flow heat from massive network traffic using the Sketch algorithm and identifies the k flows with the highest heat using a bitonic sort algorithm. To address the performance difficulties of applying multiple hash functions in the implementation of the Sketch algorithm, the Single-Instruction-Multiple-Data (SIMD) instruction set is adopted to improve the performance of the Sketch algorithm, so that SIMD instructions can process multiple fragments of data in a single step, performing multiple hash operations and comparing and sorting multiple flow-table entries simultaneously. The throughput of the execution task is thus improved. Firstly, the elements of the data flow are described and stored in the form of vectors, while the construction, analysis, and operation of data vectors are realized by SIMD instructions. Secondly, the multi-hash operation is simplified into a single vector operation, which reduces the CPU computing resource consumption of the Sketch algorithm. At the same time, the SIMD instruction set is used to optimize the parallel comparison operation of the flow table in the bitonic sort algorithm. Finally, the SIMD instruction set is used to optimize the functions in the Sketch algorithm and the top-k sorting program, and the optimized code is tested and analyzed. The experimental results show that the time consumed by the advanced vector extensions (AVX)-optimized version is significantly reduced compared with the original version.
When the length of KEY is 96 bytes, the instructions consumed by the multiple hash functions account for a smaller share of the entire Sketch algorithm, and the time consumed by the AVX-optimized version is about 67.2% of that of the original version. As the length of KEY gradually increases to 256 bytes, the time consumed by the AVX-optimized version decreases to 53.8% of the original version. The simulation results show that the AVX optimization is effective in improving the measurement efficiency of network flows. Full article
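A minimal scalar sketch of the algorithmic core described above: a Count-Min Sketch estimates per-flow heat in fixed memory, and a heap selects the top-k flows. The paper vectorizes the hash and sort stages with SIMD/AVX and uses a bitonic sort; none of that hardware-level optimization is shown here, and the table dimensions and traffic are made up.

```python
import heapq

WIDTH, DEPTH = 1024, 4  # sketch dimensions (assumed)

def hashes(key):
    # DEPTH salted hashes derived from Python's built-in hash (illustrative).
    return [hash((d, key)) % WIDTH for d in range(DEPTH)]

class CountMinSketch:
    def __init__(self):
        self.table = [[0] * WIDTH for _ in range(DEPTH)]

    def add(self, key, count=1):
        for d, h in enumerate(hashes(key)):
            self.table[d][h] += count

    def estimate(self, key):
        # Taking the minimum across rows bounds overestimation from collisions.
        return min(self.table[d][h] for d, h in enumerate(hashes(key)))

def top_k(cms, keys, k):
    """Rank candidate flows by estimated heat; the paper uses a bitonic sort here."""
    return heapq.nlargest(k, keys, key=cms.estimate)

cms = CountMinSketch()
traffic = ["f1"] * 100 + ["f2"] * 60 + ["f3"] * 5 + ["f4"] * 2
for pkt in traffic:
    cms.add(pkt)
hot = top_k(cms, {"f1", "f2", "f3", "f4"}, k=2)
```

The per-key work is DEPTH independent hash-and-increment operations, which is exactly the stage that benefits from being folded into a single SIMD vector operation.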
(This article belongs to the Special Issue Recent Advances and Applications of Computational Intelligence)

25 pages, 10796 KiB  
Article
Novel Magnetic Field Modeling Method for a Low-Speed, High-Torque External-Rotor Permanent-Magnet Synchronous Motor
Electronics 2023, 12(24), 5025; https://doi.org/10.3390/electronics12245025 - 15 Dec 2023
Viewed by 522
Abstract
In view of the unstable electromagnetic performance of the air gap magnetic field caused by the torque ripple and harmonic interference of a multi-slot and multi-pole low-speed, high-torque permanent magnet synchronous motor, we propose a simplified model of double-layer permanent magnets. The model is divided into an upper and a lower subdomain, with the upper subdomain being an ideal circular ring and the lower subdomain being a segmented sector ring. Moreover, we develop an exact analytical model of the motor that predicts the magnetic field distribution based on Laplace’s and Poisson’s equations, which is solved using the method of separation of variables. Taking a 40p168s low-speed, high-torque permanent magnet synchronous motor as an example, the accuracy of the model is verified by comparison with an ideal circular ring model, a segmented sector ring model, and the finite element method. Based on the proposed simplified model, three combined permanent magnets considering both edge-cutting and polar-arc-cutting structures are proposed: chamfered, rounded, and rectangular combinations. Under the premise of a consistent edge-cutting amount, the electromagnetic characteristics of the three combination types of permanent magnets are compared using the finite element method. The results show that the electromagnetic characteristics of the chamfered combination PM are superior to those of the other two combinations. Finally, a prototype is manufactured and tested to validate the theoretical analysis. Full article
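For orientation, the subdomain formulation described above follows the standard two-dimensional magnetostatic treatment. As a hedged sketch (generic harmonic index and boundary set, not the paper's exact conditions), the governing equations and the separated solution take the form:

```latex
% Governing equations for the magnetic vector potential A_z:
%   air-gap subdomain: Laplace's equation; magnet subdomain: Poisson's equation.
\begin{align}
  \nabla^{2} A_{z} &= 0 && \text{(air gap)} \\
  \nabla^{2} A_{z} &= -\mu_{0}\,(\nabla \times \mathbf{M})_{z} && \text{(permanent magnets)} \\
  A_{z}(r,\theta) &= \sum_{n}\bigl( C_{n} r^{\,np} + D_{n} r^{-np} \bigr)\cos(np\theta)
      + \bigl( E_{n} r^{\,np} + F_{n} r^{-np} \bigr)\sin(np\theta)
\end{align}
% p: pole-pair number; the coefficients C_n .. F_n are fixed by continuity of
% A_z and tangential H at the subdomain interfaces and outer boundaries.
```

The upper (circular-ring) and lower (segmented sector-ring) subdomains each carry a series of this form, and matching them at the interface is what yields the exact analytical field prediction.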

20 pages, 5208 KiB  
Article
Design Optimization of an Automotive Permanent-Magnet Synchronous Motor by Combining DOE and NMGWO
Electronics 2023, 12(24), 5024; https://doi.org/10.3390/electronics12245024 - 15 Dec 2023
Viewed by 488
Abstract
This study proposes an optimization methodology for automotive permanent-magnet synchronous motors (PMSMs) to achieve maximum efficiency, maximum average torque, and minimum torque ripple. Many geometrical parameters can be used to define the PMSM of an automobile. To identify the most significant parameters for optimization, a fractional factorial design from the design of experiments (DOE) methodology was employed for screening, considering the interaction effects. A central composite design was used to construct the proxy model between the optimization targets and the optimization variables, and the validity of the model was assessed. Aiming at the multi-objective optimization problem of a motor, a new-mechanism grey wolf optimizer (NMGWO) algorithm combining an elite reverse learning strategy, a local search strategy, and a nonlinear control parameter strategy is innovatively proposed. This algorithm was applied to solve the multi-objective optimization model. The numerical calculation results show that this is an effective optimization design method that can improve the performance of automotive PMSMs. The effectiveness of the NMGWO algorithm on the optimization results of permanent-magnet synchronous motors is verified by the experimental results. Full article
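For readers unfamiliar with the base algorithm, here is a toy sketch of the core grey wolf optimizer (GWO) position update that the NMGWO builds on. The paper's elite reverse learning, local search, and nonlinear control-parameter strategies are not implemented here, and a 2-D sphere function stands in for the motor objective.

```python
import random

def gwo(fitness, dim=2, wolves=12, iters=200, lo=-5.0, hi=5.0, seed=1):
    """Minimize `fitness` with the canonical GWO update (illustrative settings)."""
    rng = random.Random(seed)
    pack = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(wolves)]
    for t in range(iters):
        pack.sort(key=fitness)
        # Copy the three leaders so in-place updates do not alias them.
        alpha, beta, delta = (pack[0][:], pack[1][:], pack[2][:])
        a = 2.0 * (1.0 - t / iters)  # control parameter decays from 2 to 0
        for w in pack:
            for j in range(dim):
                x = 0.0
                for leader in (alpha, beta, delta):
                    r1, r2 = rng.random(), rng.random()
                    A = 2.0 * a * r1 - a          # |A| < 1 favors exploitation
                    C = 2.0 * r2
                    x += leader[j] - A * abs(C * leader[j] - w[j])
                w[j] = min(hi, max(lo, x / 3.0))  # mean of the three pulls, clipped
    return min(pack, key=fitness)

best = gwo(lambda p: sum(x * x for x in p))
```

The linearly decaying parameter `a` is the knob the paper replaces with a nonlinear schedule, and the leader set is what its elite reverse learning strategy diversifies.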

19 pages, 9159 KiB  
Article
SOINN Intrusion Detection Model Based on Three-Way Attribute Reduction
Electronics 2023, 12(24), 5023; https://doi.org/10.3390/electronics12245023 - 15 Dec 2023
Viewed by 451
Abstract
With a large number of intrusion detection datasets and high feature dimensionality, the emergent nature of new attack types makes it impossible to collect network traffic data all at once. The modified three-way attribute reduction method is combined with a Self-Organizing Incremental Neural Network (SOINN) algorithm to propose a self-organizing incremental neural network intrusion detection model based on three-way attribute reduction. Attribute importance is used to perform attribute reduction, and the data after attribute reduction are fed into the self-organizing incremental neural network algorithm, which generalizes the topology of the original data through self-organized competitive learning. When streaming data are transferred into the model, an inter-class insertion or node fusion operation is performed by comparing the inter-node distance and similarity threshold, achieving incremental learning on the streaming data. The inter-node distance value is introduced into the weight update formula to replace the traditional learning rate and to optimize the topological structure adjustment operation. The experimental results show that T-SOINN achieves high precision and recall when processing intrusion detection data. Full article
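The insert-or-fuse step described above can be sketched in a few lines: for each arriving sample, either insert a new node (distance to the nearest node exceeds a threshold) or fuse the sample into the winner by moving it toward the sample. This is a deliberately reduced illustration; real SOINN also maintains edges, node ages, and per-node adaptive thresholds, and the fixed threshold here is an assumption.

```python
import math

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

class TinySOINN:
    def __init__(self, threshold):
        self.threshold = threshold
        self.nodes = []      # prototype vectors
        self.counts = []     # samples absorbed per node

    def learn(self, sample):
        if not self.nodes:
            self.nodes.append(list(sample))
            self.counts.append(1)
            return
        i = min(range(len(self.nodes)), key=lambda k: dist(self.nodes[k], sample))
        if dist(self.nodes[i], sample) > self.threshold:
            # Inter-class insertion: sample starts a new prototype.
            self.nodes.append(list(sample))
            self.counts.append(1)
        else:
            # Node fusion: winner moves toward the sample with a shrinking step,
            # a distance-count-driven update standing in for a fixed learning rate.
            self.counts[i] += 1
            rate = 1.0 / self.counts[i]
            self.nodes[i] = [n + rate * (s - n) for n, s in zip(self.nodes[i], sample)]

net = TinySOINN(threshold=1.0)
for s in [(0.0, 0.0), (0.1, 0.1), (5.0, 5.0), (5.1, 4.9), (-0.1, 0.0)]:
    net.learn(s)
```

Two well-separated clusters in the stream end up as two prototype nodes, which is the incremental topology-learning behavior the model relies on for never-before-seen attack types.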
(This article belongs to the Section Networks)

13 pages, 8393 KiB  
Article
Real-Time Low-Light Imaging in Space Based on the Fusion of Spatial and Frequency Domains
Electronics 2023, 12(24), 5022; https://doi.org/10.3390/electronics12245022 - 15 Dec 2023
Viewed by 502
Abstract
Due to the low photon count in space imaging and the performance bottlenecks of edge computing devices, there is a need for a practical low-light imaging solution that maintains satisfactory recovery while offering lower network latency, reduced memory usage, fewer model parameters, and fewer operation counts. Therefore, we propose a real-time deep learning framework for low-light imaging. Leveraging the parallel processing capabilities of the hardware, we perform the parallel processing of the image data from the original sensor across branches with different dimensionalities. The high-dimensional branch conducts high-dimensional feature learning in the spatial domain, while the mid-dimensional and low-dimensional branches perform pixel-level and global feature learning through the fusion of the spatial and frequency domains. This approach ensures a lightweight network model while significantly improving the quality and speed of image recovery. To adaptively adjust the image based on brightness and avoid the loss of detailed pixel feature information, we introduce an adaptive balancing module, thereby greatly enhancing the effectiveness of the model. Finally, through validation on the SID dataset and our own low-light satellite dataset, we demonstrate that this method can significantly improve image recovery speed while ensuring image recovery quality. Full article
(This article belongs to the Collection Computer Vision and Pattern Recognition Techniques)

29 pages, 5823 KiB  
Article
A Personalized Motion Planning Method with Driver Characteristics in Longitudinal and Lateral Directions
Electronics 2023, 12(24), 5021; https://doi.org/10.3390/electronics12245021 - 15 Dec 2023
Viewed by 499
Abstract
Humanlike driving is significant in improving the safety and comfort of automated vehicles. This paper proposes a personalized motion planning method with driver characteristics in longitudinal and lateral directions for highway automated driving. The motion planning is decoupled into path optimization and speed optimization under the framework of the Baidu Apollo EM motion planner. For modeling driver behavior in the longitudinal direction, a car-following model is developed and integrated into the speed optimizer based on a weight ratio hypothesis model of the objective functional, whose parameters are obtained by Bayesian optimization and leave-one-out cross validation using the driving data. For modeling driver behavior in the lateral direction, a Bayesian network (BN), which maps the physical states of the ego vehicle and surrounding vehicles and the lateral intentions of the surrounding vehicles to the driver’s lateral intentions, is built in an efficient and lightweight way using driving data. Further, a personalized reference trajectory decider is developed based on the BN, considering traffic regulations, the driver’s preference, and the costs of the trajectories. According to the actual traffic scenarios in the driving data, a simulation is constructed, and the results validate the human likeness of the proposed motion planning method. Full article

14 pages, 7448 KiB  
Article
Next-Generation IoT: Harnessing AI for Enhanced Localization and Energy Harvesting in Backscatter Communications
Electronics 2023, 12(24), 5020; https://doi.org/10.3390/electronics12245020 - 15 Dec 2023
Viewed by 603
Abstract
Ongoing research on backscatter communications and localization has achieved highly accurate results in controlled environments. The main issue with these systems is faced in complex RF environments. This paper investigates concurrent localization and ambient radio frequency (RF) energy harvesting using backscatter communication systems for Internet of Things networks. Dynamic real-world environments introduce complexity from multipath reflection and shadowing, as well as interference from movements. A machine learning framework leveraging K-Nearest Neighbors and Random Forest classifiers creates robustness against such variability. Historical received-signal measurements construct a location fingerprint database resilient to perturbations. The Random Forest model demonstrates precise localization across customized benches with programmable shuffling of chairs outfitted with RF identification tags. Average precision exceeds 99% despite deliberate placement modifications inducing signal fluctuations that emulate mobility and clutter. Significantly, directional antennas can harvest over −3 dBm, while even omnidirectional antennas provide −10 dBm, both suitable for perpetually replenishing low-energy electronics. Consequently, the intelligent backscatter platform localizes unmodified objects to customizable precision while promoting self-sustainability. Full article
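The fingerprinting step can be sketched as follows: offline, each reference location stores a vector of received-signal readings; online, the nearest fingerprint (here plain nearest-neighbor; the paper compares KNN against a Random Forest) picks the location. Location names and RSS values below are invented for illustration.

```python
import math
from collections import Counter

# Hypothetical offline fingerprint database: location -> RSS vector (dBm).
fingerprints = {
    "desk":  [-48.0, -71.0, -60.0],
    "bench": [-62.0, -55.0, -58.0],
    "door":  [-70.0, -66.0, -49.0],
}

def rss_distance(a, b):
    """Euclidean distance between two RSS fingerprint vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def knn_locate(sample, db, k=1):
    """Vote among the k fingerprints closest to the online reading."""
    ranked = sorted(db, key=lambda loc: rss_distance(sample, db[loc]))
    votes = Counter(ranked[:k])
    return votes.most_common(1)[0][0]

# A noisy online reading taken near the desk should still map to "desk":
where = knn_locate([-50.0, -69.5, -61.0], fingerprints)
```

With several fingerprints per location, raising `k` makes the vote meaningful and smooths over the placement-induced signal fluctuations the abstract describes.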

16 pages, 4936 KiB  
Article
NGDCNet: Noise Gating Dynamic Convolutional Network for Image Denoising
Electronics 2023, 12(24), 5019; https://doi.org/10.3390/electronics12245019 - 15 Dec 2023
Viewed by 497
Abstract
Deep convolutional neural networks (CNNs) have become popular for image denoising due to their robust learning capabilities. However, many methods tend to increase the receptive field to improve performance, which leads to over-smoothed results and the loss of critical high-frequency information such as edges and texture. In this research, we introduce an innovative end-to-end denoising network named the noise gating dynamic convolutional network (NGDCNet). By integrating dynamic convolution and noise gating mechanisms, our approach effectively reduces noise while retaining finer image details. Through a series of experiments, we conduct a comprehensive evaluation of NGDCNet by comparing it quantitatively and visually against state-of-the-art denoising methods. Additionally, we provide an ablation study to analyze the contributions of the dynamic convolutional blocks and noise gating blocks. Our experimental findings demonstrate that NGDCNet excels in noise reduction while preserving essential texture information. Full article
(This article belongs to the Special Issue Recent Advances in Object Detection and Image Processing)
