Electronics, Volume 11, Issue 9 (May-1 2022) – 224 articles

Cover Story: Since aviation contributes to rising CO2 emissions, there is increasing interest in more electric aircraft. The concept aims to replace mechanically, pneumatically, and hydraulically driven parts, as well as the primary power sources, with electrically driven components. Building efficient DC–DC converters is key to unlocking the full potential of more electric aircraft. The converters must offer high reliability, preferably with redundancy, in addition to fault tolerance. This work describes the design, hardware setup, and testing of an LLC-based converter that can be reconfigured in the case of faults. In its reconfigured state, the converter remains operational and provides sufficient voltage and power without a considerable increase in component count or overstressing of components.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click the "PDF Full-text" link and open it with the free Adobe Reader.
24 pages, 10837 KiB  
Article
Analysis of the Applicable Range of the Standard Lambertian Model to Describe the Reflection in Visible Light Communication
by Xiangyang Zhang, Xiaodong Yang, Nan Zhao and Muhammad Bilal Khan
Electronics 2022, 11(9), 1514; https://doi.org/10.3390/electronics11091514 - 09 May 2022
Cited by 1 | Viewed by 1609
Abstract
The existing visible light communication simulation research on reflection is mainly based on the standard Lambertian model. In recent years, some papers have noted that the standard Lambertian model is too simplified and approximate to match the actual situation. To solve this problem, a variety of more complex reflection models have been proposed. However, the more complex models require more computation. To balance computation and simulation accuracy, this study found, by consulting the literature, that the standard Lambertian model describes reflection from a plaster-covered wall only within a certain range of incident angles. In this paper, an inappropriateness index Q of the standard Lambertian model is defined, and the relationship between Q and the light-emitting diode position, with only the first reflection considered, is determined through a preliminary calculation. The calculation shows that, in an empty room with plaster walls, the standard Lambertian model can be used when the distance is greater than 0.685 m; when the distance is less than 0.685 m, other, more complex models need to be adopted according to the actual situation. Full article
(This article belongs to the Section Microwave and Wireless Communications)
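In classical VLC channel analyses, the first-reflection link gain under the standard Lambertian model is a product of cosine factors over the LED-to-wall and wall-to-receiver segments. The sketch below follows that textbook form; the function name, parameters, and values are illustrative, not taken from the paper:

```python
import numpy as np

def first_reflection_gain(rho, dA, A_rx, d1, d2,
                          cos_phi, cos_alpha, cos_beta, cos_theta, m=1):
    """DC gain of a single wall reflection under the standard Lambertian model.

    rho: wall reflectance; dA: reflecting element area; A_rx: detector area;
    d1, d2: LED->wall and wall->receiver distances; cos_phi: LED emission
    angle cosine; cos_alpha: incidence cosine at the wall; cos_beta: emission
    cosine from the wall; cos_theta: incidence cosine at the receiver;
    m: Lambertian order of the LED.
    """
    return ((m + 1) * A_rx * rho * dA * cos_phi**m * cos_alpha
            * cos_beta * cos_theta) / (2 * np.pi**2 * d1**2 * d2**2)
```

Note the inverse-square dependence on each path segment: halving the LED-to-wall distance quadruples the reflected gain.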

14 pages, 596 KiB  
Article
Feedback ARMA Models versus Bayesian Models towards Securing OpenFlow Controllers for SDNs
by Wael Hosny Fouad Aly, Hassan Kanj, Nour Mostafa and Samer Alabed
Electronics 2022, 11(9), 1513; https://doi.org/10.3390/electronics11091513 - 09 May 2022
Cited by 1 | Viewed by 1309
Abstract
In software-defined networking (SDN), the control layers are moved away from the forwarding switching layers. SDN gives more programmability and flexibility to the controllers. OpenFlow is a protocol that gives access to the forwarding plane of a network switch or router over the SDN network. OpenFlow uses centralized control of network switches and routers in an SDN environment. Security is of major importance for SDN deployment. Transport layer security (TLS) is used to implement security for OpenFlow. This paper proposes a new technique to improve the security of the OpenFlow controller by modifying the TLS implementation. The proposed model is referred to as the secured feedback model using autoregressive moving average (ARMA) for SDN networks (SFBARMASDN). SFBARMASDN depends on computing feedback for incoming packets based on ARMA models. Filtering techniques based on ARMA were used to filter the packets and detect the malicious packets that needed to be dropped. SFBARMASDN was compared to two reference models: one Bayesian-based, the other the standard OpenFlow. Full article
(This article belongs to the Special Issue Next Generation Networks and Systems Security)

14 pages, 4505 KiB  
Article
Dual Phase Lock-In Amplifier with Photovoltaic Modules and Quasi-Invariant Common-Mode Signal
by Pavel Baranov, Ivan Zatonov and Bien Bui Duc
Electronics 2022, 11(9), 1512; https://doi.org/10.3390/electronics11091512 - 09 May 2022
Cited by 1 | Viewed by 1919
Abstract
In measuring small voltage deviations of about 1 µV and lower, it is important to separate useful signals from noise. The measurement of small voltage deviations between the amplitudes of two AC signals over wide frequency and voltage ranges is performed using lock-in amplifiers with a differential input as a comparator (null indicator). The resolution and measurement accuracy of lock-in amplifiers are largely determined by the common-mode rejection ratio of their measuring channel. This work presents a differential signal recovery circuit with embedded photovoltaic modules, which allows implementing a dual-phase lock-in amplifier with a differential input and quasi-invariant common-mode signal. The obtained metrological parameters of the proposed dual-phase analog lock-in amplifier prove its applicability to comparing two signal amplitudes of 10√2 µV to 10√2 V in the frequency range of 20 Hz to 100 kHz with a 10 nV resolution. The proposed dual-phase analog lock-in amplifier was characterized by a 130 to 185 dB CMRR in the frequency range up to 100 kHz with 20 nV/√Hz white noise. Full article
(This article belongs to the Collection Instrumentation, Noise, Reliability)
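The dual-phase (quadrature) demodulation at the heart of any lock-in amplifier can be illustrated numerically: multiply the input by two reference signals 90° apart and low-pass the products. This toy sketch (the signal values are illustrative, not the paper's hardware) recovers a 1 µV tone buried in noise twenty times its amplitude:

```python
import numpy as np

def dual_phase_lockin(sig, f_ref, fs):
    """Recover amplitude and phase of the component of `sig` at f_ref by
    multiplying with quadrature references and averaging (an ideal low-pass
    filter), the core operation of a dual-phase lock-in."""
    t = np.arange(len(sig)) / fs
    x = 2 * np.mean(sig * np.cos(2 * np.pi * f_ref * t))  # in-phase output
    y = 2 * np.mean(sig * np.sin(2 * np.pi * f_ref * t))  # quadrature output
    return np.hypot(x, y), np.arctan2(y, x)

# Illustrative demo: a 1 uV tone at 1 kHz in noise 20x its amplitude.
rng = np.random.default_rng(0)
fs, n = 100_000, 100_000
t = np.arange(n) / fs
sig = 1e-6 * np.cos(2 * np.pi * 1_000 * t) + rng.normal(0.0, 2e-5, n)
amp, _ = dual_phase_lockin(sig, 1_000, fs)
```

Averaging over one second of samples suppresses the broadband noise by roughly the square root of the sample count, which is why the sub-microvolt tone is recoverable.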

15 pages, 3775 KiB  
Article
Empowering the Internet of Things Using Light Communication and Distributed Edge Computing
by Abdelhamied A. Ateya, Mona Mahmoud, Adel Zaghloul, Naglaa F. Soliman and Ammar Muthanna
Electronics 2022, 11(9), 1511; https://doi.org/10.3390/electronics11091511 - 09 May 2022
Cited by 4 | Viewed by 1883
Abstract
With the rapid growth of connected devices, new issues emerge, which will be addressed by boosting capacity, improving energy efficiency, spectrum usage, and cost, besides offering improved scalability to handle the growing number of linked devices. This can be achieved by introducing new technologies to the traditional Internet of Things (IoT) networks. Visible light communication (VLC) is a promising technology that enables bidirectional transmission over the visible light spectrum achieving many benefits, including ultra-high data rate, ultra-low latency, high spectral efficiency, and ultra-high reliability. Light Fidelity (LiFi) is a form of VLC that represents an efficient solution for many IoT applications and use cases, including indoor and outdoor applications. Distributed edge computing is another technology that can assist communications in IoT networks and enable the dense deployment of IoT devices. To this end, this work considers designing a general framework for IoT networks using LiFi and a distributed edge computing scheme. It aims to enable dense deployment, increase reliability and availability, and reduce the communication latency of IoT networks. To meet the demands, the proposed architecture makes use of MEC and fog computing. For dense deployment situations, a proof-of-concept of the created model is presented. The LiFi-integrated fog-MEC model is tested in a variety of conditions, and the findings show that the model is efficient. Full article

13 pages, 970 KiB  
Article
Few-Shot Learning with Collateral Location Coding and Single-Key Global Spatial Attention for Medical Image Classification
by Wenjing Shuai and Jianzhao Li
Electronics 2022, 11(9), 1510; https://doi.org/10.3390/electronics11091510 - 09 May 2022
Cited by 7 | Viewed by 1897
Abstract
Humans are born with the ability to learn quickly, discerning objects from a few samples, acquiring new skills in a short period of time, and making decisions based on limited prior experience and knowledge. The existing deep learning models for medical image classification often rely on a large number of labeled training samples, whereas the fast learning ability of deep neural networks has failed to develop. In addition, it requires a large amount of time and computing resources to retrain the model when the deep model encounters classes it has never seen before. However, for healthcare applications, enabling a model to generalize to new clinical scenarios is of great importance. The existing image classification methods cannot explicitly use the location information of each pixel, making them insensitive to cues related only to location. They also rely on local convolution and cannot properly utilize global information, which is essential for image classification. To alleviate these problems, we propose a collateral location coding that helps the network explicitly exploit the location information of each pixel, making it easier for the network to recognize cues related to location only, and a single-key global spatial attention designed so that the pixels at each location perceive global spatial information in a low-cost way. Experimental results on three medical image benchmark datasets demonstrate that our proposed algorithm outperforms state-of-the-art approaches in both effectiveness and generalization ability. Full article
(This article belongs to the Special Issue Applications of Computational Intelligence)
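The paper's exact collateral location coding is not detailed in this abstract; a common way to make pixel location explicit to a convolutional network is to append normalized coordinate channels to the feature map, sketched here purely as an illustration of the general idea:

```python
import numpy as np

def add_location_channels(feat):
    """Append normalized y/x coordinate channels to a (C, H, W) feature map,
    making each pixel's location an explicit input the network can exploit."""
    c, h, w = feat.shape
    ys, xs = np.meshgrid(np.linspace(-1, 1, h), np.linspace(-1, 1, w),
                         indexing="ij")          # ys varies down rows, xs across columns
    return np.concatenate([feat, ys[None], xs[None]], axis=0)
```

The two extra channels range from -1 at one border to +1 at the other, so location-dependent cues become linearly separable for the layers that follow.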

17 pages, 1477 KiB  
Article
Business Process Outcome Prediction Based on Deep Latent Factor Model
by Ke Lu, Xinjian Fang and Xianwen Fang
Electronics 2022, 11(9), 1509; https://doi.org/10.3390/electronics11091509 - 08 May 2022
Viewed by 1463
Abstract
Business process outcome prediction plays an essential role in business process monitoring. It continuously analyzes completed process events to predict the executing cases’ outcome. Most of the current outcome prediction focuses only on the activity information in historical logs and less on the embedded and implicit knowledge that has not been explicitly represented. To address these issues, this paper proposes a Deep Latent Factor Model Predictor (DLFM Predictor) for uncovering the implicit factors affecting system operation and predicting the final results of continuous operation cases based on log behavior characteristics and resource information. First, the event logs are analyzed from the control flow and resource perspectives to construct composite data. Then, the stack autoencoder model is trained to extract the data’s main feature components for improving the training data’s reliability. Next, we capture the implicit factors at the control and data flow levels among events and construct a deep implicit factor model to optimize the parameter settings. After that, an expansive prefix sequence construction method is proposed to realize the outcome prediction of online event streams. Finally, the proposed algorithm is implemented based on the mainstream framework of neural networks and evaluated by real logs. The results show that the algorithm performs well under several evaluation metrics. Full article
(This article belongs to the Section Computer Science & Engineering)
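The prefix sequences used for online outcome prediction can be illustrated generically: each completed trace expands into the partial traces a monitor would see as the case unfolds (the paper's "expansive" construction may differ in detail; the event names below are made up):

```python
def prefix_sequences(trace, min_len=1):
    """Expand one completed trace (a list of events) into the prefix
    sequences a streaming outcome predictor sees while the case runs."""
    return [trace[:i] for i in range(min_len, len(trace) + 1)]
```

Each prefix is paired at training time with the final outcome of its trace, so the model learns to predict the outcome from ever-longer partial histories.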

27 pages, 3460 KiB  
Article
Delivering Extended Cellular Coverage and Capacity Using High-Altitude Platforms
by Steve Chukwuebuka Arum, David Grace and Paul Daniel Mitchell
Electronics 2022, 11(9), 1508; https://doi.org/10.3390/electronics11091508 - 07 May 2022
Cited by 2 | Viewed by 1806
Abstract
Interest in delivering cellular communication using a high-altitude platform (HAP) is increasing partly due to its wide coverage capability. In this paper, we formulate analytical expressions for estimating the area of a HAP beam footprint, average per-user capacity per cell, average spectral efficiency (SE) and average area spectral efficiency (ASE), which are relevant for radio network planning, especially within the context of HAP extended contiguous cellular coverage and capacity. To understand the practical implications, we propose an enhanced and validated recursive HAP antenna beam-pointing algorithm, which forms HAP cells over an extended service area while considering beam broadening and the degree of overlap between neighbouring beams. The performance of the extended contiguous cellular structure resulting from the algorithm is compared with other alternative schemes using the carrier-to-noise ratio (CNR) and carrier-to-interference-plus-noise ratio (CINR). Results show that there is a steep reduction in average ASE at the edge of coverage. The achievable coverage is limited by the minimum acceptable average ASE at the edge, among other factors. In addition, the results highlight that efficient beam management can be achieved using the enhanced and validated algorithm, which significantly improves user CNR, CINR, and coverage area compared with other benchmark schemes. A simulated annealing comparison verifies that such an algorithm is close to optimal. Full article
(This article belongs to the Special Issue Feature Papers in "Networks" Section)
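A flat-Earth elliptical approximation of a HAP beam footprint, of the kind used in such coverage analyses, can be sketched as follows; the exact expressions derived in the paper may differ, and the parameter values are illustrative:

```python
import math

def beam_footprint_area(h, theta_p, theta_b):
    """Approximate ground-footprint area (m^2) of a HAP beam.

    h: platform altitude (m); theta_p: boresight pointing angle from nadir
    (rad); theta_b: beamwidth (rad). Flat-Earth ellipse: the along-track
    extent spans tan(theta_p +/- theta_b/2); the cross-track semi-axis is
    h*tan(theta_b/2)/cos(theta_p).
    """
    a = h * (math.tan(theta_p + theta_b / 2)
             - math.tan(theta_p - theta_b / 2)) / 2   # semi-major (along-track)
    b = h * math.tan(theta_b / 2) / math.cos(theta_p)  # semi-minor (cross-track)
    return math.pi * a * b
```

The tangent terms capture the beam-broadening effect the abstract mentions: the same beamwidth covers a rapidly growing footprint as the pointing angle moves toward the coverage edge.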

23 pages, 1301 KiB  
Article
Machine Learning Models for Early Prediction of Sepsis on Large Healthcare Datasets
by Javier Enrique Camacho-Cogollo, Isis Bonet, Bladimir Gil and Ernesto Iadanza
Electronics 2022, 11(9), 1507; https://doi.org/10.3390/electronics11091507 - 07 May 2022
Cited by 7 | Viewed by 4435
Abstract
Sepsis is a highly lethal syndrome with heterogeneous clinical manifestations that can be hard to identify and treat. Early diagnosis and appropriate treatment are critical to reduce mortality, promote survival in suspected cases, and improve outcomes. Several screening prediction systems have been proposed for the early detection of patient deterioration, but their efficacy is still limited at the individual level. The increasing amount and versatility of healthcare data suggest implementing machine learning techniques to develop models for predicting sepsis. This work presents an experimental study of several machine-learning-based models for sepsis prediction considering vital signs, laboratory test results, and demographics using the Medical Information Mart for Intensive Care III (MIMIC-III) (v1.4), a publicly available dataset. The experimental results demonstrate an overall higher performance of machine learning models over the commonly used Sequential Organ Failure Assessment (SOFA) and Quick SOFA (qSOFA) scoring systems at the time of sepsis onset. Full article
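For reference, the qSOFA score mentioned above is simple to compute from three bedside criteria (respiratory rate, systolic blood pressure, and mentation via the Glasgow Coma Scale):

```python
def qsofa(resp_rate, systolic_bp, gcs):
    """Quick SOFA: one point each for respiratory rate >= 22/min,
    systolic blood pressure <= 100 mmHg, and altered mentation (GCS < 15).
    A total of >= 2 flags elevated risk of a poor sepsis outcome."""
    return int(resp_rate >= 22) + int(systolic_bp <= 100) + int(gcs < 15)
```

This simplicity is precisely why qSOFA serves as the baseline the machine learning models are compared against.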

13 pages, 1861 KiB  
Article
A Novel Anti-Risk Method for Portfolio Trading Using Deep Reinforcement Learning
by Han Yue, Jiapeng Liu, Dongmei Tian and Qin Zhang
Electronics 2022, 11(9), 1506; https://doi.org/10.3390/electronics11091506 - 07 May 2022
Cited by 2 | Viewed by 2458
Abstract
In the past decade, the application of deep reinforcement learning (DRL) in portfolio management has attracted extensive attention. However, most classical RL algorithms do not consider the exogenous factors and noise of financial time series data, which may lead to treacherous trading decisions. To address this issue, we propose a novel anti-risk portfolio trading method based on DRL. It consists of a stacked sparse denoising autoencoder (SSDAE) network and an actor–critic based reinforcement learning (RL) agent. The SSDAE is first trained off-line, while the decoder is used for on-line feature extraction in each state; the SSDAE network thus provides noise-resistance training on financial data. The actor–critic algorithm we use is advantage actor–critic (A2C) and consists of two networks: the actor network learns and implements an investment policy, which is then evaluated by the critic network to determine the best action plan by continuously redistributing various portfolio assets, taking the Sharpe ratio as the optimization function. Through extensive experiments, the results show that our proposed method is effective and superior to the Dow Jones Industrial Average index (DJIA), several variants of our proposed method, and a state-of-the-art (SOTA) method. Full article
(This article belongs to the Topic Machine and Deep Learning)
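The Sharpe ratio used as the optimization target can be computed from per-period portfolio returns as follows (the annualization factor of 252 trading days is a common convention, not stated in the abstract):

```python
import numpy as np

def sharpe_ratio(returns, risk_free_rate=0.0, periods_per_year=252):
    """Annualized Sharpe ratio of a series of per-period portfolio returns:
    mean excess return over its sample standard deviation, scaled by the
    square root of the number of periods per year."""
    excess = np.asarray(returns, dtype=float) - risk_free_rate / periods_per_year
    return np.sqrt(periods_per_year) * excess.mean() / excess.std(ddof=1)
```

Because the ratio penalizes return volatility, maximizing it pushes the RL agent toward the "anti-risk" behaviour the abstract describes.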

30 pages, 13065 KiB  
Article
Efficient Colour Image Encryption Algorithm Using a New Fractional-Order Memcapacitive Hyperchaotic System
by Zain-Aldeen S. A. Rahman, Basil H. Jasim, Yasir I. A. Al-Yasir and Raed A. Abd-Alhameed
Electronics 2022, 11(9), 1505; https://doi.org/10.3390/electronics11091505 - 07 May 2022
Cited by 9 | Viewed by 1711
Abstract
In comparison with integer-order chaotic systems, fractional-order chaotic systems exhibit more complex dynamics. In recent years, research into fractional chaotic systems for use in image cryptosystems has become increasingly highlighted. This paper describes the development, testing, numerical analysis, and electronic realization of a fractional-order memcapacitor. Then, a new four-dimensional (4D) fractional-order memcapacitive hyperchaotic system is suggested based on this memcapacitor. The nonlinear dynamic properties of the hyperchaotic system have been explored analytically and numerically, where various methods, including equilibrium points, phase portraits of chaotic attractors, bifurcation diagrams, and the Lyapunov exponent, are considered to demonstrate the chaotic behaviour of this new hyperchaotic system. Consequently, an encryption algorithm is used for colour image encryption based on the chaotic behaviour of the memcapacitive model, where every pixel value of the original image is incorporated in the secret key to strengthen the algorithm's robustness against pirate attacks. For generating the keyspace of the employed cryptosystem, the initial condition values, parameters, and fractional-order derivative value(s) (q) of the memcapacitive chaotic system are utilized. The common cryptanalysis metrics are verified in detail by histogram, keyspace, key sensitivity, correlation coefficient values, entropy, time efficiency, and comparisons with other recent related work in order to demonstrate the security level of the proposed cryptosystem. Finally, images of various sizes were encrypted and recovered to ensure that the cryptosystem can encrypt and decrypt images of various sizes. The obtained experimental results and security metrics analyses illustrate the excellent accuracy, high security, and perfect time efficiency of the cryptosystem, which is highly resistant to various forms of pirate attacks. Full article
(This article belongs to the Special Issue RF/Microwave Circuits for 5G and Beyond)
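The keystream-XOR structure typical of chaotic image ciphers can be sketched with a simple logistic map standing in for the paper's 4D fractional-order memcapacitive system; the map, the key values, and the function names here are illustrative only:

```python
import numpy as np

def chaotic_keystream(x0, r, n):
    """Byte keystream from logistic-map iterates x -> r*x*(1-x); the seed x0
    and parameter r play the role of the secret key."""
    xs = np.empty(n)
    x = x0
    for i in range(n):
        x = r * x * (1 - x)
        xs[i] = x
    return (xs * 255).astype(np.uint8)

def xor_cipher(img_bytes, x0=0.3567, r=3.99):
    """XOR the flattened image bytes with the chaotic keystream; because
    XOR is an involution, the same call both encrypts and decrypts."""
    ks = chaotic_keystream(x0, r, img_bytes.size)
    return img_bytes ^ ks
```

Key sensitivity comes from the chaotic map itself: perturbing `x0` in the last decimal place yields a completely different keystream after a few iterations.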

21 pages, 15097 KiB  
Article
SSA-SL Transformer for Bearing Fault Diagnosis under Noisy Factory Environments
by Seoyeong Lee and Jongpil Jeong
Electronics 2022, 11(9), 1504; https://doi.org/10.3390/electronics11091504 - 07 May 2022
Cited by 2 | Viewed by 2339
Abstract
Among smart factory studies, we describe defect detection research conducted on bearings, which are elements of mechanical facilities. Bearing research has been conducted consistently in the past; however, most of it has been limited to existing artificial intelligence models, and previous studies made insufficient assumptions about the factory situations in which bearing defects occur. Therefore, this research applies an artificial intelligence model while accounting for the factory environment. The transformer model was selected as the state-of-the-art (SOTA) baseline and applied to bearing research, and an experiment was conducted with Gaussian noise applied to simulate a factory situation. The swish-LSTM transformer (Sl transformer) framework was constructed by redesigning the internal structure of the transformer using the swish activation function and long short-term memory (LSTM). The noisy data were then removed and reconstructed using the singular spectrum analysis (SSA) preprocessing method. Based on the SSA-Sl transformer framework, an experiment was performed by adding Gaussian noise to the Case Western Reserve University (CWRU) dataset. With no noise, the Sl transformer showed more than 95% performance, and when noise was inserted, the SSA-Sl transformer performed better than the comparative artificial intelligence models. Full article
(This article belongs to the Special Issue Advances in Fault Detection/Diagnosis of Electrical Power Devices)

15 pages, 6872 KiB  
Article
Current Collapse Conduction Losses Minimization in GaN Based PMSM Drive
by Pavel Skarolek, Ondrej Lipcak and Jiri Lettl
Electronics 2022, 11(9), 1503; https://doi.org/10.3390/electronics11091503 - 07 May 2022
Cited by 2 | Viewed by 1341
Abstract
The ever-increasing demands on the efficiency and power density of power electronics converters lead to the replacement of traditional silicon-based components with new structures. One promising technology is devices based on gallium nitride (GaN). Compared to silicon transistors, GaN semiconductor switches offer superior performance in high-frequency converters, since their fast switching process significantly decreases the switching losses. However, when used in hard-switched converters such as voltage-source inverters (VSI) for motor control applications, GaN transistors increase the power dissipated due to current conduction. The loss increase is caused by the current-collapse phenomenon, which increases the dynamic drain-source resistance of the device shortly after turn-on. This disadvantage makes it hard for GaN converters to compete with other technologies in electric drives. Therefore, this paper offers a purely software-based solution to mitigate the negative consequences of the current-collapse phenomenon. The proposed method is based on minimum pulse length optimization of the classical 7-segment space-vector modulation (SVM) and is verified within field-oriented control (FOC) of a three-phase permanent magnet synchronous motor (PMSM) supplied by a two-level GaN VSI. The compensation in the control algorithm utilizes an offline-measured look-up table dependent on the machine input power. Full article
(This article belongs to the Section Power Electronics)

20 pages, 966 KiB  
Review
Learning-Based Methods for Cyber Attacks Detection in IoT Systems: A Survey on Methods, Analysis, and Future Prospects
by Usman Inayat, Muhammad Fahad Zia, Sajid Mahmood, Haris M. Khalid and Mohamed Benbouzid
Electronics 2022, 11(9), 1502; https://doi.org/10.3390/electronics11091502 - 07 May 2022
Cited by 60 | Viewed by 7776
Abstract
The Internet of Things (IoT) is a developing technology that provides the simplicity and benefits of exchanging data with other devices using the cloud or wireless networks. However, the changes and developments in the IoT environment are making IoT systems susceptible to cyber attacks that could lead to malicious intrusions. The impacts of these intrusions could lead to physical and economic damage. This article primarily focuses on the IoT system/framework, learning-based methods, and the difficulties faced by IoT devices or systems after the occurrence of an attack. Learning-based methods are reviewed using different types of cyber attacks, such as denial-of-service (DoS), distributed denial-of-service (DDoS), probing, user-to-root (U2R), remote-to-local (R2L), botnet, spoofing, and man-in-the-middle (MITM) attacks. Both machine and deep learning methods are presented and analyzed in relation to the detection of cyber attacks in IoT systems. A comprehensive list of publications to date in the literature is integrated to present a complete picture of various developments in this area. Finally, future research directions are also provided in the paper. Full article
(This article belongs to the Special Issue Resilience-Oriented Smart Grid Systems)

16 pages, 614 KiB  
Article
100 Gbps Dynamic Extensible Protocol Parser Based on an FPGA
by Ke Wang, Zhichuan Guo, Mangu Song and Meng Sha
Electronics 2022, 11(9), 1501; https://doi.org/10.3390/electronics11091501 - 07 May 2022
Cited by 1 | Viewed by 1616
Abstract
In order to facilitate the transition between networks and the integration of heterogeneous networks, the underlying link design of the current mainstream Information-Centric Networking (ICN) still considers the characteristics of the general network and extends the customized ICN protocol on this basis. This requires that the network transmission equipment can not only distinguish general network packets but also support the identification of ICN-specific protocols. However, traditional network protocol parsers are designed for specific network application scenarios, and it is difficult to flexibly expand new protocol parsing rules for different ICN network architectures. For this reason, we propose a general dynamic extensible protocol parser deployed on FPGA, which supports the real-time update of network protocol parsing rules by configuring extended protocol descriptors. At the same time, the multi-queue protocol management mechanism is adopted to realize the grouping management and rapid parsing of the extended protocol. The results demonstrate that the method can effectively support the protocol parsing of 100 Gbps high-speed network data packets and can dynamically update the protocol parsing rules under ultra-low latency. Compared with the current commercial programmable network equipment, this solution improves the protocol update efficiency by several orders of magnitude and better supports the online updating of network equipment. Full article
(This article belongs to the Section Networks)
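A descriptor-driven parser of the general kind described, where new protocol parsing rules are registered at run time rather than hard-coded, can be sketched in software; the descriptor layout below is hypothetical, not the paper's FPGA format:

```python
# Hypothetical descriptors: each protocol lists its fields as
# (name, offset, length), its header length, and which field's value
# selects the next protocol. New entries can be added to REGISTRY at
# run time, mirroring the idea of extended protocol descriptors.
REGISTRY = {
    "eth": {"fields": [("dst", 0, 6), ("src", 6, 6), ("type", 12, 2)],
            "header_len": 14, "next_key": "type",
            "next_map": {0x0800: "ipv4"}},
    "ipv4": {"fields": [("ver_ihl", 0, 1), ("proto", 9, 1),
                        ("src", 12, 4), ("dst", 16, 4)],
             "header_len": 20, "next_key": None, "next_map": {}},
}

def parse(pkt, proto="eth"):
    """Walk the packet layer by layer, driven entirely by the descriptors."""
    layers = []
    while proto in REGISTRY:
        d = REGISTRY[proto]
        hdr = {f: int.from_bytes(pkt[o:o + l], "big")
               for f, o, l in d["fields"]}
        layers.append((proto, hdr))
        nxt = d["next_map"].get(hdr.get(d["next_key"])) if d["next_key"] else None
        if nxt is None:
            break
        pkt = pkt[d["header_len"]:]
        proto = nxt
    return layers
```

Because the parse loop never mentions a specific protocol, supporting a new (e.g. ICN-specific) header is a data update to `REGISTRY`, not a code change, which is the flexibility the abstract argues for.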

23 pages, 6905 KiB  
Article
Recognizing Students and Detecting Student Engagement with Real-Time Image Processing
by Mustafa Uğur Uçar and Ersin Özdemir
Electronics 2022, 11(9), 1500; https://doi.org/10.3390/electronics11091500 - 07 May 2022
Cited by 7 | Viewed by 2837
Abstract
With COVID-19, formal education was interrupted in all countries and the importance of distance learning has increased. It is possible to teach any lesson with various communication tools, but it is difficult to know how well the lesson reaches the students. In this study, we aim to monitor students in a classroom or in front of a computer with a camera in real time, recognizing their faces and head poses, and scoring their distraction to detect student engagement based on head pose and Eye Aspect Ratio. Distraction was determined by associating the students' attention with looking at the teacher or the camera in the right direction. The success of the face recognition and head pose estimation was tested using the UPNA Head Pose Database; as a result of the conducted tests, the most successful result in face recognition was obtained with the Local Binary Patterns method, with a 98.95% recognition rate. In classifying student engagement as Engaged or Not Engaged, a support vector machine gave results with 72.4% accuracy. The developed system will be used to recognize and monitor students in the classroom or in front of the computer, and to determine the course flow autonomously. Full article
(This article belongs to the Section Computer Science & Engineering)
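The Eye Aspect Ratio mentioned above is a standard landmark-based measure (Soukupová and Čech, 2016); a minimal sketch, with made-up landmark coordinates:

```python
import math

def eye_aspect_ratio(pts):
    """EAR from six eye landmarks p1..p6:
    EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|).
    The value drops toward 0 as the eye closes."""
    d = math.dist
    p1, p2, p3, p4, p5, p6 = pts
    return (d(p2, p6) + d(p3, p5)) / (2.0 * d(p1, p4))

# Invented landmark positions for an open and a nearly closed eye
open_eye = [(0, 0), (1, 1), (2, 1), (3, 0), (2, -1), (1, -1)]
closed_eye = [(0, 0), (1, 0.1), (2, 0.1), (3, 0), (2, -0.1), (1, -0.1)]
ear_open = eye_aspect_ratio(open_eye)
ear_closed = eye_aspect_ratio(closed_eye)
```

Thresholding EAR over consecutive frames is the usual way such a ratio feeds into an engagement or blink detector.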
17 pages, 17522 KiB  
Article
Video Super-Resolution Using Multi-Scale and Non-Local Feature Fusion
by Yanghui Li, Hong Zhu, Qian Hou, Jing Wang and Wenhuan Wu
Electronics 2022, 11(9), 1499; https://doi.org/10.3390/electronics11091499 - 07 May 2022
Cited by 4 | Viewed by 1629
Abstract
Video super-resolution generates high-resolution video frames with rich detail and temporal consistency from multiple low-resolution video frames. Most current methods use a two-stage structure that combines an optical flow network and a super-resolution network to reconstruct video frames, but this process does not deeply mine the effective information contained in the frames. We therefore propose a video super-resolution method that combines non-local features and multi-scale features to extract more of this information. Our method obtains long-distance information by computing the similarity between any two pixels in a video frame through the non-local module, extracts the local information covered by convolution kernels of different scales through the multi-scale feature fusion module, and fully fuses the feature information using different connection modes of the convolution kernels. Experiments on several datasets show that the proposed method is superior to existing methods both qualitatively and quantitatively. Full article
(This article belongs to the Section Electronic Multimedia)
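The core of a non-local operation, weighting every pixel by its similarity to every other pixel so long-range information can contribute, can be sketched on a toy 1-D scalar signal (a drastic simplification of the paper’s feature-space module):

```python
import math

def non_local(pixels):
    """Each output value is a similarity-weighted average over *all*
    input values (softmax over negative squared differences)."""
    out = []
    for x in pixels:
        w = [math.exp(-(x - y) ** 2) for y in pixels]  # similarity weights
        s = sum(w)
        out.append(sum(wi * y for wi, y in zip(w, pixels)) / s)
    return out

smoothed = non_local([0.0, 0.1, 0.0, 1.0])
```

In the real module the same idea is applied to learned feature embeddings over 2-D positions, but the weighting pattern is identical.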
18 pages, 1094 KiB  
Article
Modeling Distributed MQTT Systems Using Multicommodity Flow Analysis
by Pietro Manzoni, Vittorio Maniezzo and Marco A. Boschetti
Electronics 2022, 11(9), 1498; https://doi.org/10.3390/electronics11091498 - 07 May 2022
Cited by 1 | Viewed by 1422
Abstract
The development of technologies that exploit the Internet of Things (IoT) paradigm has led to the increasingly widespread use of networks formed by different devices scattered throughout the territory. The Publish/Subscribe paradigm is one of the most widely used communication paradigms for applications of this type. However, the centralized structure of these systems also introduces various problems and limitations. For example, the broker is typically the single point of failure of the system: no communication is possible if the broker is unavailable. Moreover, such systems may not scale well given the massive number of IoT devices forecast for the future. Finally, a network architecture with a single central broker is partially at odds with the edge-oriented approach. This work focuses on the development of an adaptive topology control approach that, starting from the definition of the devices and the connections between them, finds the most efficient network configuration, maximizing the number of connections while reducing the waste of resources. To reach this goal, we leverage an integer linear programming formulation, providing the basis to solve and optimize the problem of network configuration in contexts where the resources available to the devices are limited. Full article
(This article belongs to the Special Issue Feature Papers in "Networks" Section)
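For intuition only, a greedy toy stand-in for the broker-assignment idea (the paper uses an integer linear program, not this heuristic; all device names and capacities below are invented):

```python
def assign_clients(clients, broker_capacity, links):
    """Connect as many clients as possible to brokers without exceeding
    each broker's capacity. links[c] = brokers reachable from client c."""
    load = {b: 0 for b in broker_capacity}
    assignment = {}
    for c in clients:
        # candidate brokers that are reachable and still have spare capacity
        options = [b for b in links.get(c, []) if load[b] < broker_capacity[b]]
        if options:
            # greedy choice: the broker with the most spare capacity
            b = max(options, key=lambda b: broker_capacity[b] - load[b])
            assignment[c] = b
            load[b] += 1
    return assignment

a = assign_clients(
    ["c1", "c2", "c3"],
    {"b1": 1, "b2": 2},
    {"c1": ["b1"], "c2": ["b1", "b2"], "c3": ["b2"]})
```

An ILP replaces this greedy rule with binary assignment variables and capacity constraints, guaranteeing an optimal configuration rather than a heuristic one.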
15 pages, 34302 KiB  
Article
Remote Prototyping of FPGA-Based Devices in the IoT Concept during the COVID-19 Pandemic
by Michał Melosik, Mariusz Naumowicz, Marek Kropidłowski and Wieslaw Marszalek
Electronics 2022, 11(9), 1497; https://doi.org/10.3390/electronics11091497 - 07 May 2022
Cited by 1 | Viewed by 2367
Abstract
This paper presents a system for the remote design and testing of electronic circuits and devices with FPGAs during COVID-19 and similar lockdown periods when physical access to laboratories is not permitted. The system is based on the IoT concept, in which the final device is a test board with an FPGA chip. The system allows for remote visual inspection of the board and the devices linked to it in the laboratory. It was developed for the remote learning that took place during the lockdown periods at Poznan University of Technology (PUT) in Poland. The functionality of the system is confirmed by two demonstration tasks (the use of the DHT11 temperature and humidity sensor and the design of a sinusoidal waveform generator) for students in the fundamentals of digital design and synthesis courses. The proposed solution makes it possible to partially bypass time-consuming simulations and to accelerate the prototyping of digital circuits through remote access to the infrastructure of the microelectronics laboratory. Full article
24 pages, 4851 KiB  
Article
Integration and Deployment of Cloud-Based Assistance System in Pharaon Large Scale Pilots—Experiences and Lessons Learned
by Andrej Grguric, Miran Mosmondor and Darko Huljenic
Electronics 2022, 11(9), 1496; https://doi.org/10.3390/electronics11091496 - 06 May 2022
Cited by 1 | Viewed by 1996
Abstract
The EU project Pharaon aims to support older European adults by integrating digital services, tools, interoperable open platforms, and devices. One of its objectives is to validate the integrated solutions in large-scale pilots. The integration of mature solutions and existing systems is one of the preconditions for the successful realization of the different aims of the pilots. One such solution is an intelligent, privacy-aware home-care assistance system, SmartHabits. After briefly introducing Pharaon and SmartHabits, the authors propose different Pharaon models in the Ambient/Active Assisted Living (AAL) domain, namely the Pharaon conceptual model, the Pharaon reference logical architecture view, the AAL ecosystem model, the meta AAL ecosystem model, and the Pharaon ecosystem and governance models. Building on the proposed models, the authors detail the holistic integration and deployment process of the SmartHabits system into the Pharaon ecosystem. Both technical and supporting integration challenges and activities are discussed. Technical activities, including syntactic and semantic integration and securing the transfer of sensitive Pharaon data, are among the priorities. Supporting activities include achieving legal and regulatory compliance, device procurement, and use-case co-design under COVID-19 conditions. Full article
44 pages, 17214 KiB  
Article
Coronary Artery Disease Detection Model Based on Class Balancing Methods and LightGBM Algorithm
by Shasha Zhang, Yuyu Yuan, Zhonghua Yao, Jincui Yang, Xinyan Wang and Jianwei Tian
Electronics 2022, 11(9), 1495; https://doi.org/10.3390/electronics11091495 - 06 May 2022
Cited by 5 | Viewed by 2481
Abstract
Coronary artery disease (CAD) is a disease with high mortality and disability. By 2019, there were 197 million CAD patients in the world, and the number of disability-adjusted life years (DALYs) owing to CAD had reached 182 million. It is widely known that early and accurate diagnosis of CAD is the most efficient way to reduce its damage. In medical practice, coronary angiography is considered the most reliable basis for CAD diagnosis. Unfortunately, due to limitations in inspection equipment and expert resources, many low- and middle-income countries cannot perform coronary angiography, which has led to a large loss of life and a heavy medical burden. Many researchers therefore aim to realize accurate CAD diagnosis from conventional medical examination data with the help of machine learning and data mining technology. The goal of this study is to propose a model for the early, accurate, and rapid detection of CAD based on common medical test data. The model takes the classical logistic regression algorithm, the most commonly used classifier in medical model research, as its classifier. The feature selection and feature combination capabilities of tree models were used to avoid manual feature engineering for the logistic regression. At the same time, to address the class imbalance in the Z-Alizadeh Sani dataset, five different class balancing methods were applied to balance the dataset, and preprocessing methods appropriate to its characteristics were adopted. These methods significantly improved the classification performance of the logistic regression classifier in terms of accuracy, recall, precision, F1 score, specificity, and AUC when used for CAD detection. The best accuracy, recall, F1 score, precision, specificity, and AUC were 94.7%, 94.8%, 94.8%, 95.3%, 94.5%, and 0.98, respectively. Experiments confirmed that, from common medical examination data, the proposed model can accurately identify CAD patients in the early stage of the disease and can help clinicians make diagnostic decisions in clinical practice. Full article
(This article belongs to the Special Issue Electronic Solutions for Artificial Intelligence Healthcare Volume II)
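One simple class balancing option, naive random oversampling, can be sketched as follows (the paper compares five balancing methods, not necessarily this one; the toy data are invented):

```python
import random

def random_oversample(X, y, seed=0):
    """Duplicate minority-class samples at random until every class
    has as many samples as the largest class."""
    rng = random.Random(seed)
    by_cls = {}
    for xi, yi in zip(X, y):
        by_cls.setdefault(yi, []).append(xi)
    target = max(len(v) for v in by_cls.values())
    Xb, yb = [], []
    for cls, samples in by_cls.items():
        extra = [rng.choice(samples) for _ in range(target - len(samples))]
        for xi in samples + extra:
            Xb.append(xi)
            yb.append(cls)
    return Xb, yb

Xb, yb = random_oversample([[1], [2], [3], [4]], [0, 0, 0, 1])
```

Balancing the training set this way keeps a plain logistic regression from being dominated by the majority class, at the cost of possible overfitting to repeated minority samples.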
16 pages, 8253 KiB  
Article
Isolation and Grading of Faults in Battery Packs Based on Machine Learning Methods
by Sen Yang, Boran Xu and Hanlin Peng
Electronics 2022, 11(9), 1494; https://doi.org/10.3390/electronics11091494 - 06 May 2022
Cited by 4 | Viewed by 1747
Abstract
As installed energy storage stations increase year by year, the safety of energy storage batteries has attracted the attention of industry and academia. In this work, an intelligent fault diagnosis scheme for series-connected battery packs is designed based on wavelet characteristics of battery voltage correlations. First, the cross-cell voltages of multiple cells are preprocessed using an improved recursive Pearson correlation coefficient to capture abnormal electrical signals. Secondly, wavelet packet decomposition is applied to the coefficient series to obtain fault-related features from wavelet sub-bands, and the most representative principal components are extracted. Finally, an artificial neural network (ANN) and a multi-classification relevance vector machine (mRVM) are employed to classify and evaluate the fault mode and fault degree, respectively. External and internal short circuits, thermal damage, and loose-connection failures are physically injected to collect real fault data for model training and method validation. Experimental results show that the proposed method can effectively detect and locate different faults using the extracted fault features; mRVM outperforms the ANN in thermal fault diagnosis, while the overall diagnosis performance of the ANN is better than that of mRVM. The success rates of fault isolation are 82% and 81%, and the success rates of fault grading are 98% and 90%, for the ANN and mRVM, respectively. Full article
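A Pearson correlation updated recursively from running sums can be sketched as below; this captures the spirit of an O(1)-per-sample update, though the paper’s improved estimator differs in detail. The voltage samples are invented:

```python
import math

class RecursivePearson:
    """Running-sum Pearson correlation: each new (x, y) sample updates
    the coefficient in constant time, suiting online voltage monitoring."""
    def __init__(self):
        self.n = self.sx = self.sy = self.sxx = self.syy = self.sxy = 0.0

    def update(self, x, y):
        self.n += 1
        self.sx += x; self.sy += y
        self.sxx += x * x; self.syy += y * y; self.sxy += x * y
        cov = self.n * self.sxy - self.sx * self.sy
        vx = self.n * self.sxx - self.sx ** 2
        vy = self.n * self.syy - self.sy ** 2
        return cov / math.sqrt(vx * vy) if vx > 0 and vy > 0 else 0.0

rp = RecursivePearson()
# two healthy cells tracking each other: correlation should stay near 1
for v1, v2 in [(3.30, 3.31), (3.31, 3.32), (3.29, 3.30), (3.32, 3.33)]:
    r = rp.update(v1, v2)
```

A fault in one cell breaks this tracking, so a sudden drop in r flags the abnormal signal that the wavelet stage then characterizes.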
16 pages, 1758 KiB  
Article
A Design Methodology for Wideband Current-Reuse Receiver Front-Ends Aimed at Low-Power Applications
by Arash Abbasi and Frederic Nabki
Electronics 2022, 11(9), 1493; https://doi.org/10.3390/electronics11091493 - 06 May 2022
Cited by 3 | Viewed by 1541
Abstract
This work gives a design perspective on low-power, wideband RF-to-baseband current-reuse receivers (CRRs). The proposed CRR architecture shares a single supply and bias current between the LNTA and the baseband circuits to reduce power consumption. The work discusses topology selection and a suitable design procedure for the low-noise transconductance amplifier (LNTA), down-conversion passive mixer, active inductor (AI), and transimpedance amplifier (TIA) circuits. Layout considerations are also discussed. The receiver was simulated in 130 nm CMOS technology and occupies an active area of 0.025 mm2. It achieves wideband input matching with a return loss better than 10 dB from 0.8 GHz to 3.4 GHz. A conversion gain of 39.5 dB, an IIP3 of 28 dBm, and a double-sideband (DSB) NF of 5.6 dB are simulated at a local-oscillator (LO) frequency of 2.4 GHz and an intermediate frequency (IF) of 10 MHz, while consuming 1.92 mA from a 1.2 V supply. Full article
(This article belongs to the Special Issue Design of Mixed Analog/Digital Circuits)
15 pages, 364 KiB  
Article
On the Potential of MP-QUIC as Transport Layer Aggregator for Multiple Cellular Networks
by Zsolt Krämer, Felicián Németh, Attila Mihály, Sándor Molnár, István Pelle, Gergely Pongrácz and Donát Scharnitzky
Electronics 2022, 11(9), 1492; https://doi.org/10.3390/electronics11091492 - 06 May 2022
Cited by 3 | Viewed by 1854
Abstract
Multipath transport protocols can simultaneously utilize different paths and thus outperform single-path solutions in terms of achievable goodput, latency, or reliability. In this paper, our goal is to examine the potential of connecting a mobile terminal to multiple mobile networks simultaneously in a dynamically changing environment. To achieve this, we first analyze a dataset obtained from an LTE drive test involving two operators. We then study the performance of MP-QUIC, the multipath extension of QUIC, in a dynamic emulated environment generated from the collected traces. Our results show that MP-QUIC can leverage multiple available channels to provide uninterrupted connectivity and better overall goodput, even compared to using only the best available channel for communication. We also compare the performance of MP-QUIC with MPTCP, identify challenges in the current protocol implementations with filling the available aggregate capacity, and give insights on how the achievable throughput could be increased. Full article
(This article belongs to the Special Issue Telecommunication Networks)
12 pages, 501 KiB  
Article
Quality Enhancement of MPEG-H 3DA Binaural Rendering Using a Spectral Compensation Technique
by Hyeongi Moon and Young-cheol Park
Electronics 2022, 11(9), 1491; https://doi.org/10.3390/electronics11091491 - 06 May 2022
Viewed by 1545
Abstract
The latest MPEG standard, MPEG-H 3D Audio (3DA), employs the virtual loudspeaker rendering (VLR) technique to support virtual reality (VR) and augmented reality (AR). During rendering, the binaural downmixing of channel signals often induces the so-called comb filter effect, an undesirable spectral artifact, due to the phase difference between the binaural filters. In this paper, we propose an efficient algorithm that can mitigate such spectral artifacts. The proposed algorithm performs spectral compensation in both the panning-gain and downmix-signal domains, depending on the frequency range. In the low-frequency bands, where a band has a wider bandwidth than the critical-frequency scale, panning gains are compensated directly. In the high-frequency bands, where a band has a narrower bandwidth than the critical-frequency scale, a signal compensation similar to the active downmix is performed. As a result, the proposed algorithm optimizes performance and complexity within the MPEG-H 3DA framework. By implementing the algorithm in the MPEG-H 3DA binaural renderer (BR), we verify that the additional computational complexity is minor. We also show that the proposed algorithm significantly improves the subjective quality of the MPEG-H 3DA BR. Full article
(This article belongs to the Special Issue Applications of Audio and Acoustic Signal)
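The comb filter effect and an energy-style compensation gain can be illustrated with two unit-gain filters that differ only in phase (a toy model, not the MPEG-H 3DA algorithm itself):

```python
import cmath, math

def downmix_gain(phase_diff):
    """Magnitude of summing two unit-gain filters offset in phase:
    |1 + e^{j*phi}| = 2*|cos(phi/2)|, notching as phi approaches pi."""
    return abs(1 + cmath.exp(1j * phase_diff))

def compensation(phase_diff):
    """Gain restoring the incoherent-power target sqrt(2) after downmix."""
    m = downmix_gain(phase_diff)
    return math.sqrt(2) / m if m > 1e-12 else 0.0

g_notch = downmix_gain(0.9 * math.pi)   # deep comb-filter notch
c = compensation(math.pi / 2)           # quadrature case needs no boost
```

Applying such a gain per frequency band is one way to flatten the comb-filter ripple; the standardized scheme splits the correction between panning gains and the downmix signal, as the abstract describes.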
9 pages, 3299 KiB  
Article
High-Power Electromagnetic Pulse Effect Prediction for Vehicles Based on Convolutional Neural Network
by Le Cao, Shuai Hao, Yuan Zhao and Cheng Wang
Electronics 2022, 11(9), 1490; https://doi.org/10.3390/electronics11091490 - 06 May 2022
Viewed by 1623
Abstract
This study presents a prediction model for high-power electromagnetic pulse (HPEMP) effects on aboveground vehicles based on convolutional neural networks (CNNs). Since a vehicle is often located aboveground, close to the air–ground half-space interface, the electromagnetic energy coupled into the vehicle by ground-reflected waves cannot be ignored. Consequently, the analysis of a vehicle’s HPEMP effect is a composite electromagnetic scattering problem of the half-space and the vehicle above it, which is often analyzed using different half-space numerical methods. However, traditional numerical methods are often limited by the complexity of actual half-space models and the high computational demands of complex targets. In this study, a CNN-based prediction method is proposed that can analyze the electric field and energy density under different incident conditions and half-space environments. Compared with the half-space finite-difference time-domain (FDTD) method, the accuracy of the prediction results was above 98% after training of the CNN, which proves the correctness and effectiveness of the method. In summary, the CNN prediction model in this study can provide a reference for evaluating the HPEMP effect on a target over a complex half-space medium. Full article
21 pages, 2524 KiB  
Article
BSTProv: Blockchain-Based Secure and Trustworthy Data Provenance Sharing
by Lian-Shan Sun, Xue Bai, Chao Zhang, Yang Li, Yong-Bin Zhang and Wen-Qiang Guo
Electronics 2022, 11(9), 1489; https://doi.org/10.3390/electronics11091489 - 06 May 2022
Cited by 5 | Viewed by 1851
Abstract
In the Big Data era, data provenance has become an important concern for enhancing the trustworthiness of key data that are rapidly generated and shared across organizations. Prevailing solutions employ authoritative centers to efficiently manage and share massive data, but they are not suitable for secure and trustworthy decentralized data provenance sharing due to the inevitable dishonesty or failure of trusted centers. With the advent of blockchain technology, embedding data provenance in immutable blocks is believed to be a promising solution. However, a provenance file, usually a directed acyclic graph, cannot be embedded in blocks as a whole because its size may exceed the limit of a block and it may include various sensitive information that different users can legally access. To this end, this paper proposes BSTProv, a blockchain-based system for secure and trustworthy decentralized data provenance sharing. It enables secure and trustworthy provenance sharing by partitioning a large provenance graph into multiple small subgraphs and embedding the encrypted subgraphs, instead of raw subgraphs or their hash values, into immutable blocks of a consortium blockchain; it enables decentralized and flexible authorization by allowing each peer to define appropriate permissions for selectively sharing sets of subgraphs with specific requesters; and it enables efficient cross-domain provenance composition and tracing by maintaining a high-level dependency structure among provenance graphs from different domains in smart contracts and by locally storing, decrypting, and composing subgraphs obtained from the blockchain. Finally, a prototype was implemented on top of an Ethereum-based consortium blockchain, and experimental results show the advantages of our approach. Full article
(This article belongs to the Section Computer Science & Engineering)
15 pages, 5830 KiB  
Article
Analysis and Design of an S/PS−Compensated WPT System with Constant Current and Constant Voltage Charging
by Lin Yang, Zhi Geng, Shuai Jiang and Can Wang
Electronics 2022, 11(9), 1488; https://doi.org/10.3390/electronics11091488 - 06 May 2022
Cited by 2 | Viewed by 1937
Abstract
In recent years, more and more scholars have paid attention to research on wireless power transfer (WPT) technology and have achieved many results. In practical charging applications, ensuring that a WPT system can achieve constant current and constant voltage output with zero-phase-angle (ZPA) operation is very important for prolonging battery life and improving power transfer efficiency. This paper proposes a series/parallel–series (S/PS)-compensated WPT system that can charge a battery load in constant current and constant voltage modes at two different frequency points through frequency switching. The proposed S/PS structure contains only three compensation capacitors, giving few compensation elements, a simple structure, and low cost; in addition, the secondary side contains no compensation inductor, ensuring its compactness. An experimental prototype with an input voltage of 40 V was built, and the experiments show that the system delivers an output voltage of 48 V and a current of 2 A, with a maximum transmission efficiency of 92.48%. The experimental results are consistent with the theoretical analysis, which verifies the feasibility of the method. Full article
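The CC and CV operating frequencies of compensated WPT systems are set by L–C resonances; a back-of-envelope helper (the component values below are invented for illustration):

```python
import math

def resonant_frequency(L, C):
    """f0 = 1 / (2*pi*sqrt(L*C)), in Hz for L in henries and C in farads."""
    return 1.0 / (2.0 * math.pi * math.sqrt(L * C))

f0 = resonant_frequency(100e-6, 100e-9)  # 100 uH coil with a 100 nF capacitor
```

In a design like the one described, the compensation capacitors are chosen so that two such resonance points yield load-independent constant-current and constant-voltage outputs, respectively.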
17 pages, 2275 KiB  
Article
Designing an Intelligent Virtual Educational System to Improve the Efficiency of Primary Education in Developing Countries
by Vidal Alonso-Secades, Alfonso-José López-Rivero, Manuel Martín-Merino-Acera, Manuel-José Ruiz-García and Olga Arranz-García
Electronics 2022, 11(9), 1487; https://doi.org/10.3390/electronics11091487 - 06 May 2022
Cited by 3 | Viewed by 3069
Abstract
Incorporating technology into virtual education encourages educational institutions to migrate from current learning management systems toward intelligent virtual educational systems, seeking greater benefit by exploiting the data generated by students in their day-to-day activities. The design of these intelligent systems must therefore be approached from a new perspective that takes advantage of the analytical functions provided by technologies such as artificial intelligence, big data, educational data mining techniques, and web analytics. This paper focuses on primary education in developing countries, showing the design of an intelligent virtual educational system to improve the efficiency of primary education through recommendations based on reliable data. The intelligent system comprises four subsystems: data warehousing, analytical data processing, a monitoring process, and a recommender system for educational agents. To illustrate this, the paper contains two dashboards that analyze, respectively, the usage time of digital resources and an aggregate profile of teachers’ digital skills, in order to infer new activities that improve efficiency. These intelligent virtual educational systems refocus the teaching–learning process on new forms of interaction, oriented toward personalized teaching for students and new evaluation and teaching processes for each professor. Full article
(This article belongs to the Special Issue Recent Trends in Intelligent Systems)
24 pages, 7338 KiB  
Article
Unsupervised and Self-Supervised Tensor Train for Change Detection in Multitemporal Hyperspectral Images
by Muhammad Sohail, Haonan Wu, Zhao Chen and Guohua Liu
Electronics 2022, 11(9), 1486; https://doi.org/10.3390/electronics11091486 - 06 May 2022
Cited by 3 | Viewed by 2461
Abstract
Remote sensing change detection (CD) using multitemporal hyperspectral images (HSIs) provides detailed information on spectral–spatial changes and is useful in a variety of applications such as environmental monitoring, urban planning, and disaster detection. However, the high dimensionality and low spatial resolution of HSIs not only lead to expensive computation but also bring about inter-class homogeneity and inner-class heterogeneity. Meanwhile, labeled samples are difficult to obtain in practice, as field investigation is expensive, which limits the application of supervised CD methods. In this paper, two algorithms for CD based on tensor train (TT) decomposition are proposed, called the unsupervised tensor train (UTT) and the self-supervised tensor train (STT). TT uses a well-balanced matricization strategy to capture global correlations from tensors and can therefore effectively extract low-rank discriminative features, so the curse of dimensionality and the spectral variability of HSIs can be overcome. In addition, the two proposed methods are based on unsupervised and self-supervised learning, so no manual annotations are needed. Meanwhile, the ket-augmentation (KA) scheme is used to transform a low-order tensor into a high-order tensor while keeping the total number of entries the same; therefore, high-order features with richer texture can be extracted without increasing computational complexity. Experimental results on four benchmark datasets show that the proposed methods outperformed their tensor counterparts, the Tucker decomposition (TD) and the higher-order singular value decomposition (HOSVD), as well as some other state-of-the-art approaches. For the Yancheng dataset, the OA and KAPPA of UTT reached 98.11% and 0.9536, respectively, while the OA and KAPPA of STT were 98.20% and 0.9561, respectively. Full article
(This article belongs to the Special Issue Deep Learning for Big Data Processing)
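The entry-preserving reshape behind ket augmentation can be sketched in pure Python (a stand-in for illustration; the paper applies KA to hyperspectral data cubes):

```python
def ket_augment(flat, shape):
    """Reshape a flat list into nested lists of the given shape,
    preserving the total number of entries (the KA invariant)."""
    total = 1
    for s in shape:
        total *= s
    assert total == len(flat), "KA must preserve the number of entries"

    def build(data, dims):
        if len(dims) == 1:
            return list(data)
        step = len(data) // dims[0]
        return [build(data[i * step:(i + 1) * step], dims[1:])
                for i in range(dims[0])]

    return build(flat, list(shape))

low_order = list(range(64))                              # e.g. a flattened 8x8 matrix
high_order = ket_augment(low_order, (2, 2, 2, 2, 2, 2))  # order-6 tensor
```

Raising the order this way exposes more modes for the tensor train to factorize, without adding a single extra entry.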
16 pages, 1732 KiB  
Article
Mathematical Modelling of the Influence of Parasitic Capacitances of the Components of the Logarithmic Analogue-to-Digital Converter (LADC) with a Successive Approximation on Switched Capacitors for Increasing Accuracy of Conversion
by Zynoviy Mychuda, Igor Zhuravel, Lesia Mychuda, Adam Szcześniak, Zbigniew Szcześniak and Hanna Yelisieieva
Electronics 2022, 11(9), 1485; https://doi.org/10.3390/electronics11091485 - 06 May 2022
Cited by 4 | Viewed by 1189
Abstract
This paper presents an analysis of the influence of parasitic inter-electrode capacitances of the components of logarithmic analogue-to-digital converters (LADCs) with successive approximation and a variable logarithm base. Mathematical models of the converter errors were developed and analyzed taking into account the parameters of modern components. It has been shown that to achieve satisfactory accuracy for a 16-bit LADC, the capacitance of the capacitor cell must be at least 10 nF; for a 12-bit LADC, 1 nF is sufficient. Full article
(This article belongs to the Special Issue Advances on Analog-to-Digital and Digital-to-Analog Converters)
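A deliberately simplified, hypothetical error model illustrates why a larger cell capacitance suppresses the parasitic-capacitance error: if each charge-transfer step is attenuated by roughly C/(C + Cp), the per-step errors accumulate over successive conversion steps (this sketch is not the paper's model):

```python
def step_gain(C, Cp):
    # charge-division gain of one transfer with parasitic Cp at the node
    return C / (C + Cp)

def relative_error_after(steps, C, Cp):
    """Accumulated relative gain error after `steps` successive transfers."""
    return 1.0 - step_gain(C, Cp) ** steps

# invented values: 1 pF parasitic against 10 nF vs 1 nF cell capacitors
err_10nF = relative_error_after(16, 10e-9, 1e-12)
err_1nF = relative_error_after(16, 1e-9, 1e-12)
```

Even in this crude form, the 1 nF cell accumulates roughly ten times the error of the 10 nF cell, consistent in spirit with the abstract's capacitance recommendations.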