Applications of Statistics and Machine Learning in Electronics

A special issue of Computation (ISSN 2079-3197). This special issue belongs to the section "Computational Engineering".

Deadline for manuscript submissions: closed (31 March 2024) | Viewed by 18725

Special Issue Editors


Guest Editor
Institute for Machine Learning and Analysis, Department for Electrical Engineering, University of Applied Sciences Offenburg, Offenburg, Germany
Interests: autonomous mobile systems; image processing and machine learning

Guest Editor
Faculty of Electronic Engineering and Technology, Technical University of Sofia, Sofia, Bulgaria
Interests: measurement; sensors; actuators; smart sensors; neural networks

Guest Editor
Faculty of Applied Mathematics and Informatics, Technical University of Sofia, Sofia, Bulgaria
Interests: design and analysis of electronic circuits through machine learning; security and privacy

Guest Editor
The Institute of Robotics, Bulgarian Academy of Sciences, PO Box 79, 1113 Sofia, Bulgaria
Interests: human-robot interaction; brain-like intelligent agents; pedagogical rehabilitation; socially competent robotic systems

Guest Editor
Graduate School of Life Science and Systems Engineering, Kyushu Institute of Technology, Kitakyushu, Japan
Interests: emergent intelligence; episodic memory and emotion; societal robot; computational neuroscience; neuroinformatics; sport biomechanics; rehabilitation support

Special Issue Information

Dear Colleagues,

It is our great pleasure to invite you to participate in this Special Issue of Computation, entitled “Applications of Statistics and Machine Learning in Electronics”. It is devoted to better understanding the role of statistics and machine learning in supporting and facilitating a wide variety of engineering tasks in electronics. The Special Issue will publish extended versions of accepted papers presented at the International Conference on Statistics and Machine Learning in Electronics, but submissions from colleagues around the world who are not taking part in the conference are also welcome. All papers will undergo a rigorous peer-review process to ensure a high standard of publication.

Statistical methods are used in many areas of electronics to model, analyse, and evaluate events, processes, and phenomena. Complex interactions between electronic components, as well as the influence of external factors or accidental internal ones, can impair the performance of an electronic circuit or device, cause unexpected behaviour and output responses, or lead to irreversible damage. Statistical analysis allows malfunctioning components and devices to be thoroughly investigated and corrected. In addition, statistical methods are applied to assess the quality of the manufacturing process, ensuring that the electronic components and devices produced meet their technical specifications.

The applications of machine and deep learning in electronics contribute to the study, prediction, and better understanding of the behaviour of electronic circuits and devices. Machine learning algorithms and artificial neural networks can model electronic circuits and solve complex problems. They are also applied in the fields of measurement, testing, and diagnostics. Methodologies and models for processing big data and for building forecasting and analytical tools can support decision making, help solve engineering problems, and enable the implementation of intelligent electronic systems.

The implementation of fuzzy logic in control systems and in systems regulating one or several variables is also within the scope of the discussion.

The topics of the Special Issue include, but are not limited to, statistical, machine and deep learning, and fuzzy logic methods, algorithms, techniques, methodologies, and models applied in:

  • Electronic circuit design;
  • Electronic circuit analysis;
  • Electronic circuit and device testing and diagnosis;
  • Electronic circuit and device measurement;
  • Manufacturing of electronic components and devices;
  • Electronic process management;
  • Quality management in electronics;
  • Digital twins.

Prof. Dr. Stefan Hensel
Prof. Dr. Marin B. Marinov
Dr. Malinka Ivanova
Dr. Maya Dimitrova
Dr. Hiroaki Wagatsuma
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Computation is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • machine learning
  • artificial neural networks
  • electronics
  • intelligent systems
  • manufacturing process
  • measurement and control
  • smart sensors
  • testing and diagnosis

Published Papers (7 papers)


Research

22 pages, 6601 KiB  
Article
Learning Trajectory Tracking for an Autonomous Surface Vehicle in Urban Waterways
by Toma Sikora, Jonathan Klein Schiphorst and Riccardo Scattolini
Computation 2023, 11(11), 216; https://doi.org/10.3390/computation11110216 - 02 Nov 2023
Viewed by 1260
Abstract
Roboat is an autonomous surface vessel (ASV) for urban waterways, developed as a research project by the AMS Institute and MIT. The platform can provide numerous functions to a city, such as transport, dynamic infrastructure, and an autonomous waste management system. This paper presents the development of a learning-based controller for the Roboat platform with the goal of achieving robustness and generalization properties. Specifically, when subject to uncertainty in the model or external disturbances, the proposed controller should be able to track set trajectories with less tracking error than the current nonlinear model predictive controller (NMPC) used on the ASV. To achieve this, a simulation of the system dynamics was developed as part of this work, based on the research presented in the literature and on the previous research performed on the Roboat platform. The simulation process also included the modeling of the necessary uncertainties and disturbances. In this simulation, a trajectory tracking agent was trained using the proximal policy optimization (PPO) algorithm. The trajectory tracking of the trained agent was then validated and compared to the current control strategy both in simulations and in the real world. Full article
(This article belongs to the Special Issue Applications of Statistics and Machine Learning in Electronics)
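
For readers who want a feel for the training loop summarized above, the following is a minimal sketch using the stable-baselines3 PPO implementation. It is not the authors' code: the standard Pendulum-v1 task stands in for the Roboat simulation environment, which in the paper models the vessel dynamics, uncertainties, and disturbances.

```python
# Minimal sketch of training a trajectory-tracking agent with PPO using
# stable-baselines3. Pendulum-v1 stands in for the vessel simulation here;
# in the paper, the environment wraps the Roboat dynamics, disturbances,
# and a tracking-error reward.
import gymnasium as gym
from stable_baselines3 import PPO

env = gym.make("Pendulum-v1")  # placeholder for the ASV tracking environment

model = PPO(
    "MlpPolicy",       # feed-forward actor-critic policy
    env,
    learning_rate=3e-4,
    n_steps=2048,      # rollout length per policy update
    batch_size=64,
    gamma=0.99,
    verbose=1,
)
model.learn(total_timesteps=100_000)   # train against the simulated dynamics
model.save("ppo_tracking_agent")

# Evaluate the trained policy for a few episodes.
obs, _ = env.reset()
for _ in range(1_000):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(action)
    if terminated or truncated:
        obs, _ = env.reset()
```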

19 pages, 385 KiB  
Article
Simultaneous Integration of D-STATCOMs and PV Sources in Distribution Networks to Reduce Annual Investment and Operating Costs
by Adriana Rincón-Miranda, Giselle Viviana Gantiva-Mora and Oscar Danilo Montoya
Computation 2023, 11(7), 145; https://doi.org/10.3390/computation11070145 - 20 Jul 2023
Viewed by 864
Abstract
This research analyzes electrical distribution networks using renewable generation sources based on photovoltaic (PV) sources and distribution static compensators (D-STATCOMs) in order to minimize the expected annual grid operating costs for a planning period of 20 years. The separate and simultaneous placement of PVs and D-STATCOMs is evaluated through a mixed-integer nonlinear programming model (MINLP), whose binary part pertains to selecting the nodes where these devices must be located, and whose continuous part is associated with the power flow equations and device constraints. This optimization model is solved using the vortex search algorithm for the sake of comparison. Numerical results in the IEEE 33- and 69-bus grids demonstrate that combining PV sources and D-STATCOM devices entails the maximum reduction in the expected annual grid operating costs when compared to the solutions reached separately by each device, with expected reductions of about 35.50% and 35.53% in the final objective function value with respect to the benchmark case. All computational validations were carried out in the MATLAB programming environment (version 2021b) with our own scripts. Full article
(This article belongs to the Special Issue Applications of Statistics and Machine Learning in Electronics)
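
The vortex search algorithm mentioned above is a single-solution metaheuristic that samples candidates from a Gaussian centered on the best solution found so far while gradually shrinking the sampling radius. The sketch below is a simplified, generic version on a toy cost function: it uses a plain exponential radius decay instead of the inverse incomplete gamma schedule of the original method, and it omits the binary location variables and power-flow constraints of the paper's MINLP model.

```python
import numpy as np

def vortex_search(cost, lower, upper, n_candidates=50, n_iter=200, seed=0):
    """Simplified vortex-search minimization of `cost` over box bounds."""
    rng = np.random.default_rng(seed)
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    center = (lower + upper) / 2.0          # initial vortex center
    radius = (upper - lower) / 2.0          # initial search radius
    best_x, best_f = center.copy(), cost(center)

    for it in range(n_iter):
        # Sample candidates from a Gaussian around the current center.
        cand = rng.normal(center, radius, size=(n_candidates, lower.size))
        cand = np.clip(cand, lower, upper)  # keep candidates inside the bounds
        f = np.apply_along_axis(cost, 1, cand)
        i = np.argmin(f)
        if f[i] < best_f:                   # keep the best solution found so far
            best_x, best_f = cand[i].copy(), f[i]
        center = best_x                     # the vortex re-centers on the best point
        radius = (upper - lower) / 2.0 * np.exp(-4.0 * (it + 1) / n_iter)

    return best_x, best_f

# Toy usage: minimize a quadratic "operating cost" in two variables.
x_opt, f_opt = vortex_search(lambda x: (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2,
                             lower=[-5, -5], upper=[5, 5])
```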

9 pages, 598 KiB  
Article
Stochastic Modeling with Applications in Supply Chain Management and ICT Systems
by Meglena Lazarova and Fatima Sapundzhi
Computation 2023, 11(2), 21; https://doi.org/10.3390/computation11020021 - 31 Jan 2023
Viewed by 3849
Abstract
Fast-growing technology and the development of IT services have motivated a new application of stochastic processes and their properties. We establish a new connection between electronic process management and a relatively new stochastic process, the non-central Pólya–Aeppli process. This process is applied as the counting process in the mathematical construction of the given model and is thereby introduced into electronic process management. Full article
(This article belongs to the Special Issue Applications of Statistics and Machine Learning in Electronics)
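
For orientation, the classical Pólya–Aeppli process is a compound Poisson counting process in which Poisson-distributed arrivals each contribute a geometrically distributed batch of events. The sketch below simulates this classical process with NumPy; it only illustrates the family of processes involved and does not reproduce the non-central variant studied in the paper.

```python
import numpy as np

def polya_aeppli_counts(t, lam, rho, n_paths=10_000, seed=0):
    """Simulate N(t) for a classical Polya-Aeppli counting process.

    Arrivals occur according to a Poisson(lam * t) process and every
    arrival contributes a Geometric(1 - rho) batch of events (support 1, 2, ...).
    """
    rng = np.random.default_rng(seed)
    counts = np.empty(n_paths, dtype=np.int64)
    for k in range(n_paths):
        n_arrivals = rng.poisson(lam * t)
        # rng.geometric(p) has support {1, 2, ...}, matching batch sizes.
        batches = rng.geometric(1.0 - rho, size=n_arrivals)
        counts[k] = batches.sum()
    return counts

# Toy usage: the sample mean of N(t) should be close to lam * t / (1 - rho).
samples = polya_aeppli_counts(t=5.0, lam=2.0, rho=0.3)
print(samples.mean(), 2.0 * 5.0 / (1.0 - 0.3))
```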

11 pages, 899 KiB  
Article
Reviewing and Discussing Graph Reduction in Edge Computing Context
by Asier Garmendia-Orbegozo, José David Núñez-Gonzalez and Miguel Ángel Antón
Computation 2022, 10(9), 161; https://doi.org/10.3390/computation10090161 - 16 Sep 2022
Viewed by 1392
Abstract
Much effort has been devoted to efficiently transferring different machine-learning algorithms, and especially deep neural networks, to edge devices in order to satisfy, among others, real-time, storage, and energy-consumption constraints. The limited resources of edge devices and the need to save energy in order to prolong battery life have encouraged an interesting trend of reducing neural networks and graphs while keeping their predictive performance almost untouched. In this work, an alternative to the latest techniques for reducing network size is proposed, seeking a simple way to shrink networks while maintaining, as far as possible, their predictive performance, tested on well-known datasets. Full article
(This article belongs to the Special Issue Applications of Statistics and Machine Learning in Electronics)
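
A common baseline for the kind of network reduction discussed above is magnitude-based weight pruning, in which the weights with the smallest absolute values are zeroed out so that the model shrinks while its predictions are, ideally, preserved. The sketch below illustrates this baseline on a plain NumPy weight matrix; the paper proposes its own, alternative reduction technique.

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the fraction `sparsity` of weights with the smallest magnitude."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]   # k-th smallest magnitude
    pruned = weights.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned

# Toy usage: prune 80% of a random dense layer and check the resulting sparsity.
rng = np.random.default_rng(0)
W = rng.normal(size=(256, 128))
W_pruned = magnitude_prune(W, sparsity=0.8)
print("fraction of zeros:", np.mean(W_pruned == 0.0))  # roughly 0.8
```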

18 pages, 5596 KiB  
Article
Comparison and Evaluation of Machine Learning-Based Classification of Hand Gestures Captured by Inertial Sensors
by Ivo Stančić, Josip Musić, Tamara Grujić, Mirela Kundid Vasić and Mirjana Bonković
Computation 2022, 10(9), 159; https://doi.org/10.3390/computation10090159 - 14 Sep 2022
Cited by 2 | Viewed by 1799
Abstract
Gesture recognition is a topic in computer science and language technology that aims to interpret human gestures with computer programs and many different algorithms. It can be seen as the way computers can understand human body language. Today, the main interaction tools between computers and humans are still the keyboard and mouse. Gesture recognition can be used as a tool for communicating and interacting with a machine without any mechanical device such as a keyboard or mouse. In this paper, we present the results of a comparison of eight different machine learning (ML) classifiers in the task of human hand gesture recognition and classification, in order to explore how to efficiently implement one or more of the tested ML algorithms on an 8-bit AVR microcontroller for on-line human gesture recognition, with the intention of using gestures to control a mobile robot. The 8-bit AVR microcontrollers are still widely used in industry, but due to their lack of computational power and limited memory, it is a challenging task to efficiently implement ML algorithms on them for on-line classification. Gestures were recorded using inertial sensors (gyroscopes and accelerometers) placed at the wrist and index finger. One thousand eight hundred (1800) hand gestures were recorded and labelled. Six important features were defined for the identification of nine different hand gestures using eight different machine learning classifiers: Decision Tree (DT), Random Forest (RF), Logistic Regression (LR), Linear Discriminant Analysis (LDA), Support Vector Machine (SVM) with a linear kernel, Naïve Bayes (NB), K-Nearest Neighbours (KNN), and Stochastic Gradient Descent (SGD). All tested algorithms were ranked according to Precision, Recall, and F1-score (abbreviated P-R-F1). The best algorithms were SVM (P-R-F1: 0.9865, 0.9861, and 0.9863) and RF (P-R-F1: 0.9863, 0.9861, and 0.9862), but their main disadvantage is their unsuitability for on-line implementation on 8-bit AVR microcontrollers, as proven in the paper. The next best algorithms had only slightly poorer performance than SVM and RF: KNN (P-R-F1: 0.9835, 0.9833, and 0.9834) and LR (P-R-F1: 0.9810, 0.9810, and 0.9810). Regarding implementation on 8-bit microcontrollers, KNN proved to be inadequate, like SVM and RF. However, the analysis of LR proved that this classifier can be efficiently implemented on the targeted microcontrollers. Bearing in mind its high F1-score (comparable to those of SVM, RF, and KNN), this leads to the conclusion that LR is the most suitable of the tested classifiers for on-line applications in resource-constrained environments, such as embedded devices based on 8-bit AVR microcontrollers, due to its lower computational complexity in comparison with the other tested algorithms. Full article
(This article belongs to the Special Issue Applications of Statistics and Machine Learning in Electronics)
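
The classifier comparison summarized above follows a standard scikit-learn workflow: fit each model on the extracted gesture features and rank it by precision, recall, and F1-score. The snippet below is a generic sketch of that workflow on placeholder feature and label arrays; it does not reproduce the paper's feature extraction or its microcontroller feasibility analysis.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_recall_fscore_support
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression, SGDClassifier
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier

# Placeholder data: 1800 gestures x 6 features, 9 gesture classes.
rng = np.random.default_rng(0)
X = rng.normal(size=(1800, 6))
y = rng.integers(0, 9, size=1800)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

classifiers = {
    "DT": DecisionTreeClassifier(),
    "RF": RandomForestClassifier(),
    "LR": LogisticRegression(max_iter=1000),
    "LDA": LinearDiscriminantAnalysis(),
    "SVM": SVC(kernel="linear"),
    "NB": GaussianNB(),
    "KNN": KNeighborsClassifier(),
    "SGD": SGDClassifier(),
}

# Fit each classifier and report weighted precision, recall, and F1-score.
for name, clf in classifiers.items():
    clf.fit(X_tr, y_tr)
    p, r, f1, _ = precision_recall_fscore_support(
        y_te, clf.predict(X_te), average="weighted", zero_division=0
    )
    print(f"{name}: P={p:.4f} R={r:.4f} F1={f1:.4f}")
```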

19 pages, 41334 KiB  
Article
3D LiDAR Based SLAM System Evaluation with Low-Cost Real-Time Kinematics GPS Solution
by Stefan Hensel, Marin B. Marinov and Markus Obert
Computation 2022, 10(9), 154; https://doi.org/10.3390/computation10090154 - 04 Sep 2022
Cited by 2 | Viewed by 5899
Abstract
Positioning mobile systems with high accuracy is a prerequisite for intelligent autonomous behavior, both in industrial environments and in field robotics. This paper describes the setup of a robotic platform and its use for the evaluation of simultaneous localization and mapping (SLAM) algorithms. A configuration using a Husky A200 mobile robot and a LiDAR (light detection and ranging) sensor was used to implement the setup. To verify the proposed setup, different scan matching methods for odometry determination in indoor and outdoor environments were tested. An assessment of the accuracy of the baseline 3D-SLAM system and the selected evaluation system is presented by comparing different scenarios and test situations. It was shown that hdl_graph_slam, in combination with the LiDAR OS1 and the scan matching algorithms FAST_GICP and FAST_VGICP, achieves good mapping results with accuracies of up to 2 cm. Full article
(This article belongs to the Special Issue Applications of Statistics and Machine Learning in Electronics)
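
Scan-matching odometry of the kind evaluated above aligns each incoming LiDAR scan with the previous one and chains the resulting transforms into a pose estimate. The sketch below uses Open3D's basic point-to-point ICP as a stand-in for this step; the paper itself evaluates the considerably more capable FAST_GICP and FAST_VGICP matchers inside hdl_graph_slam.

```python
import numpy as np
import open3d as o3d

def icp_odometry(scans, voxel_size=0.2, max_corr_dist=1.0):
    """Chain pairwise ICP alignments of consecutive scans into poses.

    `scans` is a list of Nx3 NumPy arrays of LiDAR points; returns a list
    of 4x4 poses expressing each scan in the frame of the first one.
    """
    clouds = []
    for pts in scans:
        pcd = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(pts))
        clouds.append(pcd.voxel_down_sample(voxel_size))  # thin the cloud

    poses = [np.eye(4)]
    for prev, curr in zip(clouds[:-1], clouds[1:]):
        result = o3d.pipelines.registration.registration_icp(
            curr, prev, max_corr_dist, np.eye(4),
            o3d.pipelines.registration.TransformationEstimationPointToPoint(),
        )
        # Pose of the current scan = pose of the previous scan * relative transform.
        poses.append(poses[-1] @ result.transformation)
    return poses
```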

19 pages, 4614 KiB  
Article
Machine Learning and Rules Induction in Support of Analog Amplifier Design
by Malinka Ivanova and Miona Andrejević Stošović
Computation 2022, 10(9), 145; https://doi.org/10.3390/computation10090145 - 25 Aug 2022
Cited by 2 | Viewed by 1581
Abstract
The aim of this paper is to present a two-step method for facilitating the design of analog amplifiers, taking into account the bottom-up approach and utilizing machine learning techniques. The X-chart and a framework describing the specificity of analog circuit design using machine learning are introduced. The possibility of libraries of open machine learning models supporting the designer is also discussed. The proposed method is verified for a three-stage amplifier design. In the first step, the stage type is predicted with 89.74% accuracy, the applied learner being a Decision Tree machine learning algorithm. Moreover, two rule-induction algorithms are used for generating predictive logic. In the second step, some typical parameters for a given stage are predicted considering four learners: Decision Tree, Random Forest, Gradient Boosted Trees, and Support Vector Machine. The most suitable is found to be the Support Vector Machine, which is characterized by the smallest errors obtained. Full article
(This article belongs to the Special Issue Applications of Statistics and Machine Learning in Electronics)
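
The two-step flow described above (a classifier for the stage type followed by a regressor for its typical parameters) maps naturally onto scikit-learn estimators. The sketch below illustrates that flow on purely synthetic placeholder data; the features, targets, and results are hypothetical and do not come from the paper.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVR
from sklearn.metrics import accuracy_score, mean_absolute_error

rng = np.random.default_rng(0)

# Step 1 (hypothetical data): predict the amplifier stage type from
# specification features (e.g., required gain, bandwidth, load).
X_spec = rng.normal(size=(500, 4))
y_stage = rng.integers(0, 3, size=500)          # e.g., three stage-type labels
Xtr, Xte, ytr, yte = train_test_split(X_spec, y_stage, test_size=0.25, random_state=0)
stage_clf = DecisionTreeClassifier().fit(Xtr, ytr)
print("stage-type accuracy:", accuracy_score(yte, stage_clf.predict(Xte)))

# Step 2 (hypothetical data): for a given stage, regress a typical
# parameter (e.g., a bias component value) with a support vector machine.
X_stage = rng.normal(size=(500, 4))
y_param = X_stage @ np.array([1.0, -0.5, 2.0, 0.3]) + rng.normal(scale=0.1, size=500)
Xtr, Xte, ytr, yte = train_test_split(X_stage, y_param, test_size=0.25, random_state=0)
param_reg = SVR(kernel="rbf").fit(Xtr, ytr)
print("parameter MAE:", mean_absolute_error(yte, param_reg.predict(Xte)))
```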
