
Advances in Machine Learning for Intelligent Engineering Systems and Applications II

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Intelligent Sensors".

Deadline for manuscript submissions: closed (20 September 2022) | Viewed by 61010

Special Issue Editors


Guest Editor
Photogrammetry and Computer Vision Laboratory, National Technical University of Athens, 15773 Athens, Greece
Interests: image processing; computer vision; robotic systems; deep machine learning; machine learning; Markovian models; signal processing and pattern analysis

Guest Editor
School of Rural and Surveying Engineering, National Technical University of Athens, Athens, Greece
Interests: pattern recognition; machine learning; signal processing; image/hyper-spectral sensors

Guest Editor
Department of Informatics and Computer Engineering, University of West Attica, Agiou Spiridonos 28, 122 43 Egaleo, Greece
Interests: machine learning; artificial intelligence; multimedia; intelligent systems; pervasive computing

Special Issue Information

Dear Colleagues,

Recent advances in machine learning have contributed to developments in many areas of interest to the engineering community. Data-driven and domain-oriented engineering applications benefit significantly from the latest machine learning theories and methods (including deep, reinforcement, transfer, and extreme learning), and may in turn promote the development of learning algorithms, optimization approaches, fusion techniques for multimodal data, novel hardware, and network architectures. The rapid development of these fields has also stimulated new research on sensors and sensor networks.

The purpose of this Special Issue is to provide a forum for engineers, data scientists, researchers, and practitioners to present new academic research and industrial developments on machine learning for engineering applications. The Special Issue gathers original research papers in the field, covering new theories, algorithms, and systems, as well as new implementations and applications incorporating state-of-the-art machine learning techniques. Emphasis will be placed on systems that incorporate new sensors and their configuration. Review articles and works on performance evaluation and benchmark datasets are also solicited.

Indicative domains of application of interest to the Special Issue include:

  • Research on sensors for new critical engineering applications
  • Sensor networks and drones for surveying critical infrastructure
  • Software and hardware architectures for new sensor systems for managing critical infrastructure
  • Electrical and mechanical engineering, production management and optimization, manufacturing, failure detection, energy management, and smart grids
  • Robotics and automation, computer vision and pattern recognition applications, critical infrastructure protection
  • Civil engineering, construction management and optimization, structural health monitoring, earthquake engineering, urban planning
  • Transportation, hydraulics, water power and environmental engineering
  • Surveying and geospatial engineering, spatial planning, and remote sensing
  • Materials science and engineering
  • Biomedical engineering

The sequel Special Issue "Advances in Sensors-Based Machine Learning for Intelligent Engineering Systems and Applications III" has been announced. We look forward to receiving your submission for the new Special Issue.
https://www.mdpi.com/journal/sensors/special_issues/EX8T6F1JHP
Deadline for manuscript submissions is 10 July 2023.

Dr. Anastasios Doulamis
Dr. Nikolaos Doulamis
Dr. Athanasios Voulodimos
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Engineering applications
  • Machine learning
  • Deep learning
  • Intelligent systems

Published Papers (23 papers)


Research


15 pages, 3260 KiB  
Article
Machine Learning Models for the Automatic Detection of Exercise Thresholds in Cardiopulmonary Exercising Tests: From Regression to Generation to Explanation
by Andrea Zignoli
Sensors 2023, 23(2), 826; https://doi.org/10.3390/s23020826 - 11 Jan 2023
Cited by 4 | Viewed by 2352
Abstract
The cardiopulmonary exercise test (CPET) constitutes a gold standard for the assessment of an individual's cardiovascular fitness. A trend is emerging for the development of new machine-learning techniques applied to the automatic processing of CPET data. Some of these focus on the precise task of detecting the exercise thresholds, which represent important physiological parameters. This contribution tackles three major challenges: (A) regression (i.e., the process of correctly identifying the exercise intensity domains and their crossing points); (B) generation (i.e., the process of artificially creating a CPET data file ex novo); and (C) explanation (i.e., providing an interpretable explanation of the output of the machine learning model). The following methods were used for each challenge: (A) a convolutional neural network adapted for multi-variable time series; (B) a conditional generative adversarial neural network; and (C) visual explanations and calculations of model decisions conducted using cooperative game theory (Shapley values). The results for the regression, generation, and explanation techniques for AI-assisted CPET interpretation are presented here in a unified framework for the first time: (A) the machine learning techniques reported expert-level accuracy in the classification of exercise intensity domains; (B) experts were not able to substantially differentiate between a real and an artificially generated CPET; and (C) Shapley values can provide an explanation of the algorithms' choices in terms of ventilatory variables. With the aim of increasing their technology-readiness level, all the models discussed in this contribution have been incorporated into a free-to-use Python package called pyoxynet (ver. 12.1). This contribution should therefore be of interest to major players operating in the CPET device market and engineering. Full article
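The cooperative game theory behind point (C) can be illustrated with a tiny, self-contained sketch: the Shapley value of a feature is its marginal contribution to the model output, averaged over all orderings in which features join the coalition. The toy surrogate "model" and the variable names (VO2, VCO2, VE) below are illustrative assumptions, not the pyoxynet implementation.

```python
from itertools import permutations

def shapley_values(players, value):
    """Exact Shapley values: average each player's marginal contribution
    over all orderings in which the coalition can be assembled."""
    orderings = list(permutations(players))
    phi = {p: 0.0 for p in players}
    for order in orderings:
        coalition = set()
        for p in order:
            before = value(coalition)
            coalition = coalition | {p}
            phi[p] += value(coalition) - before
    return {p: v / len(orderings) for p, v in phi.items()}

# Toy surrogate model: output depends on which ventilatory variables
# (hypothetical names) are available to the coalition.
def model_output(coalition):
    out = 0.0
    if "VO2" in coalition:
        out += 3.0
    if "VCO2" in coalition:
        out += 1.0
    if {"VO2", "VE"} <= coalition:  # interaction between VO2 and VE
        out += 2.0
    return out

phi = shapley_values(["VO2", "VCO2", "VE"], model_output)
# Efficiency axiom: the attributions sum to the full-model output (6.0 here)
print(phi)
```

Note how the interaction term is split evenly between VO2 and VE, which is exactly the kind of attribution that makes Shapley values interpretable for a clinician inspecting a threshold decision.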

18 pages, 4400 KiB  
Article
Application of Machine Learning Methods for an Analysis of E-Nose Multidimensional Signals in Wastewater Treatment
by Magdalena Piłat-Rożek, Ewa Łazuka, Dariusz Majerek, Bartosz Szeląg, Sylwia Duda-Saternus and Grzegorz Łagód
Sensors 2023, 23(1), 487; https://doi.org/10.3390/s23010487 - 02 Jan 2023
Cited by 13 | Viewed by 2043
Abstract
This work represents a successful attempt to combine a gas sensor array with instrumentation (hardware) and machine learning methods as the basis for numerical codes (software), together constituting an electronic nose, for the correct classification of the various stages of the wastewater treatment process. To evaluate the multidimensional measurements derived from the gas sensor array, dimensionality reduction was performed using the t-SNE method, which (unlike the commonly used PCA method) preserves the local structure of the data by minimizing the Kullback-Leibler divergence between the two distributions with respect to the location of points on the map. The k-median method was used to evaluate the discretization potential of the collected multidimensional data. It showed that observations from different stages of the wastewater treatment process have varying chemical fingerprints. In the final stage of data analysis, a supervised machine learning method, in the form of a random forest, was used to classify observations based on the measurements from the sensor array. The quality of the resulting model was assessed with several measures commonly used in classification tasks. All of the measures confirmed that the classification model perfectly assigned classes to the observations in the test set, which also confirmed the absence of model overfitting. Full article
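As a rough illustration of the clustering step, the k-median idea (assign each observation to its nearest center, then move each center to the component-wise median of its cluster) can be sketched as follows. The synthetic two-cluster "fingerprints" and the deterministic initialization are assumptions for the example, not the authors' setup.

```python
import numpy as np

def k_medians(X, k, iters=50):
    """Alternating k-medians: nearest-center assignment under the L1
    distance, then each center moves to the component-wise median
    of its assigned cluster."""
    idx = np.linspace(0, len(X) - 1, k).astype(int)  # deterministic init
    centers = X[idx].copy()
    for _ in range(iters):
        dists = np.abs(X[:, None, :] - centers[None, :, :]).sum(axis=2)
        labels = dists.argmin(axis=1)
        new_centers = np.array([
            np.median(X[labels == j], axis=0) if np.any(labels == j) else centers[j]
            for j in range(k)
        ])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return labels, centers

# Two well-separated synthetic "chemical fingerprints"
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.1, (20, 2)),   # e.g., one treatment stage
               rng.normal(5.0, 0.1, (20, 2))])  # e.g., another stage
labels, centers = k_medians(X, k=2)
print(labels[0] != labels[-1])  # the two stages land in different clusters
```

The median-based center update is what distinguishes this from k-means: it is robust to the occasional outlier reading that gas sensors tend to produce.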

16 pages, 4031 KiB  
Article
Mutual-Aided INS/Vision Navigation System Analysis and Optimization Using Sequential Filtering with State Recalculation
by Ayham Shahoud, Dmitriy Shashev and Stanislav Shidlovskiy
Sensors 2023, 23(1), 79; https://doi.org/10.3390/s23010079 - 22 Dec 2022
Viewed by 1963
Abstract
This paper presents the implementation of a mutual-aided navigation system for an aerial vehicle. Employing all available sensors in navigation is effective at maintaining continuous and optimal results. Images offer a great deal of information about the surrounding environment, but image processing is time-consuming and causes timing problems. While traditional fusion algorithms tend to reduce delay errors or ignore them, this research relies on state-estimate recalculation during the delay time and on sequential filtering. To reduce the image-matching time, the map is processed offline, and key-point clusters are stored to avoid recomputing features online. The sensors' information is used to bound the search space for matched features on the map, which are then reprojected onto the captured images to exclude unusable regions from processing. The suggested mutual-aided form compensates for inertial system drift, which enhances the system's accuracy and independence. The system was tested using data collected from a real flight with a DJI drone. The measurements from an inertial measurement unit (IMU), camera, barometer, and magnetometer were fused using a sequential Kalman filter. The final results prove the efficiency of the suggested system, which navigates with a high degree of independence and an RMS position error of less than 3.5 m. Full article
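The sequential-filtering idea, processing one sensor's measurement at a time rather than stacking all measurements into a single batch update, can be sketched in scalar form. The altitude values and noise variances below are made-up numbers for illustration, not data from the paper's flight test.

```python
def kalman_update(x, P, z, R):
    """Scalar Kalman measurement update: prior estimate x (variance P)
    is corrected by measurement z (noise variance R)."""
    K = P / (P + R)          # Kalman gain
    x = x + K * (z - x)      # corrected state estimate
    P = (1.0 - K) * P        # reduced estimate variance
    return x, P

# Sequential fusion: each sensor's reading is applied as its own update,
# in turn, so a late-arriving measurement can be slotted in independently.
x, P = 100.0, 25.0                 # prior altitude estimate and variance
for z, R in [(102.0, 4.0),         # e.g., barometric altitude
             (99.0, 1.0)]:         # e.g., vision-derived altitude
    x, P = kalman_update(x, P, z, R)
print(round(x, 2), round(P, 3))    # → 99.61 0.775
```

Because each update shrinks the variance P, the final estimate leans toward the lower-noise vision measurement while still using the barometer, which is the essence of the mutual-aided scheme.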

16 pages, 5845 KiB  
Article
Path Generator with Unpaired Samples Employing Generative Adversarial Networks
by Javier Maldonado-Romo, Alberto Maldonado-Romo and Mario Aldape-Pérez
Sensors 2022, 22(23), 9411; https://doi.org/10.3390/s22239411 - 02 Dec 2022
Cited by 3 | Viewed by 1219
Abstract
Interactive technologies such as augmented reality have grown in popularity, but specialized sensors and high computer power must be used to perceive and analyze the environment in order to obtain an immersive experience in real time. However, these kinds of implementations have high costs. On the other hand, machine learning has helped create alternative solutions for reducing costs, but it is limited to particular solutions because the creation of datasets is complicated. Due to this problem, this work suggests an alternate strategy for dealing with limited information: unpaired samples from known and unknown surroundings are used to generate a path on embedded devices, such as smartphones, in real time. This strategy creates a path that avoids virtual elements through physical objects. The authors suggest an architecture for creating a path using imperfect knowledge. Additionally, an augmented reality experience is used to describe the generated path, and some users tested the proposal to evaluate the performance. Finally, the primary contribution is the approximation of a path produced from a known environment by using an unpaired dataset. Full article

14 pages, 678 KiB  
Article
Optimal Greedy Control in Reinforcement Learning
by Alexander Gorobtsov, Oleg Sychev, Yulia Orlova, Evgeniy Smirnov, Olga Grigoreva, Alexander Bochkin and Marina Andreeva
Sensors 2022, 22(22), 8920; https://doi.org/10.3390/s22228920 - 18 Nov 2022
Viewed by 1169
Abstract
We consider the problem of dimensionality reduction of state space in the variational approach to the optimal control problem, in particular, in the reinforcement learning method. The control problem is described by differential algebraic equations consisting of nonlinear differential equations and algebraic constraint equations interconnected with Lagrange multipliers. The proposed method is based on changing the Lagrange multipliers of one subset based on the Lagrange multipliers of another subset. We present examples of the application of the proposed method in robotics and vibration isolation in transport vehicles. The method is implemented in FRUND—a multibody system dynamics software package. Full article
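The equation structure described, nonlinear differential equations coupled to algebraic constraints through Lagrange multipliers, has the standard constrained-multibody form (a generic sketch; the symbols are conventional and not necessarily the paper's notation):

```latex
% Constrained equations of motion as a differential-algebraic system:
% nonlinear ODEs coupled to algebraic constraints via Lagrange multipliers.
M(q)\,\ddot{q} = f(q,\dot{q},u) - G(q)^{\mathsf T}\lambda,
\qquad g(q) = 0,
\qquad G(q) = \frac{\partial g}{\partial q}
```

Here M(q) is the mass matrix, g(q) = 0 collects the algebraic constraint equations, and the multipliers λ couple them to the differential equations; the dimensionality reduction described above expresses one subset of multipliers through another subset.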

27 pages, 6099 KiB  
Article
Design of a Forest Fire Early Alert System through a Deep 3D-CNN Structure and a WRF-CNN Bias Correction
by Alejandro Casallas, Camila Jiménez-Saenz, Victor Torres, Miguel Quirama-Aguilar, Augusto Lizcano, Ellie Anne Lopez-Barrera, Camilo Ferro, Nathalia Celis and Ricardo Arenas
Sensors 2022, 22(22), 8790; https://doi.org/10.3390/s22228790 - 14 Nov 2022
Cited by 9 | Viewed by 3389
Abstract
Throughout the years, wildfires have negatively impacted ecological systems and urban areas. Hence, reinforcing territorial risk management strategies against wildfires is essential. In this study, we built an early alert system (EAS) with two different machine learning (ML) techniques to calculate the meteorological conditions of two Colombian areas: (i) a 3D convolutional neural network capable of learning from satellite data and (ii) a convolutional network to bias-correct the Weather Research and Forecasting (WRF) model output. The results were used to quantify the daily Fire Weather Index and were coupled with the outcomes of a land-cover analysis conducted through a Naïve Bayes classifier to estimate the probability of wildfire occurrence. These results, combined with an assessment of global vulnerability in both locations, allow the construction of daily risk maps for both areas. In addition, a set of short-term preventive and corrective measures was suggested for public authorities to implement after an early alert prediction of a possible future wildfire. Finally, soil management practices are proposed to tackle the medium- and long-term causes of wildfire development, with the aim of reducing vulnerability and promoting soil protection. In conclusion, this paper creates an EAS for wildfires based on novel ML techniques and risk maps. Full article

22 pages, 1184 KiB  
Article
Online Multivariate Anomaly Detection and Localization for High-Dimensional Settings
by Mahsa Mozaffari, Keval Doshi and Yasin Yilmaz
Sensors 2022, 22(21), 8264; https://doi.org/10.3390/s22218264 - 28 Oct 2022
Cited by 3 | Viewed by 1945
Abstract
This paper considers the real-time detection of abrupt and persistent anomalies in high-dimensional data streams. The goal is to detect anomalies quickly and accurately so that appropriate countermeasures can be taken before the system is harmed. We propose a sequential and multivariate anomaly detection method that scales well to high-dimensional datasets. The proposed method follows a nonparametric (i.e., data-driven) and semi-supervised approach (i.e., it trains only on nominal data). Thus, it is applicable to a wide range of applications and data types. Thanks to its multivariate nature, it can quickly and accurately detect challenging anomalies, such as changes in the correlation structure. Its asymptotic optimality and computational complexity are comprehensively analyzed. In conjunction with the detection method, an effective technique for localizing the anomalous data dimensions is also proposed. The practical use of the proposed algorithms is demonstrated using synthetic and real data in a variety of applications, including seizure detection, DDoS attack detection, and video surveillance. Full article
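A minimal sketch of the semi-supervised, nonparametric idea: score each incoming sample by its distance to the nominal training set, and accumulate the evidence CUSUM-style until an alarm threshold is crossed. The k-NN score, the threshold, and the synthetic streams below are assumptions for illustration; the paper's actual test statistic differs.

```python
import numpy as np

rng = np.random.default_rng(0)
nominal_train = rng.normal(0.0, 1.0, (500, 3))  # train on nominal data only

def knn_score(x, train, k=5):
    """Nonparametric anomaly evidence: distance from x to its k-th
    nearest neighbor among the nominal training samples."""
    d = np.linalg.norm(train - x, axis=1)
    return np.sort(d)[k - 1]

# Calibrate a baseline so typical nominal samples contribute ~no evidence
baseline = np.quantile([knn_score(x, nominal_train) for x in nominal_train], 0.9)

def detect(stream, threshold=20.0):
    """CUSUM-style sequential test: accumulate positive evidence over
    time and raise an alarm once the statistic crosses the threshold."""
    s = 0.0
    for t, x in enumerate(stream):
        s = max(0.0, s + knn_score(x, nominal_train) - baseline)
        if s >= threshold:
            return t  # alarm time
    return None       # no alarm

nominal_stream = rng.normal(0.0, 1.0, (50, 3))
shifted_stream = rng.normal(5.0, 1.0, (30, 3))  # abrupt, persistent anomaly
print(detect(nominal_stream))                   # expect no alarm
print(detect(np.vstack([nominal_stream[:20], shifted_stream])))
```

Accumulating evidence over time, rather than thresholding each sample independently, is what lets a sequential detector catch persistent but individually subtle anomalies with a controlled false-alarm rate.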

17 pages, 639 KiB  
Article
Real-Time Detection and Classification of Power Quality Disturbances
by Mahsa Mozaffari, Keval Doshi and Yasin Yilmaz
Sensors 2022, 22(20), 7958; https://doi.org/10.3390/s22207958 - 19 Oct 2022
Cited by 13 | Viewed by 2058
Abstract
This paper considers the problem of real-time detection and classification of power quality disturbances in power delivery systems. We propose a sequential and multivariate disturbance detection method aiming for quick and accurate detection. Our proposed detector follows a non-parametric and supervised approach, i.e., it learns nominal and anomalous patterns from training data involving clean and disturbance signals. The multivariate nature of the method enables the joint processing of data from multiple meters, facilitating quicker detection through cooperative analysis. We further extend our supervised sequential detection method to a multi-hypothesis setting, which aims to classify disturbance events as quickly and accurately as possible in real time. The multi-hypothesis method requires a training dataset for each hypothesis, i.e., for each disturbance type as well as the 'no disturbance' case. The proposed classification method is demonstrated to quickly and accurately detect and classify power disturbances. Full article

23 pages, 3319 KiB  
Article
Intelligent Traffic Monitoring through Heterogeneous and Autonomous Networks Dedicated to Traffic Automation
by Eduard Zadobrischi
Sensors 2022, 22(20), 7861; https://doi.org/10.3390/s22207861 - 16 Oct 2022
Cited by 6 | Viewed by 2004
Abstract
In line with the evolution of technology, and with the vehicle density that creates congestion and, often, road accidents, traffic monitoring systems are integral parts of intelligent transport systems (ITS). They are among the most critical elements of transport infrastructure, requiring substantial financial investment to collect and analyze traffic data with the aim of designing systems capable of properly managing traffic. Technological progress in wireless communications continues to bring new traffic monitoring solutions and the need for better classification, but proposing a real-time analysis model to guide the new systems is the challenge addressed in this manuscript. Classifiers and computerized detection applied to traffic monitoring cameras can yield systems that are vital for the future of logistics transport. The aims of this study are to analyze and discuss vehicle classification systems, examine problems and challenges, and design a software project capable of serving as the basis for new developments in ITS. The outlined method, based on intelligent algorithms and an improved YOLOv3, can have a major impact on reducing the negative effects of chaotic traffic and on shaping safety protocols in transport. The study indicates that waiting times and congestion can be reduced by up to 80%. Full article

24 pages, 2518 KiB  
Article
RSU-Based Online Intrusion Detection and Mitigation for VANET
by Ammar Haydari and Yasin Yilmaz
Sensors 2022, 22(19), 7612; https://doi.org/10.3390/s22197612 - 08 Oct 2022
Cited by 14 | Viewed by 2567
Abstract
Secure vehicular communication is a critical factor for secure traffic management. Effective security in intelligent transportation systems (ITS) requires effective and timely intrusion detection systems (IDS). In this paper, we consider false data injection attacks and distributed denial-of-service (DDoS) attacks, especially the stealthy DDoS attacks, targeting integrity and availability, respectively, in vehicular ad-hoc networks (VANET). Novel machine learning techniques for intrusion detection and mitigation based on centralized communications through roadside units (RSU) are proposed for the considered attacks. The performance of the proposed methods is evaluated using a traffic simulator and a real traffic dataset. Comparisons with the state-of-the-art solutions clearly demonstrate the superior detection and localization performance of the proposed methods by 78% in the best case and 27% in the worst case, while achieving the same level of false alarm probability. Full article

19 pages, 8672 KiB  
Article
Repetition-Based Approach for Task Adaptation in Imitation Learning
by Tho Nguyen Duc, Chanh Minh Tran, Nguyen Gia Bach, Phan Xuan Tan and Eiji Kamioka
Sensors 2022, 22(18), 6959; https://doi.org/10.3390/s22186959 - 14 Sep 2022
Viewed by 1216
Abstract
Transfer learning is an effective approach for adapting an autonomous agent to a new target task by transferring knowledge learned from the previously learned source task. The major problem with traditional transfer learning is that it only focuses on optimizing learning performance on the target task. Thus, the performance on the target task may be improved in exchange for the deterioration of the source task’s performance, resulting in an agent that is not able to revisit the earlier task. Therefore, transfer learning methods are still far from being comparable with the learning capability of humans, as humans can perform well on both source and new target tasks. In order to address this limitation, a task adaptation method for imitation learning is proposed in this paper. Being inspired by the idea of repetition learning in neuroscience, the proposed adaptation method enables the agent to repeatedly review the learned knowledge of the source task, while learning the new knowledge of the target task. This ensures that the learning performance on the target task is high, while the deterioration of the learning performance on the source task is small. A comprehensive evaluation over several simulated tasks with varying difficulty levels shows that the proposed method can provide high and consistent performance on both source and target tasks, outperforming existing transfer learning methods. Full article

29 pages, 10934 KiB  
Article
Predicting Bulk Average Velocity with Rigid Vegetation in Open Channels Using Tree-Based Machine Learning: A Novel Approach Using Explainable Artificial Intelligence
by D. P. P. Meddage, I. U. Ekanayake, Sumudu Herath, R. Gobirahavan, Nitin Muttil and Upaka Rathnayake
Sensors 2022, 22(12), 4398; https://doi.org/10.3390/s22124398 - 10 Jun 2022
Cited by 12 | Viewed by 2147
Abstract
Predicting the bulk-average velocity (UB) in open channels with rigid vegetation is complicated due to the non-linear nature of the parameters. Despite their higher accuracy, existing regression models fail to highlight the feature importance or causality of the respective predictions. Therefore, we propose a method to predict UB and the friction factor in the surface layer (fS) using tree-based machine learning (ML) models (decision tree, extra tree, and XGBoost). Further, SHapley Additive exPlanations (SHAP) were used to interpret the ML predictions. The comparison emphasized that the XGBoost model is superior in predicting UB (R = 0.984) and fS (R = 0.92) relative to the existing regression models. SHAP revealed the underlying reasoning behind predictions, the dependence of predictions, and feature importance. Interestingly, SHAP adheres to what is generally observed in complex flow behavior, thus improving trust in the predictions. Full article

14 pages, 2383 KiB  
Article
Smart Monitoring of Manufacturing Systems for Automated Decision-Making: A Multi-Method Framework
by Chen-Yang Cheng, Pourya Pourhejazy, Chia-Yu Hung and Chumpol Yuangyai
Sensors 2021, 21(20), 6860; https://doi.org/10.3390/s21206860 - 15 Oct 2021
Cited by 4 | Viewed by 2660
Abstract
Smart monitoring plays a principal role in the intelligent automation of manufacturing systems. Advanced data collection technologies, such as sensors, have been widely used to facilitate real-time data collection. Computationally efficient analysis of the operating systems, however, remains relatively underdeveloped and requires more attention. Inspired by the capabilities of signal analysis and information visualization, this study proposes a multi-method framework for the smart monitoring of manufacturing systems and intelligent decision-making. The proposed framework processes the machine signals collected by noninvasive sensors. For this purpose, the signals are filtered and classified to characterize the operational status and performance measures and to advise the appropriate course of managerial action given the detected anomalies. Numerical experiments based on real data are used to show the practicability of the developed monitoring framework. The results support the accuracy of the method. Applying the developed approach in other manufacturing environments is a worthwhile topic for future research. Full article

17 pages, 2934 KiB  
Article
Enhanced Changeover Detection in Industry 4.0 Environments with Machine Learning
by Eddi Miller, Vladyslav Borysenko, Moritz Heusinger, Niklas Niedner, Bastian Engelmann and Jan Schmitt
Sensors 2021, 21(17), 5896; https://doi.org/10.3390/s21175896 - 01 Sep 2021
Cited by 6 | Viewed by 2567
Abstract
Changeover times are an important element when evaluating the Overall Equipment Effectiveness (OEE) of a production machine. This article presents a machine learning (ML) approach based on an external sensor setup to automatically detect changeovers in a shopfloor environment. The door statuses, coolant flow, power consumption, and operator indoor GPS data of a milling machine were used in the ML approach. As ML methods, Decision Trees, Support Vector Machines, (Balanced) Random Forest algorithms, and Neural Networks were chosen, and their performance was compared. The best results were achieved with the Random Forest ML model (97% F1 score, 99.72% AUC score). It was also found that model performance is optimal when only a binary classification of a changeover phase and a production phase is considered, rather than further subphases of the changeover process. Full article

14 pages, 11142 KiB  
Article
Domain Adaptation for Imitation Learning Using Generative Adversarial Network
by Tho Nguyen Duc, Chanh Minh Tran, Phan Xuan Tan and Eiji Kamioka
Sensors 2021, 21(14), 4718; https://doi.org/10.3390/s21144718 - 09 Jul 2021
Cited by 1 | Viewed by 2560
Abstract
Imitation learning is an effective approach for an autonomous agent to learn control policies from demonstrations provided by an expert when an explicit reward function is unavailable. However, standard imitation learning methods assume that the agent and the demonstrations provided by the expert share the same domain configuration. This assumption makes the learned policies difficult to apply in another, distinct domain. The problem is formalized as domain adaptive imitation learning: learning how to perform a task optimally in a learner domain, given demonstrations of the task in a distinct expert domain. We address the problem by proposing a model based on a Generative Adversarial Network. The model aims to learn both domain-shared and domain-specific features and utilizes them to find an optimal policy across domains. The experimental results show the effectiveness of our model on a number of tasks, ranging from low-dimensional to complex high-dimensional ones. Full article
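The adversarial ingredient can be illustrated by the domain discriminator's loss: the discriminator tries to tell expert-domain features from learner-domain features, while the feature encoder is trained to produce domain-shared features that maximize that loss. A minimal numeric sketch follows; the discriminator outputs are made-up values, not results from the paper.

```python
import math

def bce(probs, labels):
    """Binary cross-entropy of the domain discriminator:
    label 1 = expert domain, 0 = learner domain."""
    eps = 1e-9
    return -sum(l * math.log(p + eps) + (1 - l) * math.log(1 - p + eps)
                for p, l in zip(probs, labels)) / len(probs)

# Discriminator outputs P(expert) for a batch of encoded features.
confident = [0.95, 0.9, 0.1, 0.05]   # domains still separable -> low D loss
fooled    = [0.5, 0.5, 0.5, 0.5]     # domain-shared features -> high D loss
labels    = [1, 1, 0, 0]

print(bce(confident, labels) < bce(fooled, labels))  # encoder aims for the right side
```

In the full adversarial game, the encoder's update pushes the discriminator from the first regime towards the second.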

22 pages, 24735 KiB  
Article
A Few-Shot U-Net Deep Learning Model for COVID-19 Infected Area Segmentation in CT Images
by Athanasios Voulodimos, Eftychios Protopapadakis, Iason Katsamenis, Anastasios Doulamis and Nikolaos Doulamis
Sensors 2021, 21(6), 2215; https://doi.org/10.3390/s21062215 - 22 Mar 2021
Cited by 65 | Viewed by 5568
Abstract
Recent studies indicate that detecting radiographic patterns on chest CT scans can yield high sensitivity and specificity for COVID-19 identification. In this paper, we scrutinize the effectiveness of deep learning models for the semantic segmentation of pneumonia-infected areas in CT images for the detection of COVID-19. Traditional methods for CT scan segmentation exploit a supervised learning paradigm, so they (a) require large volumes of data for their training, and (b) assume fixed (static) network weights once the training procedure has been completed. Recently, to overcome these difficulties, few-shot learning (FSL) has been introduced as a general concept of network model training using a very small number of samples. In this paper, we explore the efficacy of few-shot learning in U-Net architectures, allowing for dynamic fine-tuning of the network weights as a few new samples are fed into the U-Net. Experimental results indicate an improvement in the accuracy of segmenting COVID-19-infected regions. In particular, using 4-fold cross-validation results of the different classifiers, we observed an improvement of 5.388 ± 3.046% for all test data regarding the IoU metric and a similar increment of 5.394 ± 3.015% for the F1 score. Moreover, the statistical significance of the improvement obtained using our proposed few-shot U-Net architecture compared with the traditional U-Net model was confirmed by applying the Kruskal-Wallis test (p-value = 0.026). Full article
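The IoU and F1 (Dice) metrics used to report the segmentation improvement can be computed directly on binary masks. A minimal sketch with made-up 4 × 4 masks:

```python
def iou(mask_a, mask_b):
    """Intersection over Union between two flattened binary masks."""
    inter = sum(a & b for a, b in zip(mask_a, mask_b))
    union = sum(a | b for a, b in zip(mask_a, mask_b))
    return inter / union if union else 1.0

def dice_f1(mask_a, mask_b):
    """Dice coefficient, which equals F1 on binary masks."""
    inter = sum(a & b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    return 2 * inter / total if total else 1.0

# Flattened 4x4 masks: ground-truth infected region vs. a model prediction
truth = [0, 0, 1, 1,  0, 0, 1, 1,  0, 0, 0, 0,  0, 0, 0, 0]
pred  = [0, 0, 1, 1,  0, 0, 1, 0,  0, 0, 0, 0,  0, 0, 0, 0]
print(round(iou(truth, pred), 3))      # 3 shared pixels over a union of 4
print(round(dice_f1(truth, pred), 3))  # 2*3 / (4 + 3)
```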

20 pages, 9244 KiB  
Article
Real-Time Rainfall Forecasts Based on Radar Reflectivity during Typhoons: Case Study in Southeastern Taiwan
by Chih-Chiang Wei and Chen-Chia Hsu
Sensors 2021, 21(4), 1421; https://doi.org/10.3390/s21041421 - 18 Feb 2021
Cited by 5 | Viewed by 2196
Abstract
This study developed a real-time rainfall forecasting system that can predict rainfall in a particular area a few hours before a typhoon’s arrival. The reflectivity of nine elevation angles obtained from the volume coverage pattern 21 Doppler radar scanning strategy and the ground-weather data of a specific area were used for accurate rainfall prediction. During rainfall prediction and analysis, rainfall retrievals were first performed to select the optimal radar scanning elevation angle for rainfall prediction at the current time. Subsequently, forecasting models were established using a single reflectivity and all elevation angles (10 prediction submodels in total) to jointly predict real-time rainfall and determine the optimal predicted values. The study area in southeastern Taiwan included three onshore weather stations (Chenggong, Taitung, and Dawu) and one offshore weather station (Lanyu). Radar reflectivities were collected from the Hualien weather surveillance radar, and data for a total of 14 typhoons that affected the study area in 2008–2017 were gathered. The gated recurrent unit (GRU) neural network was used to establish the forecasting model, with extreme gradient boosting and multiple linear regression as benchmarks. Typhoons Nepartak, Meranti, and Megi were selected for simulation. The results revealed that the input data set merging weather-station data with radar reflectivity at the optimal elevation angle yielded the best short-term rainfall forecasts. Moreover, the GRU neural network can obtain accurate predictions 1, 3, and 6 h before typhoon occurrence. Full article
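The GRU cell at the core of the forecasting model combines an update gate, a reset gate, and a candidate state. A scalar sketch of one step follows; the weights and the reflectivity sequence are illustrative placeholders, not trained values from the paper.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gru_cell(x, h, w):
    """One GRU step for scalar input/state.
    z: update gate, r: reset gate, n: candidate state."""
    z = sigmoid(w["wz"] * x + w["uz"] * h + w["bz"])
    r = sigmoid(w["wr"] * x + w["ur"] * h + w["br"])
    n = math.tanh(w["wn"] * x + w["un"] * (r * h) + w["bn"])
    return (1 - z) * n + z * h  # blend of old state and candidate

# Hypothetical weights; a trained model learns these from radar/rain data.
w = dict(wz=0.5, uz=0.1, bz=0.0, wr=0.4, ur=0.2, br=0.0,
         wn=0.9, un=0.3, bn=0.0)
h = 0.0
for reflectivity in [0.2, 0.5, 0.8]:   # normalized radar reflectivity sequence
    h = gru_cell(reflectivity, h, w)
print(-1.0 < h < 1.0)  # gating and tanh keep the hidden state bounded
```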

13 pages, 2832 KiB  
Article
Laser Cut Interruption Detection from Small Images by Using Convolutional Neural Network
by Benedikt Adelmann, Max Schleier and Ralf Hellmann
Sensors 2021, 21(2), 655; https://doi.org/10.3390/s21020655 - 19 Jan 2021
Cited by 6 | Viewed by 3179
Abstract
In this publication, we use a small convolutional neural network to detect cut interruptions during laser cutting from single images of a high-speed camera. The camera takes images without additional illumination at a resolution of 32 × 64 pixels while cutting steel sheets of varying thicknesses with different laser parameter combinations, and the network classifies them into cuts and cut interruptions. After a short learning period of five epochs on a certain sheet thickness, the images are classified with a low error rate of 0.05%. The use of color images reveals slight advantages, with lower error rates than greyscale images, since during cut interruptions the image color shifts towards blue. Training one network on all sheet thicknesses results in test error rates below 0.1%. This low error rate and the short calculation time of 120 µs on a standard CPU make the system industrially applicable. Full article
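The observation that the image color shifts towards blue during cut interruptions suggests that even a simple color heuristic carries signal. The sketch below is such a heuristic, standing in for the CNN; the pixel values and the margin are hypothetical.

```python
def mean_channel(pixels, channel):
    """Average intensity of one RGB channel over a small image."""
    return sum(p[channel] for p in pixels) / len(pixels)

def is_interruption(pixels, blue_margin=20):
    """Toy stand-in for the CNN: flag a cut interruption when the
    image is noticeably bluer than it is red."""
    return mean_channel(pixels, 2) > mean_channel(pixels, 0) + blue_margin

# Hypothetical 32x64 frames reduced to a few representative (R, G, B) pixels
normal_cut  = [(200, 120, 60)] * 4   # warm spark glow during a clean cut
interrupted = [(60, 90, 190)] * 4    # bluish scene during an interruption
print(is_interruption(normal_cut), is_interruption(interrupted))
```

The trained network, of course, also exploits spatial structure that this per-pixel average discards.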

21 pages, 1740 KiB  
Article
Estimating Occupancy Levels in Enclosed Spaces Using Environmental Variables: A Fitness Gym and Living Room as Evaluation Scenarios
by Andree Vela, Joanna Alvarado-Uribe, Manuel Davila, Neil Hernandez-Gress and Hector G. Ceballos
Sensors 2020, 20(22), 6579; https://doi.org/10.3390/s20226579 - 18 Nov 2020
Cited by 15 | Viewed by 3310
Abstract
The understanding of occupancy patterns has been identified as a key contributor to improving energy efficiency in buildings, since occupancy information can benefit different systems, such as HVAC (Heating, Ventilation, and Air Conditioning), lighting, security, and emergency systems. Over the past decade, researchers have therefore focused on improving the precision of occupancy estimation in enclosed spaces. Although several works have been done, one of the less addressed issues in occupancy research has been the availability of data for contrasting experimental results. Therefore, the main contributions of this work are: (1) the generation of two robust datasets gathered in enclosed spaces (a fitness gym and a living room) labeled with occupancy levels, and (2) the evaluation of three Machine Learning algorithms using different temporal resolutions. The results show that predicting 3–4 occupancy levels from the temperature, humidity, and pressure values provides an accuracy of at least 97%. Full article
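Estimating an occupancy level from temperature, humidity, and pressure is a small multiclass classification task. A nearest-neighbour sketch on made-up labelled readings (not the paper's datasets or its three evaluated algorithms):

```python
import math

def knn_predict(train, query, k=3):
    """Majority vote of the k nearest labelled readings, where each
    reading is (temperature degC, humidity %, pressure hPa)."""
    dists = sorted((math.dist(features, query), level) for features, level in train)
    votes = [level for _, level in dists[:k]]
    return max(set(votes), key=votes.count)

# Hypothetical labelled readings: level 0 = empty ... 3 = crowded
train = [((21.0, 40.0, 1013.0), 0), ((21.5, 42.0, 1013.0), 0),
         ((23.0, 50.0, 1012.0), 1), ((23.5, 52.0, 1012.0), 1),
         ((26.0, 65.0, 1011.0), 3), ((26.5, 68.0, 1011.0), 3)]
print(knn_predict(train, (26.2, 66.0, 1011.0)))  # warm and humid -> crowded
```

In practice the features would be normalized first, since pressure in hPa dwarfs temperature in raw Euclidean distance.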

19 pages, 2347 KiB  
Article
Interterminal Truck Routing Optimization Using Deep Reinforcement Learning
by Taufik Nur Adi, Yelita Anggiane Iskandar and Hyerim Bae
Sensors 2020, 20(20), 5794; https://doi.org/10.3390/s20205794 - 13 Oct 2020
Cited by 19 | Viewed by 4199
Abstract
The continued growth in the volume of global containerized transport necessitates that most major ports in the world improve port productivity by investing in more interconnected terminals. The development of multiterminal systems escalates the complexity of the container transport process and increases the demand for container exchange between different terminals within a port, known as interterminal transport (ITT). Trucks remain the primary mode of freight transportation for moving containers among most terminals. A trucking company needs proper truck routing planning because, as several studies have shown, it plays an essential role in coordinating ITT flows. Furthermore, optimal truck routing in the context of ITT significantly affects port productivity and efficiency. The study of deep reinforcement learning for truck routing optimization is still limited. In this study, we propose deep reinforcement learning to provide truck routes for a given container transport order by considering several significant factors, such as order origin, destination, time window, and due date. To assess its performance, we compared the proposed method with two approaches commonly used to solve truck routing problems. The experimental results showed that the proposed method obtains considerably better results than the other algorithms. Full article
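The routing task can be illustrated with tabular Q-learning on a toy terminal network, where the reward is the negative travel time; the deep RL method in the paper replaces the table with a neural network and adds factors such as time windows and due dates. The graph and travel times below are hypothetical.

```python
import random

# Toy interterminal network: travel times (hours) between terminals
graph = {"A": {"B": 2, "C": 5}, "B": {"C": 1, "D": 4}, "C": {"D": 1}, "D": {}}
GOAL = "D"

def q_learning(episodes=2000, alpha=0.5, gamma=0.9, eps=0.2):
    """Tabular Q-learning: with reward = -travel time, the greedy
    policy converges to the shortest route to the goal terminal."""
    q = {s: {a: 0.0 for a in nbrs} for s, nbrs in graph.items()}
    random.seed(1)
    for _ in range(episodes):
        s = "A"
        while s != GOAL:
            acts = list(graph[s])
            a = random.choice(acts) if random.random() < eps else max(q[s], key=q[s].get)
            s2, cost = a, graph[s][a]
            best_next = max(q[s2].values(), default=0.0)
            q[s][a] += alpha * (-cost + gamma * best_next - q[s][a])
            s = s2
    return q

q = q_learning()
# Greedy rollout from terminal A
route, s = ["A"], "A"
while s != GOAL:
    s = max(q[s], key=q[s].get)
    route.append(s)
print(route)  # the 4-hour route A -> B -> C -> D beats A -> C -> D (6 h)
```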

23 pages, 11705 KiB  
Article
Degradation Tendency Prediction for Pumped Storage Unit Based on Integrated Degradation Index Construction and Hybrid CNN-LSTM Model
by Jianzhong Zhou, Yahui Shan, Jie Liu, Yanhe Xu and Yang Zheng
Sensors 2020, 20(15), 4277; https://doi.org/10.3390/s20154277 - 31 Jul 2020
Cited by 18 | Viewed by 3060
Abstract
Accurate degradation tendency prediction (DTP) is vital for the secure operation of a pumped storage unit (PSU). However, existing techniques and methodologies for DTP still face challenges, such as a lack of appropriate degradation indicators, insufficient accuracy, and a poor capability to track data fluctuations. In this paper, a hybrid model is proposed for the degradation tendency prediction of a PSU, which combines integrated degradation index (IDI) construction with a convolutional neural network-long short-term memory (CNN-LSTM) network. Firstly, the health model of a PSU is constructed with Gaussian process regression (GPR) and the condition parameters of active power, working head, and guide vane opening. Subsequently, to comprehensively quantify the degradation level of the PSU, an IDI is developed using entropy weight (EW) theory. Finally, combining the local feature extraction of the CNN with the time-series representation of the LSTM, the CNN-LSTM model is constructed to realize DTP. To validate the effectiveness of the proposed model, monitoring data collected from a PSU in China are taken as a case study. The root mean square error (RMSE), mean absolute error (MAE), and mean absolute percentage error (MAPE) obtained by the proposed model are 1.1588, 0.8994, 0.0918, and 0.9713, which meet the engineering application requirements. The experimental results show that the proposed model outperforms the comparison models. Full article
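The entropy weight (EW) step that fuses several degradation indicators into one integrated degradation index can be sketched directly: indicators that vary more across samples carry more information and receive larger weights. The indicator values below are made up.

```python
import math

def entropy_weights(matrix):
    """Entropy weight method over a samples-by-indicators matrix.
    Returns one weight per indicator column, summing to 1."""
    m = len(matrix)
    n = len(matrix[0])
    entropies = []
    for j in range(n):
        col = [row[j] for row in matrix]
        total = sum(col)
        probs = [v / total for v in col]
        # Normalized Shannon entropy of the column's value distribution
        e = -sum(p * math.log(p) for p in probs if p > 0) / math.log(m)
        entropies.append(e)
    divergence = [1 - e for e in entropies]  # low entropy -> high information
    return [d / sum(divergence) for d in divergence]

# Hypothetical degradation indicators (rows = samples, columns = indicators);
# the second column varies more, so it should dominate the integrated index.
data = [[0.50, 0.10], [0.52, 0.60], [0.48, 0.90]]
w = entropy_weights(data)
print(w[1] > w[0])  # the more informative indicator gets the larger weight
```

The IDI for a sample is then the weighted sum of its (normalized) indicator values.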

Review


33 pages, 5318 KiB  
Review
A Review on Machine Learning Applications for Solar Plants
by Ekaterina Engel and Nikita Engel
Sensors 2022, 22(23), 9060; https://doi.org/10.3390/s22239060 - 22 Nov 2022
Cited by 3 | Viewed by 2221
Abstract
A solar plant system has complex nonlinear dynamics with uncertainties due to variations in system parameters and insolation. It is therefore difficult to approximate these complex dynamics with conventional algorithms, whereas Machine Learning (ML) methods yield the required performance. ML models are key units in recent sensor systems for solar plant design, forecasting, maintenance, and control, providing better safety, reliability, robustness, and performance than the classical methods usually employed in the hardware and software of solar plants. Considering this, the goal of our paper is to explore and analyze ML technologies and their advantages and shortcomings compared to classical methods for the design, forecasting, maintenance, and control of solar plants. In contrast with other review articles, our research briefly summarizes our intelligent, self-adaptive models for sizing, forecasting, maintenance, and control of a solar plant; sets benchmarks for performance comparison of the reviewed ML models for a solar plant’s system; proposes a simple but effective integration scheme for implementing an ML sensor solar plant system and outlines its future digital transformation into a smart solar plant based on integrated cutting-edge technologies; and estimates the impact of ML technologies, based on the proposed scheme, on a solar plant value chain. Full article

33 pages, 1844 KiB  
Review
Seven-Layer Model in Complex Networks Link Prediction: A Survey
by Hui Wang and Zichun Le
Sensors 2020, 20(22), 6560; https://doi.org/10.3390/s20226560 - 17 Nov 2020
Cited by 16 | Viewed by 3096
Abstract
Link prediction is among the most basic and essential problems in complex networks. It analyzes observed topological, temporal, attributive, label, weight, directional, and symbolic features, together with auxiliary information, to identify missing links and to predict future connections. The network model is of great significance for discussing and analyzing the evolution of the network. In the past two decades, link prediction has attracted extensive attention from experts in various fields, who have published numerous high-level papers, but few works combine interdisciplinary perspectives. This survey analyzes and discusses existing link prediction methods. It introduces the idea of stratification into the classification of link prediction methods for the first time and proposes a seven-layer model, comprising the network, metadata, feature classification, selection input, processing, selection, and output layers. Among them, the processing layer divides link prediction methods into similarity-based, probabilistic, likelihood, supervised learning, semi-supervised learning, unsupervised learning, and reinforcement learning methods. The input features, evaluation metrics, complexity analysis, experimental comparisons, relative merits, common datasets, and open-source implementations for each link prediction method are then discussed in detail. Through this analysis and comparison, we found that link prediction methods based on graph structure features achieve better prediction performance. Finally, future directions for link prediction in complex networks are discussed. Full article
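Similarity-based link prediction, the family the survey finds performs well when built on graph structure features, can be illustrated with the common-neighbours count and the Adamic-Adar index on a toy network:

```python
import math

def common_neighbors(graph, u, v):
    """Similarity score: number of neighbours shared by u and v."""
    return len(graph[u] & graph[v])

def adamic_adar(graph, u, v):
    """Adamic-Adar index: shared neighbours weighted by rarity,
    down-weighting high-degree hubs (degree-1 nodes are skipped)."""
    return sum(1 / math.log(len(graph[z]))
               for z in graph[u] & graph[v] if len(graph[z]) > 1)

# Small undirected network as adjacency sets
graph = {
    "a": {"b", "c", "d"},
    "b": {"a", "c"},
    "c": {"a", "b", "d"},
    "d": {"a", "c"},
}
# Score the currently missing link (b, d): two shared neighbours
# make it a plausible future connection.
print(common_neighbors(graph, "b", "d"), round(adamic_adar(graph, "b", "d"), 3))
```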
