Algorithms, Volume 16, Issue 5 (May 2023) – 44 articles

Cover Story: Motion artifact removal is a crucial preprocessing step in functional near-infrared spectroscopy (fNIRS) analysis; AMARA, an extremely effective automated motion artifact reduction algorithm, relies on accelerometry data, which are not readily available in multimodal fNIRS-fMRI experiments. We propose a method to retrospectively determine acceleration data by individually processing the groups of simultaneously acquired slices in simultaneous multislice (SMS) fMRI scans to measure head motion at high time resolution, and then differentiating the time courses. We validate the method in a memory task study involving 10 participants and demonstrate improved motion correction for deoxyhemoglobin, but not oxyhemoglobin. The results also show a strong overlap between fNIRS and fMRI activation.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • Papers are published in both HTML and PDF formats; PDF is the official version. To view a paper in PDF format, click on the "PDF Full-text" link and open it with the free Adobe Reader.
16 pages, 9134 KiB  
Article
Signal Processing in Optical Frequency Domain Reflectometry Systems Based on Self-Sweeping Fiber Laser with Continuous-Wave Intensity Dynamics
by Nikita R. Poddubrovskii, Ivan A. Lobach and Sergey I. Kablukov
Algorithms 2023, 16(5), 260; https://doi.org/10.3390/a16050260 - 19 May 2023
Cited by 3 | Viewed by 1362
Abstract
We report on the development of an optical frequency domain reflectometry (OFDR) system based on a continuous-wave Er-doped self-sweeping fiber laser. In this work, we investigate the influence of the input data processing procedure in an OFDR system on the resulting reflectograms and noise level. In particular, several types of signal averaging (in the time and frequency domains) and Fourier analysis are applied. We demonstrate that averaging in the frequency domain can be applied to evaluate absolute values of the local scattering amplitudes related to Rayleigh light scattering (RLS), which is associated with the interference of scattering signals on microscopic inhomogeneities in optical fibers. We found that the RLS signal remains unchanged in the case of signal averaging in the time domain, while the noise floor level decreases by 30 dB as the number of averaged points increases from 1 to ~450. At the same time, it becomes possible to detect the spectral composition of the scattering at each point of the fiber using a windowed Fourier transform. As a result, the sensitivity of the developed system allows us to measure the RLS signal at a level of about 20 dB above the noise floor. The described analysis methods can be useful in the development of distributed sensors based on Rayleigh OFDR systems. Full article
(This article belongs to the Special Issue Algorithms and Calculations in Fiber Optics and Photonics)
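
To make the averaging effects concrete, here is a minimal numpy sketch of the two operations the abstract describes: time-domain averaging of repeated beat-signal traces to lower the noise floor, followed by Fourier analysis and a windowed Fourier transform over the averaged trace. All parameters, array names, and the single synthetic "reflector" are illustrative stand-ins, not the authors' pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in data: n_sweeps repeated OFDR beat-signal traces
# (in a real system these come from the photodetector during laser sweeps).
n_sweeps, n_samples = 450, 4096
clean = np.sin(2 * np.pi * 50 * np.arange(n_samples) / n_samples)  # one "reflector"
traces = clean + 0.5 * rng.standard_normal((n_sweeps, n_samples))

# Time-domain averaging: the coherent signal is preserved while
# uncorrelated noise averages down, lowering the noise floor.
avg_trace = traces.mean(axis=0)

# Fourier analysis: the spectrum of the beat signal maps beat frequency
# to position along the fibre (the reflectogram).
reflectogram = np.abs(np.fft.rfft(avg_trace))

# Windowed Fourier transform: spectra of local segments, revealing the
# spectral composition of the scattering at each section of the trace.
win = 256
segments = avg_trace.reshape(-1, win)
stft = np.abs(np.fft.rfft(segments * np.hanning(win), axis=1))
print(reflectogram.argmax(), stft.shape)
```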

10 pages, 295 KiB  
Article
Using Deep-Learned Vector Representations for Page Stream Segmentation by Agglomerative Clustering
by Lukas Busch, Ruben van Heusden and Maarten Marx
Algorithms 2023, 16(5), 259; https://doi.org/10.3390/a16050259 - 18 May 2023
Cited by 1 | Viewed by 994
Abstract
Page stream segmentation (PSS) is the task of retrieving the boundaries that separate source documents given a consecutive stream of documents (for example, sequentially scanned PDF files). The task has recently gained more interest as a result of the digitization efforts of various companies and organizations, as they move towards having all their documents available online for improved searchability and accessibility for users. The current state-of-the-art approach is neural start-of-document page classification, using models such as Visual Geometry Group-16 (VGG-16) and BERT to classify individual pages based on representations of their text and/or images. We view the task of PSS as a clustering task instead, hypothesizing that pages from one document are similar to each other and different from pages in other documents, something that is difficult to incorporate in the current approaches. We compare the segmentation performance of an agglomerative clustering method with a binary classification model based on images on a new publicly available dataset, and experiment with using either pretrained or finetuned image vectors as inputs to the model. To adapt the clustering method to PSS, we propose the switch method to alleviate the effects of pages of the same class having a high similarity, and report an improvement in the scores using this method. Unfortunately, neither clustering with pretrained embeddings nor clustering with finetuned embeddings outperformed start-of-document page classification for PSS. However, clustering with either pretrained or finetuned representations is substantially more effective than the baseline, with finetuned embeddings outperforming pretrained embeddings. Finally, having the number of documents K as part of the input, in our use case a realistic assumption, has a surprisingly significant positive effect. In contrast to earlier papers, we evaluate PSS with the overlap-weighted partial match F1 score, derived from Panoptic Quality in the computer vision domain, a metric that is particularly well suited to PSS as it can be used to measure document segmentation. Full article
(This article belongs to the Special Issue Nature-Inspired Algorithms in Machine Learning)
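
A minimal sketch of the clustering view of PSS, assuming per-page embedding vectors are already available and the number of documents K is known (realistic in the authors' use case). The synthetic embeddings, the cosine metric, and the naive label-to-boundary conversion are our assumptions for illustration, not the paper's exact setup.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering

# Hypothetical input: one embedding per page, in stream order
# (three synthetic "documents" of five pages each).
rng = np.random.default_rng(1)
page_embeddings = np.vstack([rng.normal(c, 0.1, size=(5, 64)) for c in (0.0, 1.0, 2.0)])
K = 3  # number of source documents, assumed known

labels = AgglomerativeClustering(n_clusters=K, metric="cosine",
                                 linkage="average").fit_predict(page_embeddings)

# Turn cluster labels into segment boundaries: a new document starts
# wherever the label changes (assumes clusters come out contiguous).
boundaries = [0] + [i for i in range(1, len(labels)) if labels[i] != labels[i - 1]]
print(boundaries)  # expected: [0, 5, 10]
```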

25 pages, 1896 KiB  
Article
Time-Efficient Identification Procedure for Neurological Complications of Rescue Patients in an Emergency Scenario Using Hardware-Accelerated Artificial Intelligence Models
by Abu Shad Ahammed, Aniebiet Micheal Ezekiel and Roman Obermaisser
Algorithms 2023, 16(5), 258; https://doi.org/10.3390/a16050258 - 18 May 2023
Cited by 1 | Viewed by 1680
Abstract
During an emergency rescue operation, rescuers have to deal with many different health complications, such as cardiovascular, respiratory, neurological, or psychiatric ones. Identifying the common health complications in rescue events is not very difficult or time-consuming, because vital signs or primary observations are usually sufficient; however, it is quite difficult with some complications related to neurology, e.g., schizophrenia, epilepsy with non-motor seizures, or retrograde amnesia, because they cannot be identified from the trend of vital-sign data. The symptoms have a wide spectrum and are often non-distinguishable from other types of complications. Further, waiting for results from medical tests like MRI and ECG is time-consuming and not suitable for emergency cases, where a quick treatment path is an obvious necessity after the diagnosis. In this paper, we present a novel solution for overcoming these challenges by employing artificial intelligence (AI) models in the diagnostic procedure of neurological complications in rescue situations. The novelty lies in the procedure of generating input features from raw rescue data used in AI models, as the data are not like traditional clinical data collected from hospital repositories. Rather, the data were gathered directly from more than 200,000 rescue cases and required natural language processing techniques to extract meaningful information. A step-by-step analysis of developing multiple AI models that can facilitate the fast identification of neurological complications is presented in this paper. Advanced data analytics are used to analyze the complete record of 273,183 rescue events over a period of almost 10 years, including rescuers' analysis of the complications and their diagnostic methods. To develop the detection model, seven different machine learning algorithms were used: Support Vector Machine (SVM), Random Forest (RF), K-nearest neighbor (KNN), Extreme Gradient Boosting (XGB), Logistic Regression (LR), Naive Bayes (NB), and Artificial Neural Network (ANN). Observing the models' performance, we conclude that the neural network and extreme gradient boosting show the best performance in terms of the selected evaluation criteria. To utilize this result in practical scenarios, the paper also depicts the possibility of embedding such machine learning models in hardware like FPGAs. The goal is to achieve fast detection results, which is a primary requirement in any rescue mission. An inference time analysis of the selected ML models on the VTA AI accelerator of the Apache TVM machine learning compiler used for the FPGA is also presented in this research. Full article
(This article belongs to the Special Issue Artificial Intelligence Algorithms for Healthcare)

21 pages, 7218 KiB  
Article
Predicting Road Traffic Accidents—Artificial Neural Network Approach
by Dragan Gatarić, Nenad Ruškić, Branko Aleksić, Tihomir Đurić, Lato Pezo, Biljana Lončar and Milada Pezo
Algorithms 2023, 16(5), 257; https://doi.org/10.3390/a16050257 - 17 May 2023
Cited by 4 | Viewed by 1933
Abstract
Road traffic accidents are a significant public health issue, accounting for almost 1.3 million deaths worldwide annually, with millions more experiencing non-fatal injuries. A variety of subjective and objective factors contribute to the occurrence of traffic accidents, making it difficult to predict and prevent them on new road sections. Artificial neural networks (ANN) have demonstrated their effectiveness in predicting traffic accidents using limited data sets. This study presents two ANN models to predict traffic accidents on common roads in the Republic of Serbia and the Republic of Srpska (Bosnia and Herzegovina) using objective factors that can be easily determined, such as road length, terrain type, road width, average daily traffic volume, and speed limit. The models predict the number of traffic accidents, as well as the severity of their consequences, including fatalities, injuries, and property damage. The developed optimal neural network models showed good generalization capabilities for the collected data and could be used to accurately predict the observed outputs based on the input parameters. The highest values of r^2 for the developed models ANN1 and ANN2 were 0.986, 0.988, and 0.977, and 0.990, 0.969, and 0.990, respectively, for the training, testing, and validation cycles. Identifying the most influential factors can assist in improving road safety and reducing the number of accidents. Overall, this research highlights the potential of ANN in predicting traffic accidents and supporting decision-making in transportation planning. Full article
(This article belongs to the Special Issue Neural Network for Traffic Forecasting)
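
A hedged sketch of this kind of ANN regression using scikit-learn; the five features mirror the objective factors listed above, but the value ranges, network size, and synthetic target relation are invented for illustration and do not reproduce the paper's models.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Synthetic road sections: length (km), terrain type (coded), width (m),
# average daily traffic volume, speed limit (km/h). Target: accident count.
rng = np.random.default_rng(2)
X = rng.uniform([0.5, 0, 5.0, 500, 40], [20.0, 3, 12.0, 30000, 100], size=(200, 5))
y = 0.002 * X[:, 3] * X[:, 0] / X[:, 2] + rng.normal(0, 1, 200)  # made-up relation

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000,
                                   random_state=0))
model.fit(X, y)
print(round(model.score(X, y), 3))  # r^2 on the training data
```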

21 pages, 515 KiB  
Article
Hierarchical Modelling for CO2 Variation Prediction for HVAC System Operation
by Ibrahim Shaer and Abdallah Shami
Algorithms 2023, 16(5), 256; https://doi.org/10.3390/a16050256 - 17 May 2023
Cited by 3 | Viewed by 1713
Abstract
Residential and industrial buildings are significant consumers of energy, which can be reduced by controlling their respective Heating, Ventilation, and Air Conditioning (HVAC) systems. Demand-Controlled Ventilation (DCV) determines the operational times of ventilation systems depending on indoor air quality (IAQ) conditions, including CO2 concentration changes, and the occupants' comfort requirements. The prediction of CO2 concentration changes can act as a proxy estimator of occupancy changes and provide feedback about the utility of current ventilation controls. This paper proposes a Hierarchical Model for CO2 Variation Predictions (HMCOVP) to accurately predict these variations. The proposed framework addresses two concerns in state-of-the-art implementations. First, the hierarchical structure enables fine-tuning of the produced models, facilitating their transferability to different spatial settings. Second, the formulation incorporates time dependencies, defining the relationship between different IAQ factors. Toward that goal, the HMCOVP decouples the variation prediction into two complementary steps. The first step transforms lagged versions of environmental features into image representations to predict the variations' direction. The second step combines the first step's result with environment-specific historical data to predict CO2 variations. Through the HMCOVP, these predictions, which outperformed state-of-the-art approaches, help ventilation systems in their decision-making processes, reducing energy consumption and carbon-based emissions. Full article
(This article belongs to the Special Issue Algorithms for Smart Cities)

26 pages, 8062 KiB  
Article
Neural Network Entropy (NNetEn): Entropy-Based EEG Signal and Chaotic Time Series Classification, Python Package for NNetEn Calculation
by Andrei Velichko, Maksim Belyaev, Yuriy Izotov, Murugappan Murugappan and Hanif Heidari
Algorithms 2023, 16(5), 255; https://doi.org/10.3390/a16050255 - 16 May 2023
Cited by 4 | Viewed by 2964
Abstract
Entropy measures are effective features for time series classification problems. Traditional entropy measures, such as Shannon entropy, use the probability distribution function. However, for the effective separation of time series, new entropy estimation methods are required to characterize the chaotic dynamics of the system. Our concept of Neural Network Entropy (NNetEn) is based on the classification of special datasets in relation to the entropy of the time series recorded in the reservoir of the neural network. NNetEn estimates the chaotic dynamics of time series in an original way and does not take into account probability distribution functions. We propose two new classification metrics: R2 Efficiency and Pearson Efficiency. The efficiency of NNetEn is verified on the separation of two chaotic time series generated by the sine map, using dispersion analysis. For two close dynamic time series (r = 1.1918 and r = 1.2243), the F-ratio reached a value of 124, reflecting the high efficiency of the introduced method in classification problems. The classification of electroencephalography signals from healthy persons and patients with Alzheimer's disease illustrates the practical application of the NNetEn features. Our computations demonstrate the synergistic effect of increasing classification accuracy when applying traditional entropy measures and the NNetEn concept conjointly. An implementation of the algorithms in Python is presented. Full article
(This article belongs to the Special Issue Machine Learning in Medical Signal and Image Processing)
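
The benchmark regimes quoted above can be reproduced in a few lines, assuming the sine map x_{k+1} = r·sin(πx_k). The histogram-based Shannon entropy below is a simple distribution-based stand-in used only to illustrate the feature-extraction idea; it is not the reservoir-based NNetEn measure.

```python
import numpy as np

def sine_map_series(r, n=1000, x0=0.1, burn=100):
    """Iterate the sine map x_{k+1} = r * sin(pi * x_k)."""
    x, out = x0, []
    for k in range(n + burn):
        x = r * np.sin(np.pi * x)
        if k >= burn:
            out.append(x)
    return np.array(out)

def hist_entropy(series, bins=32):
    """Shannon entropy of a value histogram (stand-in feature, not NNetEn)."""
    counts, _ = np.histogram(series, bins=bins)
    p = counts[counts > 0] / counts.sum()
    return -(p * np.log2(p)).sum()

# The two close dynamical regimes compared in the abstract.
starts = np.linspace(0.05, 0.5, 30)
a = [hist_entropy(sine_map_series(1.1918, x0=s)) for s in starts]
b = [hist_entropy(sine_map_series(1.2243, x0=s)) for s in starts]
print(round(np.mean(a), 3), round(np.mean(b), 3))
```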

11 pages, 690 KiB  
Article
Well-Separated Pair Decompositions for High-Dimensional Datasets
by Domagoj Matijević
Algorithms 2023, 16(5), 254; https://doi.org/10.3390/a16050254 - 15 May 2023
Viewed by 1194
Abstract
Well-separated pair decomposition (WSPD) is a well-known geometric decomposition used for encoding distances, introduced in a seminal paper by Paul B. Callahan and S. Rao Kosaraju in 1995. A WSPD compresses the O(n^2) pairwise distances of n given points from R^d into O(n) space for a fixed dimension d. However, the main problem with this remarkable decomposition is the “hidden” dependence on the dimension d, which in practice does not allow the computation of a WSPD for any dimension d > 2, or d > 3 at best. In this work, I will show how to compute a WSPD for points in R^d for any dimension d. Instead of computing a WSPD directly in R^d, I propose to learn a nonlinear mapping and transform the data to a lower-dimensional space R^d', d' = 2 or d' = 3, since only in such low-dimensional spaces can a WSPD be efficiently computed. Furthermore, I estimate the quality of the computed WSPD in the original R^d space. My experiments show that, for different synthetic and real-world datasets, my approach allows a WSPD of size O(n) to still be computed for points in R^d for dimensions d much larger than two or three in practice. Full article
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
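
For readers unfamiliar with the decomposition, the core predicate is easy to state: two point sets form an s-well-separated pair if balls enclosing them are at least s times their radius apart. The sketch below checks this with centroid balls as a conservative stand-in for minimum enclosing balls (an assumption on our part; the paper's construction and quality estimate are more involved).

```python
import numpy as np

def is_well_separated(A, B, s=2.0):
    """Check the s-well-separated condition for two point sets: enclose
    each set in a ball around its centroid and require the gap between
    the balls to be at least s times the larger radius."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    r = max(np.linalg.norm(A - ca, axis=1).max(),
            np.linalg.norm(B - cb, axis=1).max())
    gap = np.linalg.norm(ca - cb) - 2 * r
    return gap >= s * r

rng = np.random.default_rng(3)
A = rng.normal(0.0, 0.1, size=(50, 8))   # a tight cluster in R^8
B = rng.normal(5.0, 0.1, size=(50, 8))   # a far-away cluster
print(is_well_separated(A, B, s=2.0))    # True: one pair stands in for all 2500 distances
```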

22 pages, 5386 KiB  
Article
Modernising Receiver Operating Characteristic (ROC) Curves
by Leslie R. Pendrill, Jeanette Melin, Anne Stavelin and Gunnar Nordin
Algorithms 2023, 16(5), 253; https://doi.org/10.3390/a16050253 - 13 May 2023
Cited by 3 | Viewed by 1961
Abstract
The justification for making a measurement can be sought by asking what decisions are based on measurement, such as in assessing the compliance of a quality characteristic of an entity in relation to a specification limit, SL. The relative performance of testing devices and classification algorithms used in assessing compliance is often evaluated using the venerable and ever-popular receiver operating characteristic (ROC). However, the ROC tool has potentially all the limitations of classic test theory (CTT), such as non-linearity, effects of ordinality, and the confounding of task difficulty and instrument ability. These limitations, inherent and often unacknowledged when using the ROC tool, are tackled here for the first time with a modernised approach combining measurement system analysis (MSA) and item response theory (IRT), using data from pregnancy testing as an example. The new method of assessing device ability from separate Rasch IRT regressions for each axis of ROC curves is found to perform significantly better, with correlation coefficients against traditional area-under-curve metrics of at least 0.92, exceeding those of linearised ROC plots such as Linacre's, and is recommended to replace other approaches for device assessment. The resulting improved measurement quality of each ROC curve achieved with this original approach should enable more reliable decision-making in conformity assessment in many scenarios, including machine learning, where its use as a metric for assessing classification algorithms has become almost indispensable. Full article
(This article belongs to the Collection Feature Papers in Algorithms)

40 pages, 10583 KiB  
Article
Efficient Mathematical Lower Bounds for City Logistics Distribution Network with Intra-Echelon Connection of Facilities: Bridging the Gap from Theoretical Model Formulations to Practical Solutions
by Zhiqiang Niu, Shengnan Wu and Xuesong (Simon) Zhou
Algorithms 2023, 16(5), 252; https://doi.org/10.3390/a16050252 - 12 May 2023
Viewed by 1749
Abstract
Focusing on the dynamic improvement of the underlying service network configuration, this paper aims to address a specific challenge of redesigning a multi-echelon city logistics distribution network. By considering the intra-echelon connection of facilities within the same layer of the echelon, we propose a new distribution network design model by reformulating the classical quadratic assignment problem (QAP). To minimize the overall transportation costs, the proposed model jointly optimizes two types of decisions to enable agile distribution with dynamic “shortcuts”: (i) the allocation of warehouses to supply the corresponding distribution centers (DCs), and (ii) the demand coverage decision from distribution centers to delivery stations. Furthermore, a customized branch-and-bound algorithm is developed, where the lower bound is obtained by adopting the Gilmore and Lawler lower bound (GLB) for the QAP. We conduct extensive computational experiments, highlighting the significant contribution of the GLB-oriented lower bound to obtaining practical solutions; this type of efficient mathematical lower bound offers a powerful tool for balancing theoretical research ideas with practical and industrial applicability. Full article
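
For reference, here is a compact sketch of the Gilmore and Lawler lower bound for the Koopmans-Beckmann QAP, min over permutations p of the sum of F[i,j]·D[p(i),p(j)], using SciPy's linear assignment solver; the random test matrices are illustrative only.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def gilmore_lawler_bound(F, D):
    """Gilmore-Lawler lower bound for the QAP. For each tentative assignment
    i -> j, the cheapest completion is bounded by the minimum scalar product
    of the off-diagonal rows (ascending x descending); a linear assignment
    over these per-pair bounds then gives a valid global lower bound."""
    n = len(F)
    L = np.empty((n, n))
    for i in range(n):
        f = np.sort(np.delete(F[i], i))            # ascending
        for j in range(n):
            d = np.sort(np.delete(D[j], j))[::-1]  # descending
            L[i, j] = F[i, i] * D[j, j] + f @ d
    rows, cols = linear_sum_assignment(L)
    return L[rows, cols].sum()

rng = np.random.default_rng(4)
F = rng.integers(0, 10, (6, 6))
D = rng.integers(0, 10, (6, 6))
print(gilmore_lawler_bound(F, D))
```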

21 pages, 3792 KiB  
Article
Adjustable Pheromone Reinforcement Strategies for Problems with Efficient Heuristic Information
by Nikola Ivković, Robert Kudelić and Marin Golub
Algorithms 2023, 16(5), 251; https://doi.org/10.3390/a16050251 - 12 May 2023
Cited by 2 | Viewed by 1139
Abstract
Ant colony optimization (ACO) is a well-known class of swarm intelligence algorithms suitable for solving many NP-hard problems. An important component of such algorithms is a record of pheromone trails that reflect the colony's experience with previously constructed solutions of the problem instance that is being solved. By using pheromones, the algorithm builds a probabilistic model that is exploited for constructing new and, hopefully, better solutions. Traditionally, there are two different strategies for updating pheromone trails. The best-so-far strategy (global best) is rather greedy and can cause a too-fast convergence of the algorithm toward some suboptimal solutions. The other strategy is named iteration best, and it promotes exploration and slower convergence, which is sometimes too slow and lacks focus. To allow better adaptability of ant colony optimization algorithms, we use κ-best, max-κ-best, and 1/λ-best strategies, which form the entire spectrum of strategies between best-so-far and iteration best and go beyond it. Selecting a suitable strategy depends on the type of problem, parameters, heuristic information, and conditions in which the ACO is used. In this research, we use two representative combinatorial NP-hard problems, the symmetric traveling salesman problem (TSP) and the asymmetric traveling salesman problem (ATSP), for which very effective heuristic information is widely known, to empirically analyze the influence of strategies on the algorithmic performance. The experiments are carried out on 45 TSP and 47 ATSP instances by using the MAX-MIN ant system (MMAS) variant of ACO with and without local optimizations, with each problem instance repeated 101 times for 24 different pheromone reinforcement strategies. The results show that, by using adjustable pheromone reinforcement strategies, the MMAS outperformed the MMAS with classical strategies in a large majority of cases. Full article
(This article belongs to the Special Issue Swarm Intelligence Applications and Algorithms)
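
A hedged sketch of how one such adjustable update might look, based on our reading of the κ-best idea: after evaporation, pheromone is deposited along the κ-th best tour of the iteration, so κ = 1 recovers iteration-best behaviour while larger κ spreads reinforcement toward weaker solutions. The symmetric pheromone matrix and 1/cost deposit are standard TSP conventions, not the paper's exact formulation.

```python
import numpy as np

def reinforce(pheromone, tours, costs, kappa, rho=0.1, q=1.0):
    """kappa-best pheromone reinforcement (sketch): evaporate, then deposit
    along the kappa-th best tour found in the current iteration."""
    pheromone *= (1.0 - rho)                       # evaporation
    order = np.argsort(costs)                      # best tour first
    idx = order[min(kappa, len(order)) - 1]
    tour = tours[idx]
    for a, b in zip(tour, np.roll(tour, -1)):      # edges of the closed tour
        pheromone[a, b] += q / costs[idx]
        pheromone[b, a] = pheromone[a, b]          # symmetric TSP
    return pheromone

# Toy usage: 5 cities, 3 candidate tours from one iteration.
tau = np.full((5, 5), 0.5)
tours = [np.array([0, 1, 2, 3, 4]), np.array([0, 2, 1, 4, 3]), np.array([4, 3, 2, 1, 0])]
costs = np.array([10.0, 12.0, 15.0])
tau = reinforce(tau, tours, costs, kappa=2)
print(tau.round(3))
```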

13 pages, 695 KiB  
Article
Recovering the Forcing Function in Systems with One Degree of Freedom Using ANN and Physics Information
by Shadab Anwar Shaikh, Harish Cherukuri and Taufiquar Khan
Algorithms 2023, 16(5), 250; https://doi.org/10.3390/a16050250 - 12 May 2023
Viewed by 1619
Abstract
In engineering design, oftentimes a system's dynamic response is known or can be measured, but the source generating these responses is not known. The mathematical problem where the focus is on inferring the source terms of the governing equations from a set of observations is known as an inverse source problem (ISP). ISPs are traditionally solved by optimization techniques with regularization, but in the past few years, there has been a lot of interest in approaching these problems from a deep-learning viewpoint. In this paper, we propose a deep learning approach, infused with physics information, to recover the forcing function (source term) of systems with one degree of freedom from the response data. We test our architecture first on smooth forcing functions, and later on functions involving abruptly changing gradients and jump discontinuities, in the case of a linear system. Finally, we recover a harmonic, the sum of two harmonics, and a Gaussian function in the case of a non-linear system. The results obtained are promising and demonstrate the efficacy of this approach in recovering the forcing functions from the data. Full article
(This article belongs to the Special Issue Deep Learning Architecture and Applications)
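
A highly simplified sketch of the recovery idea for a linear 1-DOF system m·x'' + c·x' + k·x = f(t): a small network f_θ(t) is trained so that the measured response balances the equation of motion. Here the response and its derivatives are synthesised analytically, so the implied forcing is known in closed form; the paper's architecture and training setup are more involved.

```python
import torch

# Stand-in "measured" response of m*x'' + c*x' + k*x = f(t); with x = sin(t)
# the implied forcing is f(t) = (k - m)*sin(t) + c*cos(t).
m, c, k = 1.0, 0.3, 4.0
t = torch.linspace(0, 10, 500).reshape(-1, 1)
x, xd, xdd = torch.sin(t), torch.cos(t), -torch.sin(t)

f_net = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(),
                            torch.nn.Linear(32, 1))
opt = torch.optim.Adam(f_net.parameters(), lr=1e-3)

for step in range(2000):
    opt.zero_grad()
    # Physics residual: the learned forcing must balance the equation of motion.
    residual = m * xdd + c * xd + k * x - f_net(t)
    loss = (residual ** 2).mean()
    loss.backward()
    opt.step()

print(loss.item())  # approaches 0 as f_net fits (k - m)*sin(t) + c*cos(t)
```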

25 pages, 3991 KiB  
Article
Method for Determining the Dominant Type of Human Breathing Using Motion Capture and Machine Learning
by Yulia Orlova, Alexander Gorobtsov, Oleg Sychev, Vladimir Rozaliev, Alexander Zubkov and Anastasia Donsckaia
Algorithms 2023, 16(5), 249; https://doi.org/10.3390/a16050249 - 12 May 2023
Cited by 2 | Viewed by 1757
Abstract
Since the COVID-19 pandemic, the demand for respiratory rehabilitation has significantly increased. This makes developing home (remote) rehabilitation methods using modern technology essential. New techniques and tools, including wireless sensors and motion capture systems, have been developed to implement remote respiratory rehabilitation. Significant attention during respiratory rehabilitation is paid to the type of human breathing. Remote rehabilitation requires the development of automated methods of breath analysis. Most currently developed methods for analyzing breathing do not work with different types of breathing. These methods are either designed for one type (for example, diaphragmatic) or simply analyze the lungs' condition. Developing methods of determining the types of human breathing is necessary for conducting remote respiratory rehabilitation efficiently. We propose a method of determining the type of breathing using wireless sensors within a motion capture system. To develop that method, spectral analysis and machine learning methods were used to detect the prevailing spectrum, the marker coordinates, and the prevailing frequency for different types of breathing. An algorithm for determining the type of human breathing is described. It is based on approximating the shape of graphs of distances between markers using sinusoidal waves. Based on the features of the resulting waves, we trained machine learning models to determine the types of breathing. After the first stage of training, we found that the maximum accuracy of the machine learning models was below 0.63, which was too low to be reliably used in respiratory rehabilitation. Based on the analysis of the obtained accuracy, the training and running time of the models, and the error function, we chose the strategy of achieving higher accuracy by increasing the training and running time of the model and using a two-stage method composed of two machine learning models trained separately. The first model determines whether the breath is of the mixed type; if it does not predict the mixed type of breathing, the second model determines whether breathing is thoracic or abdominal. The highest accuracy achieved by the composite model was 0.81, which surpasses single models and is high enough for use in respiratory rehabilitation. Therefore, using three wireless sensors placed on the patient's body and a two-stage algorithm with machine learning models, it was possible to determine the type of human breathing with high enough precision to conduct remote respiratory rehabilitation. The developed algorithm can be used in building rehabilitation applications. Full article
(This article belongs to the Special Issue Machine Learning in Healthcare and Biomedical Application II)

16 pages, 5558 KiB  
Article
Time-Series Forecasting of Seasonal Data Using Machine Learning Methods
by Vadim Kramar and Vasiliy Alchakov
Algorithms 2023, 16(5), 248; https://doi.org/10.3390/a16050248 - 10 May 2023
Cited by 4 | Viewed by 5172
Abstract
The models for forecasting time series with seasonal variability can be used to build automatic real-time control systems. For example, predicting the water flow into a wastewater treatment plant can be used to calculate the optimal electricity consumption. The article describes a performance analysis of various machine learning methods (SARIMA, Holt-Winters Exponential Smoothing, ETS, Facebook Prophet, XGBoost, and Long Short-Term Memory) and data-preprocessing algorithms implemented in Python. The general methodology of model building and the requirements of the input data sets are described. All models use actual data from sensors of the monitoring system. The novelty of this work is in an approach that allows using limited historical data sets to obtain predictions with reasonable accuracy. The implemented algorithms made it possible to achieve an R-squared accuracy of more than 0.95. The forecasting calculation time is minimized, which makes it possible to run the algorithm in real-time control and embedded systems. Full article
(This article belongs to the Special Issue Machine Learning for Time Series Analysis)
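
As a flavour of one of the compared methods, here is a minimal Holt-Winters run with statsmodels on a synthetic hourly series with daily seasonality; the series, its period, and the additive trend/seasonal choices are stand-in assumptions, not the paper's dataset or tuning.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

# Stand-in for a sensor series with daily seasonality (e.g., inflow data).
idx = pd.date_range("2023-01-01", periods=14 * 24, freq="h")
rng = np.random.default_rng(5)
y = pd.Series(100 + 20 * np.sin(2 * np.pi * idx.hour / 24)
              + rng.normal(0, 2, len(idx)), index=idx)

fit = ExponentialSmoothing(y, trend="add", seasonal="add",
                           seasonal_periods=24).fit()
forecast = fit.forecast(24)  # next day, hour by hour
print(forecast.round(1).head())
```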

21 pages, 4209 KiB  
Article
Process Chain-Oriented Design Evaluation of Multi-Material Components by Knowledge-Based Engineering
by Kevin Herrmann, Stefan Plappert, Paul Christoph Gembarski and Roland Lachmayer
Algorithms 2023, 16(5), 247; https://doi.org/10.3390/a16050247 - 10 May 2023
Cited by 1 | Viewed by 1676
Abstract
The design of components suitable for manufacturing requires the application of knowledge about the manufacturing process chain with which the component is to be manufactured. This article presents an assistance system for decision support in the context of design for manufacturing. The assistance system includes explicit manufacturing process chain knowledge and has an inference engine that can automatically evaluate the manufacturability of a component design based on a given manufacturing process chain and resolve emerging manufacturing conflicts by making adjustments on the component or resource side. A link with a CAD system additionally enables the three-dimensional representation of derived manufacturing stages and manufacturing resources. Within the assistance system, a manufacturing process chain is understood as a configurable design object and is implemented via a constraint satisfaction problem (CSP). Furthermore, the abstraction of manufacturing processes required by the finite domains can be reduced, so that the necessary modeling resolution is achieved, by incorporating empirical or simulative surrogate models into the CSP. The assistance system was conceptually validated on a tailored forming process chain for the production of a multi-material shaft and provides added value, as valuable manufacturing information for component designs is automatically derived and made available in explicit form during component development. Full article

25 pages, 6569 KiB  
Article
Subgroup Discovery in Machine Learning Problems with Formal Concepts Analysis and Test Theory Algorithms
by Igor Masich, Natalya Rezova, Guzel Shkaberina, Sergei Mironov, Mariya Bartosh and Lev Kazakovtsev
Algorithms 2023, 16(5), 246; https://doi.org/10.3390/a16050246 - 09 May 2023
Viewed by 2264
Abstract
A number of real-world problems of automatic grouping of objects, or clustering, require a reasonable solution and the possibility of interpreting the result. More specific is the problem of identifying homogeneous subgroups of objects. The number of groups in such a dataset is not specified, and it is required to justify and describe the proposed grouping model. As a tool for interpretable machine learning, we consider formal concept analysis (FCA). To reduce the problem with real attributes to a problem that allows the use of FCA, we use the search for the optimal number and location of cut points and the optimization of the support set of attributes. The approach to identifying homogeneous subgroups was tested on tasks for which interpretability is important: the problem of clustering industrial products according to primary tests (for example, transistors, diodes, and microcircuits) as well as gene expression data (collected to solve the problem of predicting cancerous tumors). For the data under consideration, logical concepts are identified and organized into a lattice of formal concepts. The revealed concepts are evaluated according to indicators of informativeness and can be considered as homogeneous subgroups of elements together with their indicative descriptions. The proposed approach makes it possible to single out homogeneous subgroups of elements and provides a description of their characteristics, which can be considered as tougher norms that the elements of the subgroup satisfy. A comparison is made with the COBWEB algorithm designed for conceptual clustering of objects. This algorithm is aimed at discovering probabilistic concepts. The resulting lattices of logical concepts and probabilistic concepts for the considered datasets are simple and easy to interpret. Full article

18 pages, 474 KiB  
Article
Consensus Big Data Clustering for Bayesian Mixture Models
by Christos Karras, Aristeidis Karras, Konstantinos C. Giotopoulos, Markos Avlonitis and Spyros Sioutas
Algorithms 2023, 16(5), 245; https://doi.org/10.3390/a16050245 - 09 May 2023
Cited by 4 | Viewed by 1754
Abstract
In the context of big-data analysis, the clustering technique holds significant importance for the effective categorization and organization of extensive datasets. However, pinpointing the ideal number of clusters and handling high-dimensional data can be challenging. To tackle these issues, several strategies have been suggested, such as a consensus clustering ensemble that yields more significant outcomes compared to individual models. Another valuable technique for cluster analysis is Bayesian mixture modelling, which is known for its adaptability in determining cluster numbers. Traditional inference methods such as Markov chain Monte Carlo may be computationally demanding and limit the exploration of the posterior distribution. In this work, we introduce an innovative approach that combines consensus clustering and Bayesian mixture models to improve big-data management and simplify the process of identifying the optimal number of clusters in diverse real-world scenarios. By addressing the aforementioned hurdles and boosting accuracy and efficiency, our method considerably enhances cluster analysis. This fusion of techniques offers a powerful tool for managing and examining large and intricate datasets, with possible applications across various industries. Full article
(This article belongs to the Special Issue Nature-Inspired Algorithms in Machine Learning)
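
A sketch of the consensus-clustering ingredient: aggregate many base clusterings into a co-association matrix and then cluster that matrix. For brevity, the base learners here are K-means runs with varying K rather than the Bayesian mixture models combined in the paper.

```python
import numpy as np
from sklearn.cluster import KMeans
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

rng = np.random.default_rng(6)
X = np.vstack([rng.normal(c, 0.3, size=(40, 5)) for c in (0.0, 3.0, 6.0)])

# Co-association matrix: fraction of base clusterings in which two
# points fall into the same cluster.
n, runs = len(X), 30
C = np.zeros((n, n))
for seed in range(runs):
    labels = KMeans(n_clusters=int(rng.integers(2, 6)), n_init=5,
                    random_state=seed).fit_predict(X)
    C += labels[:, None] == labels[None, :]
C /= runs

# Cluster the consensus matrix itself, using 1 - C as a distance.
Z = linkage(squareform(1 - C, checks=False), method="average")
final = fcluster(Z, t=3, criterion="maxclust")
print(np.unique(final, return_counts=True))  # expect three groups of 40
```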

16 pages, 1825 KiB  
Article
A Privacy-Preserving Symptoms Retrieval System with the Aid of Homomorphic Encryption and Private Set Intersection Schemes
by Yi-Wei Wang and Ja-Ling Wu
Algorithms 2023, 16(5), 244; https://doi.org/10.3390/a16050244 - 09 May 2023
Cited by 2 | Viewed by 1434
Abstract
This work presents an efficient and effective system allowing hospitals to share patients' private information while ensuring that each hospital database's medical records will not be leaked; moreover, the privacy of patients who access the data will also be protected. We assume that the threat model of the hospital's security is semi-honest (i.e., curious but honest), and that each hospital has hired a trusted medical records department administrator to manage patients' private information from other hospitals. With the help of Homomorphic Encryption- and Private Set Intersection-related algorithms, our proposed system protects patient privacy, allows physicians to obtain patient information across hospitals, and prevents threats such as troublesome insider attacks and man-in-the-middle attacks. Full article
(This article belongs to the Collection Feature Papers in Algorithms for Multidisciplinary Applications)

11 pages, 327 KiB  
Article
Optimizing Crop Yield and Reducing Energy Consumption in Greenhouse Control Using PSO-MPC Algorithm
by Liyun Gong, Miao Yu and Stefanos Kollias
Algorithms 2023, 16(5), 243; https://doi.org/10.3390/a16050243 - 07 May 2023
Cited by 2 | Viewed by 1492
Abstract
In this study, we present a novel smart greenhouse control algorithm that optimizes crop yield while minimizing energy consumption costs. To achieve this, we rely on both a greenhouse climate model and a greenhouse crop yield model. Our approach involves applying the model predictive control (MPC) method, which utilizes the particle swarm optimization (PSO) algorithm to identify optimal controllable parameters such as heating, lighting, and ventilation levels. The objective of the optimization is to maximize crop yield while minimizing energy consumption costs. We demonstrate the superiority of our proposed control algorithm in terms of performance and energy efficiency compared to the traditional control algorithm. The effectiveness of the PSO-based optimization strategy for finding optimal controllable parameters for MPC control is also demonstrated, outperforming traditional genetic algorithm optimization. This study provides a promising approach to smart greenhouse control with the potential for increasing crop yield while minimizing energy costs. Full article
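
A minimal sketch of the optimisation kernel: a global-best particle swarm minimising a cost over box-bounded controllable parameters (heating, lighting, ventilation). In the paper this cost would be evaluated through the greenhouse climate and crop yield models over the MPC horizon; the quadratic toy cost, weights, and setpoints below are invented for illustration.

```python
import numpy as np

def pso(objective, dim, n_particles=30, iters=200, lo=0.0, hi=1.0, seed=0):
    """Minimal global-best particle swarm optimiser for box-bounded inputs."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_f = np.apply_along_axis(objective, 1, x)
    g = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        f = np.apply_along_axis(objective, 1, x)
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[pbest_f.argmin()].copy()
    return g, pbest_f.min()

# Toy stage cost trading off deviation from a yield-optimal setpoint
# against energy use; the MPC loop would re-solve this at every step.
setpoint = np.array([0.6, 0.4, 0.5])     # heating, lighting, ventilation
cost = lambda u: np.sum((u - setpoint) ** 2) + 0.1 * np.sum(u)
u_opt, f_opt = pso(cost, dim=3)
print(u_opt.round(3), round(f_opt, 4))
```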

27 pages, 958 KiB  
Article
Parallel Algorithm for Solving Overdetermined Systems of Linear Equations, Taking into Account Round-Off Errors
by Dmitry Lukyanenko
Algorithms 2023, 16(5), 242; https://doi.org/10.3390/a16050242 - 07 May 2023
Cited by 4 | Viewed by 2049
Abstract
The paper proposes a parallel algorithm for solving large overdetermined systems of linear algebraic equations with a dense matrix. This algorithm is based on the use of a modification of the conjugate gradient method, which is able to take into account rounding errors accumulated during calculations when making a decision to terminate the iterative process. The parallel algorithm is constructed in such a way that it takes into account the capabilities of the message passing interface (MPI) parallel programming technology, which is used for the software implementation of the proposed algorithm. The programming examples are shown using the Python programming language and the mpi4py package, but all programs are built in such a way that they can be easily rewritten using the C/C++/Fortran programming languages. The advantage of using the modern MPI-4.0 standard is demonstrated. Full article
(This article belongs to the Collection Parallel and Distributed Computing: Algorithms and Applications)
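
A hedged structural sketch (not the author's code) of how such a solver can be laid out with mpi4py: each rank owns a stripe of rows of A, and conjugate gradient on the normal equations A^T A x = A^T b needs only n-dimensional Allreduce operations. The plain residual-norm stopping rule below stands in for the paper's round-off-aware termination criterion.

```python
# Run with, e.g.: mpiexec -n 4 python cg_normal_eq.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# Each rank holds a horizontal stripe of the tall dense matrix A (m >> n)
# and the matching slice of b; here both are synthesised locally.
m_local, n = 1000, 50
rng = np.random.default_rng(rank)
A_loc = rng.standard_normal((m_local, n))
x_true = np.arange(1.0, n + 1.0)
b_loc = A_loc @ x_true

def ata_dot(p):
    """Global (A^T A) p from local stripes: one n-vector Allreduce."""
    local = A_loc.T @ (A_loc @ p)
    out = np.empty_like(local)
    comm.Allreduce(local, out, op=MPI.SUM)
    return out

r = np.empty(n)
comm.Allreduce(A_loc.T @ b_loc, r, op=MPI.SUM)  # r = A^T b
x, p, rs = np.zeros(n), r.copy(), r @ r
for _ in range(200):
    Ap = ata_dot(p)
    alpha = rs / (p @ Ap)
    x += alpha * p
    r -= alpha * Ap
    rs_new = r @ r
    if np.sqrt(rs_new) < 1e-10:  # naive stop; the paper's variant
        break                    # monitors accumulated round-off instead
    p = r + (rs_new / rs) * p
    rs = rs_new

if rank == 0:
    print(np.allclose(x, x_true))
```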

33 pages, 1874 KiB  
Article
Quantum Circuit-Width Reduction through Parameterisation and Specialisation
by Youssef Moawad, Wim Vanderbauwhede and René Steijl
Algorithms 2023, 16(5), 241; https://doi.org/10.3390/a16050241 - 05 May 2023
Viewed by 1920
Abstract
As quantum computing technology continues to develop, the need for research into novel quantum algorithms is growing. However, such algorithms cannot yet be reliably tested on actual quantum hardware, which is still limited in several ways, including qubit coherence times, connectivity, and available qubits. To facilitate the development of novel algorithms despite this, simulators on classical computing systems are used to verify the correctness of an algorithm, and study its behaviour under different error models. In general, this involves operating on a memory space that grows exponentially with the number of qubits. In this work, we introduce quantum circuit transformations that allow for the construction of parameterised circuits for quantum algorithms. The parameterised circuits are in an ideal form to be processed by quantum compilation tools, such that the circuit can be partially evaluated prior to simulation, and a smaller specialised circuit can be constructed by eliminating fixed input qubits. We show significant reduction in the number of qubits for various quantum arithmetic circuits. Divide-by-n-bits quantum integer dividers are used as an example demonstration. It is shown that the complexity reduces from 4n+2 to 3n+2 qubits in the specialised versions. For quantum algorithms involving divide-by-8 arithmetic operations, a reduction by a factor of 2^8 = 256 in required memory is achieved for classical simulation, reducing the memory required from 137 GB to 0.53 GB. Full article
(This article belongs to the Special Issue Space-Efficient Algorithms and Data Structures)
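
The quoted memory figures follow from back-of-envelope arithmetic, assuming a dense statevector with 16 bytes per amplitude (double-precision complex) and, inferred from the stated numbers rather than stated in the abstract, a 33-qubit full circuit:

```python
# Dense statevector memory: 2**q amplitudes x 16 bytes (complex128).
# Eliminating 8 fixed input qubits divides the requirement by 2**8 = 256.
for q in (33, 33 - 8):
    print(f"{q} qubits: {2**q * 16 / 1e9:.2f} GB")
# 33 qubits: 137.44 GB
# 25 qubits: 0.54 GB  (matches the 137 GB -> 0.53 GB figures up to rounding)
```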

25 pages, 4849 KiB  
Article
Cooperative Attention-Based Learning between Diverse Data Sources
by Harshit Srivastava and Ravi Sankar
Algorithms 2023, 16(5), 240; https://doi.org/10.3390/a16050240 - 04 May 2023
Viewed by 1301
Abstract
Cooperative attention provides a new method to study how epidemic diseases spread. It is derived from social data with the help of survey data. Cooperative attention enables the detection of possible anomalies in an event by formulating the spread variable, which determines the disease spread rate decision score. This work proposes determining the spread variable using a disease spread model and cooperative learning. It is a four-stage model that determines answers by identifying semantic cooperation, using the spread model to identify events, infection factors, location spread, and changes in the spread rate. The proposed model analyses the spread of COVID-19 throughout the United States using a new approach, defining data cooperation using the dynamic variable of the spread rate and the optimal cooperative strategy. Game theory is used to define the cooperative strategy and to analyze the dynamic variable determined with the help of a control algorithm. Our analysis successfully identifies the spread rate of the disease from social data with an accuracy of 67% and can dynamically optimize the decision model using a control algorithm with a complexity of order O(n^2). Full article

12 pages, 862 KiB  
Article
An Improved Heteroscedastic Modeling Method for Chest X-ray Image Classification with Noisy Labels
by Qingji Guan, Qinrun Chen and Yaping Huang
Algorithms 2023, 16(5), 239; https://doi.org/10.3390/a16050239 - 04 May 2023
Cited by 1 | Viewed by 1218
Abstract
Chest X-ray image classification suffers from high inter-similarity in appearance, which makes it vulnerable to noisy labels. The data-dependent and heteroscedastic character of the label noise makes chest X-ray image classification even more challenging. To address this problem, in this paper, we first revisit heteroscedastic modeling (HM) for image classification with noisy labels. Rather than modeling all images in one fell swoop as in HM, we instead propose a novel framework that considers the noisy and clean samples separately for chest X-ray image classification. The proposed framework consists of a Gaussian Mixture Model-based noise detector and a Heteroscedastic Modeling-based noise-aware classification network, named GMM-HM. The noise detector is constructed to judge whether a sample is clean or noisy. The noise-aware classification network models the noisy and clean samples under heteroscedastic and homoscedastic hypotheses, respectively. By building correlations between the corrupted noisy samples, the GMM-HM is much more robust than HM, which uses only the homoscedastic hypothesis. Compared with HM, we show consistent improvements on the ChestX-ray2017 dataset with different levels of symmetric and asymmetric noise. Furthermore, we also conduct experiments on a real asymmetric noisy dataset, ChestX-ray14. The experimental results on ChestX-ray14 show the superiority of the proposed method. Full article
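
A small sketch of the noise-detector ingredient: fit a two-component Gaussian mixture to per-sample training losses and treat the low-loss mode as clean, routing samples to the homoscedastic or heteroscedastic branch accordingly. The synthetic loss values and the 0.5 threshold are our assumptions; the detector's actual inputs in the paper may differ.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Per-sample training losses: mislabeled samples tend to sit in the
# high-loss mode. Synthetic stand-in values for illustration.
rng = np.random.default_rng(7)
losses = np.concatenate([rng.normal(0.3, 0.1, 900),    # mostly clean
                         rng.normal(2.0, 0.4, 100)])   # mislabeled

gmm = GaussianMixture(n_components=2, random_state=0).fit(losses.reshape(-1, 1))
clean_comp = gmm.means_.argmin()                # component with lower mean loss
p_clean = gmm.predict_proba(losses.reshape(-1, 1))[:, clean_comp]
is_clean = p_clean > 0.5   # route: homoscedastic vs heteroscedastic branch
print(is_clean.sum(), "samples flagged clean")
```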

36 pages, 3556 KiB  
Article
Time Series Analysis by Fuzzy Logic Methods
by Sergey M. Agayan, Dmitriy A. Kamaev, Shamil R. Bogoutdinov, Andron O. Aleksanyan and Boris V. Dzeranov
Algorithms 2023, 16(5), 238; https://doi.org/10.3390/a16050238 - 03 May 2023
Cited by 2 | Viewed by 1306
Abstract
The method of analyzing data known as Discrete Mathematical Analysis (DMA) incorporates fuzzy mathematics and logic. This paper focuses on applying DMA to study the morphology of time series by utilizing the language of fuzzy mathematics. The morphological characteristics of the time series, such as background, slopes, and vertices, are considered fuzzy sets within the domain of its definition. This allows for the use of fuzzy logic in examining the morphology of time series, ultimately leading to the detection of anomalies. Full article
(This article belongs to the Special Issue Space-Efficient Algorithms and Data Structures)

21 pages, 8254 KiB  
Article
A Mayfly-Based Approach for CMOS Inverter Design with Symmetrical Switching
by Fadi Nessir Zghoul, Haneen Alteehi and Ahmad Abuelrub
Algorithms 2023, 16(5), 237; https://doi.org/10.3390/a16050237 - 30 Apr 2023
Cited by 2 | Viewed by 1437
Abstract
This paper presents a novel approach to designing a CMOS inverter using the Mayfly Optimization Algorithm (MA). The MA is utilized in this paper to obtain symmetrical switching of the inverter, which is crucial in many digital electronic circuits. The MA method is found to have a fast convergence rate compared to other optimization methods, such as the Symbiotic Organisms Search (SOS), Particle Swarm Optimization (PSO), and Differential Evolution (DE). A total of eight different sets of design parameters and criteria were analyzed in Case I, and the results confirmed compatibility between the MA and Spice techniques. The maximum discrepancy in fall time across all design sets was found to be 2.075711 ns. In Case II, the objective was to create a symmetrical inverter with identical fall and rise times. The difference in fall and rise times was minimized based on Spice simulations, with the maximum difference measuring 0.9784731 ns. In Case III, the CMOS inverter was designed to achieve symmetrical fall and rise times as well as propagation delays. The Spice simulation results demonstrated that symmetry had been successfully achieved, with the minimum difference measuring 0.312893 ns and the maximum difference measuring 1.076540 ns. These Spice simulation results are consistent with the MA results. The results show that the MA is a reliable and simple optimization technique that can be used in similar electronic topologies. Full article

28 pages, 5319 KiB  
Article
Twenty Years of Machine-Learning-Based Text Classification: A Systematic Review
by Ashokkumar Palanivinayagam, Claude Ziad El-Bayeh and Robertas Damaševičius
Algorithms 2023, 16(5), 236; https://doi.org/10.3390/a16050236 - 29 Apr 2023
Cited by 9 | Viewed by 4490
Abstract
Machine-learning-based text classification is one of the leading research areas and has a wide range of applications, which include spam detection, hate speech identification, reviews, rating summarization, sentiment analysis, and topic modelling. Widely used machine-learning-based research differs in terms of the datasets, training methods, performance evaluation, and comparison methods used. In this paper, we surveyed 224 papers published between 2003 and 2022 that employed machine learning for text classification. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement is used as the guideline for the systematic review process. The comprehensive differences in the literature are analyzed in terms of six aspects: datasets, machine learning models, best accuracy, performance evaluation metrics, training and testing splitting methods, and comparisons among machine learning models. Furthermore, we highlight the limitations and research gaps in the literature. Although the research works included in the survey perform well in terms of text classification, improvement is required in many areas. We believe that this survey paper will be useful for researchers in the field of text classification. Full article
(This article belongs to the Special Issue Machine Learning in Statistical Data Processing)

27 pages, 893 KiB  
Article
Official International Mahjong: A New Playground for AI Research
by Yunlong Lu, Wenxin Li and Wenlong Li
Algorithms 2023, 16(5), 235; https://doi.org/10.3390/a16050235 - 28 Apr 2023
Cited by 3 | Viewed by 2207
Abstract
Games have long been benchmarks and testbeds for AI research. In recent years, with the development of new algorithms and the boost in computational power, many popular games played by humans have been solved by AI systems. Mahjong is one of the most popular games played in China and has spread worldwide; it presents challenges for AI research due to its multi-agent nature, rich hidden information, and complex scoring rules, but it has been somewhat overlooked in the community of game AI research. In 2020 and 2022, we held two AI competitions of Official International Mahjong, the standard variant of Mahjong rules, in conjunction with a top-tier AI conference called IJCAI. We are the first to adopt the duplicate format in evaluating Mahjong AI agents, to mitigate the high variance in this game. By comparing the algorithms and performance of AI agents in the competitions, we conclude that supervised learning and reinforcement learning are the current state-of-the-art methods in this game and perform much better than heuristic methods based on human knowledge. We also held a human-versus-AI competition and found that the top AI agent still could not beat professional human players. We claim that this game can be a new benchmark for AI research due to its complexity and popularity among people. Full article
(This article belongs to the Special Issue Algorithms for Games AI)
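The duplicate format mentioned in the abstract can be sketched as follows: every deal is replayed with the agents rotated through all four seats, so each agent is scored on the same tile walls and luck-driven variance is reduced. Here `play_game` is a hypothetical stand-in for a full Mahjong engine, assumed to return one score per seat for a given deal; this is an illustration of the format, not the competition's implementation.

```python
# A minimal sketch of duplicate-format evaluation for four Mahjong agents.
import random

def duplicate_tournament(agents, num_walls, play_game):
    """Score four agents under the duplicate format.

    `play_game(wall_seed, seating)` is assumed to return a list of four
    scores, one per seat, for the deal determined by `wall_seed`.
    """
    totals = {agent: 0 for agent in agents}
    for _ in range(num_walls):
        wall_seed = random.getrandbits(64)   # same deal for all rotations
        for r in range(4):                   # rotate agents through seats
            seating = agents[r:] + agents[:r]
            scores = play_game(wall_seed, seating)
            for agent, score in zip(seating, scores):
                totals[agent] += score
    return totals
```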
16 pages, 725 KiB  
Article
Deep Cross-Network Alignment with Anchor Node Pair Diverse Local Structure
by Yinghui Wang, Wenjun Wang, Minglai Shao and Yueheng Sun
Algorithms 2023, 16(5), 234; https://doi.org/10.3390/a16050234 - 28 Apr 2023
Viewed by 1182
Abstract
Network alignment (NA) offers a comprehensive way to build associations between different networks by identifying shared nodes. While the majority of current NA methods rely on the topological consistency assumption, which posits that shared nodes across different networks typically have similar local structures or neighbors, we argue that anchor nodes, which play a pivotal role in NA, face a more challenging scenario that is often overlooked. In this paper, we conduct extensive statistical analysis across networks to investigate the connection status of labeled anchor node pairs and categorize them into four situations. Based on our analysis, we propose an end-to-end network alignment framework that represents nodes as distributions rather than point vectors to better handle the structural diversity of networks. To mitigate the influence of specific nodes, we introduce a mask mechanism during the representation learning process. In addition, we utilize meta-learning to generalize the information learned on labeled anchor node pairs to other node pairs. Finally, we perform comprehensive experiments on both real-world and synthetic datasets to confirm the efficacy of our proposed method. The experimental results demonstrate that the proposed model significantly outperforms the state-of-the-art methods. Full article
(This article belongs to the Topic Complex Networks and Social Networks)
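One way to make the "nodes as distributions" idea concrete: if each node is embedded as a diagonal Gaussian, candidate anchor pairs can be scored with the closed-form 2-Wasserstein distance between Gaussians. The sketch below is only an illustration under that assumption, with invented toy embeddings; in the paper, the representations are learned end-to-end with masking and meta-learning.

```python
# A minimal sketch of scoring a cross-network node pair when each node is
# represented as a diagonal Gaussian rather than a point vector.
import numpy as np

def w2_squared(mu1, var1, mu2, var2):
    """Squared 2-Wasserstein distance between diagonal Gaussians."""
    return np.sum((mu1 - mu2) ** 2) + np.sum((np.sqrt(var1) - np.sqrt(var2)) ** 2)

# Toy embeddings for one node from each network: (mean, diagonal variance).
node_g1 = (np.array([0.20, 1.00]), np.array([0.10, 0.30]))
node_g2 = (np.array([0.25, 0.90]), np.array([0.12, 0.28]))
print("alignment distance (lower = more similar):",
      w2_squared(*node_g1, *node_g2))
```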
17 pages, 583 KiB  
Article
A Bayesian Multi-Armed Bandit Algorithm for Dynamic End-to-End Routing in SDN-Based Networks with Piecewise-Stationary Rewards
by Pedro Santana and José Moura
Algorithms 2023, 16(5), 233; https://doi.org/10.3390/a16050233 - 28 Apr 2023
Viewed by 1458
Abstract
To handle the exponential growth of data-intensive network edge services and automatically solve new challenges in routing management, machine learning is steadily being incorporated into software-defined networking solutions. In this vein, the article presents the design of a piecewise-stationary Bayesian multi-armed bandit approach for online, optimal end-to-end dynamic routing of data flows in the context of programmable networking systems. This learning-based approach has been analyzed with simulated and emulated data, showing the proposal’s ability to sequentially and proactively self-discover the end-to-end routing path with minimal delay among a considerable number of alternatives, even when facing abrupt changes in transmission delay distributions due to both variable congestion levels on path network devices and dynamic delays on transmission links. Full article
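A minimal sketch of this family of methods (not the authors' exact algorithm): discounted Thompson sampling over candidate end-to-end paths, where the reward is the negative measured delay and a discount factor lets the posterior forget stale observations after an abrupt change in the delay distributions. All class and parameter names below are illustrative.

```python
# A hypothetical sketch of bandit-based path selection under
# piecewise-stationary delays, using discounted Thompson sampling.
import random

class DiscountedTSRouter:
    def __init__(self, paths, gamma=0.95):
        self.paths = paths
        self.gamma = gamma          # forgetting factor for old observations
        self.n = {p: 1e-3 for p in paths}   # effective observation counts
        self.s = {p: 0.0 for p in paths}    # discounted reward sums

    def choose(self):
        # Sample a plausible mean reward per path; route on the best sample.
        samples = {p: random.gauss(self.s[p] / self.n[p],
                                   1.0 / self.n[p] ** 0.5)
                   for p in self.paths}
        return max(samples, key=samples.get)

    def update(self, path, delay_ms):
        # Discount all statistics, then credit the observed negative delay.
        for p in self.paths:
            self.n[p] *= self.gamma
            self.s[p] *= self.gamma
        self.n[path] += 1.0
        self.s[path] += -delay_ms

router = DiscountedTSRouter(paths=["A-B-D", "A-C-D"])
path = router.choose()
router.update(path, delay_ms=12.5)   # measured end-to-end delay (toy value)
```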
17 pages, 6707 KiB  
Article
Machine-Learning-Based Model for Hurricane Storm Surge Forecasting in the Lower Laguna Madre
by Cesar Davila Hernandez, Jungseok Ho, Dongchul Kim and Abdoul Oubeidillah
Algorithms 2023, 16(5), 232; https://doi.org/10.3390/a16050232 - 28 Apr 2023
Cited by 4 | Viewed by 1859
Abstract
During every Atlantic hurricane season, storms represent a constant risk to Texan coastal communities and other communities along the Atlantic coast of the United States. A storm surge refers to the abnormal rise of the sea water level due to hurricanes and storms; traditionally, hurricane storm surge predictions are generated using complex numerical models that require large amounts of computing power, which grow proportionally with the extent of the area covered by the model. In this work, a machine-learning-based storm surge forecasting model for the Lower Laguna Madre is implemented. The model considers gridded forecasted weather data on winds and atmospheric pressure over the Gulf of Mexico, as well as previous sea levels obtained from a Laguna Madre ocean circulation numerical model. Using a combination of architectures such as Convolutional Neural Networks (CNNs) and Long Short-Term Memory (LSTM) networks, the resulting model is capable of identifying upcoming hurricanes and predicting storm surges, as well as normal conditions, in several locations along the Lower Laguna Madre. Overall, the model is able to predict storm surge peaks with an average difference of 0.04 m when compared with a numerical model and an average RMSE of 0.08 for normal conditions and 0.09 for storm surge conditions. Full article
(This article belongs to the Special Issue Deep Learning Architecture and Applications)
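A combined CNN and LSTM forecaster of the kind described can be sketched as follows: a CNN encodes each gridded weather frame (e.g., wind components and pressure), and an LSTM models the sequence of encoded frames to predict the water level. All grid dimensions and layer sizes below are illustrative assumptions, not the paper's configuration.

```python
# A minimal sketch of a CNN + LSTM surge forecaster; shapes are illustrative.
import tensorflow as tf

TIME_STEPS, H, W, CHANNELS = 12, 32, 32, 3  # e.g., u-wind, v-wind, pressure

# CNN that encodes a single gridded weather frame into a feature vector.
frame_encoder = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
])

# LSTM over the sequence of encoded frames, ending in a water-level output.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(TIME_STEPS, H, W, CHANNELS)),
    tf.keras.layers.TimeDistributed(frame_encoder),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(1),   # predicted sea level at one station
])
model.compile(optimizer="adam", loss="mse")
model.summary()
```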
25 pages, 4698 KiB  
Article
Order-Based Schedule of Dynamic Topology for Recurrent Neural Network
by Diego Sanchez Narvaez, Carlos Villaseñor, Carlos Lopez-Franco and Nancy Arana-Daniel
Algorithms 2023, 16(5), 231; https://doi.org/10.3390/a16050231 - 28 Apr 2023
Viewed by 1488
Abstract
It is well known that part of a neural network's capacity is determined by its topology and the training process employed. How a neural network should be designed, and how it should be updated every time new data are acquired, remains an open issue, since it is usually addressed through trial and error, relying mainly on the experience of the designer. To address this issue, an algorithm that provides plasticity to recurrent neural networks (RNNs) applied to time series forecasting is proposed. A grow-and-prune decision-making paradigm is created, based on the calculation of the data's order, indicating in which situations during the re-training process (when new data are received) the network should increase or decrease its connections. The result is a dynamic architecture that can facilitate the design and implementation of the network, as well as improve its behavior. The proposed algorithm was tested on several time series from the M4 forecasting competition using Long Short-Term Memory (LSTM) models. Better results were obtained for most of the tests, with the new models, both larger and smaller than their static versions, showing an average improvement of up to 18%. Full article
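A minimal sketch of such a grow-and-prune schedule, with an invented order estimate standing in for the paper's actual calculation: when new data arrive, an order proxy is compared against the current hidden size to decide whether the LSTM topology should grow, shrink, or stay unchanged before re-training. The `estimate_order` heuristic and the scaling factor are placeholders for illustration only.

```python
# A hypothetical sketch of an order-based grow-and-prune schedule.
import numpy as np

def estimate_order(series, max_lag=20, threshold=0.2):
    """Crude order proxy: largest lag with autocorrelation above threshold."""
    x = (series - series.mean()) / (series.std() + 1e-9)
    order = 1
    for lag in range(1, min(max_lag, len(x) - 1)):
        ac = np.corrcoef(x[:-lag], x[lag:])[0, 1]
        if abs(ac) > threshold:
            order = lag
    return order

def schedule_topology(hidden_units, new_data):
    target = estimate_order(new_data) * 4   # illustrative scaling factor
    if target > hidden_units:
        return hidden_units + 1   # grow: add connections before re-training
    if target < hidden_units:
        return hidden_units - 1   # prune: remove connections
    return hidden_units           # keep the current topology

hidden = 8
hidden = schedule_topology(hidden, np.sin(np.linspace(0, 20, 200)))
print("new hidden size:", hidden)
```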