Algorithms, Volume 15, Issue 10 (October 2022) – 56 articles

Cover Story: We propose single-cell topological simplicial analysis (scTSA), a topological data analysis framework to analyze high-throughput time series data. Applying this approach to single-cell gene expression profiles from local networks of cells in different developmental stages reveals a previously unseen topology of cellular ecology. These networks contain an abundance of cliques of single-cell profiles bound into cavities that guide the emergence of more complicated habitation forms. We visualize these ecological patterns with their topological simplicial architectures and highlight critical developmental stages. As a nonlinear, model-independent, and unsupervised framework, scTSA can be applied to tracing multi-scale cell lineage, identifying critical stages, or creating pseudo-time series.
20 pages, 4571 KiB  
Article
A Fast Point Clouds Registration Algorithm Based on ISS-USC Feature for the 3D Laser Scanner
by Aihua Wu, Yinjia Ding, Jingfeng Mao and Xudong Zhang
Algorithms 2022, 15(10), 389; https://doi.org/10.3390/a15100389 - 21 Oct 2022
Cited by 4 | Viewed by 1836
Abstract
Point cloud registration is a key step in data processing for a 3D laser scanner to obtain complete information about an object's surface, and many algorithms exist for it. To overcome the slow calculation speed and low accuracy of existing point cloud registration algorithms, a fast registration algorithm based on an improved voxel filter and ISS-USC features is proposed. First, the improved voxel filter is used for down-sampling to reduce the size of the original point cloud data. Second, the intrinsic shape signature (ISS) feature point detection algorithm is used to extract feature points from the down-sampled point cloud data, and the unique shape context (USC) descriptor is then computed to describe the extracted feature points. Next, an improved random sample consensus (RANSAC) algorithm is used for coarse registration to obtain an initial position. Finally, the iterative closest point (ICP) algorithm based on a KD tree is used for fine registration, which transforms the point clouds scanned by the 3D laser scanner at different angles into the same coordinate system. Comparisons with other algorithms and a registration experiment on a monitor's VGA connector verify the effectiveness and feasibility of the proposed algorithm: it has the fastest registration speed while maintaining high registration accuracy.
(This article belongs to the Topic Intelligent Systems and Robotics)
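As a companion illustration of the fine-registration stage described in the abstract above, the sketch below implements a minimal KD-tree-accelerated ICP loop with an SVD (Kabsch) alignment step. It is a textbook version under our own naming, not the authors' implementation, and it omits the voxel filtering, ISS-USC description, and RANSAC coarse alignment.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, iters=50, tol=1e-6):
    """Minimal KD-tree ICP: iteratively match nearest neighbors and
    solve for the best rigid transform via SVD (Kabsch algorithm)."""
    tree = cKDTree(target)                     # KD tree over the fixed cloud
    prev_err = np.inf
    for _ in range(iters):
        dist, idx = tree.query(source)         # nearest target point per source point
        matched = target[idx]
        mu_s, mu_t = source.mean(0), matched.mean(0)
        H = (source - mu_s).T @ (matched - mu_t)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:               # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        source = (source - mu_s) @ R.T + mu_t  # apply the rigid update
        err = dist.mean()
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    return source
```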

11 pages, 452 KiB  
Article
Computational Modeling of Lymph Filtration and Absorption in the Lymph Node by Boundary Integral Equations
by Alexey Setukha and Rufina Tretiakova
Algorithms 2022, 15(10), 388; https://doi.org/10.3390/a15100388 - 21 Oct 2022
Cited by 2 | Viewed by 1450
Abstract
We develop a numerical method for solving three-dimensional problems of fluid filtration and absorption in a piecewise homogeneous medium by means of boundary integral equations. This method is applied to a simulation of the lymph flow in a lymph node. The lymph node is considered as a piecewise homogeneous domain containing porous media. The lymph flow is described by Darcy’s law. Taking into account the lymph absorption, we propose an integral representation for the velocity and pressure fields, where the lymph absorption imitates the lymph outflow from a lymph node through a system of capillaries. The original problem is reduced to a system of boundary integral equations, and a numerical algorithm for solving this system is provided. We simulate the lymph velocity and pressure as well as the total lymph flux. The method is verified by comparison with experimental data.
(This article belongs to the Special Issue Computational Methods and Optimization for Numerical Analysis)
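For background (our addition, not taken from the paper): Darcy's law relates the filtration velocity to the pressure gradient, and absorption can be modeled as a pressure-proportional sink. One illustrative form is

```latex
\mathbf{v} = -\frac{k}{\mu}\,\nabla p, \qquad
\nabla\cdot\mathbf{v} = -\alpha\,p \quad \text{(in absorbing regions; } \alpha = 0 \text{ elsewhere)},
```

where k is the permeability, \mu the dynamic viscosity, p the pressure, and \alpha a hypothetical absorption coefficient; the paper's actual absorption model is expressed through its boundary integral representation.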

13 pages, 5012 KiB  
Article
Dynamics and Stability on a Family of Optimal Fourth-Order Iterative Methods
by Alicia Cordero, Miguel A. Leonardo Sepúlveda and Juan R. Torregrosa
Algorithms 2022, 15(10), 387; https://doi.org/10.3390/a15100387 - 21 Oct 2022
Cited by 5 | Viewed by 1287
Abstract
In this manuscript, we propose a parametric family of iterative methods with fourth-order convergence, and the stability of the class is studied using tools from complex dynamics. We obtain the fixed and critical points of the rational operator associated with the family. A stability analysis of the fixed points allows us to find the sets of parameter values for which the behavior of the corresponding method is stable or unstable; therefore, we can select the regions of the parameter space in which the methods behave more efficiently when applied to solving nonlinear equations, as well as the regions in which the schemes exhibit chaotic behavior.
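The abstract does not reproduce the family itself; as a reference point for readers, a classical optimal fourth-order scheme of this kind is Ostrowski's method, sketched below (our illustration, not the paper's parametric family).

```python
def ostrowski(f, df, x0, tol=1e-12, max_iter=50):
    """Ostrowski's two-step method: fourth-order convergence to a simple root."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        y = x - fx / df(x)                                # Newton predictor
        fy = f(y)
        x_next = y - fy * fx / (df(x) * (fx - 2.0 * fy))  # weighted corrector
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

root = ostrowski(lambda x: x**3 - 2.0, lambda x: 3.0 * x**2, 1.0)  # cube root of 2
```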

14 pages, 356 KiB  
Article
Biomedical Image Classification via Dynamically Early Stopped Artificial Neural Network
by Giorgia Franchini, Micaela Verucchi, Ambra Catozzi, Federica Porta and Marco Prato
Algorithms 2022, 15(10), 386; https://doi.org/10.3390/a15100386 - 20 Oct 2022
Viewed by 1584
Abstract
It is well known that biomedical imaging analysis plays a crucial role in the healthcare sector and produces a huge quantity of data. These data can be exploited to study diseases and their evolution in a deeper way or to predict their onsets. In particular, image classification represents one of the main problems in the biomedical imaging context. Due to the data complexity, biomedical image classification can be carried out by trainable mathematical models, such as artificial neural networks. When employing a neural network, one of the main challenges is to determine the optimal duration of the training phase to achieve the best performance. This paper introduces a new adaptive early stopping technique that sets the optimal training time based on dynamic selection strategies for fixing the learning rate and the mini-batch size of the stochastic gradient method exploited as the optimizer. The numerical experiments, carried out on different artificial neural networks for image classification, show that the developed adaptive early stopping procedure matches the performance reported in the literature while finalizing the training in fewer epochs. The numerical examples have been performed on the CIFAR100 dataset and on two distinct MedMNIST2D datasets, a large-scale lightweight benchmark for biomedical image classification.
(This article belongs to the Special Issue Algorithms for Biomedical Image Analysis and Processing)
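As context, the sketch below shows the common patience-based variant of early stopping; the paper's technique goes further by adapting the stopping decision to dynamically selected learning rates and mini-batch sizes. The train_epoch and validate callables are hypothetical placeholders.

```python
def train_with_early_stopping(train_epoch, validate, max_epochs=200, patience=10):
    """Generic patience-based early stopping: halt once the validation loss
    has not improved for `patience` consecutive epochs."""
    best_loss, best_epoch = float("inf"), 0
    for epoch in range(max_epochs):
        train_epoch()                  # one pass of the stochastic gradient optimizer
        loss = validate()              # held-out loss used as the stopping signal
        if loss < best_loss:
            best_loss, best_epoch = loss, epoch
        elif epoch - best_epoch >= patience:
            break
    return best_epoch, best_loss
```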

38 pages, 1009 KiB  
Article
Anomaly Detection in Financial Time Series by Principal Component Analysis and Neural Networks
by Stéphane Crépey, Noureddine Lehdili, Nisrine Madhar and Maud Thomas
Algorithms 2022, 15(10), 385; https://doi.org/10.3390/a15100385 - 19 Oct 2022
Cited by 5 | Viewed by 3917
Abstract
A major concern when dealing with financial time series involving a wide variety of market risk factors is the presence of anomalies. These induce a miscalibration of the models used to quantify and manage risk, resulting in potentially erroneous risk measures. We propose an approach that aims to improve anomaly detection in financial time series, overcoming most of the inherent difficulties. Valuable features are extracted from the time series by compressing and reconstructing the data through principal component analysis. We then define an anomaly score using a feedforward neural network. A time series is considered to be contaminated when its anomaly score exceeds a given cutoff value. This cutoff value is not a hand-set parameter; rather, it is calibrated as a neural network parameter through the minimization of a customized loss function. The efficiency of the proposed approach compared to several well-known anomaly detection algorithms is numerically demonstrated on both synthetic and real data sets, with high and stable performance being achieved with the PCA NN approach. We show that value-at-risk estimation errors are reduced when the proposed anomaly detection model is used with a basic imputation approach to correct the anomaly.
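A minimal sketch of the compression/reconstruction feature extraction described above, assuming scikit-learn; the quantile cutoff in the last line is a hand-set stand-in for the learned cutoff that the paper calibrates as a neural network parameter.

```python
import numpy as np
from sklearn.decomposition import PCA

def pca_reconstruction_error(X, n_components=5):
    """Compress and reconstruct each series with PCA; a large
    reconstruction error hints at a contaminated series."""
    pca = PCA(n_components=n_components).fit(X)
    X_hat = pca.inverse_transform(pca.transform(X))
    return np.linalg.norm(X - X_hat, axis=1)

errors = pca_reconstruction_error(np.random.randn(100, 250))  # 100 series, 250 dates
flags = errors > np.quantile(errors, 0.95)                    # illustrative cutoff only
```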

24 pages, 1704 KiB  
Article
Model and Training Method of the Resilient Image Classifier Considering Faults, Concept Drift, and Adversarial Attacks
by Viacheslav Moskalenko, Vyacheslav Kharchenko, Alona Moskalenko and Sergey Petrov
Algorithms 2022, 15(10), 384; https://doi.org/10.3390/a15100384 - 19 Oct 2022
Cited by 3 | Viewed by 2317
Abstract
Modern trainable image recognition models are vulnerable to different types of perturbations; hence, the development of resilient intelligent algorithms for safety-critical applications remains a relevant concern to reduce the impact of perturbation on model performance. This paper proposes a model and training method for a resilient image classifier capable of efficiently functioning despite various faults, adversarial attacks, and concept drifts. The proposed model has a multi-section structure with a hierarchy of optimized class prototypes and hyperspherical class boundaries, which provides adaptive computation, perturbation absorption, and graceful degradation. The proposed training method entails the application of a complex loss function assembled from its constituent parts in a particular way depending on the result of perturbation detection and the presence of new labeled and unlabeled data. The training method implements principles of self-knowledge distillation, the compactness maximization of class distribution and the interclass gap, the compression of feature representations, and consistency regularization. Consistency regularization makes it possible to utilize both labeled and unlabeled data to obtain a robust model and implement continuous adaptation. Experiments are performed on the publicly available CIFAR-10 and CIFAR-100 datasets using model backbones based on ResBlock modules from the ResNet50 architecture and Swin transformer blocks. It is experimentally proven that the proposed prototype-based classifier head is characterized by a higher level of robustness and adaptability in comparison with the dense layer-based classifier head. It is also shown that the multi-section structure and self-knowledge distillation conserve resources when processing simple samples under normal conditions and increase computational costs to improve the reliability of decisions when exposed to perturbations.
(This article belongs to the Special Issue Self-Learning and Self-Adapting Algorithms in Machine Learning)

16 pages, 3361 KiB  
Article
Distributed Fuzzy Cognitive Maps for Feature Selection in Big Data Classification
by K. Haritha, M. V. Judy, Konstantinos Papageorgiou, Vassilis C. Georgiannis and Elpiniki Papageorgiou
Algorithms 2022, 15(10), 383; https://doi.org/10.3390/a15100383 - 19 Oct 2022
Cited by 4 | Viewed by 1692
Abstract
The features of a dataset play an important role in the construction of a machine learning model. Because big datasets often have a large number of features, they may contain features that are less relevant to the machine learning task, which makes the process more time-consuming and complex. In order to facilitate learning, it is always recommended to remove the less significant features. The process of eliminating the irrelevant features and finding an optimal feature set involves comprehensively searching the dataset and considering every subset in the data. In this research, we present a distributed fuzzy-cognitive-map-based learning wrapper method for feature selection that is able to extract those features from a dataset that play the most significant role in decision making. Fuzzy cognitive maps (FCMs) represent a hybrid computing technique combining elements of both fuzzy logic and cognitive maps. Using Spark’s resilient distributed datasets (RDDs), the proposed model can work effectively in a distributed manner for quick, in-memory processing along with effective iterative computations. According to the experimental results, when the proposed model is applied to a classification task, the features selected by the model help to expedite the classification process. The selection of relevant features using the proposed algorithm is on par with existing feature selection algorithms. In conjunction with a random forest classifier, the proposed model produced an average accuracy above 90%, as opposed to 85.6% accuracy when no feature selection strategy was adopted.
(This article belongs to the Special Issue Algorithms in Data Classification)
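For readers new to FCMs, the core iteration is a weighted, squashed propagation of concept activations; a minimal NumPy sketch (our illustration, without the Spark distribution or the wrapper feature-selection loop) follows.

```python
import numpy as np

def fcm_step(A, W, lam=1.0):
    """One fuzzy cognitive map update: every concept aggregates the weighted
    influence of its neighbors through a sigmoid squashing function.
    W[i, j] is the influence of concept i on concept j."""
    return 1.0 / (1.0 + np.exp(-lam * (A + W.T @ A)))

A = np.array([0.5, 0.2, 0.8])                 # initial concept activations
W = np.array([[0.0,  0.6, -0.3],
              [0.4,  0.0,  0.5],
              [0.0, -0.7,  0.0]])
for _ in range(20):                           # iterate toward a steady state
    A = fcm_step(A, W)
```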

25 pages, 751 KiB  
Article
Modeling Different Deployment Variants of a Composite Application in a Single Declarative Deployment Model
by Miles Stötzner, Steffen Becker, Uwe Breitenbücher, Kálmán Képes and Frank Leymann
Algorithms 2022, 15(10), 382; https://doi.org/10.3390/a15100382 - 19 Oct 2022
Cited by 2 | Viewed by 1620
Abstract
For automating the deployment of composite applications, declarative deployment models are typically used. Depending on the context, the deployment of an application has to fulfill different requirements, such as costs and elasticity. As a consequence, one and the same application, i.e., its components and their dependencies, often needs to be deployed in different variants. If each deployment variant is described using an individual deployment model, the result is quickly a large number of models that are error-prone to maintain. Deployment technologies such as Terraform or Ansible support conditional components and dependencies, which allow modeling different deployment variants of a composite application in a single deployment model. However, there are deployment technologies, such as TOSCA and Docker Compose, that do not support such conditional elements. To address this, we extend the Essential Deployment Metamodel (EDMM) with conditional components and dependencies. EDMM is a declarative deployment model that can be mapped to several deployment technologies, including Terraform, Ansible, TOSCA, and Docker Compose. Preprocessing such an extended model (conditional elements are evaluated and either preserved or removed) generates an EDMM-conformant model. As a result, conditional elements can be integrated on top of existing deployment technologies that are unaware of such concepts. We evaluate this by implementing a preprocessor for TOSCA, called OpenTOSCA Vintner, which employs the open-source TOSCA orchestrators xOpera and Unfurl to execute the generated TOSCA-conformant models.
(This article belongs to the Collection Parallel and Distributed Computing: Algorithms and Applications)

17 pages, 812 KiB  
Article
Weibull-Open-World (WOW) Multi-Type Novelty Detection in CartPole3D
by Terrance E. Boult, Nicolas M. Windesheim, Steven Zhou, Christopher Pereyda and Lawrence B. Holder
Algorithms 2022, 15(10), 381; https://doi.org/10.3390/a15100381 - 18 Oct 2022
Cited by 2 | Viewed by 1535
Abstract
Algorithms for automated novelty detection and management are of growing interest but must address the inherent uncertainty from variations in non-novel environments while detecting the changes from the novelty. This paper expands on a recent unified framework to develop an operational theory for novelty that includes multiple (sub)types of novelty. As an example, this paper explores the problem of multi-type novelty detection in a 3D version of CartPole, wherein the cart's Weibull-Open-World control agent (WOW-agent) is confronted by different subtypes/levels of novelty from multiple independent agents moving in the environment. The WOW-agent must balance the pole and detect and characterize the novelties while adapting to maintain that balance. The approach develops static, dynamic, and prediction-error measures of dissimilarity to address different signals/sources of novelty. The WOW-agent uses Extreme Value Theory, applied per dimension of the dissimilarity measures, to detect outliers, and combines different dimensions to characterize the novelty. In blind/sequestered testing, the system detects nearly 100% of the non-nuisance novelties, detects many nuisance novelties, and is shown to be better than novelty detection using a Gaussian-based approach. We also show that the WOW-agent's lookahead collision-avoiding control is significantly better than a baseline controller trained with a Deep Q-learning Network.
(This article belongs to the Collection Feature Paper in Algorithms and Complexity Theory)
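A toy version of the Extreme Value Theory thresholding idea, assuming SciPy: fit a Weibull distribution to dissimilarity scores gathered under non-novel conditions and flag scores in the extreme tail. The tail probability and the synthetic scores are our assumptions, not the paper's settings.

```python
import numpy as np
from scipy.stats import weibull_min

def evt_threshold(scores, tail_prob=1e-3):
    """Fit a Weibull to nominal dissimilarity scores; scores beyond the
    (1 - tail_prob) quantile are treated as novelty outliers."""
    c, loc, scale = weibull_min.fit(scores, floc=0.0)
    return weibull_min.ppf(1.0 - tail_prob, c, loc=loc, scale=scale)

nominal = np.abs(np.random.randn(1000))   # stand-in for one dissimilarity dimension
cutoff = evt_threshold(nominal)
is_novel = np.abs(np.random.randn(50)) > cutoff
```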

28 pages, 6356 KiB  
Article
A Novel Adaptive FCM with Cooperative Multi-Population Differential Evolution Optimization
by Amit Banerjee and Issam Abu-Mahfouz
Algorithms 2022, 15(10), 380; https://doi.org/10.3390/a15100380 - 17 Oct 2022
Cited by 1 | Viewed by 1348
Abstract
Fuzzy c-means (FCM), the fuzzy variant of the popular k-means, has been used for data clustering when cluster boundaries are not well defined. The choice of initial cluster prototypes (or the initialization of cluster memberships), and the fact that the number of clusters needs to be defined a priori, are two major factors that can affect the performance of FCM. In this paper, we review algorithms and methods used to overcome these two specific drawbacks. We propose a new cooperative multi-population differential evolution method with elitism to identify near-optimal initial cluster prototypes and also to determine the optimal number of clusters in the data. The differential evolution populations use a smaller subset of the dataset, one that captures the same structure as the full dataset. We compare the proposed methodology to newer methods proposed in the literature, with simulations performed on standard benchmark data from the UCI machine learning repository. Finally, we present a case study in which the proposed method is used to cluster time-series patterns from sensor data related to real-time machine health monitoring. Simulation results are promising and show that the proposed methodology can be effective in clustering a wide range of datasets.
(This article belongs to the Special Issue Algorithms in Data Classification)

19 pages, 593 KiB  
Article
Fair Benchmark for Unsupervised Node Representation Learning
by Zhihao Guo, Shengyuan Chen, Xiao Huang, Zhiqiang Qian, Chunsing Yu, Yan Xu and Fang Ding
Algorithms 2022, 15(10), 379; https://doi.org/10.3390/a15100379 - 17 Oct 2022
Viewed by 1409
Abstract
Most machine-learning algorithms assume that instances are independent of each other. This does not hold for networked data. Node representation learning (NRL) aims to learn low-dimensional vectors to represent nodes in a network, such that all actionable patterns in topological structures and side information can be preserved. The widespread availability of networked data, e.g., social media, biological networks, and traffic networks, along with plentiful applications, facilitates the development of NRL. However, it has become challenging for researchers and practitioners to track the state-of-the-art NRL algorithms, given that they were evaluated using different experimental settings and datasets. To this end, in this paper, we focus on unsupervised NRL and propose a fair and comprehensive evaluation framework to systematically evaluate state-of-the-art unsupervised NRL algorithms. We comprehensively evaluate each algorithm by applying it to three evaluation tasks, i.e., classification fine-tuned via a validation set, link prediction fine-tuned in the first run, and classification fine-tuned via link prediction. In each task and each dataset, all NRL algorithms were fine-tuned using a random search within a fixed amount of time. Based on the results for three tasks and eight datasets, we evaluate and rank thirteen unsupervised NRL algorithms.
(This article belongs to the Special Issue Graph Embedding Applications)

22 pages, 5459 KiB  
Article
A Method of Accuracy Increment Using Segmented Regression
by Jamil Al-Azzeh, Abdelwadood Mesleh, Maksym Zaliskyi, Roman Odarchenko and Valeriyi Kuzmin
Algorithms 2022, 15(10), 378; https://doi.org/10.3390/a15100378 - 17 Oct 2022
Cited by 13 | Viewed by 1453
Abstract
The main purpose of mathematical model building in statistical data analysis is to obtain high accuracy of approximation within the range of observed data as well as sufficient predictive properties. One way to create mathematical models is to use the techniques of regression analysis. Regression analysis usually applies single polynomial functions of higher order as approximating curves. Such an approach provides high accuracy; however, in many cases it does not match the geometrical structure of the observed data, which results in unsatisfactory predictive properties. Another approach is associated with the use of segmented functions as approximating curves. This approach faces the problem of estimating the coordinates of the breakpoint between adjacent segments. This article proposes a new method for determining the abscissa of the breakpoint for segmented regression that minimizes the standard deviation using a multidimensional paraboloid. The proposed method is illustrated with calculation examples obtained using statistical simulation and real observed data.
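To make the breakpoint-estimation problem concrete, here is a brute-force baseline (ours) that scans candidate abscissas and keeps the one minimizing the pooled residual standard deviation; the paper instead locates the minimum by fitting a multidimensional paraboloid.

```python
import numpy as np

def best_breakpoint(x, y, margin=3):
    """Two-segment linear regression: try each candidate breakpoint and
    return the abscissa with the smallest pooled residual standard deviation."""
    best_sd, best_x = np.inf, None
    for k in range(margin, len(x) - margin):
        r1 = y[:k] - np.polyval(np.polyfit(x[:k], y[:k], 1), x[:k])
        r2 = y[k:] - np.polyval(np.polyfit(x[k:], y[k:], 1), x[k:])
        sd = np.sqrt((np.sum(r1**2) + np.sum(r2**2)) / len(x))
        if sd < best_sd:
            best_sd, best_x = sd, x[k]
    return best_x, best_sd
```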

27 pages, 1237 KiB  
Article
The Assignment Problem and Its Relation to Logistics Problems
by Milos Seda
Algorithms 2022, 15(10), 377; https://doi.org/10.3390/a15100377 - 16 Oct 2022
Cited by 3 | Viewed by 3109
Abstract
The assignment problem is a problem that takes many forms in optimization and graph theory; by changing some of the constraints, interpreting them differently, or adding other constraints, it can be converted into routing, distribution, and scheduling problems. Showing such correlations is one of the aims of this paper. For some of the derived problems, which have exponential time complexity, the question arises of their solvability for larger instances. Instead of the traditional approach based on approximate or stochastic heuristic methods, we focus here on the direct use of mixed integer programming models in the GAMS environment, which is now capable of solving instances much larger than in the past. Unlike stochastic heuristics, it requires neither complex parameter settings nor statistical evaluation of the results, because the computational core of the software tools nested in GAMS is deterministic in nature. The source codes presented may serve as an aid, because this tool is not yet as well known as the MATLAB Optimisation Toolbox. Benchmarks of the permutation flow shop scheduling problem, with an informally derived MIP model, and of the traveling salesman problem are used to present the limits of the software's applicability.
(This article belongs to the Special Issue Scheduling: Algorithms and Applications)
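The base problem itself is solvable in polynomial time; as a reference point (outside the paper's GAMS/MIP setting), SciPy's Hungarian-style solver handles the classical linear assignment problem directly. The cost matrix below is our toy example.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

cost = np.array([[4, 1, 3],     # cost[i, j]: cost of assigning worker i to task j
                 [2, 0, 5],
                 [3, 2, 2]])
rows, cols = linear_sum_assignment(cost)   # optimal one-to-one assignment
total = cost[rows, cols].sum()             # -> 5 (assignments 0->1, 1->0, 2->2)
```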

22 pages, 2227 KiB  
Article
Listening to the City, Attentively: A Spatio-Temporal Attention-Boosted Autoencoder for the Short-Term Flow Prediction Problem
by Stefano Fiorini, Michele Ciavotta and Andrea Maurino
Algorithms 2022, 15(10), 376; https://doi.org/10.3390/a15100376 - 14 Oct 2022
Cited by 5 | Viewed by 1636
Abstract
In recent years, studying and predicting mobility patterns in urban environments has become increasingly important, as accurate and timely information on current and future vehicle flows can successfully increase the quality and availability of transportation services (e.g., sharing services). However, predicting the number of incoming and outgoing vehicles for different city areas is challenging due to the nonlinear spatial and temporal dependencies typical of urban mobility patterns. In this work, we propose STREED-Net, a novel autoencoder architecture featuring time-distributed convolutions, cascade hierarchical units and two distinct attention mechanisms (one spatial and one temporal) that effectively captures and exploits complex spatial and temporal patterns in mobility data for the short-term flow prediction problem. The results of a thorough experimental analysis using real-life data are reported, indicating that the proposed model improves on the state of the art for this task.
(This article belongs to the Special Issue Neural Network for Traffic Forecasting)

23 pages, 3198 KiB  
Article
Blocking Cyclic Job-Shop Scheduling Problems
by Atabak Elmi, Dhananjay R. Thiruvady and Andreas T. Ernst
Algorithms 2022, 15(10), 375; https://doi.org/10.3390/a15100375 - 14 Oct 2022
Viewed by 1560
Abstract
Cyclic scheduling is of vital importance in a repetitive discrete manufacturing environment. We investigate scheduling in the context of general cyclic job shops with blocking, where there are no intermediate buffers between the machines. We also consider sequence-dependent setups (anticipatory and nonanticipatory), which commonly appear in different manufacturing environments. The choice of blocking condition, that is, whether the sequence-dependent setups are anticipatory or not, significantly impacts the optimal schedules. We provide a novel mixed-integer programming (MIP) model for the above problem, namely blocking cyclic job-shop scheduling. Furthermore, we study the impact of sequence-dependent setups in this research. The problem is analysed in detail with respect to anticipatory and nonanticipatory setups, and the efficiency of the proposed model is investigated via a computational study conducted on a set of randomly generated problem instances. The proposed MIP models are capable of solving small-to-medium-sized problems. Moreover, the analysis presented demonstrates that anticipatory setups directly affect blocking conditions, since intermediate buffers between the machines are not present. Hence, in systems with anticipatory setups, cycle times increase to a greater extent compared to systems with nonanticipatory setups.

15 pages, 6471 KiB  
Article
Impact of Iterative Bilateral Filtering on the Noise Power Spectrum of Computed Tomography Images
by Choirul Anam, Ariij Naufal, Heri Sutanto, Kusworo Adi and Geoff Dougherty
Algorithms 2022, 15(10), 374; https://doi.org/10.3390/a15100374 - 13 Oct 2022
Cited by 3 | Viewed by 2044
Abstract
A bilateral filter is a non-linear denoising algorithm that can reduce noise while preserving edges. This study explores the characteristics of a bilateral filter in changing the noise and texture within computed tomography (CT) images in an iterative implementation. We collected images of a homogeneous Neusoft phantom scanned with tube currents of 77, 154, and 231 mAs. The images for each tube current were filtered five times with a configuration of sigma space (σd) = 2 pixels, sigma intensity (σr) = noise level, and a kernel of 5 × 5 pixels. To observe the noise texture in each filter iteration, the noise power spectrum (NPS) was obtained for five slices of each dataset and averaged to generate a stable curve. The modulation transfer function (MTF) was also measured from the original and the filtered images. Tests on an anthropomorphic phantom image were carried out to observe the impact in clinical scenarios. Noise measurements and visual observations of edge sharpness were performed on this image. Our results showed that the bilateral filter was effective in suppressing noise at high frequencies, which is confirmed by the sloping NPS curve for different tube currents. The peak frequency was shifted from about 0.2 to about 0.1 mm⁻¹ for all tube currents, and the noise magnitude was reduced by more than 50% compared to the original images. The spatial resolution does not change with the number of filter iterations, which is confirmed by the constant values of MTF50 and MTF10. The test results on the anthropomorphic phantom image show a similar pattern, with noise reduced by up to 60% and object edges remaining sharp.
(This article belongs to the Special Issue Algorithms for Biomedical Image Analysis and Processing)
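The iterative filtering loop itself is straightforward; a minimal OpenCV sketch is shown below. The std-based noise estimate is a crude stand-in for the study's noise-level measurement, and the parameters mirror the reported configuration (5 × 5 kernel, σd = 2).

```python
import cv2
import numpy as np

def iterative_bilateral(img, iterations=5, d=5, sigma_space=2.0):
    """Apply a bilateral filter repeatedly, re-estimating the intensity
    sigma from the current image at each pass (stand-in noise estimate)."""
    out = img.astype(np.float32)
    for _ in range(iterations):
        sigma_color = float(out.std())          # crude noise-level proxy
        out = cv2.bilateralFilter(out, d, sigma_color, sigma_space)
    return out
```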

14 pages, 2888 KiB  
Article
Computational Complexity of Modified Blowfish Cryptographic Algorithm on Video Data
by Abidemi Emmanuel Adeniyi, Sanjay Misra, Eniola Daniel and Anthony Bokolo, Jr.
Algorithms 2022, 15(10), 373; https://doi.org/10.3390/a15100373 - 10 Oct 2022
Cited by 7 | Viewed by 2225
Abstract
Background: The technological revolution has allowed users to exchange data and information in various fields, and this is one of the most prevalent uses of computer technologies. However, in a world where third parties are capable of collecting, stealing, and destroying information without authorization, cryptography remains the primary tool that assists users in keeping their information secure using various techniques. Blowfish is an encryption algorithm that is simple, secure, and efficient, with the message size and the key size affecting its performance. Aim: The goal of this study is to design a modified Blowfish algorithm by changing the structure of the F function to encrypt and decrypt video data, and then to measure the performance of the normal and modified Blowfish algorithms in terms of time complexity and the avalanche effect. Methods: To compare encryption time and security, the modified Blowfish algorithm uses only two S-boxes in the F function instead of the four used in Blowfish. Encryption and decryption times were calculated to compare Blowfish to the modified Blowfish algorithm, with the findings indicating that the modified Blowfish algorithm performs better. Results: The avalanche effect results reveal that normal Blowfish has a higher security level than the modified Blowfish algorithm for all categories of video file size, with 50.7176% for normal Blowfish and 43.3398% for the modified Blowfish algorithm on a 187 kb file; hence, it is preferable to secure data and programs that demand a high level of security with Blowfish. Conclusions: From the experimental results, the modified Blowfish algorithm performs faster than normal Blowfish in terms of time complexity, with an average execution time of 250.0 ms for normal Blowfish and 248.4 ms for the modified Blowfish algorithm. Therefore, it can be concluded that the modified Blowfish algorithm with the modified F structure is time-efficient, while normal Blowfish is better in terms of security.
(This article belongs to the Collection Feature Paper in Algorithms and Complexity Theory)
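For reference, the avalanche effect reported above is the percentage of ciphertext bits that flip when a single input bit changes; a small cipher-agnostic helper (ours) computes it:

```python
def avalanche_effect(c1: bytes, c2: bytes) -> float:
    """Percentage of differing bits between two equal-length ciphertexts;
    ~50% is the ideal for a strong cipher."""
    flipped = sum(bin(a ^ b).count("1") for a, b in zip(c1, c2))
    return 100.0 * flipped / (8 * len(c1))

# Usage idea: avalanche_effect(encrypt(key, msg), encrypt(key, flip_bit(msg, 0))),
# where encrypt and flip_bit are hypothetical helpers for the cipher under test.
```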

23 pages, 1396 KiB  
Article
An Application of a Decision Support System Enabled by a Hybrid Algorithmic Framework for Production Scheduling in an SME Manufacturer
by Athanasios C. Spanos, Sotiris P. Gayialis, Evripidis P. Kechagias and Georgios A. Papadopoulos
Algorithms 2022, 15(10), 372; https://doi.org/10.3390/a15100372 - 10 Oct 2022
Viewed by 1915
Abstract
In this research, we present a hybrid algorithmic framework and its integration into the precise production scheduling system of a Greek metal forming factory. The system was created as a decision support tool to assist production planners in arranging weekly production orders to work centers and other manufacturing cells. The functionality offered includes dispatching priority rules, bottleneck identification for capacity planning, production order reallocation to alternate work centers and planning periods, interchangeable scheduling scenarios, and work-in-process availability checks based on bill of materials (BOM) precedence constraints. As a consequence, a solid short-term production plan is created, capable of absorbing shop floor risks such as machine failures and urgent orders. The primary design ideas are simplicity, ease of use, a flexible Gantt-chart-based graphical user interface (GUI), controllable report creation, and a modest development budget. The practical application takes place in a make-to-stock (MTS) environment with a complicated multi-level production process, defined due dates, and parallel machines. A critical component is the integration with legacy applications and the existing enterprise resource planning (ERP) system. The method adopted here avoids both overburdening the existing information system architecture with software pipeline spaghetti, as is common with point-to-point integration, and overshooting implementation costs, as is often the case with service-oriented architectures.
(This article belongs to the Special Issue Optimization Methods in Operations and Supply Chain Management)

19 pages, 17531 KiB  
Article
Topological Data Analysis in Time Series: Temporal Filtration and Application to Single-Cell Genomics
by Baihan Lin
Algorithms 2022, 15(10), 371; https://doi.org/10.3390/a15100371 - 10 Oct 2022
Cited by 3 | Viewed by 2408
Abstract
The absence of a conventional association between cell–cell cohabitation and its emergent dynamics into cliques during development has hindered our understanding of how cell populations proliferate, differentiate, and compete (i.e., the cell ecology). With the recent advancement of single-cell RNA sequencing (RNA-seq), we can potentially describe such a link by constructing network graphs that characterize the similarity of the gene expression profiles of the cell-specific transcriptional programs and analyze these graphs systematically using summary statistics from algebraic topology. We propose single-cell topological simplicial analysis (scTSA). Applying this approach to single-cell gene expression profiles from local networks of cells in different developmental stages with different outcomes reveals a previously unseen topology of cellular ecology. These networks contain an abundance of cliques of single-cell profiles bound into cavities that guide the emergence of more complicated habitation forms. We visualize these ecological patterns with the topological simplicial architectures of these networks, compared with null models. Benchmarked on single-cell RNA-seq data of zebrafish embryogenesis spanning 38,731 cells, 25 cell types, and 12 time steps, our approach highlights gastrulation as the most critical stage, consistent with the consensus in developmental biology. As a nonlinear, model-independent, and unsupervised framework, our approach can also be applied to tracing multi-scale cell lineage, identifying critical stages, or creating pseudo-time series.

17 pages, 4401 KiB  
Article
Deep Learning Process and Application for the Detection of Dangerous Goods Passing through Motorway Tunnels
by George Sisias, Myrto Konstantinidou and Sotirios Kontogiannis
Algorithms 2022, 15(10), 370; https://doi.org/10.3390/a15100370 - 10 Oct 2022
Cited by 1 | Viewed by 2104
Abstract
Automated deep learning and data mining algorithms can provide accurate detection, frequency patterns, and predictions of dangerous goods passing through motorways and tunnels. This paper presents a post-processing image detection application and a three-stage deep learning detection algorithm that identifies and records the passage of dangerous goods through motorways and tunnels. This tool receives low-resolution input from toll camera images and offers timely information on vehicles carrying dangerous goods. According to the authors' experimentation, the mean accuracy achieved by stage 2 of the proposed algorithm in identifying the ADR plates is close to 96%, and that of stages 1 and 2 combined is 92%. In addition, for the successful optical character recognition of the ADR numbers, the mean accuracy of the algorithm's stage 3 is between 90% and 97%, and the overall successful detection and optical character recognition accuracy is close to 94%. Regarding execution time, the proposed algorithm can achieve real-time detection capabilities by processing one image in less than 2.69 s.

15 pages, 665 KiB  
Article
A Static Assignment Algorithm of Uniform Jobs to Workers in a User-PC Computing System Using Simultaneous Linear Equations
by Xudong Zhou, Nobuo Funabiki, Hein Htet, Ariel Kamoyedji, Irin Tri Anggraini, Yuanzhi Huo and Yan Watequlis Syaifudin
Algorithms 2022, 15(10), 369; https://doi.org/10.3390/a15100369 - 07 Oct 2022
Cited by 1 | Viewed by 1542
Abstract
Currently, the User-PC computing system (UPC) has been studied as a low-cost and high-performance distributed computing platform. It uses the idling resources of personal computers (PCs) in a group. The job-worker assignment for minimizing makespan is critical in determining the performance of the UPC system. Some applications need to execute many uniform jobs that run the identical program on slightly different data, so that each job takes a similar CPU time on a given PC. The total CPU time of a worker is then almost linear in the number of assigned jobs. In this paper, we propose a static assignment algorithm of uniform jobs to workers in the UPC system, using simultaneous linear equations to find the lower bound on makespan, where every worker requires the same CPU time to complete the assigned jobs. For the evaluation of the proposal, we consider uniform jobs in three applications. In OpenPose, the CNN-based keypoint estimation program runs with various images of human bodies. In OpenFOAM, the physics simulation program runs with various parameter sets. In code testing, two open-source programs run with various source codes from students for the Android programming learning assistance system (APLAS). Using the proposal, we assigned the jobs to six workers in the testbed UPC system and measured the CPU time. The results show that makespan was reduced by 10% on average, which confirms the effectiveness of the proposal.
(This article belongs to the Special Issue Scheduling: Algorithms and Applications)
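The balance condition behind such a lower bound can be written as a small linear system: if worker i needs t_i seconds per uniform job, equal finish times t_i·n_i with Σ n_i = N give n_i = N(1/t_i)/Σ(1/t_j). Below is a sketch with an integer rounding repair; it is our simplification, not the paper's full algorithm.

```python
import numpy as np

def balanced_assignment(per_job_time, total_jobs):
    """Solve the equal-finish-time system for uniform jobs, then round
    to integers by greedily topping up the worker that stays fastest."""
    rate = 1.0 / np.asarray(per_job_time, dtype=float)
    ideal = total_jobs * rate / rate.sum()     # real-valued balanced solution
    n = np.floor(ideal).astype(int)
    for _ in range(total_jobs - int(n.sum())): # distribute the remainder
        i = np.argmin((n + 1) * np.asarray(per_job_time))
        n[i] += 1
    return n

print(balanced_assignment([2.0, 3.0, 6.0], 12))  # -> [6 4 2]: every worker needs 12 s
```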

13 pages, 6694 KiB  
Article
A Computational Approach to Overtaking Station Track Layout Design Using Graphs: An Extension That Supports Special Turnouts—An Improved Alternative Track Layout Proposal
by Eugenio Roanes-Lozano
Algorithms 2022, 15(10), 368; https://doi.org/10.3390/a15100368 - 03 Oct 2022
Viewed by 1686
Abstract
The author recently designed, developed, and implemented in Maple a package based on the use of digraphs that analyses the connectivity of an overtaking station on a double-track line. It was used to propose an alternative track layout for this kind of station, with advantages over the track layouts usually adopted. However, that package could only deal with “standard” turnouts (neither with crossings nor with “special” turnouts, such as “single slip turnouts” or “scissors crossings”). This new article presents an improved version of the package. It uses a trick that consists of including virtual dead-end vertices in the associated digraph. This way, it is possible to include the “special” turnouts in the track layout, which makes it possible to evaluate different alternative track layouts, including “special” turnouts, and finally to find a track layout that has advantages over the standard one and over the one proposed in the previous article. Let us observe that the design of the track layout is key to the exploitation of the infrastructure. In fact, the Spanish infrastructure administrator is nowadays remodelling the track layout of some of its main railway stations, as well as other smaller facilities.

22 pages, 5031 KiB  
Article
Comparing Approaches for Explaining DNN-Based Facial Expression Classifications
by Kaya ter Burg and Heysem Kaya
Algorithms 2022, 15(10), 367; https://doi.org/10.3390/a15100367 - 03 Oct 2022
Cited by 5 | Viewed by 1981
Abstract
Classifying facial expressions is a vital part of developing systems capable of aptly interacting with users. In this field, the use of deep-learning models has become the standard. However, the inner workings of these models are unintelligible, which is an important issue when deploying them in high-stakes environments. Recent efforts to generate explanations for emotion classification systems have been focused on this type of model. In this work, an alternative way of explaining the decisions of a more conventional model based on geometric features is presented. We develop a geometric-features-based deep neural network (DNN) and a convolutional neural network (CNN). Ensuring a sufficient level of predictive accuracy, we analyze explainability using both objective quantitative criteria and a user study. Results indicate that the fidelity and accuracy scores of the explanations approximate the DNN well. The user study makes clear that the explanations increase the understanding of the DNN and that they are preferred over the explanations for the CNN, which are more commonly used. All scripts used in the study are publicly available.
(This article belongs to the Special Issue Machine Learning in Pattern Recognition)

34 pages, 7944 KiB  
Article
Efficient 0/1-Multiple-Knapsack Problem Solving by Hybrid DP Transformation and Robust Unbiased Filtering
by Patcharin Buayen and Jeeraporn Werapun
Algorithms 2022, 15(10), 366; https://doi.org/10.3390/a15100366 - 30 Sep 2022
Cited by 1 | Viewed by 2571
Abstract
The multiple knapsack problem (0/1-mKP) is a valuable NP-hard problem involved in many science-and-engineering applications. In current research, there exist two main approaches: 1. exact algorithms for the optimal solutions (e.g., branch-and-bound, dynamic programming (DP)) and 2. approximate algorithms in polynomial time (e.g., genetic algorithms, swarm optimization). In the past, exact DP could find the optimal solutions of the 0/1-KP (one knapsack, n objects) in O(nC). For large n and massive C, unbiased filtering was incorporated with exact DP to solve the 0/1-KP in O(n + C′) with 95% optimal solutions. For the complex 0/1-mKP (m knapsacks) in this study, we propose a novel research track with hybrid integration of DP transformation (DPT), exact-fit (best) knapsack ordering (an m!-to-m² reduction), and robust unbiased filtering. First, the efficient DPT algorithm is proposed to find the optimal solutions for each knapsack in O([n², nC]). Next, all knapsacks are fulfilled by the exact-fit (best) knapsack order in O(m²[n², nC]) rather than O(m![n², nC]), while retaining at least 99% of the optimal solutions obtained over all m! orders. Finally, robust unbiased filtering is incorporated to solve the 0/1-mKP in O(m²n). In experiments, our efficient 0/1-mKP reduction confirmed 99% optimal solutions on random and benchmark datasets (n ≤ 10,000, m ≤ 100).
(This article belongs to the Section Combinatorial Optimization, Graph, and Network Algorithms)
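The O(nC) single-knapsack DP mentioned above is the classical building block; a compact version (ours, not the paper's DPT extension) for one knapsack of capacity C is:

```python
def knapsack_01(values, weights, capacity):
    """Exact DP for the single 0/1 knapsack in O(n*C) time and O(C) space."""
    dp = [0] * (capacity + 1)
    for v, w in zip(values, weights):
        for c in range(capacity, w - 1, -1):  # reverse scan keeps each item 0/1
            dp[c] = max(dp[c], dp[c - w] + v)
    return dp[capacity]

print(knapsack_01([60, 100, 120], [10, 20, 30], 50))  # -> 220
```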

29 pages, 4471 KiB  
Review
Enhanced Maximum Power Point Techniques for Solar Photovoltaic System under Uniform Insolation and Partial Shading Conditions: A Review
by Laxman Bhukya, Narender Reddy Kedika and Surender Reddy Salkuti
Algorithms 2022, 15(10), 365; https://doi.org/10.3390/a15100365 - 29 Sep 2022
Cited by 27 | Viewed by 3215
Abstract
In the recent past, the solar photovoltaic (PV) system has emerged as the most promising source of alternative energy. A solar PV system suffers from unavoidable output fluctuations due to changing environmental conditions, and its I-V curves are nonlinear, which reduces output efficiency. Hence, optimum maximum power point (MPP) extraction from the PV system is difficult to achieve. Therefore, to maximize the power output of PV systems, a maximum power point tracking (MPPT) mechanism, a control algorithm that can constantly track the MPP during operation, is required. However, choosing a suitable MPPT technique can be confusing because each method has its own set of advantages and disadvantages. Hence, a proper review of these methods is essential. In this paper, a state-of-the-art review of various MPPT techniques based on their classification, such as offline, online, and hybrid techniques under uniform and nonuniform irradiances, is presented. In comparison to offline and online MPPT methods, intelligent MPPT techniques have better tracking accuracy and tracking efficiency with fewer steady-state oscillations. Unlike online and offline techniques, intelligent methods track the global MPP under partial shading conditions. This review paper will be a useful resource for researchers, as well as practicing engineers, paving the way for additional research and development in the MPPT field.

16 pages, 472 KiB  
Article
A Constructive Heuristics and an Iterated Neighborhood Search Procedure to Solve the Cost-Balanced Path Problem
by Daniela Ambrosino, Carmine Cerrone and Anna Sciomachen
Algorithms 2022, 15(10), 364; https://doi.org/10.3390/a15100364 - 29 Sep 2022
Cited by 1 | Viewed by 1432
Abstract
This paper presents a new heuristic algorithm tailored to solve large instances of an NP-hard variant of the shortest path problem, denoted the cost-balanced path problem, recently proposed in the literature. The problem consists in finding the origin–destination path in a directed graph, having both negative and positive weights associated with the arcs, such that the total sum of the weights of the selected arcs is as close to zero as possible. To the authors' knowledge, there are no existing solution algorithms for this problem. The proposed algorithm integrates a constructive procedure and an improvement procedure, and it is validated through the implementation of an iterated neighborhood search procedure. The reported numerical experimentation shows that the proposed algorithm is computationally very efficient. In particular, the proposed algorithm is most suitable for large instances, where the existence of a perfectly balanced path, and thus the optimality of the solution, can be proven, finding a good percentage of optimal solutions in negligible computational time.

17 pages, 2075 KiB  
Article
Joints Trajectory Planning of Robot Based on Slime Mould Whale Optimization Algorithm
by Xinning Li, Qin Yang, Hu Wu, Shuai Tan, Qun He, Neng Wang and Xianhai Yang
Algorithms 2022, 15(10), 363; https://doi.org/10.3390/a15100363 - 29 Sep 2022
Cited by 4 | Viewed by 1511
Abstract
The joint running trajectories of a robot directly affect its working efficiency, stability, and working quality. To solve the problems of slow convergence and weak global search ability in the joint trajectory optimization algorithms currently in common use, a joint trajectory planning method based on the slime mould whale optimization algorithm (SMWOA) was researched, which can obtain the joint trajectory within a short time and with low energy consumption. On the basis of detailed analyses of the whale optimization algorithm (WOA) and the slime mould algorithm (SMA), the SMWOA was proposed by combining the two methods. By adjusting dynamic parameters and introducing dynamic weights, the proposed SMWOA increases the probability of obtaining the global optimal solution. The optimized results for 15 benchmark functions verified that the optimization accuracy of the SMWOA is clearly better than that of other classical algorithms. An experiment was carried out in which this algorithm was applied to joint trajectory optimization. Taking the 6-DOF UR5 manipulator as an example, the results show that the optimized running time of the joints is reduced by 37.674% compared with that before optimization. The efficiency of robot joint motion was improved. This study provides a theoretical basis for optimization in other engineering fields.
(This article belongs to the Special Issue Metaheuristics Algorithms and Their Applications)

23 pages, 391 KiB  
Article
Non-Stationary Stochastic Global Optimization Algorithms
by Jonatan Gomez and Andres Rivera
Algorithms 2022, 15(10), 362; https://doi.org/10.3390/a15100362 - 29 Sep 2022
Cited by 1 | Viewed by 1410
Abstract
Studying the theoretical properties of optimization algorithms such as genetic algorithms and evolutionary strategies allows us to determine when they are suitable for solving a particular type of optimization problem. Such a study consists of three main steps. The first step is considering such algorithms as Stochastic Global Optimization Algorithms (SGoals), i.e., iterative algorithms that apply stochastic operations to a set of candidate solutions. The second step is to define a formal characterization of the iterative process in terms of measure theory and to define some of such stochastic operations as stationary Markov kernels (defined in terms of transition probabilities that do not change over time). The third step is to characterize non-stationary SGoals, i.e., SGoals having stochastic operations with transition probabilities that may change over time. In this paper, we develop the third step of this study. First, we generalize the sufficient convergence conditions from stationary to non-stationary Markov processes. Second, we introduce the theory necessary to define kernels for arithmetic operations between measurable functions. Third, we develop Markov kernels for some selection and recombination schemes. Finally, we formalize the simulated annealing algorithm and evolutionary strategies using this systematic formal approach.
(This article belongs to the Special Issue Optimization under Uncertainty 2022)
28 pages, 1072 KiB  
Article
Foremost Walks and Paths in Interval Temporal Graphs
by Anuj Jain and Sartaj Sahni
Algorithms 2022, 15(10), 361; https://doi.org/10.3390/a15100361 - 29 Sep 2022
Cited by 1 | Viewed by 1470
Abstract
The min-wait foremost, min-hop foremost and min-cost foremost paths and walks problems in interval temporal graphs are considered. We prove that finding min-wait foremost and min-cost foremost walks and paths in interval temporal graphs is NP-hard. We develop a polynomial time algorithm for the single-source all-destinations min-hop foremost paths problem and a pseudopolynomial time algorithm for the single-source all-destinations min-wait foremost walks problem in interval temporal graphs. We benchmark our algorithms against algorithms presented by Bentert et al. for contact sequence graphs and show, experimentally, that our algorithms perform up to 207.5 times faster for finding min-hop foremost paths and up to 23.3 times faster for finding min-wait foremost walks.

16 pages, 831 KiB  
Article
Algorithms for Automatic Data Validation and Performance Assessment of MOX Gas Sensor Data Using Time Series Analysis
by Christof Hammer, Sebastian Sporrer, Johannes Warmer, Peter Kaul, Ronald Thoelen and Norbert Jung
Algorithms 2022, 15(10), 360; https://doi.org/10.3390/a15100360 - 28 Sep 2022
Viewed by 1255
Abstract
The following work presents algorithms for semi-automatic validation, feature extraction, and ranking of time series measurements acquired from MOX gas sensors. Semi-automatic measurement validation is accomplished by extending established curve similarity algorithms with a slope-based signature calculation. Furthermore, a feature-based ranking metric is introduced. It allows for individual prioritization of each feature and can be used to find the best-performing sensors with respect to multiple research questions. Finally, the functionality of the algorithms, as well as the developed software suite, is demonstrated with an exemplary scenario, illustrating how to find the most power-efficient MOX gas sensor in a data set collected during an extensive screening consisting of 16,320 measurements, all taken with different sensors at various temperatures and with various analytes.
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
