Algorithms, Volume 16, Issue 12 (December 2023) – 44 articles

Cover Story: Addressing the dynamic Capacitated Dispersion Problem (CDP), this study presents a novel learnheuristic algorithm. Combining heuristic techniques with reinforcement learning, it adeptly manages fluctuating network capacities, which is crucial in sectors like telecommunications and logistics. This innovative approach transcends traditional static methods, showcasing adaptability and enhanced optimization in variable environments. The algorithm's effectiveness, demonstrated through rigorous simulations, offers a robust solution for complex, dynamic optimization challenges.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and open it with the free Adobe Reader.
13 pages, 6702 KiB  
Communication
Image Deblurring Based on Convex Non-Convex Sparse Regularization and Plug-and-Play Algorithm
by Yi Wang, Yating Xu, Tianjian Li, Tao Zhang and Jian Zou
Algorithms 2023, 16(12), 574; https://doi.org/10.3390/a16120574 - 18 Dec 2023
Viewed by 1246
Abstract
Image deblurring based on sparse regularization has garnered significant attention, but there are still certain limitations that need to be addressed. For instance, convex sparse regularization tends to exhibit biased estimation, which can adversely impact the deblurring performance, while non-convex sparse regularization poses challenges in terms of solving techniques. Furthermore, the performance of the traditional iterative algorithm also needs to be improved. In this paper, we propose an image deblurring method based on convex non-convex (CNC) sparse regularization and a plug-and-play (PnP) algorithm. The utilization of CNC sparse regularization not only mitigates estimation bias but also guarantees the overall convexity of the image deblurring model. The PnP algorithm is an advanced learning-based optimization algorithm that surpasses traditional optimization algorithms in terms of efficiency and performance by utilizing the state-of-the-art denoiser to replace the proximal operator. Numerical experiments verify the performance of our proposed algorithm in image deblurring. Full article
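A minimal sketch of the plug-and-play idea described above, in Python: the proximal step of the sparsity prior is replaced by a denoiser. Here a Gaussian filter stands in for the learned state-of-the-art denoiser, and the blur operator, step size, and iteration count are illustrative assumptions, not the authors' setup:

```python
# Minimal plug-and-play ISTA sketch: the proximal step of the sparsity prior
# is replaced by an off-the-shelf denoiser (a Gaussian filter stands in for
# the learned denoiser used in the paper).
import numpy as np
from scipy.ndimage import gaussian_filter

def blur(x, sigma=2.0):                 # forward operator A (Gaussian blur)
    return gaussian_filter(x, sigma)

def pnp_ista(y, step=0.9, iters=50):
    """Recover x from y = A(x) + noise via x <- D(x - step * A^T(A x - y))."""
    x = y.copy()
    for _ in range(iters):
        grad = blur(blur(x) - y)        # A is self-adjoint for a Gaussian blur
        x = gaussian_filter(x - step * grad, 0.8)  # denoiser as proximal map
    return x

rng = np.random.default_rng(0)
truth = np.zeros((64, 64)); truth[24:40, 24:40] = 1.0
observed = blur(truth) + 0.01 * rng.standard_normal(truth.shape)
restored = pnp_ista(observed)
print(float(np.mean((restored - truth) ** 2)))   # reconstruction MSE
```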

46 pages, 21402 KiB  
Article
On the Development of Descriptor-Based Machine Learning Models for Thermodynamic Properties: Part 2—Applicability Domain and Outliers
by Cindy Trinh, Silvia Lasala, Olivier Herbinet and Dimitrios Meimaroglou
Algorithms 2023, 16(12), 573; https://doi.org/10.3390/a16120573 - 18 Dec 2023
Viewed by 1680
Abstract
This article investigates the applicability domain (AD) of machine learning (ML) models trained on high-dimensional data, for the prediction of the ideal gas enthalpy of formation and entropy of molecules via descriptors. The AD is crucial as it describes the space of chemical characteristics in which the model can make predictions with a given reliability. This work studies the AD definition of a ML model throughout its development procedure: during data preprocessing, model construction and model deployment. Three AD definition methods, commonly used for outlier detection in high-dimensional problems, are compared: isolation forest (iForest), random forest prediction confidence (RF confidence) and k-nearest neighbors in the 2D projection of descriptor space obtained via t-distributed stochastic neighbor embedding (tSNE2D/kNN). These methods compute an anomaly score that can be used instead of the distance metrics of classical low-dimension AD definition methods, the latter being generally unsuitable for high-dimensional problems. Typically, in low- (high-) dimensional problems, a molecule is considered to lie within the AD if its distance from the training domain (anomaly score) is below a given threshold. During data preprocessing, the three AD definition methods are used to identify outlier molecules and the effect of their removal is investigated. A more significant improvement of model performance is observed when outliers identified with RF confidence are removed (e.g., for a removal of 30% of outliers, the MAE (Mean Absolute Error) of the test dataset is divided by 2.5, 1.6 and 1.1 for RF confidence, iForest and tSNE2D/kNN, respectively). While these three methods identify X-outliers, the effect of other types of outliers, namely Model-outliers and y-outliers, is also investigated. In particular, the elimination of X-outliers followed by that of Model-outliers enables us to divide MAE and RMSE (Root Mean Square Error) by 2 and 3, respectively, while reducing overfitting. The elimination of y-outliers does not display a significant effect on the model performance. During model construction and deployment, the AD serves to verify the position of the test data and of different categories of molecules with respect to the training data and associate this position with their prediction accuracy. For the data that are found to be close to the training data, according to RF confidence, and display high prediction errors, tSNE 2D representations are deployed to identify the possible sources of these errors (e.g., representation of the chemical information in the training data). Full article
(This article belongs to the Special Issue Nature-Inspired Algorithms in Machine Learning (2nd Edition))
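A minimal sketch of the anomaly-score approach to AD definition, using scikit-learn's isolation forest; the descriptor matrix and the 95th-percentile threshold are illustrative stand-ins, not the paper's data or calibration:

```python
# Sketch of an anomaly-score AD check with isolation forest (iForest): a
# sample is inside the applicability domain if its anomaly score stays below
# a threshold calibrated on the training descriptors.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 200))       # stand-in molecular descriptors
X_new = np.vstack([rng.normal(size=(5, 200)),
                   rng.normal(loc=6.0, size=(5, 200))])  # 5 inside, 5 outside

iforest = IsolationForest(random_state=0).fit(X_train)
scores = -iforest.score_samples(X_new)      # higher = more anomalous
threshold = np.percentile(-iforest.score_samples(X_train), 95)
inside_ad = scores < threshold
print(inside_ad)
```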

27 pages, 4457 KiB  
Article
Improving Clustering Accuracy of K-Means and Random Swap by an Evolutionary Technique Based on Careful Seeding
by Libero Nigro and Franco Cicirelli
Algorithms 2023, 16(12), 572; https://doi.org/10.3390/a16120572 - 17 Dec 2023
Viewed by 1229
Abstract
K-Means is a “de facto” standard clustering algorithm due to its simplicity and efficiency. K-Means, though, strongly depends on the initialization of the centroids (seeding method) and often gets stuck in a local sub-optimal solution. K-Means, in fact, mainly acts as a local refiner of the centroids, and it is unable to move centroids all over the data space. Random Swap was defined to go beyond K-Means, and its modus operandi integrates K-Means in a global strategy of centroids management, which can often generate a clustering solution close to the global optimum. This paper proposes an approach which extends both K-Means and Random Swap and improves the clustering accuracy through an evolutionary technique and careful seeding. Two new algorithms are proposed: the Population-Based K-Means (PB-KM) and the Population-Based Random Swap (PB-RS). Both algorithms consist of two steps: first, a population of J candidate solutions is built, and then the candidate centroids are repeatedly recombined toward a final accurate solution. The paper motivates the design of PB-KM and PB-RS, outlines their current implementation in Java based on parallel streams, and demonstrates the achievable clustering accuracy using both synthetic and real-world datasets. Full article
(This article belongs to the Collection Feature Paper in Metaheuristic Algorithms and Applications)
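A hedged sketch of the population-based idea (build J carefully seeded solutions, then recombine their centroids); the recombination step here is a simple re-clustering of the centroid pool and differs from the paper's PB-KM/PB-RS implementation in Java:

```python
# Hedged sketch of the population-based idea behind PB-KM: build J carefully
# (k-means++) seeded solutions, pool their centroids, then recombine the pool
# into one final set of k centroids.  Details differ from the paper.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=2000, centers=8, random_state=1)
k, J = 8, 10

pool = np.vstack([
    KMeans(n_clusters=k, init="k-means++", n_init=1, random_state=j)
    .fit(X).cluster_centers_
    for j in range(J)
])                                   # J * k candidate centroids

# Recombine: cluster the centroid pool, then refine on the full data.
seeds = KMeans(n_clusters=k, n_init=1, random_state=0).fit(pool).cluster_centers_
final = KMeans(n_clusters=k, init=seeds, n_init=1).fit(X)
print(final.inertia_)
```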

31 pages, 9710 KiB  
Article
Evolutionary Algorithms in a Bacterial Consortium of Synthetic Bacteria
by Sara Lledó Villaescusa and Rafael Lahoz-Beltra
Algorithms 2023, 16(12), 571; https://doi.org/10.3390/a16120571 - 17 Dec 2023
Viewed by 1427
Abstract
At present, synthetic biology applications are based on the programming of synthetic bacteria with custom-designed genetic circuits through the application of a top-down strategy. These genetic circuits are the programs that implement a certain algorithm, the bacterium being the agent or shell responsible for the execution of the program in a given environment. In this work, we study the possibility that instead of programming synthesized bacteria through a custom-designed genetic circuit, it is the circuit itself which emerges as a result of the evolution simulated through an evolutionary algorithm. This study is conducted by performing in silico experiments in a community composed of synthetic bacteria in which one species or strain behaves as pathogenic bacteria against the rest of the non-pathogenic bacteria that are also part of the bacterial consortium. The goal is the eradication of the pathogenic strain through the evolutionary programming of the agents or synthetic bacteria. The results obtained suggest the plausibility of the evolutionary design of the appropriate genetic circuit resulting from the application of a bottom-up strategy and therefore the experimental feasibility of the evolutionary programming of synthetic bacteria. Full article
(This article belongs to the Collection Feature Paper in Metaheuristic Algorithms and Applications)

14 pages, 525 KiB  
Article
Solving NP-Hard Challenges in Logistics and Transportation under General Uncertainty Scenarios Using Fuzzy Simheuristics
by Angel A. Juan, Markus Rabe, Majsa Ammouriova, Javier Panadero, David Peidro and Daniel Riera
Algorithms 2023, 16(12), 570; https://doi.org/10.3390/a16120570 - 16 Dec 2023
Viewed by 1476
Abstract
In the field of logistics and transportation (L&T), this paper reviews the utilization of simheuristic algorithms to address NP-hard optimization problems under stochastic uncertainty. Then, the paper explores an extension of the simheuristics concept by introducing a fuzzy layer to tackle complex optimization problems involving both stochastic and fuzzy uncertainties. The hybrid approach combines simulation, metaheuristics, and fuzzy logic, offering a feasible methodology to solve large-scale NP-hard problems under general uncertainty scenarios. These scenarios are commonly encountered in L&T optimization challenges, such as the vehicle routing problem or the team orienteering problem, among many others. The proposed methodology allows for modeling various problem components—including travel times, service times, customers’ demands, or the duration of electric batteries—as deterministic, stochastic, or fuzzy items. A cross-problem analysis of several computational experiments is conducted to validate the effectiveness of the fuzzy simheuristic methodology. Being a flexible methodology that allows us to tackle NP-hard challenges under general uncertainty scenarios, fuzzy simheuristics can also be applied in fields other than L&T. Full article
(This article belongs to the Special Issue Optimization Algorithms in Logistics, Transportation, and SCM)
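A minimal sketch of the evaluation layer of a fuzzy simheuristic: one candidate route's cost mixes deterministic, stochastic, and fuzzy legs. The triangular fuzzy number, its centroid defuzzification, and the lognormal travel-time model are illustrative assumptions:

```python
# Sketch of the evaluation layer of a fuzzy simheuristic: a candidate route's
# cost mixes deterministic, stochastic (lognormal) and fuzzy (triangular)
# travel times; fuzzy legs are defuzzified by the triangle centroid and the
# stochastic part is estimated by Monte Carlo simulation.
import numpy as np

rng = np.random.default_rng(0)

def centroid(a, b, c):                   # triangular fuzzy number (a, b, c)
    return (a + b + c) / 3.0

def simulate_route_cost(n_runs=5000):
    det = 12.0                                         # deterministic leg
    fuzzy = centroid(8.0, 10.0, 15.0)                  # fuzzy leg
    stoch = rng.lognormal(mean=np.log(9.0), sigma=0.3, size=n_runs)
    return det + fuzzy + stoch                         # cost per simulation run

costs = simulate_route_cost()
print(round(costs.mean(), 2), round(np.percentile(costs, 95), 2))
```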

41 pages, 9086 KiB  
Article
Generator of Fuzzy Implications
by Athina Daniilidou, Avrilia Konguetsof, Georgios Souliotis and Basil Papadopoulos
Algorithms 2023, 16(12), 569; https://doi.org/10.3390/a16120569 - 15 Dec 2023
Cited by 1 | Viewed by 1387
Abstract
In this research paper, a generator of fuzzy implications based on theorems and axioms of fuzzy logic is derived, analyzed and applied. The family presented generates fuzzy implications according to the value of a selected parameter. The obtained fuzzy implications should satisfy a number of axioms, and the conditions for satisfying the maximum number of axioms are denoted. New theorems are stated and proven based on the rule that a fuzzy implication function that is strong leads to a fuzzy negation. In this work, the data were fuzzified for the application of the new formulae. The fuzzification of the data was undertaken using four kinds of membership degree functions. The new fuzzy functions were compared based on the results obtained after a number of repetitions. The proposed methodology presents a new family of fuzzy implications, and an algorithm is also shown that produces fuzzy implications, so that the optimal generator method can be selected according to the value of a free parameter. Full article
(This article belongs to the Special Issue Numerical Optimization and Algorithms)
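The paper's generator is not reproduced here; as a stand-in, the classic Yager implication family I(x, y) = y^x illustrates how a parametric formula can be screened numerically against typical fuzzy-implication axioms:

```python
# Stand-in example: the Yager implication I(x, y) = y**x (with the convention
# 0**0 = 1), screened on a grid against two boundary axioms and monotonicity.
import numpy as np

def yager(x, y):
    return np.where((x == 0) & (y == 0), 1.0, y ** x)

g = np.linspace(0, 1, 101)
X, Y = np.meshgrid(g, g, indexing="ij")
I = yager(X, Y)

print(np.allclose(I[0, :], 1.0))        # boundary axiom I(0, y) = 1
print(np.allclose(I[-1, :], g))         # neutrality axiom I(1, y) = y
print(bool((np.diff(I, axis=0) <= 1e-12).all()))  # antitone in the first argument
```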

18 pages, 1063 KiB  
Article
Vision-Based Concrete-Crack Detection on Railway Sleepers Using Dense U-Net Model
by Md. Al-Masrur Khan, Seong-Hoon Kee and Abdullah-Al Nahid
Algorithms 2023, 16(12), 568; https://doi.org/10.3390/a16120568 - 15 Dec 2023
Viewed by 1439
Abstract
Crack inspection in railway sleepers is crucial for ensuring rail safety and avoiding deadly accidents. Traditional methods for detecting cracks on railway sleepers are very time-consuming and lack efficiency. Therefore, nowadays, researchers are paying attention to vision-based algorithms, especially Deep Learning algorithms. In this work, we adopted the U-net for the first time for detecting cracks on a railway sleeper and proposed a modified U-net architecture named Dense U-net for segmenting the cracks. In the Dense U-net structure, we established several short connections between the encoder and decoder blocks, which enabled the architecture to obtain better pixel information flow. Thus, the model extracted the necessary information in more detail to predict the cracks. We collected images from railway sleepers, compiled them into a dataset, and finally trained the model with the images. The model achieved an overall F1-score, precision, recall, and IoU of 86.5%, 88.53%, 84.63%, and 76.31%, respectively. We compared our suggested model with the original U-net, and the results demonstrate that our model performed better than the U-net in both quantitative and qualitative results. Moreover, we considered the necessity of crack severity analysis and measured a few parameters of the cracks. Engineers must know the severity of the cracks to identify the most severe locations and take the necessary steps to repair the badly affected sleepers. Full article
(This article belongs to the Topic Lightweight Deep Neural Networks for Video Analytics)

16 pages, 3171 KiB  
Article
Deep Learning-Based Visual Complexity Analysis of Electroencephalography Time-Frequency Images: Can It Localize the Epileptogenic Zone in the Brain?
by Navaneethakrishna Makaram, Sarvagya Gupta, Matthew Pesce, Jeffrey Bolton, Scellig Stone, Daniel Haehn, Marc Pomplun, Christos Papadelis, Phillip Pearl, Alexander Rotenberg, Patricia Ellen Grant and Eleonora Tamilia
Algorithms 2023, 16(12), 567; https://doi.org/10.3390/a16120567 - 15 Dec 2023
Viewed by 1615
Abstract
In drug-resistant epilepsy, a visual inspection of intracranial electroencephalography (iEEG) signals is often needed to localize the epileptogenic zone (EZ) and guide neurosurgery. The visual assessment of iEEG time-frequency (TF) images is an alternative to signal inspection, but subtle variations may escape the human eye. Here, we propose a deep learning-based metric of visual complexity to interpret TF images extracted from iEEG data and aim to assess its ability to identify the EZ in the brain. We analyzed interictal iEEG data from 1928 contacts recorded from 20 children with drug-resistant epilepsy who became seizure-free after neurosurgery. We localized each iEEG contact in the MRI, created TF images (1–70 Hz) for each contact, and used a pre-trained VGG16 network to measure their visual complexity by extracting unsupervised activation energy (UAE) from 13 convolutional layers. We identified points of interest in the brain using the UAE values via patient- and layer-specific thresholds (based on extreme value distribution) and using a support vector machine classifier. Results show that contacts inside the seizure onset zone exhibit lower UAE than those outside, with larger differences in deep layers (L10, L12, and L13: p < 0.001). Furthermore, the points of interest identified using the support vector machine localized the EZ with 7 mm accuracy. In conclusion, we presented a pre-surgical computerized tool that facilitates EZ localization in the patient’s MRI without requiring long-term iEEG inspection. Full article
(This article belongs to the Special Issue Machine Learning Algorithms for Medical Image Processing)
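A hedged sketch of reading a per-layer activation statistic off VGG16 convolutional layers with forward hooks; the mean absolute activation used below is a simplified stand-in for the paper's UAE, and the random input replaces a real TF image (pretrained weights are omitted to keep the example self-contained):

```python
# Hedged sketch: a per-layer "activation energy" (mean absolute activation,
# a simplified stand-in for the paper's UAE) read off the 13 conv layers of
# VGG16 with forward hooks.  weights=None keeps the example self-contained;
# in practice the pretrained ImageNet weights would be loaded.
import torch
from torchvision.models import vgg16

model = vgg16(weights=None).eval()
energies = {}

def hook(name):
    def fn(_module, _inp, out):
        energies[name] = out.abs().mean().item()
    return fn

for idx, layer in enumerate(model.features):
    if isinstance(layer, torch.nn.Conv2d):
        layer.register_forward_hook(hook(f"conv_{idx}"))

tf_image = torch.rand(1, 3, 224, 224)    # stand-in time-frequency image
with torch.no_grad():
    model(tf_image)
print(energies)                          # one value per conv layer
```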

16 pages, 4407 KiB  
Article
Predicting Pedestrian Trajectories with Deep Adversarial Networks Considering Motion and Spatial Information
by Liming Lao, Dangkui Du and Pengzhan Chen
Algorithms 2023, 16(12), 566; https://doi.org/10.3390/a16120566 - 12 Dec 2023
Cited by 1 | Viewed by 1310
Abstract
This paper proposes a novel prediction model termed the social and spatial attentive generative adversarial network (SSA-GAN). The SSA-GAN framework utilizes a generative approach, where the generator employs social attention mechanisms to accurately model social interactions among pedestrians. Unlike previous methodologies, our model utilizes comprehensive motion features as query vectors, significantly enhancing predictive performance. Additionally, spatial attention is integrated to encapsulate the interactions between pedestrians and their spatial context through semantic spatial features. Moreover, we present a novel approach for generating simulated multi-trajectory datasets using the CARLA simulator. This method circumvents the limitations inherent in existing public datasets such as UCY and ETH, particularly when evaluating multi-trajectory metrics. Our experimental findings substantiate the efficacy of the proposed SSA-GAN model in capturing the nuances of pedestrian interactions and providing accurate multimodal trajectory predictions. Full article
(This article belongs to the Special Issue Mathematical Modelling in Engineering and Human Behaviour)

19 pages, 2441 KiB  
Article
Robustness of Single- and Dual-Energy Deep-Learning-Based Scatter Correction Models on Simulated and Real Chest X-rays
by Clara Freijo, Joaquin L. Herraiz, Fernando Arias-Valcayo, Paula Ibáñez, Gabriela Moreno, Amaia Villa-Abaunza and José Manuel Udías
Algorithms 2023, 16(12), 565; https://doi.org/10.3390/a16120565 - 12 Dec 2023
Viewed by 1306
Abstract
Chest X-rays (CXRs) represent the first tool globally employed to detect cardiopulmonary pathologies. These acquisitions are highly affected by scattered photons due to the large field of view required. Scatter in CXRs introduces background in the images, which reduces their contrast. We developed three deep-learning-based models to estimate and correct the scatter contribution to CXRs. We used a Monte Carlo (MC) ray-tracing model to simulate CXRs from human models obtained from CT scans using different configurations (depending on the availability of dual-energy acquisitions). The simulated CXRs contained the separated contributions of direct and scattered X-rays in the detector. These simulated datasets were then used as the reference for the supervised training of several neural networks (NNs). Three NN models (single and dual energy) were trained with the MultiResUNet architecture. The performance of the NN models was evaluated on CXRs obtained, with an MC code, from chest CT scans of patients affected by COVID-19. The results show that the NN models were able to estimate and correct the scatter contribution to CXRs with an error of <5%, being robust to variations in the simulation setup and improving contrast in soft tissue. The single-energy model was tested on real CXRs, providing robust estimations of the scatter-corrected CXRs. Full article
(This article belongs to the Special Issue Machine Learning Algorithms for Medical Image Processing)

16 pages, 2155 KiB  
Article
Deep Learning Based on EfficientNet for Multiorgan Segmentation of Thoracic Structures on a 0.35 T MR-Linac Radiation Therapy System
by Mohammed Chekroun, Youssef Mourchid, Igor Bessières and Alain Lalande
Algorithms 2023, 16(12), 564; https://doi.org/10.3390/a16120564 - 12 Dec 2023
Viewed by 1307
Abstract
The advent of the 0.35 T MR-Linac (MRIdian, ViewRay) system in radiation therapy allows precise tumor targeting for moving lesions. However, the lack of an automatic volume segmentation function in the MR-Linac’s treatment planning system poses a challenge. In this paper, we propose a deep-learning-based multiorgan segmentation approach for the thoracic region, using EfficientNet as the backbone for the network architecture. The objectives of this approach include the accurate segmentation of critical organs, such as the left and right lungs, the heart, the spinal cord, and the esophagus, which is essential for minimizing radiation toxicity during external radiation therapy. Our proposed approach, when evaluated on an internal dataset comprising 81 patients, demonstrated superior performance compared to other state-of-the-art methods. Specifically, the results for our approach with a 2.5D strategy were as follows: a Dice similarity coefficient (DSC) of 0.820 ± 0.041, an intersection over union (IoU) of 0.725 ± 0.052, and a 3D Hausdorff distance (HD) of 10.353 ± 4.974 mm. Notably, the 2.5D strategy surpassed the 2D strategy in all three metrics, exhibiting higher DSC and IoU values, as well as lower HD values. This improvement strongly suggests that our proposed approach with the 2.5D strategy may achieve more precise and accurate segmentations than the conventional 2D strategy. Our work has practical implications for improving treatment planning precision, aligning with the evolution of medical imaging and innovative strategies for multiorgan segmentation tasks. Full article
(This article belongs to the Special Issue Artificial Intelligence for Medical Imaging)
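For reference, the headline metric above, the Dice similarity coefficient, is DSC = 2|A ∩ B| / (|A| + |B|); a minimal implementation for binary masks:

```python
# Minimal Dice similarity coefficient (DSC) for binary masks.
import numpy as np

def dice(pred, target, eps=1e-7):
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

a = np.zeros((8, 8), dtype=int); a[2:6, 2:6] = 1
b = np.zeros((8, 8), dtype=int); b[3:7, 3:7] = 1
print(float(dice(a, b)))    # 9-pixel overlap: 2*9/(16+16) ≈ 0.56
```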

14 pages, 1085 KiB  
Article
On the Influence of Data Imbalance on Supervised Gaussian Mixture Models
by Luca Scrucca
Algorithms 2023, 16(12), 563; https://doi.org/10.3390/a16120563 - 11 Dec 2023
Viewed by 1497
Abstract
Imbalanced data present a pervasive challenge in many real-world applications of statistical and machine learning, where the instances of one class significantly outnumber those of the other. This paper examines the impact of class imbalance on the performance of Gaussian mixture models in classification tasks and establishes the need for a strategy to reduce the adverse effects of imbalanced data on the accuracy and reliability of classification outcomes. We explore various strategies to address this problem, including cost-sensitive learning, threshold adjustments, and sampling-based techniques. Through extensive experiments on synthetic and real-world datasets, we evaluate the effectiveness of these methods. Our findings emphasize the need for effective mitigation strategies for class imbalance in supervised Gaussian mixtures, offering valuable insights for practitioners and researchers in improving classification outcomes. Full article
(This article belongs to the Special Issue Algorithms in Data Classification)
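A minimal sketch of one mitigation lever discussed above, prior/threshold adjustment: fit one Gaussian mixture per class, then score with balanced priors instead of the empirical (imbalanced) frequencies. Data and mixture sizes are illustrative:

```python
# Sketch of prior/threshold adjustment for a generative mixture classifier:
# one Gaussian mixture per class, posterior computed with chosen priors
# rather than the empirical (imbalanced) class frequencies.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
X0 = rng.normal(0.0, 1.0, size=(950, 2))           # majority class
X1 = rng.normal(2.5, 1.0, size=(50, 2))            # minority class

gm0 = GaussianMixture(n_components=2, random_state=0).fit(X0)
gm1 = GaussianMixture(n_components=2, random_state=0).fit(X1)

def predict(X, priors=(0.5, 0.5)):                 # balanced by default
    log_post = np.column_stack([
        gm0.score_samples(X) + np.log(priors[0]),
        gm1.score_samples(X) + np.log(priors[1]),
    ])
    return log_post.argmax(axis=1)

X_test = np.vstack([X0[:20], X1[:20]])
# Empirical priors (0.95/0.05) vs balanced priors: count of minority predictions.
print(predict(X_test, priors=(0.95, 0.05)).sum(), predict(X_test).sum())
```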

25 pages, 9322 KiB  
Article
Blood Cell Revolution: Unveiling 11 Distinct Types with ‘Naturalize’ Augmentation
by Mohamad Abou Ali, Fadi Dornaika and Ignacio Arganda-Carreras
Algorithms 2023, 16(12), 562; https://doi.org/10.3390/a16120562 - 10 Dec 2023
Cited by 1 | Viewed by 1768
Abstract
Artificial intelligence (AI) has emerged as a cutting-edge tool, simultaneously accelerating, securing, and enhancing the diagnosis and treatment of patients. An exemplification of this capability is evident in the analysis of peripheral blood smears (PBS). In university medical centers, hematologists routinely examine hundreds of PBS slides daily to validate or correct outcomes produced by advanced hematology analyzers assessing samples from potentially problematic patients. This process may logically lead to erroneous PBC readings, posing risks to patient health. AI functions as a transformative tool, significantly improving the accuracy and precision of readings and diagnoses. This study reshapes the parameters of blood cell classification, harnessing the capabilities of AI and broadening the scope from 5 to 11 specific blood cell categories with the challenging 11-class PBC dataset. This transformation facilitates a more profound exploration of blood cell diversity, surpassing prior constraints in medical image analysis. Our approach combines state-of-the-art deep learning techniques, including pre-trained ConvNets, ViTb16 models, and custom CNN architectures. We employ transfer learning, fine-tuning, and ensemble strategies, such as CBAM and Averaging ensembles, to achieve unprecedented accuracy and interpretability. Our fully fine-tuned EfficientNetV2 B0 model sets a new standard, with a macro-average precision, recall, and F1-score of 91%, 90%, and 90%, respectively, and an average accuracy of 93%. This breakthrough underscores the transformative potential of 11-class blood cell classification for more precise medical diagnoses. Moreover, our groundbreaking “Naturalize” augmentation technique produces remarkable results. The 2K-PBC dataset generated with “Naturalize” boasts a macro-average precision, recall, and F1-score of 97%, along with an average accuracy of 96% when leveraging the fully fine-tuned EfficientNetV2 B0 model. This innovation not only elevates classification performance but also addresses data scarcity and bias in medical deep learning. Our research marks a paradigm shift in blood cell classification, enabling more nuanced and insightful medical analyses. The “Naturalize” technique’s impact extends beyond blood cell classification, emphasizing the vital role of diverse and comprehensive datasets in advancing healthcare applications through deep learning. Full article
(This article belongs to the Special Issue Algorithms in Data Classification)

21 pages, 516 KiB  
Article
Time-Dependent Unavailability Exploration of Interconnected Urban Power Grid and Communication Network
by Matej Vrtal, Radek Fujdiak, Jan Benedikt, Pavel Praks, Radim Bris, Michal Ptacek and Petr Toman
Algorithms 2023, 16(12), 561; https://doi.org/10.3390/a16120561 - 10 Dec 2023
Viewed by 1445
Abstract
This paper presents a time-dependent reliability analysis created for a critical energy infrastructure use case, which consists of an interconnected urban power grid and a communication network. By utilizing expert knowledge from the energy and communication sectors and integrating the renewal theory of multi-component systems, a representative reliability model of this interconnected energy infrastructure, based on a real network located in the Czech Republic, is established. This model assumes repairable and non-repairable components and captures the topology of the interconnected infrastructure and the reliability characteristics of both the power grid and the communication network. Moreover, a time-dependent reliability assessment of the interconnected system is provided. One of the significant outputs of this research is the identification of the critical components of the interconnected network and their interdependencies via a directed acyclic graph. Numerical results indicate that the original design has an unacceptably large unavailability. Thus, to improve the reliability of the interconnected system, a slightly modified design is proposed, in which only a limited number of components are modified to keep the additional costs of the improved design limited. Consequently, numerical results indicate a reduction in the unavailability of the improved interconnected system in comparison with the initial design. The proposed unavailability exploration strategy is general and can bring valuable reliability improvements in the power and communication sectors. Full article
(This article belongs to the Special Issue Numerical Optimization and Algorithms)
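A back-of-the-envelope sketch of the renewal-theory building block: the steady-state unavailability of a repairable component is U = MTTR / (MTTF + MTTR), combined over the topology. All component figures below are hypothetical, not the studied Czech network:

```python
# Renewal-theory sketch: steady-state unavailability of a repairable component
# is U = MTTR / (MTTF + MTTR); a series chain fails if any element fails, a
# parallel (redundant) pair only if both do.  All figures are hypothetical.
def unavailability(mttf_h, mttr_h):
    return mttr_h / (mttf_h + mttr_h)

u_feeder = unavailability(8760.0, 4.0)       # power-grid feeder
u_switch = unavailability(17520.0, 2.0)      # communication switch
u_link = unavailability(4380.0, 1.0)         # fiber link, duplicated below

def series(*us):                             # U_series = 1 - prod(1 - u_i)
    avail = 1.0
    for u in us:
        avail *= 1.0 - u
    return 1.0 - avail

u_system = series(u_feeder, u_switch, u_link * u_link)  # redundant links
print(f"{u_system:.6f}")
```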

18 pages, 17597 KiB  
Article
Stereo 3D Object Detection Using a Feature Attention Module
by Kexin Zhao, Rui Jiang and Jun He
Algorithms 2023, 16(12), 560; https://doi.org/10.3390/a16120560 - 07 Dec 2023
Viewed by 1158
Abstract
Stereo 3D object detection remains a crucial challenge within the realm of 3D vision. In the pursuit of enhancing stereo 3D object detection, feature fusion has emerged as a potent strategy. However, the design of the feature fusion module and the determination of pivotal features in this fusion process remain critical. This paper proposes a novel feature attention module tailored for stereo 3D object detection. Serving as a pivotal element for feature fusion, this module not only discerns feature importance but also facilitates informed enhancements based on its conclusions. This study delved into the various facets aided by the feature attention module. First, an interpretability analysis was conducted concerning the function of the image segmentation methods. Second, we explored the augmentation of the feature fusion module through a category reweighting strategy. Lastly, we investigated global feature fusion methods and model compression strategies. The models devised through our proposed design underwent an effectiveness analysis, yielding commendable performance, especially in small object detection within the pedestrian category. Full article

27 pages, 920 KiB  
Article
On Finding Optimal (Dynamic) Arborescences
by Joaquim Espada, Alexandre P. Francisco, Tatiana Rocher, Luís M. S. Russo and Cátia Vaz
Algorithms 2023, 16(12), 559; https://doi.org/10.3390/a16120559 - 06 Dec 2023
Viewed by 1495
Abstract
Let G = (V, E) be a directed and weighted graph with a vertex set V of size n and an edge set E of size m such that each edge (u, v) ∈ E has a real-valued weight w(u, v). An arborescence in G is a subgraph T = (V, E′) such that, for a vertex u ∈ V, which is the root, there is a unique path in T from u to any other vertex v ∈ V. The weight of T is the sum of the weights of its edges. In this paper, given G, we are interested in finding an arborescence in G with a minimum weight, i.e., an optimal arborescence. Furthermore, when G is subject to changes, namely, edge insertions and deletions, we are interested in efficiently maintaining a dynamic arborescence in G. This is a well-known problem with applications in several domains such as network design optimization and phylogenetic inference. In this paper, we revisit the algorithmic ideas proposed by several authors for this problem. We provide detailed pseudocode, as well as implementation details, and we present experimental results regarding large scale-free networks and phylogenetic inference. Our implementation is publicly available. Full article
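For the static case, Edmonds' algorithm is available off the shelf in networkx; a minimal usage sketch (the dynamic edge-insertion/deletion maintenance studied in the paper is not covered by this call):

```python
# Static optimal (minimum-weight) arborescence via Edmonds' algorithm as
# shipped in networkx; graph and weights are illustrative.
import networkx as nx

G = nx.DiGraph()
G.add_weighted_edges_from([
    ("r", "a", 2.0), ("r", "b", 6.0), ("a", "b", 1.0),
    ("b", "c", 3.0), ("a", "c", 5.0), ("c", "a", 4.0),
])

T = nx.minimum_spanning_arborescence(G)
print(sorted(T.edges(data="weight")))
print(sum(w for _, _, w in T.edges(data="weight")))   # total weight
```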

13 pages, 336 KiB  
Communication
Construction of Two-Derivative Runge–Kutta Methods of Order Six
by Zacharoula Kalogiratou and Theodoros Monovasilis
Algorithms 2023, 16(12), 558; https://doi.org/10.3390/a16120558 - 06 Dec 2023
Viewed by 1288
Abstract
Two-Derivative Runge–Kutta methods have been proposed by Chan and Tsai in 2010 and order conditions up to the fifth order are given. In this work, for the first time, we derive order conditions for order six. Simplifying assumptions that reduce the number of order conditions are also given. The procedure for constructing sixth-order methods is presented. A specific method is derived in order to illustrate the procedure; this method is of the sixth algebraic order with a reduced phase-lag and amplification error. For numerical comparison, five well-known test problems have been solved using a seventh-order Two-Derivative Runge–Kutta method developed by Chan and Tsai and several Runge–Kutta methods of orders 6 and 8. Diagrams of the maximum absolute error vs. computation time show the efficiency of the new method. Full article
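To give the flavor of a two-derivative step: besides y' = f(y), the scheme also evaluates g(y) = f'(y)f(y). Below is the simplest one-stage, order-two member on a test equation; the sixth-order coefficients derived in the paper are not reproduced:

```python
# Flavor of a two-derivative Runge-Kutta step: besides y' = f(y), the scheme
# also evaluates the second derivative g(y) = f'(y) * f(y).  Shown is the
# simplest one-stage order-2 member, not the paper's sixth-order method.
import numpy as np

def f(y):                  # test problem y' = -y, exact solution exp(-t)
    return -y

def g(y):                  # g = (df/dy) * f = (-1) * (-y) = y
    return y

def tdrk2_step(y, h):
    return y + h * f(y) + 0.5 * h * h * g(y)

y, h = 1.0, 0.1
for _ in range(10):        # integrate to t = 1
    y = tdrk2_step(y, h)
print(y, np.exp(-1.0))     # numerical value vs exact value at t = 1
```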

19 pages, 983 KiB  
Article
An Efficient Closed-Form Formula for Evaluating r-Flip Moves in Quadratic Unconstrained Binary Optimization
by Bahram Alidaee, Haibo Wang and Lutfu S. Sua
Algorithms 2023, 16(12), 557; https://doi.org/10.3390/a16120557 - 05 Dec 2023
Viewed by 1313
Abstract
Quadratic unconstrained binary optimization (QUBO) is a classic NP-hard problem with an enormous number of applications. Local search strategy (LSS) is one of the most fundamental algorithmic concepts and has been successfully applied to a wide range of hard combinatorial optimization problems. One LSS that has gained the attention of researchers is the r-flip (also known as r-Opt) strategy. Given a binary solution with n variables, the r-flip strategy “flips” r binary variables to obtain a new solution if the changes improve the objective function. The main purpose of this paper is to develop several results for the implementation of r-flip moves in QUBO, including a necessary and sufficient condition that when a 1-flip search reaches local optimality, the number of candidates for implementation of the r-flip moves can be reduced significantly. The results of the substantial computational experiments are reported to compare an r-flip strategy-embedded algorithm and a multiple start tabu search algorithm on a set of benchmark instances and three very-large-scale QUBO instances. The r-flip strategy implemented within the algorithm makes the algorithm very efficient, leading to very high-quality solutions within a short CPU time. Full article
(This article belongs to the Special Issue Metaheuristics)
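A minimal sketch of the baseline this work generalizes: 1-flip local search for QUBO with an incrementally maintained gain vector, so each move is evaluated in O(1) and the vector repaired in O(n). The paper's closed-form r-flip evaluation extends this to r simultaneous flips:

```python
# Greedy 1-flip descent for min x^T Q x (x binary) with an O(n)-updatable
# gain vector; gain[i] is the objective change caused by flipping bit i:
# gain[i] = (1 - 2 x_i) * (Q_ii + sum_{j != i} (Q_ij + Q_ji) x_j).
import numpy as np

def one_flip_descent(Q, x):
    Q = np.asarray(Q, dtype=float)
    x = np.asarray(x, dtype=int).copy()
    S = Q + Q.T
    gain = (1 - 2 * x) * (np.diag(Q) + S @ x - np.diag(S) * x)
    while True:
        i = int(np.argmin(gain))
        if gain[i] >= 0:                  # 1-flip local optimum reached
            return x
        d = 1 - 2 * x[i]                  # +1 for a 0->1 flip, -1 for 1->0
        x[i] = 1 - x[i]
        old = gain[i]
        gain += (1 - 2 * x) * S[i] * d    # O(n) repair of all gains
        gain[i] = -old                    # flipping i back undoes the move

rng = np.random.default_rng(0)
Q = rng.normal(size=(30, 30))
x = one_flip_descent(Q, rng.integers(0, 2, size=30))
print(float(x @ Q @ x))
```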

17 pages, 3311 KiB  
Article
A Novel Deep Learning Segmentation and Classification Framework for Leukemia Diagnosis
by A. Khuzaim Alzahrani, Ahmed A. Alsheikhy, Tawfeeq Shawly, Ahmed Azzahrani and Yahia Said
Algorithms 2023, 16(12), 556; https://doi.org/10.3390/a16120556 - 05 Dec 2023
Viewed by 1532
Abstract
Blood cancer occurs due to changes in white blood cells (WBCs). These changes are known as leukemia. Leukemia occurs mostly in children and affects their tissues or plasma. However, it can also occur in adults. This disease becomes fatal and causes death if it is discovered and diagnosed late. In addition, leukemia can arise from genetic mutations. Therefore, there is a need to detect it early to save a patient’s life. Recently, researchers have developed various methods to detect leukemia using different technologies. Deep learning approaches (DLAs) have been widely utilized because of their high accuracy. However, some of these methods are time-consuming and costly. Thus, a practical solution with low cost and higher accuracy is required. This article proposes a novel segmentation and classification framework to discover and categorize leukemia using a deep learning structure. The proposed system encompasses two main parts: a deep learning technology that performs segmentation and feature extraction, and a classification stage that operates on the segmented sections. A new UNET architecture is developed to provide the segmentation and feature extraction processes. Various experiments were performed on four datasets to evaluate the model using numerous performance factors, including precision, recall, F-score, and the Dice Similarity Coefficient (DSC). It achieved an average accuracy of 97.82% for segmentation and categorization. In addition, 98.64% was achieved for the F-score. The obtained results indicate that the presented method is a powerful technique for discovering leukemia and categorizing it into suitable groups. Furthermore, the model outperforms some of the implemented methods. The proposed system can assist healthcare providers in their services. Full article
(This article belongs to the Special Issue Artificial Intelligence Algorithms for Healthcare)

16 pages, 2083 KiB  
Article
Deep Error-Correcting Output Codes
by Li-Na Wang, Hongxu Wei, Yuchen Zheng, Junyu Dong and Guoqiang Zhong
Algorithms 2023, 16(12), 555; https://doi.org/10.3390/a16120555 - 04 Dec 2023
Viewed by 1240
Abstract
Ensemble learning, online learning and deep learning are very effective and versatile in a wide spectrum of problem domains, such as feature extraction, multi-class classification and retrieval. In this paper, combining the ideas of ensemble learning, online learning and deep learning, we propose a novel deep learning method called deep error-correcting output codes (DeepECOCs). DeepECOCs are composed of multiple layers of the ECOC module, which combines several incremental support vector machines (incremental SVMs) as base classifiers. In this novel deep architecture, each ECOC module can be considered as two successive layers of the network, while the incremental SVMs can be viewed as weighted links between two successive layers. In the pre-training procedure, supervisory information, i.e., class labels, can be used during the network initialization. The incremental SVMs make this procedure very efficient, especially for large-scale applications. We have conducted extensive experiments to compare DeepECOCs with traditional ECOC, feature learning and deep learning algorithms. The results demonstrate that DeepECOCs perform not only better than existing ECOC and feature learning algorithms, but also better than related deep learning ones in most cases. Full article
(This article belongs to the Topic Advances in Artificial Neural Networks)
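The shallow building block of DeepECOCs, error-correcting output codes with SVM base learners, is available in scikit-learn; the paper's contribution, stacking such modules with incremental SVMs, is not reproduced here:

```python
# ECOC with SVM base learners via scikit-learn, the shallow building block
# that DeepECOCs stack into a deep architecture.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.multiclass import OutputCodeClassifier
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

ecoc = OutputCodeClassifier(SVC(kernel="linear"), code_size=1.5, random_state=0)
ecoc.fit(X_tr, y_tr)
print(round(ecoc.score(X_te, y_te), 3))
```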

21 pages, 4889 KiB  
Article
A Case-Study Comparison of Machine Learning Approaches for Predicting Student’s Dropout from Multiple Online Educational Entities
by José Manuel Porras, Juan Alfonso Lara, Cristóbal Romero and Sebastián Ventura
Algorithms 2023, 16(12), 554; https://doi.org/10.3390/a16120554 - 03 Dec 2023
Viewed by 1640
Abstract
Predicting student dropout is a crucial task in online education. Traditionally, each educational entity (institution, university, faculty, department, etc.) creates and uses its own prediction model starting from its own data. However, that approach is not always feasible or advisable and may depend on the availability of data, local infrastructure, and resources. In those cases, there are various machine learning approaches for sharing data and/or models between educational entities, using a classical centralized machine learning approach or other more advanced approaches such as transfer learning or federated learning. In this paper, we used data from three different LMS Moodle servers representing homogeneous different-sized educational entities. We tested the performance of the different machine learning approaches for the problem of predicting student dropout with multiple educational entities involved. We used a deep learning algorithm as a predictive classifier method. Our preliminary findings provide useful information on the benefits and drawbacks of each approach, as well as suggestions for enhancing performance when there are multiple institutions. In our case, repurposed transfer learning, stacked transfer learning, and centralized approaches produced similar or better results than the locally trained models for most of the entities. Full article
(This article belongs to the Special Issue Algorithms in Data Classification)

16 pages, 8874 KiB  
Article
Automatic Segmentation of Histological Images of Mouse Brains
by Juan Cisneros, Alain Lalande, Binnaz Yalcin, Fabrice Meriaudeau and Stephan Collins
Algorithms 2023, 16(12), 553; https://doi.org/10.3390/a16120553 - 01 Dec 2023
Viewed by 1387
Abstract
Using a high-throughput neuroanatomical screen of histological brain sections developed in collaboration with the International Mouse Phenotyping Consortium, we previously reported a list of 198 genes whose inactivation leads to neuroanatomical phenotypes. To achieve this milestone, tens of thousands of hours of manual image segmentation were necessary. The present work involved developing a full pipeline to automate the application of deep learning methods for the automated segmentation of 24 anatomical regions used in the aforementioned screen. The dataset includes 2000 annotated parasagittal slides (24,000 × 14,000 pixels). Our approach consists of three main parts: the conversion of images (.ROI to .PNG), the training of the deep learning approach on compressed images (512 × 256 and 2048 × 1024 pixels) to extract the regions of interest using either the U-Net or Attention U-Net architectures, and finally the transformation of the identified regions (.PNG to .ROI), enabling visualization and editing within the Fiji/ImageJ 1.54 software environment. With an image resolution of 2048 × 1024, the Attention U-Net provided the best results, with an overall Dice Similarity Coefficient (DSC) of 0.90 ± 0.01 for all 24 regions. Using one command line, the end-user is now able to pre-analyze images automatically and then run the existing analytical pipeline, made of ImageJ macros, to validate the automatically generated regions of interest. Even for regions with low DSC, expert neuroanatomists rarely correct the results. We estimate a time savings of 6 to 10 times. Full article
(This article belongs to the Special Issue Artificial Intelligence for Medical Imaging)

15 pages, 8312 KiB  
Article
A Lightweight Graph Neural Network Algorithm for Action Recognition Based on Self-Distillation
by Miao Feng and Jean Meunier
Algorithms 2023, 16(12), 552; https://doi.org/10.3390/a16120552 - 01 Dec 2023
Viewed by 1402
Abstract
Recognizing human actions can help in numerous ways, such as health monitoring, intelligent surveillance, virtual reality and human–computer interaction. A quick and accurate detection algorithm is required for daily real-time detection. This paper first proposes to generate a lightweight graph neural network by self-distillation for human action recognition tasks. The lightweight graph neural network was evaluated on the NTU-RGB+D dataset. The results demonstrate that, with competitive accuracy, the heavyweight graph neural network can be compressed by up to 80%. Furthermore, the learned representations have denser clusters, estimated by the Davies–Bouldin index, the Dunn index and silhouette coefficients. The ideal input data and algorithm capacity are also discussed. Full article
(This article belongs to the Special Issue Recent Advances in Algorithms for Computer Vision Applications)
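Independent of the GNN details, the core of (self-)distillation is a temperature-softened KL term pulling student logits toward the teacher's, added to the usual cross-entropy; a minimal sketch (the 60-class shape mirrors NTU-RGB+D, and all tensors are random stand-ins):

```python
# Core loss used in (self-)distillation: cross-entropy on the labels plus a
# temperature-softened KL term pulling student logits toward the teacher's
# (in self-distillation, the "teacher" is the network's own deeper branch).
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    hard = F.cross_entropy(student_logits, labels)
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)                          # gradient scale correction
    return alpha * soft + (1.0 - alpha) * hard

student = torch.randn(8, 60, requires_grad=True)   # 60 NTU-RGB+D classes
teacher = torch.randn(8, 60)
labels = torch.randint(0, 60, (8,))
loss = distillation_loss(student, teacher, labels)
loss.backward()
print(float(loss))
```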

19 pages, 1644 KiB  
Article
An Algorithm for Coloring of Picture Fuzzy Graphs Based on Strong and Weak Adjacencies, and Its Application
by Isnaini Rosyida and Christiana Rini Indrati
Algorithms 2023, 16(12), 551; https://doi.org/10.3390/a16120551 - 30 Nov 2023
Cited by 1 | Viewed by 1413
Abstract
The idea of strong and weak adjacencies between vertices has been generalized into fuzzy graphs and intuitionistic fuzzy graphs (IFGs), and it is an important part of making decisions. However, one or two membership degrees are not always sufficient for making decisions on real-world problems that need an answer of the types “yes, neutral, and no”. Consequently, in previous work, we generalized the concept into picture fuzzy graphs (PFGs), where each element in the PFG has membership, neutral, and non-membership degrees. Moreover, we constructed the notion of the coloring of PFGs based on strong and weak adjacencies between vertices. In this paper, we investigate some properties of the chromatic number of PFGs based on the concept of strong and weak adjacencies between vertices. According to these properties, we construct an algorithm to find the chromatic number of PFGs. The algorithm is useful when we work with large PFGs. Further, we improve the method to implement the PFG’s coloring for determining traffic signal phasing at an intersection. A case study has also been carried out to evaluate the method. Full article
(This article belongs to the Section Combinatorial Optimization, Graph, and Network Algorithms)
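A hedged reduction of the strong/weak-adjacency idea: each edge carries (membership, neutral, non-membership) degrees; below, an edge counts as strong when membership dominates non-membership, and only strong edges constrain a greedy coloring. The threshold rule and the traffic-movement data are illustrative, not the paper's definition:

```python
# Hedged sketch: reduce picture-fuzzy edges to crisp "strong" conflicts and
# greedily color them (traffic-phase style grouping).
import networkx as nx

edges = {                     # (mu, eta, nu) per conflicting traffic movement
    ("A", "B"): (0.7, 0.1, 0.2),
    ("B", "C"): (0.2, 0.2, 0.6),
    ("A", "C"): (0.6, 0.2, 0.2),
    ("C", "D"): (0.8, 0.1, 0.1),
}

strong = nx.Graph(
    [(u, v) for (u, v), (mu, eta, nu) in edges.items() if mu > nu]
)
coloring = nx.coloring.greedy_color(strong, strategy="largest_first")
print(coloring, "phases:", max(coloring.values()) + 1)
```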

17 pages, 3622 KiB  
Article
OrthoDETR: A Streamlined Transformer-Based Approach for Precision Detection of Orthopedic Medical Devices
by Xiaobo Zhang, Huashun Li, Jingzhao Li and Xuehai Zhou
Algorithms 2023, 16(12), 550; https://doi.org/10.3390/a16120550 - 29 Nov 2023
Viewed by 1163
Abstract
The rapid and accurate detection of orthopedic medical devices is pivotal in enhancing health care delivery, particularly by improving workflow efficiency. Despite advancements in medical imaging technology, current detection models often fail to meet the unique requirements of orthopedic device detection. To address this gap, we introduce OrthoDETR, a Transformer-based object detection model specifically designed and optimized for orthopedic medical devices. OrthoDETR is an evolution of the DETR (Detection Transformer) model, with several key modifications to better serve orthopedic applications. We replace the ResNet backbone with the MLP-Mixer, improve the multi-head self-attention mechanism, and refine the loss function for more accurate detections. In our comparative study, OrthoDETR outperformed other models, achieving an AP50 score of 0.897, an AP50:95 score of 0.864, an AR50:95 score of 0.895, and a frame per second (FPS) rate of 26. This represents a significant improvement over the DETR model, which achieved an AP50 score of 0.852, an AP50:95 score of 0.842, an AR50:95 score of 0.862, and an FPS rate of 20. OrthoDETR not only accelerates the detection process but also maintains an acceptable performance trade-off. The real-world impact of this model is substantial. By facilitating the precise and quick detection of orthopedic devices, OrthoDETR can potentially revolutionize the management of orthopedic workflows, improving patient care, and enhancing the efficiency of healthcare systems. This paper underlines the significance of specialized object detection models in orthopedics and sets the stage for further research in this direction. Full article

22 pages, 4259 KiB  
Article
Predicting the Impact of Data Poisoning Attacks in Blockchain-Enabled Supply Chain Networks
by Usman Javed Butt, Osama Hussien, Krison Hasanaj, Khaled Shaalan, Bilal Hassan and Haider al-Khateeb
Algorithms 2023, 16(12), 549; https://doi.org/10.3390/a16120549 - 29 Nov 2023
Viewed by 1492
Abstract
As computer networks become increasingly important in various domains, the need for secure and reliable networks becomes more pressing, particularly in the context of blockchain-enabled supply chain networks. One way to ensure network security is by using intrusion detection systems (IDSs), which are specialised devices that detect anomalies and attacks in the network. However, these systems are vulnerable to data poisoning attacks, such as label and distance-based flipping, which can undermine their effectiveness within blockchain-enabled supply chain networks. In this research paper, we investigate the effect of these attacks on a network intrusion detection system using several machine learning models, including logistic regression, random forest, SVC, and the XGB Classifier, and evaluate each model via its F1 score, confusion matrix, and accuracy. We run each model three times: once without any attack, once with random label flipping with a randomness of 20%, and once with distance-based label flipping attacks with a distance threshold of 0.5. Additionally, this research tests an eight-layer neural network using accuracy metrics and a classification report library. The primary goal of this research is to provide insights into the effect of data poisoning attacks on machine learning models within the context of blockchain-enabled supply chain networks. By doing so, we aim to contribute to developing more robust intrusion detection systems tailored to the specific challenges of securing blockchain-based supply chain networks. Full article
(This article belongs to the Special Issue Deep Learning Techniques for Computer Security Problems)
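A minimal reproduction of the flavor of the poisoning experiment above: train the same classifier on clean labels and on labels with 20% randomly flipped, then compare accuracy. The synthetic dataset stands in for the intrusion-detection data:

```python
# Random label-flipping poisoning at 20%: compare test accuracy of the same
# classifier trained on clean vs poisoned labels.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=4000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

rng = np.random.default_rng(0)
flip = rng.random(y_tr.shape) < 0.20           # random label flipping, 20%
y_poisoned = np.where(flip, 1 - y_tr, y_tr)

for name, labels in [("clean", y_tr), ("poisoned", y_poisoned)]:
    acc = LogisticRegression(max_iter=1000).fit(X_tr, labels).score(X_te, y_te)
    print(name, round(acc, 3))
```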

30 pages, 1159 KiB  
Article
An Efficient Optimized DenseNet Model for Aspect-Based Multi-Label Classification
by Nasir Ayub, Tayyaba, Saddam Hussain, Syed Sajid Ullah and Jawaid Iqbal
Algorithms 2023, 16(12), 548; https://doi.org/10.3390/a16120548 - 28 Nov 2023
Viewed by 1285
Abstract
Sentiment analysis holds great importance within the domain of natural language processing, as it examines both the expressed and underlying emotions conveyed through review content. Furthermore, researchers have discovered that relying solely on the overall sentiment derived from the textual content is inadequate. Consequently, aspect-based sentiment analysis was developed to extract nuanced expressions from textual information. One of the challenges in this field is effectively extracting emotional elements using multi-label data that covers various aspects. This article presents a novel approach called the Ensemble of DenseNet based on Aquila Optimizer (EDAO). EDAO is specifically designed to enhance the precision and diversity of multi-label learners. Unlike traditional multi-label methods, EDAO strongly emphasizes improving model diversity and accuracy in multi-label scenarios. To evaluate the effectiveness of our approach, we conducted experiments on eight distinct datasets covering emotions, hotels, movies, proteins, automobiles, medical topics, news, and birds. Our initial strategy involves establishing a preprocessing mechanism to obtain precise and refined data. Subsequently, we used the Vader tool with Bag of Words (BoW) for feature extraction. In the third stage, we created word associations using the word2vec method. The improved data were also used to train and test the DenseNet model, which was fine-tuned using the Aquila Optimizer (AO). On the news, emotion, auto, bird, movie, hotel, protein, and medical datasets, utilizing the aspect-based multi-labeling technique, we achieved accuracy rates of 95%, 97%, and 96% with DenseNet-AO. Our proposed model demonstrates that EDAO outperforms other standard methods across various multi-label datasets with different dimensions. The implemented strategy has been rigorously validated through experimental results, showcasing its effectiveness compared to existing benchmark approaches. Full article
(This article belongs to the Special Issue Machine Learning in Big Data Modeling)
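As an informal illustration of the second-stage feature extraction described above, VADER polarity scores can be concatenated with Bag-of-Words counts. This is a minimal sketch using the vaderSentiment and scikit-learn packages with toy review texts; it is not the authors' pipeline, and the exact feature combination is an assumption.

```python
# Minimal sketch of VADER + Bag-of-Words feature extraction
# (illustrative only; toy inputs, not the authors' pipeline).
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

reviews = ["The room was clean but the service was slow.",
           "Great acting and a gripping plot!"]

# Bag-of-Words term counts.
bow = CountVectorizer(max_features=1000).fit_transform(reviews).toarray()

# VADER polarity scores (neg, neu, pos, compound) per review.
analyzer = SentimentIntensityAnalyzer()
vader = np.array([list(analyzer.polarity_scores(r).values()) for r in reviews])

# Joint feature matrix fed to the downstream multi-label classifier.
features = np.hstack([bow, vader])
```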
36 pages, 2020 KiB  
Article
Optimizing Physics-Informed Neural Network in Dynamic System Simulation and Learning of Parameters
by Ebenezer O. Oluwasakin and Abdul Q. M. Khaliq
Algorithms 2023, 16(12), 547; https://doi.org/10.3390/a16120547 - 28 Nov 2023
Viewed by 2090
Abstract
Artificial neural networks have transformed many fields by giving scientists a powerful way to model complex phenomena, and they are becoming increasingly useful for solving a variety of difficult scientific problems. Nevertheless, the search continues for faster and more accurate ways to simulate dynamic systems. This research explores the transformative capabilities of physics-informed neural networks, a specialized subset of artificial neural networks, in modeling complex dynamical systems with enhanced speed and accuracy. These networks incorporate known physical laws into the learning process, ensuring that predictions remain consistent with fundamental principles, which is crucial when dealing with scientific phenomena. This study focuses on optimizing the application of this specialized network for simultaneous system dynamics simulation and learning of time-varying parameters, particularly when the number of unknowns in the system matches the number of undetermined parameters. Additionally, we explore scenarios with a mismatch between parameters and equations, optimizing the network architecture to enhance convergence speed, computational efficiency, and accuracy in learning the time-varying parameters. Our approach enhances the algorithm's performance and accuracy, ensuring optimal use of computational resources and yielding more precise results. Extensive experiments are conducted on four different dynamical systems: first-order irreversible chain reactions, biomass transfer, the Brusselator model, and the Lotka-Volterra model, using synthetically generated data to validate our approach. Additionally, we apply our method to the susceptible-infected-recovered model, utilizing real-world COVID-19 data to learn the time-varying parameters of the pandemic's spread. A comprehensive comparison between our approach and fully connected deep neural networks is presented, evaluating both accuracy and computational efficiency in parameter identification and system dynamics capture. The results demonstrate that physics-informed neural networks outperform fully connected deep neural networks, especially with increased network depth, making them ideal for real-time complex system modeling. This underscores the effectiveness of physics-informed neural networks in scientific modeling with balanced unknowns and parameters, and it provides a fast, accurate, and efficient alternative for analyzing dynamic systems. Full article
(This article belongs to the Special Issue Algorithms for Natural Computing Models)
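The core idea, learning system states and a time-varying parameter simultaneously, can be sketched for a toy first-order reaction du/dt = -k(t)u. The PyTorch snippet below is an illustrative reduction of the general approach (network sizes and the test equation are assumptions), not the paper's architecture.

```python
# Illustrative physics-informed loss for du/dt = -k(t) * u,
# jointly learning the state u(t) and a time-varying rate k(t).
import torch
import torch.nn as nn

net_u = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))  # state u(t)
net_k = nn.Sequential(nn.Linear(1, 16), nn.Tanh(), nn.Linear(16, 1))  # parameter k(t)

def pinn_loss(t_data, u_data, t_coll):
    # Data term: match the (synthetic or measured) observations.
    data_loss = ((net_u(t_data) - u_data) ** 2).mean()
    # Physics term: ODE residual at collocation points via autograd.
    t = t_coll.requires_grad_(True)
    u = net_u(t)
    du_dt = torch.autograd.grad(u, t, torch.ones_like(u), create_graph=True)[0]
    residual = du_dt + net_k(t) * u        # enforce du/dt + k(t) u = 0
    return data_loss + (residual ** 2).mean()

opt = torch.optim.Adam([*net_u.parameters(), *net_k.parameters()], lr=1e-3)
```

The physics term is what distinguishes this from a fully connected network trained on data alone: the residual penalty keeps predictions consistent with the governing equation even where observations are sparse.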
26 pages, 8044 KiB  
Article
Wind Turbine Predictive Fault Diagnostics Based on a Novel Long Short-Term Memory Model
by Shuo Zhang, Emma Robinson and Malabika Basu
Algorithms 2023, 16(12), 546; https://doi.org/10.3390/a16120546 - 28 Nov 2023
Viewed by 1475
Abstract
The operation and maintenance (O&M) of offshore wind turbines (WTs) is particularly challenging because of the harsh operational environment and limited accessibility. As sudden component failures within WTs bring about prolonged downtimes and significant revenue losses, condition monitoring and predictive fault diagnostic approaches must be developed to detect faults before they occur, thus preventing extended downtimes and costly unplanned maintenance. Based primarily on supervisory control and data acquisition (SCADA) data, thirty-three important features are extracted from operational data, and eight specific faults are categorised for fault prediction from status information. By incorporating Time2Vec (T2V), a model-agnostic vector representation of time, into Long Short-Term Memory (LSTM), this paper develops a novel deep-learning neural network model, T2V-LSTM, which conducts multi-level fault predictions. The classification steps allow fault diagnosis from 10 to 210 min prior to faults. The results show that T2V-LSTM can successfully predict over 84.97% of faults and outperforms LSTM and other counterparts in both overall and individual fault predictions, owing to its highest recall scores in most of the multistep-ahead cases examined. Thus, the proposed T2V-LSTM can correctly diagnose more faults and improves on the predictive performance of vanilla LSTM in terms of accuracy, recall, and F-scores. Full article
(This article belongs to the Special Issue Artificial Intelligence for Fault Detection and Diagnosis)
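To illustrate the model family, the sketch below combines a Time2Vec embedding (one linear component plus sinusoidal ones) with an LSTM classification head over the eight fault classes. Layer sizes and the exact way the embedding is concatenated with SCADA features are illustrative assumptions, not the paper's configuration.

```python
# Illustrative T2V-LSTM sketch -- not the authors' model.
import torch
import torch.nn as nn

class Time2Vec(nn.Module):
    """Time2Vec: an affine component plus periodic (sine) components."""
    def __init__(self, out_dim):
        super().__init__()
        self.w = nn.Parameter(torch.randn(1, out_dim))
        self.b = nn.Parameter(torch.randn(out_dim))

    def forward(self, t):                 # t: (batch, seq_len, 1)
        v = t * self.w + self.b
        return torch.cat([v[..., :1], torch.sin(v[..., 1:])], dim=-1)

class T2VLSTM(nn.Module):
    def __init__(self, n_features, t2v_dim=8, hidden=64, n_faults=8):
        super().__init__()
        self.t2v = Time2Vec(t2v_dim)
        self.lstm = nn.LSTM(n_features + t2v_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_faults)

    def forward(self, x, t):              # x: SCADA features, t: timestamps
        z = torch.cat([x, self.t2v(t)], dim=-1)
        out, _ = self.lstm(z)
        return self.head(out[:, -1])      # fault logits at the horizon

model = T2VLSTM(n_features=33)            # thirty-three SCADA features
```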
15 pages, 758 KiB  
Article
Measuring the Performance of Ant Colony Optimization Algorithms for the Dynamic Traveling Salesman Problem
by Michalis Mavrovouniotis, Maria N. Anastasiadou and Diofantos Hadjimitsis
Algorithms 2023, 16(12), 545; https://doi.org/10.3390/a16120545 - 28 Nov 2023
Cited by 1 | Viewed by 1435
Abstract
Ant colony optimization (ACO) has proven its adaptation capabilities on optimization problems with dynamic environments. In this work, the dynamic traveling salesman problem (DTSP) is used as the base problem to generate dynamic test cases. Two types of dynamic changes are considered for the DTSP: (1) node changes and (2) weight changes. In the experiments, ACO algorithms are systematically compared across different DTSP test cases. Statistical tests are performed using the arithmetic mean and standard deviation of the results, which is the standard method of comparing ACO algorithms. To complement these comparisons, quantiles of the result distribution are also used to measure the peak-, average-, and worst-case performance of the algorithms. The experimental results demonstrate advantages of using quantiles for evaluating the performance of ACO algorithms in some DTSP test cases. Full article
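As a sketch of the quantile-based evaluation, the snippet below reports mean, standard deviation, and three quantiles of tour costs over repeated runs. The costs are synthetic and the algorithm names (MMAS, ACS) are stand-ins for whichever ACO variants are compared; this is an illustration, not the paper's experimental code.

```python
# Quantile-based comparison of ACO run results (synthetic data).
import numpy as np

results = {"MMAS": np.random.default_rng(1).normal(1050, 30, 50),
           "ACS":  np.random.default_rng(2).normal(1070, 45, 50)}

for name, costs in results.items():
    mean, std = costs.mean(), costs.std(ddof=1)
    q10, q50, q90 = np.quantile(costs, [0.10, 0.50, 0.90])
    # For minimization: q10 ~ peak case, q50 ~ average case, q90 ~ worst case.
    print(f"{name}: mean={mean:.1f} std={std:.1f} "
          f"q10={q10:.1f} median={q50:.1f} q90={q90:.1f}")
```

Unlike the mean and standard deviation alone, the quantiles expose asymmetry in the result distribution, which is what makes peak- and worst-case behaviour visible.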