Algorithms, Volume 17, Issue 2 (February 2024) – 39 articles

Cover Story: This paper introduces an agent-based model grounded in the ant colony optimization (ACO) algorithm, exploring how partitioning an ant colony affects algorithmic performance. It examines the roles of group size and group count within a multi-objective optimization context. The model features memory-enhanced ants navigating a weighted network to find the optimal path to an exit point, maximizing the number of exiting ants while minimizing path costs. The analyses cover overall colony performance across varied group sizes, the performance of the partitioned groups, and the influence of pheromone distribution on navigation. The findings highlight a dynamic correlation between colony partitioning and solution quality within the ACO algorithm framework.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and open it with the free Adobe Reader.
33 pages, 8824 KiB  
Article
An Adaptive Linear Programming Algorithm with Parameter Learning
by Lin Guo, Anand Balu Nellippallil, Warren F. Smith, Janet K. Allen and Farrokh Mistree
Algorithms 2024, 17(2), 88; https://doi.org/10.3390/a17020088 - 19 Feb 2024
Viewed by 1153
Abstract
When dealing with engineering design problems, designers often encounter nonlinear and nonconvex features, multiple objectives, coupled decision making, and various levels of fidelity of sub-systems. To realize the design with limited computational resources, problems with the features above need to be linearized and then solved using solution algorithms for linear programming. The adaptive linear programming (ALP) algorithm is an extension of the Sequential Linear Programming algorithm where a nonlinear compromise decision support problem (cDSP) is iteratively linearized, and the resulting linear programming problem is solved with satisficing solutions returned. The reduced move coefficient (RMC) is used to define how far away from the boundary the next linearization is to be performed, and currently, it is determined based on a heuristic. The choice of RMC significantly affects the efficacy of the linearization process and, hence, the rapidity of finding the solution. In this paper, we propose a rule-based parameter-learning procedure to vary the RMC at each iteration, thereby significantly increasing the speed of determining the ultimate solution. To demonstrate the efficacy of the ALP algorithm with parameter learning (ALPPL), we use an industry-inspired problem, namely, the integrated design of a hot-rolling process chain for the production of a steel rod. Using the proposed ALPPL, we can incorporate domain expertise to identify the most relevant criteria to evaluate the performance of the linearization algorithm, quantify the criteria as evaluation indices, and tune the RMC to return the solutions that fall into the most desired range of each evaluation index. Compared with the old ALP algorithm using the golden section search to update the RMC, the ALPPL improves the algorithm by identifying the RMC values with better linearization performance without adding computational complexity. 
The insensitive region of the RMC is better explored using the ALPPL: the ALP explores the insensitive region only twice, whereas the ALPPL explores it four times throughout the iterations. With the ALPPL, we have a more comprehensive definition of linearization performance: given multiple design scenarios, we use evaluation indices (EIs) including the statistics of deviations, the numbers of binding (active) constraints and bounds, the number of accumulated linear constraints, and the number of iterations. The desired range of each evaluation index (DEI) is also learned during the iterations. The RMC value that brings the most EIs into their DEIs is returned as the best RMC, which ensures a balance between the accuracy of the linearization and the robustness of the solutions. For our test problem, the hot-rolling process chain, the ALP returns the best RMC in twelve iterations considering only the deviation as the linearization performance index, whereas the ALPPL returns the best RMC in fourteen iterations considering multiple EIs. The complexity of both the ALP and the ALPPL is O(n²). The parameter-learning steps can be customized to improve the parameter determination of other algorithms. Full article
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
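The rule-based parameter-learning idea summarized in the abstract above can be sketched as follows. This is a hedged illustration only, not the authors' implementation: the evaluation indices, desired ranges, and the toy `fake_evaluate` stand-in are all hypothetical, but the core rule (score each candidate RMC by how many EIs fall into their desired ranges and keep the best scorer) follows the abstract's description.

```python
# Hypothetical sketch of rule-based RMC selection: evaluate candidate reduced
# move coefficients (RMCs), score each by how many evaluation indices (EIs)
# fall inside their desired ranges (DEIs), and return the best-scoring RMC.

def score_rmc(eis, deis):
    """Count how many evaluation indices fall inside their desired ranges."""
    return sum(lo <= v <= hi for v, (lo, hi) in zip(eis, deis))

def select_rmc(candidates, evaluate, deis):
    """Pick the candidate RMC whose EIs best satisfy the desired ranges."""
    return max(candidates, key=lambda rmc: score_rmc(evaluate(rmc), deis))

# Toy stand-in for running the linearization at a given RMC and measuring
# EIs (here: a deviation and a count of binding constraints).
def fake_evaluate(rmc):
    return [abs(rmc - 0.5), 3 if 0.4 <= rmc <= 0.6 else 7]

deis = [(0.0, 0.05), (0, 5)]          # desired range for each EI
best = select_rmc([0.1, 0.3, 0.5, 0.7, 0.9], fake_evaluate, deis)
```

In the paper the EIs are accumulated over iterations and the desired ranges themselves are learned; this sketch only shows the final selection rule.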

25 pages, 731 KiB  
Article
Transfer Reinforcement Learning for Combinatorial Optimization Problems
by Gleice Kelly Barbosa Souza, Samara Oliveira Silva Santos, André Luiz Carvalho Ottoni, Marcos Santos Oliveira, Daniela Carine Ramires Oliveira and Erivelton Geraldo Nepomuceno
Algorithms 2024, 17(2), 87; https://doi.org/10.3390/a17020087 - 18 Feb 2024
Viewed by 1373
Abstract
Reinforcement learning is an important technique in various fields, particularly in automated machine learning for reinforcement learning (AutoRL). The integration of transfer learning (TL) with AutoRL in combinatorial optimization is an area that requires further research. This paper employs both AutoRL and TL to effectively tackle combinatorial optimization challenges, specifically the asymmetric traveling salesman problem (ATSP) and the sequential ordering problem (SOP). A statistical analysis was conducted to assess the impact of TL on the aforementioned problems. Furthermore, the Auto_TL_RL algorithm was introduced as a novel contribution, combining the AutoRL and TL methodologies. Empirical findings strongly support the effectiveness of this integration, resulting in solutions that were significantly more efficient than conventional techniques, with an 85.7% improvement in the preliminary analysis results. Additionally, the computational time was reduced in 13 instances (i.e., in 92.8% of the simulated problems). The TL-integrated model outperformed the optimal benchmarks, demonstrating its superior convergence. The Auto_TL_RL algorithm design allows for smooth transitions between the ATSP and SOP domains. In a comprehensive evaluation, Auto_TL_RL significantly outperformed traditional methodologies in 78% of the instances analyzed. Full article
(This article belongs to the Special Issue Advancements in Reinforcement Learning Algorithms)
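The transfer mechanism described above can be illustrated with a minimal tabular sketch. This is hedged: the paper's Auto_TL_RL also tunes hyperparameters via AutoRL, which is omitted here, and the states, actions, and rewards below are illustrative stand-ins. The transfer idea itself is simply that the Q-table learned on a source problem initializes the target problem's Q-table instead of zeros, after which ordinary Q-learning continues.

```python
# Minimal tabular Q-learning with transfer: the target task starts from a
# copy of the source task's Q-table rather than an all-zero table.

def q_update(Q, s, a, r, s2, actions, alpha=0.1, gamma=0.9):
    """One Q-learning step: Q(s,a) += alpha * (r + gamma*max_b Q(s2,b) - Q(s,a))."""
    best_next = max(Q.get((s2, b), 0.0) for b in actions)
    old = Q.get((s, a), 0.0)
    Q[(s, a)] = old + alpha * (r + gamma * best_next - old)

def transfer(Q_source):
    """Transfer learning here: reuse the source Q-table as the starting point."""
    return dict(Q_source)

# Source task already learned that action "go" is promising in state 0.
Q_src = {(0, "go"): 5.0}
Q_tgt = transfer(Q_src)
q_update(Q_tgt, 0, "go", r=1.0, s2=1, actions=["go", "stay"])
```

Starting from transferred values rather than zeros is what lets the target task (e.g. SOP after ATSP) converge in fewer episodes, which is the effect the reported speedups quantify.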

20 pages, 351 KiB  
Article
A Novel Higher-Order Numerical Scheme for System of Nonlinear Load Flow Equations
by Fiza Zafar, Alicia Cordero, Husna Maryam and Juan R. Torregrosa
Algorithms 2024, 17(2), 86; https://doi.org/10.3390/a17020086 - 18 Feb 2024
Viewed by 919
Abstract
Power flow problems can be solved in a variety of ways by using the Newton–Raphson approach. The nonlinear power flow equations depend upon the voltages Vi and phase angles δ. The Jacobian of an electrical power system is obtained by taking the partial derivatives of the load flow equations, which contain the active and reactive powers. In this paper, we present an efficient seventh-order iterative scheme, with only three steps in its formulation, to obtain the solutions of a nonlinear system of equations. We then illustrate the computational cost of different operations, such as matrix–matrix multiplication, matrix–vector multiplication, and LU decomposition, which is used to calculate the cost of our proposed method and compare it with the cost of existing seventh-order methods. Furthermore, we elucidate the applicability of our newly developed scheme to an electrical power system. Two-bus, three-bus, and four-bus power flow problems are then solved by using the load flow equations, demonstrating the applicability of the new scheme. Full article
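As a hedged point of reference for the two-bus case mentioned above (this is the baseline Newton–Raphson iteration, not the paper's seventh-order scheme), the simplest lossless two-bus load flow reduces to solving P = (V1·V2/X)·sin(δ) for the phase angle δ:

```python
import math

# Classical Newton-Raphson on the scalar two-bus load flow equation
# P = (V1*V2/X) * sin(delta); the paper's three-step seventh-order scheme
# replaces this single correction with a higher-order composite step.

def two_bus_angle(P, V1=1.0, V2=1.0, X=0.1, tol=1e-12, max_iter=50):
    k = V1 * V2 / X                      # maximum transferable power
    delta = 0.1                          # initial guess (radians)
    for _ in range(max_iter):
        f = k * math.sin(delta) - P      # power mismatch
        if abs(f) < tol:
            break
        delta -= f / (k * math.cos(delta))   # Newton correction
    return delta

delta = two_bus_angle(P=5.0)             # 5.0 p.u. transferred over X = 0.1 p.u.
```

Higher-order schemes like the one proposed reuse the same mismatch/Jacobian machinery but combine several evaluations per iteration to raise the convergence order.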

23 pages, 2640 KiB  
Article
Improving Academic Advising in Engineering Education with Machine Learning Using a Real-World Dataset
by Mfowabo Maphosa, Wesley Doorsamy and Babu Paul
Algorithms 2024, 17(2), 85; https://doi.org/10.3390/a17020085 - 18 Feb 2024
Viewed by 1120
Abstract
Academic advising has been conducted by faculty-student advisors, who often have many students to advise quickly, making the process ineffective. Selecting the wrong qualification increases the risk of dropping out, changing qualifications, or not finishing the enrolled qualification in the minimum time. This study harnesses a real-world dataset comprising student records across four engineering disciplines from the 2016 and 2017 academic years at a public South African university. The study examines the relative importance of features in models for predicting student performance and determining whether students are better suited for extended or mainstream programmes. The study employs a three-step methodology, encompassing data pre-processing, feature importance selection, and model training with evaluation, to predict student performance by addressing issues such as dataset imbalance, biases, and ethical considerations. By relying exclusively on high school performance data, predictions are based solely on students’ abilities, fostering fairness and minimising biases in predictive tasks. The results show that removing demographic features like ethnicity or nationality reduces bias. The study’s findings also highlight the significance of the following features: mathematics, physical sciences, and admission point scores when predicting student performance. The models are evaluated, demonstrating their ability to provide accurate predictions. The study’s results highlight varying performance among models and their key contributions, underscoring the potential to transform academic advising and enhance student decision-making. These models can be incorporated into the academic advising recommender system, thereby improving the quality of academic guidance. Full article
(This article belongs to the Special Issue Machine Learning Algorithms and Methods for Predictive Analytics)

22 pages, 26451 KiB  
Article
Mapping the Distribution of High-Value Broadleaf Tree Crowns through Unmanned Aerial Vehicle Image Analysis Using Deep Learning
by Nyo Me Htun, Toshiaki Owari, Satoshi Tsuyuki and Takuya Hiroshima
Algorithms 2024, 17(2), 84; https://doi.org/10.3390/a17020084 - 17 Feb 2024
Viewed by 1520
Abstract
High-value timber species with economic and ecological importance are usually distributed at very low densities, such that accurate knowledge of the location of these trees within a forest is critical for forest management practices. Recent technological developments integrating unmanned aerial vehicle (UAV) imagery and deep learning provide an efficient method for mapping forest attributes. In this study, we explored the applicability of high-resolution UAV imagery and a deep learning algorithm to predict the distribution of high-value deciduous broadleaf tree crowns of Japanese oak (Quercus crispula) in an uneven-aged mixed forest in Hokkaido, northern Japan. UAV images were collected in September and October 2022 before and after the color change of the leaves of Japanese oak to identify the optimal timing of UAV image collection. RGB information extracted from the UAV images was analyzed using a ResU-Net model (U-Net model with a Residual Network 101 (ResNet101), pre-trained on large ImageNet datasets, as backbone). Our results, confirmed using validation data, showed that reliable F1 scores (>0.80) could be obtained with both UAV datasets. According to the overlay analyses of the segmentation results and all the annotated ground truth data, the best performance was that of the model with the October UAV dataset (F1 score of 0.95). Our case study highlights a potential methodology to offer a transferable approach to the management of high-value timber species in other regions. Full article
(This article belongs to the Special Issue Machine Learning Algorithms for Sensor Data and Image Understanding)
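The F1 scores reported above are computed from the agreement between predicted and ground-truth crown masks. As a small, hedged illustration (the masks below are made up, and the paper evaluates per-crown overlays rather than raw pixels), the metric itself is:

```python
# F1 score for binary segmentation masks, here given as flattened 0/1 lists:
# the harmonic mean of precision and recall over pixel-wise agreement.

def f1_score(pred, truth):
    tp = sum(p and t for p, t in zip(pred, truth))          # predicted crown, is crown
    fp = sum(p and not t for p, t in zip(pred, truth))      # predicted crown, is not
    fn = sum(t and not p for p, t in zip(pred, truth))      # missed crown
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

pred  = [1, 1, 1, 0, 0, 1]
truth = [1, 1, 0, 0, 1, 1]
```

An F1 above 0.80, as reported for both UAV datasets, means precision and recall are jointly high; the October dataset's 0.95 reflects leaf-color contrast aiding both.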

24 pages, 514 KiB  
Review
A Comprehensive Survey of Isocontouring Methods: Applications, Limitations and Perspectives
by Keno Jann Büscher, Jan Philipp Degel and Jan Oellerich
Algorithms 2024, 17(2), 83; https://doi.org/10.3390/a17020083 - 15 Feb 2024
Viewed by 1307
Abstract
This paper provides a comprehensive overview of approaches to the determination of isocontours and isosurfaces from given data sets. Different algorithms have been reported in the literature for this purpose, originating from various application areas, such as computer graphics or medical imaging procedures. In all these applications, the challenge is to extract surfaces with a specific isovalue, so-called isosurfaces, from a given characteristic. These different application areas have given rise to solution approaches that each solve the problem of isocontouring in their own way. Based on the literature, the following four dominant methods can be identified: the marching cubes algorithms, the tessellation-based algorithms, the surface nets algorithms and the ray tracing algorithms. With regard to their application, the methods are mainly used in the fields of medical imaging, computer graphics and the visualization of simulation results. In our work, we provide a broad and compact overview of the common methods currently used for isocontouring with respect to certain criteria and their individual limitations. In this context, we discuss the individual methods and identify possible future research directions in the field of isocontouring. Full article
(This article belongs to the Special Issue Surveys in Algorithm Analysis and Complexity Theory, Part II)
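The common kernel of the cell-based methods surveyed above (marching cubes and its 2-D analogue, marching squares) is linear interpolation of the isovalue crossing along each cell edge. A minimal, hedged sketch of that step for one 2-D cell, with the 16-case segment-connection lookup table deliberately omitted:

```python
# One marching-squares building block: given scalar values at the four
# corners of a unit cell, find where the isovalue crosses each edge by
# linear interpolation. The full algorithm connects these crossings into
# contour segments via a per-cell case table (not shown).

def edge_crossings(corners, iso):
    """corners: values at (0,0), (1,0), (1,1), (0,1); returns crossing points."""
    pts = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
    out = []
    for i in range(4):
        a, b = corners[i], corners[(i + 1) % 4]
        if (a < iso) != (b < iso):               # edge straddles the isovalue
            t = (iso - a) / (b - a)              # interpolation parameter
            (x0, y0), (x1, y1) = pts[i], pts[(i + 1) % 4]
            out.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
    return out

crossings = edge_crossings([0.0, 1.0, 1.0, 0.0], iso=0.5)   # vertical contour
```

Marching cubes lifts the same interpolation to the twelve edges of a voxel; the surveyed alternatives (surface nets, ray tracing) differ mainly in how these crossings are connected or sampled.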

26 pages, 4989 KiB  
Article
Optimizing Multidimensional Pooling for Variational Quantum Algorithms
by Mingyoung Jeng, Alvir Nobel, Vinayak Jha, David Levy, Dylan Kneidel, Manu Chaudhary, Ishraq Islam, Evan Baumgartner, Eade Vanderhoof, Audrey Facer, Manish Singh, Abina Arshad and Esam El-Araby
Algorithms 2024, 17(2), 82; https://doi.org/10.3390/a17020082 - 15 Feb 2024
Viewed by 1181
Abstract
Convolutional neural networks (CNNs) have proven to be a very efficient class of machine learning (ML) architectures for handling multidimensional data by maintaining data locality, especially in the field of computer vision. Data pooling, a major component of CNNs, plays a crucial role in extracting important features of the input data and downsampling its dimensionality. Multidimensional pooling, however, is not efficiently implemented in existing ML algorithms. In particular, quantum machine learning (QML) algorithms have a tendency to ignore data locality for higher dimensions by representing/flattening multidimensional data as simple one-dimensional data. In this work, we propose using the quantum Haar transform (QHT) and quantum partial measurement for performing generalized pooling operations on multidimensional data. We present the corresponding decoherence-optimized quantum circuits for the proposed techniques along with their theoretical circuit depth analysis. Our experimental work was conducted using multidimensional data, ranging from 1-D audio data to 2-D image data to 3-D hyperspectral data, to demonstrate the scalability of the proposed methods. In our experiments, we utilized both noisy and noise-free quantum simulations on a state-of-the-art quantum simulator from IBM Quantum. We also show the efficiency of our proposed techniques for multidimensional data by reporting the fidelity of results. Full article
(This article belongs to the Special Issue Quantum Machine Learning algorithm and Large Language Model)
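A classical analogue helps build intuition for the quantum Haar transform (QHT) pooling described above. Hedged strongly: the paper's QHT acts on quantum amplitudes inside a circuit with partial measurement, while the sketch below is the ordinary 1-D Haar decomposition, whose average coefficients halve the signal length, which is exactly the pooling effect being generalized:

```python
# One level of a (classical) 1-D Haar-style decomposition: split a signal
# into pairwise averages and details. Keeping only the averages halves the
# dimensionality while preserving local structure, i.e. average pooling.

def haar_pool(signal):
    """Return (averages, details); the averages act as the pooled signal."""
    averages = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    details  = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return averages, details

pooled, _ = haar_pool([1.0, 3.0, 2.0, 4.0, 8.0, 0.0])
```

Because the transform is separable, applying it along each axis pools 2-D images and 3-D hyperspectral cubes the same way, which is the multidimensional locality the paper's quantum circuits preserve.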

15 pages, 4166 KiB  
Communication
Adaptive Antenna Array Control Algorithm in Radiocommunication Systems
by Marian Wnuk
Algorithms 2024, 17(2), 81; https://doi.org/10.3390/a17020081 - 14 Feb 2024
Viewed by 1001
Abstract
An important element of modern telecommunications is wireless radio networks, which enable mobile subscribers to access wireless networks. The cell area is divided into independent sectors served by directional antennas. As the number of mobile network subscribers served by a single base station increases, so does the problem of interference related to the operation of the radio link. To minimize the disadvantages of omnidirectional antennas, base stations use antennas with directional radiation characteristics. This solution makes it possible to optimize the operating conditions of the mobile network in terms of reducing the impact of interference, better managing the frequency spectrum, and improving the energy efficiency of the system. This work presents an adaptive antenna algorithm used in mobile telephony. The principle of operation of adaptive systems, the properties of their elements, and the configurations in which they are used in practice are described. On this basis, an algorithm for controlling the radiation characteristics of adaptive antennas is presented. The control is carried out using a microprocessor system. The simulation model is described. An algorithm was developed based on the Mathcad mathematical program, and the simulation results of this algorithm, i.e., changes in radiation characteristics as mobile subscribers change position, are presented in the form of selected radiation characteristic charts. Full article
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
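The steering principle behind controlling an adaptive antenna's radiation characteristics can be sketched numerically. This is a textbook uniform-linear-array model, not the paper's microprocessor algorithm, and the element count and spacing below are illustrative: progressive phase shifts across the elements move the array-factor peak to the desired subscriber direction.

```python
import cmath
import math

# Array factor of an n-element uniform linear array with spacing d (in
# wavelengths), steered toward theta0 by progressive phase shifts. |AF|
# peaks at theta = theta0 (angles measured from broadside, in radians).

def array_factor(theta, theta0, n=8, d=0.5):
    k = 2 * math.pi                      # wavenumber in wavelength units
    total = sum(cmath.exp(1j * k * d * i * (math.sin(theta) - math.sin(theta0)))
                for i in range(n))
    return abs(total)

peak = array_factor(math.radians(30), math.radians(30))   # look at the steer angle
```

Re-computing the phase weights as subscribers move is what "changing the radiation characteristics" amounts to; an adaptive algorithm additionally places pattern nulls toward interferers.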

14 pages, 828 KiB  
Article
A Quantum-Inspired Ant Colony Optimization Algorithm for Parking Lot Rental to Shared E-Scooter Services
by Antonella Nardin and Fabio D’Andreagiovanni
Algorithms 2024, 17(2), 80; https://doi.org/10.3390/a17020080 - 14 Feb 2024
Viewed by 1089
Abstract
Electric scooter sharing mobility services have recently spread in major cities all around the world. However, the bad parking behavior of users has become a major source of issues, provoking accidents and compromising the urban decorum of public areas. Reducing wild parking habits can be pursued by establishing reserved parking spaces. In this work, we consider the problem faced by a municipality that hosts e-scooter sharing services and must choose which locations in its territory to rent as reserved parking lots to sharing companies, with the aim of maximizing the return on renting while taking into account spatial considerations and the parking needs of local residents. Since this problem may prove difficult to solve even for state-of-the-art optimization software, we propose a hybrid metaheuristic solution algorithm combining a quantum-inspired ant colony optimization algorithm with an exact large neighborhood search. Results of computational tests on realistic instances referring to the Italian capital city of Rome show the superior performance of the proposed hybrid metaheuristic. Full article
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
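For orientation, the classical ACO ingredients that the hybrid above builds on can be sketched in a few lines. Hedged: the quantum-inspired variant changes how candidate information is encoded and the paper couples it with an exact large neighborhood search, neither of which is shown; the constants below are illustrative defaults.

```python
import random

# Two core ACO steps: (1) probabilistic choice of an option weighted by
# pheromone^alpha * heuristic^beta, and (2) pheromone evaporation followed
# by reinforcement of the chosen option.

def choose(pheromone, heuristic, alpha=1.0, beta=2.0, rng=random.random):
    weights = [(t ** alpha) * (h ** beta) for t, h in zip(pheromone, heuristic)]
    r, acc = rng() * sum(weights), 0.0
    for i, w in enumerate(weights):        # roulette-wheel selection
        acc += w
        if r <= acc:
            return i
    return len(weights) - 1

def update(pheromone, chosen, deposit=1.0, rho=0.1):
    """Evaporate all trails by factor (1 - rho), then reinforce the chosen one."""
    return [(1 - rho) * t + (deposit if i == chosen else 0.0)
            for i, t in enumerate(pheromone)]

tau = update([1.0, 1.0, 1.0], chosen=1)
```

In the parking-lot setting, "options" correspond to candidate locations to rent, and the heuristic term can encode the expected rental return of each location.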

13 pages, 3750 KiB  
Article
Research on Gangue Detection Algorithm Based on Cross-Scale Feature Fusion and Dynamic Pruning
by Haojie Wang, Pingqing Fan, Xipei Ma and Yansong Wang
Algorithms 2024, 17(2), 79; https://doi.org/10.3390/a17020079 - 13 Feb 2024
Viewed by 1021
Abstract
The intelligent identification of coal gangue on industrial conveyor belts is a crucial technology for the precise sorting of coal gangue. To address the issues in coal gangue detection algorithms, such as high false negative rates, complex network structures, and substantial model weights, an optimized coal gangue detection algorithm based on YOLOv5s is proposed. In the backbone network, a feature refinement module is employed for feature extraction, enhancing the capability to extract features for coal and gangue. The improved BIFPN structure is employed as the feature pyramid, augmenting the model’s capability for cross-scale feature fusion. In the prediction layer, the ESIOU is utilized as the bounding box regression loss function to rectify the misalignment issue between predicted and actual box angles. This approach expedites the convergence speed of the network while concurrently enhancing the accuracy of coal gangue detection. Channel pruning is implemented on the network to diminish model computational complexity and weight, consequently augmenting detection speed. The experimental results demonstrate that the refined YOLOv5s coal gangue detection algorithm outperforms the original YOLOv5s algorithm, achieving a notable accuracy enhancement of 2.2% to reach 93.8%. Concurrently, a substantial reduction in model weight by 38.8% is observed, resulting in a notable 56.2% increase in inference speed. These advancements meet the detection requirements for scenarios involving mixed coal gangue. Full article
(This article belongs to the Special Issue Graph Neural Network Algorithms and Applications)

34 pages, 1227 KiB  
Review
A Review of Machine Learning’s Role in Cardiovascular Disease Prediction: Recent Advances and Future Challenges
by Marwah Abdulrazzaq Naser, Aso Ahmed Majeed, Muntadher Alsabah, Taha Raad Al-Shaikhli and Kawa M. Kaky
Algorithms 2024, 17(2), 78; https://doi.org/10.3390/a17020078 - 13 Feb 2024
Cited by 1 | Viewed by 2395
Abstract
Cardiovascular disease is the leading cause of global mortality and responsible for millions of deaths annually. The mortality rate and overall consequences of cardiac disease can be reduced with early disease detection. However, conventional diagnostic methods encounter various challenges, including delayed treatment and misdiagnoses, which can impede the course of treatment and raise healthcare costs. The application of artificial intelligence (AI) techniques, especially machine learning (ML) algorithms, offers a promising pathway to address these challenges. This paper emphasizes the central role of machine learning in cardiac health and focuses on precise cardiovascular disease prediction. In particular, this paper is driven by the urgent need to fully utilize the potential of machine learning to enhance cardiovascular disease prediction. In light of the continued progress in machine learning and the growing public health implications of cardiovascular disease, this paper aims to offer a comprehensive analysis of the topic. This review paper encompasses a wide range of topics, including the types of cardiovascular disease, the significance of machine learning, feature selection, the evaluation of machine learning models, data collection and preprocessing, evaluation metrics for cardiovascular disease prediction, and recent trends and suggestions for future work. In addition, this paper offers a holistic view of machine learning’s role in cardiovascular disease prediction and public health. We believe that our comprehensive review will contribute significantly to the existing body of knowledge in this essential area. Full article
(This article belongs to the Section Analysis of Algorithms and Complexity Theory)

19 pages, 6666 KiB  
Article
Algorithms Utilized for Creep Analysis in Torque Transducers for Wind Turbines
by Jacek G. Puchalski, Janusz D. Fidelus and Paweł Fotowicz
Algorithms 2024, 17(2), 77; https://doi.org/10.3390/a17020077 - 07 Feb 2024
Viewed by 1355
Abstract
One of the fundamental challenges in analyzing wind turbine performance is the occurrence of torque creep under load and without load. This phenomenon significantly impacts the proper functioning of torque transducers, thus necessitating the utilization of appropriate measurement data analysis algorithms. In this regard, employing the least squares method appears to be a suitable approach. Linear regression can be employed to investigate the creep trend itself, while visualizing the creep in the form of a non-linear curve using a third-degree polynomial can provide further insights. Additionally, calculating deviations between the measurement data and the regression curves proves beneficial in accurately assessing the data. Full article

45 pages, 2733 KiB  
Review
A Literature Review on Some Trends in Artificial Neural Networks for Modeling and Simulation with Time Series
by Angel E. Muñoz-Zavala, Jorge E. Macías-Díaz, Daniel Alba-Cuéllar and José A. Guerrero-Díaz-de-León
Algorithms 2024, 17(2), 76; https://doi.org/10.3390/a17020076 - 07 Feb 2024
Viewed by 1510
Abstract
This paper reviews the application of artificial neural network (ANN) models to time series prediction tasks. We begin by briefly introducing some basic concepts and terms related to time series analysis, and by outlining some of the most popular ANN architectures considered in the literature for time series forecasting purposes: feedforward neural networks, radial basis function networks, recurrent neural networks, and self-organizing maps. We analyze the strengths and weaknesses of these architectures in the context of time series modeling. We then summarize some recent time series ANN modeling applications found in the literature, focusing mainly on the previously outlined architectures. In our opinion, these summarized techniques constitute a representative sample of the research and development efforts made in this field. We aim to provide the general reader with a good perspective on how ANNs have been employed for time series modeling and forecasting tasks. Finally, we comment on possible new research directions in this area. Full article
(This article belongs to the Special Issue Artificial Intelligence in Modeling and Simulation)

28 pages, 2207 KiB  
Article
Assessing the Impact of Patient Characteristics on Genetic Clinical Pathways: A Regression Approach
by Stefano Alderighi, Paolo Landa, Elena Tànfani and Angela Testi
Algorithms 2024, 17(2), 75; https://doi.org/10.3390/a17020075 - 07 Feb 2024
Viewed by 986
Abstract
Molecular genetic techniques allow for the diagnosing of hereditary diseases and congenital abnormalities prenatally. A high variability of treatments exists, engendering an inappropriate clinical response, an inefficient use of resources, and the violation of the principle of the equality of treatment for equal needs. The proposed framework is based on modeling clinical pathways that contribute to identifying major causes of variability in treatments justified by the clinical needs’ variability as well as depending on individual characteristics. An electronic data collection method for high-risk pregnant women addressing genetic facilities and laboratories was implemented. The collected data were analyzed retrospectively with two aims. The first is to identify how the whole activity of genetic services can be broken down into different clinical pathways. This was performed by building a flow chart with the help of doctors. The second aim consists of measuring the variability within and among the different paths due to individual characteristics. A set of statistical models was developed to determine the impact of the patient characteristics on the clinical pathway and its length. The results show the importance of considering these characteristics together with the clinical information to define the care pathway and the use of resources. Full article
(This article belongs to the Special Issue Artificial Intelligence Algorithms for Healthcare)
22 pages, 2752 KiB  
Article
GPU Adding-Doubling Algorithm for Analysis of Optical Spectral Images
by Matija Milanic and Rok Hren
Algorithms 2024, 17(2), 74; https://doi.org/10.3390/a17020074 - 07 Feb 2024
Viewed by 1062
Abstract
The Adding-Doubling (AD) algorithm is a general analytical solution of the radiative transfer equation (RTE). AD offers a favorable balance between accuracy and computational efficiency, surpassing other RTE solutions, such as Monte Carlo (MC) simulations, in terms of speed while outperforming approximate solutions like the Diffusion Approximation method in accuracy. While AD algorithms have traditionally been implemented on central processing units (CPUs), this study focuses on leveraging the capabilities of graphics processing units (GPUs) to achieve enhanced computational speed. In terms of processing speed, the GPU AD algorithm showed an improvement by a factor of about 5000 to 40,000 compared to the GPU MC method. The optimal number of threads for this algorithm was found to be approximately 3000. To illustrate the utility of the GPU AD algorithm, the Levenberg–Marquardt inverse solution was used to extract object parameters from optical spectral data of human skin under various hemodynamic conditions. With regard to computational efficiency, it took approximately 5 min to process a 220 × 100 × 61 image (x-axis × y-axis × spectral-axis). The development of the GPU AD algorithm presents an advancement in determining tissue properties compared to other RTE solutions. Moreover, the GPU AD method itself holds the potential to expedite machine learning techniques in the analysis of spectral images. Full article
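The doubling idea behind the AD algorithm can be illustrated with a scalar (two-stream) sketch: the reflectance r and transmittance t of two identical slabs combine through the classic adding formulas, so n doublings build a slab 2^n times the starting thickness. The function names and starting values below are illustrative assumptions, not taken from the paper.

```python
def double_layer(r, t):
    """Combine two identical homogeneous slabs (scalar adding formulas).

    The 1/(1 - r*r) factor sums the geometric series of
    back-and-forth reflections between the two slabs.
    """
    denom = 1.0 - r * r
    return r + t * r * t / denom, t * t / denom


def slab_rt(r0, t0, n_doublings):
    """Start from an optically thin layer and double it n times."""
    r, t = r0, t0
    for _ in range(n_doublings):
        r, t = double_layer(r, t)
    return r, t
```

For a non-absorbing thin layer (r0 + t0 = 1), doubling preserves r + t = 1, which is a handy energy-conservation sanity check for any implementation.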
20 pages, 4984 KiB  
Article
μ-Analysis and μ-Synthesis Control Methods in Smart Structure Disturbance Suppression with Reduced Order Control
by Amalia Moutsopoulou, Markos Petousis, Georgios E. Stavroulakis, Anastasios Pouliezos and Nectarios Vidakis
Algorithms 2024, 17(2), 73; https://doi.org/10.3390/a17020073 - 06 Feb 2024
Viewed by 1251
Abstract
In this study, we created an accurate model for a homogeneous smart structure. After modeling multiplicative uncertainty, an ideal robust controller was designed using μ-synthesis and a reduced-order H-infinity Feedback Optimal Output (Hifoo) controller, leading to the creation of an improved uncertain plant. A powerful controller was built using a larger plant that included the nominal model and the corresponding uncertainty. The designed controllers demonstrated robust and nominal performance when handling perturbed plants. A comparison of the results was conducted. As an example of a general smart structure, the vibration of a collocated piezoelectric actuator and sensor was controlled using two different approaches with strong controller designs. This study presents a comprehensive simulation of the oscillation suppression problem for smart beams and provides an analytical demonstration of how uncertainty is introduced into the model. The desired outcomes were achieved by utilizing Simulink and MATLAB (v. 8.0) programming tools. Full article
20 pages, 552 KiB  
Article
An Attention-Based Method for the Minimum Vertex Cover Problem on Complex Networks
by Giorgio Lazzarinetti, Riccardo Dondi, Sara Manzoni and Italo Zoppis
Algorithms 2024, 17(2), 72; https://doi.org/10.3390/a17020072 - 06 Feb 2024
Viewed by 1338
Abstract
Solving combinatorial problems on complex networks represents a primary issue which, on a large scale, requires the use of heuristics and approximate algorithms. Recently, neural methods have been proposed in this context to find feasible solutions for relevant computational problems over graphs. However, such methods have some drawbacks: (1) they use the same neural architecture for different combinatorial problems without introducing customizations that reflect the specificity of each problem; (2) they only use a node's local information to compute the solution; (3) they do not take advantage of common heuristics or exact algorithms. In this research, we address these three main points by designing a customized attention-based mechanism that uses both local and global information from the adjacency matrix to find approximate solutions for the Minimum Vertex Cover Problem. We evaluate our proposal against a fast two-factor approximation algorithm and a widely adopted state-of-the-art heuristic, both on synthetically generated instances and on benchmark graphs with different scales. Experimental results demonstrate that, on the one hand, the proposed methodology outperforms both the two-factor approximation algorithm and the heuristic on the test datasets, scaling even better than the heuristic on harder instances, and, on the other hand, provides a representation of the nodes that reflects the combinatorial structure of the problem. Full article
(This article belongs to the Special Issue Algorithms for Network Analysis: Theory and Practice)
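The two-factor approximation baseline mentioned in the abstract is presumably the classic maximal-matching algorithm: repeatedly take an uncovered edge and add both of its endpoints to the cover. A minimal stdlib sketch (the edge-list representation is an assumption):

```python
def vertex_cover_2approx(edges):
    """Greedy maximal-matching 2-approximation for Minimum Vertex Cover.

    Each chosen edge contributes two vertices, and any optimal cover
    must contain at least one endpoint of every chosen edge, which
    gives the factor-2 guarantee.
    """
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))
    return cover
```

On the path 1-2-3-4 this returns a cover of size 4, twice the optimum {2, 3}, showing the bound is tight.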
31 pages, 3131 KiB  
Review
Algorithms in Tomography and Related Inverse Problems—A Review
by Styliani Tassiopoulou, Georgia Koukiou and Vassilis Anastassopoulos
Algorithms 2024, 17(2), 71; https://doi.org/10.3390/a17020071 - 05 Feb 2024
Viewed by 1464
Abstract
In the ever-evolving landscape of tomographic imaging algorithms, this literature review explores a diverse array of themes shaping the field’s progress. It encompasses foundational principles, special innovative approaches, tomographic implementation algorithms, and applications of tomography in medicine, natural sciences, remote sensing, and seismology. This selection showcases both the diversity of tomographic applications and the new trends in tomography in recent years. Accordingly, the evaluation of backprojection methods for breast tomographic reconstruction is highlighted. After that, multi-slice fusion takes center stage, promising real-time insights into dynamic processes and advanced diagnosis. Computational efficiency, especially in methods for accelerating tomographic reconstruction algorithms on commodity PC graphics hardware, is also presented. In geophysics, a deep learning-based approach to ground-penetrating radar (GPR) data inversion propels us into the future of geological and environmental sciences. We venture into Earth sciences with global seismic tomography: the inverse problem and beyond, understanding the Earth’s subsurface through advanced inverse problem solutions and pushing boundaries. Lastly, optical coherence tomography is reviewed in basic applications for revealing tiny biological tissue structures. This review presents the main categories of applications of tomography, providing deep insight into the methods and algorithms developed so far, so that readers approaching the subject are fully informed. Full article
(This article belongs to the Collection Featured Reviews of Algorithms)
35 pages, 9095 KiB  
Article
Numbers Do Not Lie: A Bibliometric Examination of Machine Learning Techniques in Fake News Research
by Andra Sandu, Ioana Ioanăș, Camelia Delcea, Margareta-Stela Florescu and Liviu-Adrian Cotfas
Algorithms 2024, 17(2), 70; https://doi.org/10.3390/a17020070 - 05 Feb 2024
Cited by 1 | Viewed by 1418
Abstract
Fake news is an explosive subject and undoubtedly among the most controversial and difficult challenges facing society in the present-day environment of technology and information; it greatly affects individuals who are vulnerable and easily influenced, shaping their decisions, actions, and even beliefs. In discussing the gravity and dissemination of the fake news phenomenon, this article aims to clarify the distinctions between fake news, misinformation, and disinformation, along with conducting a thorough analysis of the most widely read academic papers that have tackled fake news research using various machine learning techniques. Utilizing specific keywords for dataset extraction from Clarivate Analytics’ Web of Science Core Collection, the bibliometric analysis spans six years, offering valuable insights aimed at identifying key trends, methodologies, and notable strategies within this multidisciplinary field. The analysis encompasses prolific authors, prominent journals, collaborative efforts, prior publications, covered subjects, keywords, bigrams, trigrams, theme maps, co-occurrence networks, and various other relevant topics. One noteworthy aspect of the extracted dataset is the remarkable growth rate observed in association with the analyzed subject, an impressive increase of 179.31%. This growth rate, coupled with the relatively short timeframe, further emphasizes the research community’s keen interest in the subject. In light of these findings, the paper draws attention to key contributions and gaps in the existing literature, providing researchers and decision-makers with innovative viewpoints and perspectives on the ongoing battle against the spread of fake news in the age of information. Full article
13 pages, 291 KiB  
Article
An FPT Algorithm for Directed Co-Graph Edge Deletion
by Wenjun Li, Xueying Yang, Chao Xu and Yongjie Yang
Algorithms 2024, 17(2), 69; https://doi.org/10.3390/a17020069 - 05 Feb 2024
Viewed by 1003
Abstract
In the directed co-graph edge-deletion problem, we are given a directed graph and an integer k, and the question is whether we can delete at most k edges so that the resulting graph is a directed co-graph. In this paper, we make two minor contributions. Firstly, we show that the problem is NP-hard. Then, we show that directed co-graphs are fully characterized by eight forbidden structures, each having at most six edges. Based on the symmetry properties and several refined observations, we develop a branching algorithm with a running time of O(2.733^k), which is significantly more efficient than the brute-force algorithm, whose running time is O(6^k). Full article
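The branching strategy can be sketched generically as a bounded search tree: find one forbidden structure, branch on deleting each of its at most six edges, and recurse with budget k - 1. Trying all six edges yields the brute-force O(6^k) bound; refined case analysis brings it down to O(2.733^k). The toy below hits undirected triangles rather than the paper's eight directed forbidden structures (an assumption made for brevity), but the search-tree skeleton is the same:

```python
from itertools import combinations


def find_triangle(edges):
    """Return the edge set of one triangle, or None.

    Edges are canonically ordered undirected pairs (u, v) with u < v;
    three distinct such edges spanning exactly 3 vertices form a triangle.
    """
    for (a, b), (c, d), (e, f) in combinations(edges, 3):
        if len({a, b, c, d, e, f}) == 3:
            return {(a, b), (c, d), (e, f)}
    return None


def branch_delete(edges, k):
    """Bounded search tree: <= k deletions destroying all triangles, else None."""
    bad = find_triangle(edges)
    if bad is None:
        return set()          # no forbidden structure left
    if k == 0:
        return None           # budget exhausted
    for e in bad:             # branch: some edge of the structure must go
        rest = branch_delete(edges - {e}, k - 1)
        if rest is not None:
            return rest | {e}
    return None
```

Each node of the search tree spawns at most |bad| children and the depth is at most k, which is exactly where the c^k running-time bound comes from.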
19 pages, 4791 KiB  
Article
A Heterogeneity-Aware Car-Following Model: Based on the XGBoost Method
by Kefei Zhu, Xu Yang, Yanbo Zhang, Mengkun Liang and Jun Wu
Algorithms 2024, 17(2), 68; https://doi.org/10.3390/a17020068 - 05 Feb 2024
Viewed by 1157
Abstract
With the rising popularity of the Advanced Driver Assistance System (ADAS), there is an increasing demand for more human-like car-following performance. In this paper, we consider the role of heterogeneity in car-following behavior within car-following modeling. We incorporate car-following heterogeneity factors into the model features. We employ the eXtreme Gradient Boosting (XGBoost) method to build the car-following model. The results show that our model achieves optimal performance with a mean squared error of 0.002181, surpassing the model that disregards heterogeneity factors. Furthermore, utilizing model importance analysis, we determined that the cumulative importance score of heterogeneity factors in the model is 0.7262. The results demonstrate the significant impact of heterogeneity factors on car-following behavior prediction and highlight the importance of incorporating heterogeneity factors into car-following models. Full article
(This article belongs to the Special Issue Machine Learning Algorithms for Distributed Autonomous Vehicles)
21 pages, 440 KiB  
Article
Assessing the Ability of Genetic Programming for Feature Selection in Constructing Dispatching Rules for Unrelated Machine Environments
by Marko Đurasević, Domagoj Jakobović, Stjepan Picek and Luca Mariot
Algorithms 2024, 17(2), 67; https://doi.org/10.3390/a17020067 - 04 Feb 2024
Viewed by 1274
Abstract
The automated design of dispatching rules (DRs) with genetic programming (GP) has become an important research direction in recent years. One of the most important decisions in applying GP to generate DRs is determining the features of the scheduling problem to be used during the evolution process. Unfortunately, there are no clear rules or guidelines for the design or selection of such features, and often the features are simply defined without investigating their influence on the performance of the algorithm. However, the performance of GP can depend significantly on the features provided to it, and a poor or inadequate selection of features for a given problem can result in the algorithm performing poorly. In this study, we examine in detail the features that GP should use when developing DRs for unrelated machine scheduling problems. Different types of features are investigated, and the best combination of these features is determined using two selection methods. The obtained results show that the design and selection of appropriate features are crucial for GP, as they improve the results by about 7% compared to using only the simplest terminal nodes without selection. In addition, the results show that it is not possible to outperform more sophisticated manually designed DRs when only the simplest problem features are used as terminal nodes. This shows how important it is to design appropriate composite terminal nodes to produce high-quality DRs. Full article
(This article belongs to the Special Issue Algorithms for Feature Selection (2nd Edition))
13 pages, 428 KiB  
Article
A Geometrical Study about the Biparametric Family of Anomalies in the Elliptic Two-Body Problem with Extensions to Other Families
by José Antonio López Ortí, Francisco José Marco Castillo and María José Martínez Usó
Algorithms 2024, 17(2), 66; https://doi.org/10.3390/a17020066 - 04 Feb 2024
Viewed by 1150
Abstract
In the present paper, we efficiently solve the two-body problem for extreme cases such as those with high eccentricities. The use of numerical methods, with the usual variables, cannot maintain the perihelion passage accurately. In previous articles, we have verified that this problem is treated more adequately through temporal reparametrizations related to the mean anomaly through the partition function. The biparametric family of anomalies, with an appropriate partition function, allows a systematic study of these transformations. In the present work, we consider the elliptical orbit as a meridian section of the ellipsoid of revolution, and the partition function depends on two variables raised to specific parameters. One of the variables is the mean radius of the ellipsoid at the secondary, and the other is the distance to the primary. One parameter regulates the concentration of points in the apoapsis region, and the other produces a symmetrical displacement between the polar and equatorial regions. The three most commonly used geodetic latitude variables are also studied; one of them does not belong to the biparametric family but does belong to the family introduced here, which thus extends the biparametric method. The results obtained using the method presented here allow a causal interpretation of the operation of numerous reparametrizations used in the study of orbital motion. Full article
(This article belongs to the Special Issue Mathematical Modelling in Engineering and Human Behaviour)
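The mean anomaly that these reparametrizations generalize is tied to position through Kepler's equation E - e·sin E = M, and it is precisely at high eccentricities that solving it numerically becomes delicate near perihelion. A hedged Newton-iteration sketch (the starting guess and tolerance are conventional choices, not taken from the paper):

```python
import math


def eccentric_anomaly(M, e, tol=1e-12, max_iter=60):
    """Solve Kepler's equation E - e*sin(E) = M by Newton's method."""
    E = M if e < 0.8 else math.pi  # robust starting guess at high eccentricity
    for _ in range(max_iter):
        step = (E - e * math.sin(E) - M) / (1.0 - e * math.cos(E))
        E -= step
        if abs(step) < tol:
            break
    return E
```

For e = 0 the equation is trivial (E = M); as e approaches 1, the denominator 1 - e·cos E shrinks near perihelion, which is one concrete symptom of the difficulty the abstract describes.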
4 pages, 141 KiB  
Editorial
Special Issue: “2022 and 2023 Selected Papers from Algorithms’ Editorial Board Members”
by Frank Werner
Algorithms 2024, 17(2), 65; https://doi.org/10.3390/a17020065 - 03 Feb 2024
Viewed by 1131
Abstract
This is the third edition of a Special Issue of Algorithms; it is of a rather different nature compared to other Special Issues in the journal, which are usually dedicated to a particular subject in the area of algorithms [...] Full article
(This article belongs to the Special Issue 2022 and 2023 Selected Papers from Algorithms Editorial Board Members)
16 pages, 761 KiB  
Article
Enhanced Intrusion Detection Systems Performance with UNSW-NB15 Data Analysis
by Shweta More, Moad Idrissi, Haitham Mahmoud and A. Taufiq Asyhari
Algorithms 2024, 17(2), 64; https://doi.org/10.3390/a17020064 - 01 Feb 2024
Viewed by 1988
Abstract
The rapid proliferation of new technologies such as the Internet of Things (IoT), cloud computing, virtualization, and smart devices has led to a massive annual production of over 400 zettabytes of network traffic data. As a result, it is crucial for companies to implement robust cybersecurity measures to safeguard sensitive data from intrusion, which can lead to significant financial losses. Existing intrusion detection systems (IDS) require further enhancements to reduce false positives and enhance overall accuracy. To minimize security risks, data analytics and machine learning can be utilized to create data-driven recommendations and decisions based on the input data. This study focuses on developing machine learning models that can identify cyber-attacks and enhance IDS performance. This paper employed logistic regression, support vector machine, decision tree, and random forest algorithms on the UNSW-NB15 network traffic dataset, utilizing in-depth exploratory data analysis and feature selection using correlation analysis and random sampling to compare model accuracy and effectiveness. The performance and confusion matrix results indicate that the Random Forest model is the best option for identifying cyber-attacks, with a remarkable F1 score of 97.80%, accuracy of 98.63%, and low false alarm rate of 1.36%, and thus should be considered to improve IDS security. Full article
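The reported scores all follow from the binary confusion matrix; a small helper makes the relationships explicit (treating "attack" as the positive class and the false alarm rate as the false positive rate on benign traffic, which are standard IDS conventions assumed here):

```python
def ids_metrics(tp, fp, tn, fn):
    """Derive IDS evaluation metrics from confusion-matrix counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)          # detection rate
    f1 = 2 * precision * recall / (precision + recall)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    false_alarm = fp / (fp + tn)     # benign traffic flagged as attack
    return {"f1": f1, "accuracy": accuracy, "false_alarm": false_alarm}
```

Reporting the false alarm rate alongside F1 matters for IDS work because accuracy alone can look excellent on traffic that is overwhelmingly benign.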
20 pages, 837 KiB  
Article
Group Dynamics in Memory-Enhanced Ant Colonies: The Influence of Colony Division on a Maze Navigation Problem
by Claudia Cavallaro, Carolina Crespi, Vincenzo Cutello, Mario Pavone and Francesco Zito
Algorithms 2024, 17(2), 63; https://doi.org/10.3390/a17020063 - 01 Feb 2024
Viewed by 1605
Abstract
This paper introduces an agent-based model grounded in the ACO algorithm to investigate the impact of partitioning ant colonies on algorithmic performance. The exploration focuses on understanding the roles of group size and number within a multi-objective optimization context. The model consists of a colony of memory-enhanced ants (ME-ANTS) which, starting from a given position, must collaboratively discover the optimal path to the exit point within a grid network. The colony can be divided into groups of different sizes and its objectives are maximizing the number of ants that exit the grid while minimizing path costs. Three distinct analyses were conducted: an overall analysis assessing colony performance across different-sized groups, a group analysis examining the performance of each partitioned group, and a pheromone distribution analysis discerning correlations between temporal pheromone distribution and ant navigation. From the results, a dynamic correlation emerged between the degree of colony partitioning and solution quality within the ACO algorithm framework. Full article
(This article belongs to the Special Issue Algorithms for Network Analysis: Theory and Practice)
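The pheromone mechanics underlying such a model follow the standard ACO update: evaporation at rate ρ on every edge, then a deposit inversely proportional to path cost on the edges each ant traversed. A minimal sketch (the dictionary representation and the deposit constant Q are illustrative assumptions, not details from the paper):

```python
def update_pheromone(tau, ant_paths, rho=0.1, Q=1.0):
    """Standard ACO pheromone update: evaporate, then deposit Q/cost per ant.

    tau:       dict mapping edge -> pheromone level
    ant_paths: list of (edges_used, path_cost) pairs, one per ant
    """
    for edge in tau:
        tau[edge] *= (1.0 - rho)                        # evaporation
    for edges_used, cost in ant_paths:
        for edge in edges_used:
            tau[edge] = tau.get(edge, 0.0) + Q / cost   # cheaper paths deposit more
    return tau
```

Because cheaper paths receive larger deposits while unused edges only evaporate, repeated updates concentrate pheromone along good routes, which is the feedback loop the paper's pheromone-distribution analysis examines.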
19 pages, 481 KiB  
Article
Program Code Generation with Generative AIs
by Baskhad Idrisov and Tim Schlippe
Algorithms 2024, 17(2), 62; https://doi.org/10.3390/a17020062 - 31 Jan 2024
Viewed by 2010
Abstract
Our paper compares the correctness, efficiency, and maintainability of human-generated and AI-generated program code. For that, we analyzed the computational resources of AI- and human-generated program code using metrics such as time and space complexity as well as runtime and memory usage. Additionally, we evaluated maintainability using metrics such as lines of code, cyclomatic complexity, Halstead complexity, and maintainability index. For our experiments, we had generative AIs produce program code in Java, Python, and C++ that solves problems defined on the competition coding website leetcode.com. We selected six LeetCode problems of varying difficulty, resulting in 18 program codes generated by each generative AI. GitHub Copilot, powered by Codex (GPT-3.0), performed best, solving 9 of the 18 problems (50.0%), whereas CodeWhisperer did not solve a single problem. BingAI Chat (GPT-4.0) generated correct program code for seven problems (38.9%), ChatGPT (GPT-3.5) and Code Llama (Llama 2) for four problems (22.2%), and StarCoder and InstructCodeT5+ for only one problem (5.6%). Surprisingly, although ChatGPT generated only four correct program codes, it was the only generative AI capable of providing a correct solution to a coding problem of difficulty level hard. In summary, 26 AI-generated codes (20.6%) solve the respective problem. For 11 AI-generated incorrect codes (8.7%), only minimal modifications to the program code are necessary to solve the problem, which results in time savings between 8.9% and 71.3% in comparison to programming the program code from scratch. Full article
29 pages, 911 KiB  
Article
Efficient Time-Series Clustering through Sparse Gaussian Modeling
by Dimitris Fotakis, Panagiotis Patsilinakos, Eleni Psaroudaki and Michalis Xefteris
Algorithms 2024, 17(2), 61; https://doi.org/10.3390/a17020061 - 30 Jan 2024
Viewed by 1257
Abstract
In this work, we consider the problem of shape-based time-series clustering with the widely used Dynamic Time Warping (DTW) distance. We present a novel two-stage framework based on Sparse Gaussian Modeling. In the first stage, we apply Sparse Gaussian Process Regression and obtain a sparse representation of each time series in the dataset with a logarithmic (in the original length T) number of inducing data points. In the second stage, we apply k-means with DTW Barycentric Averaging (DBA) to the sparsified dataset using a generalization of DTW, which accounts for the fact that each inducing point serves as a representative of many original data points. The asymptotic running time of our Sparse Time-Series Clustering framework is Ω(T^2/log^2 T) times faster than the running time of applying k-means to the original dataset because sparsification reduces the running time of DTW from Θ(T^2) to Θ(log^2 T). Moreover, sparsification tends to smoothen outliers and particularly noisy parts of the original time series. We conduct an extensive experimental evaluation using datasets from the UCR Time-Series Classification Archive, showing that the quality of clustering computed by our Sparse Time-Series Clustering framework is comparable to the clustering computed by the standard k-means algorithm. Full article
(This article belongs to the Special Issue 2022 and 2023 Selected Papers from Algorithms Editorial Board Members)
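The DTW distance at the heart of the framework is the classic quadratic dynamic program, which is exactly why shrinking series length from T to O(log T) inducing points yields the quoted speedup. A plain-Python sketch of the unweighted recurrence with absolute difference as the local cost (the paper's generalization additionally weights each inducing point by how many original points it represents):

```python
def dtw(a, b):
    """Dynamic Time Warping distance between two 1-D series, O(len(a)*len(b))."""
    n, m = len(a), len(b)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # stretch a
                                 D[i][j - 1],      # stretch b
                                 D[i - 1][j - 1])  # match step
    return D[n][m]
```

Unlike Euclidean distance, DTW can align series of different lengths and absorb local time shifts, which is why it is the standard choice for shape-based clustering.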
15 pages, 2977 KiB  
Article
Learning State-Specific Action Masks for Reinforcement Learning
by Ziyi Wang, Xinran Li, Luoyang Sun, Haifeng Zhang, Hualin Liu and Jun Wang
Algorithms 2024, 17(2), 60; https://doi.org/10.3390/a17020060 - 30 Jan 2024
Viewed by 1277
Abstract
Efficient yet sufficient exploration remains a critical challenge in reinforcement learning (RL), especially for Markov Decision Processes (MDPs) with vast action spaces. Previous approaches have commonly involved projecting the original action space into a latent space or employing environmental action masks to reduce the action possibilities. Nevertheless, these methods often lack interpretability or rely on expert knowledge. In this study, we introduce a novel method for automatically reducing the action space in environments with discrete action spaces while preserving interpretability. The proposed approach learns state-specific masks with a dual purpose: (1) eliminating actions with minimal influence on the MDP and (2) aggregating actions with identical behavioral consequences within the MDP. Specifically, we introduce a novel concept called Bisimulation Metrics on Actions by States (BMAS) to quantify the behavioral consequences of actions within the MDP and design a dedicated mask model to ensure their binary nature. Crucially, we present a practical learning procedure for training the mask model, leveraging transition data collected by any RL policy. Our method is designed to be plug-and-play and adaptable to all RL policies; to validate its effectiveness, we integrated it into two prominent RL algorithms, DQN and PPO. Experimental results obtained from Maze, Atari, and μRTS2 reveal a substantial acceleration in the RL learning process and noteworthy performance improvements facilitated by the introduced approach. Full article
(This article belongs to the Special Issue Algorithms for Games AI)
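Applying a binary mask at action-selection time typically amounts to setting the logits of masked actions to -∞ before the softmax, so they receive exactly zero probability. A stdlib sketch of that step (the surrounding policy network and the learned mask itself are outside this snippet and assumed given):

```python
import math


def masked_softmax(logits, mask):
    """Softmax over available actions only; mask[i] == 0 disables action i."""
    z = [l if m else -math.inf for l, m in zip(logits, mask)]
    peak = max(z)                            # subtract max for numerical stability
    exps = [math.exp(v - peak) for v in z]   # exp(-inf) == 0.0, so masked -> 0
    total = sum(exps)
    return [e / total for e in exps]
```

Because masked actions get probability exactly zero rather than merely small, the agent never wastes exploration on them, which is the mechanism behind the reported acceleration.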
17 pages, 2828 KiB  
Article
Enhancing Product Design through AI-Driven Sentiment Analysis of Amazon Reviews Using BERT
by Mahammad Khalid Shaik Vadla, Mahima Agumbe Suresh and Vimal K. Viswanathan
Algorithms 2024, 17(2), 59; https://doi.org/10.3390/a17020059 - 30 Jan 2024
Viewed by 1800
Abstract
Understanding customer emotions and preferences is paramount for success in the dynamic product design landscape. This paper presents a study to develop a prediction pipeline to detect the aspect and perform sentiment analysis on review data. The pre-trained Bidirectional Encoder Representations from Transformers (BERT) model and the Text-to-Text Transfer Transformer (T5) are deployed to predict customer emotions. These models were trained on synthetically generated and manually labeled datasets to detect specific features from review data; sentiment analysis was then performed to classify the data into positive, negative, and neutral reviews with respect to their aspects. This research focused on eco-friendly products to analyze customer emotions in this category. The BERT and T5 models were fine-tuned for the aspect detection task and achieved 92% and 91% accuracy, respectively. The best-performing model was selected based on the evaluation metrics precision, recall, and F1-score, as well as computational efficiency. On these measures, the BERT model outperforms T5 and is chosen as the classifier in the prediction pipeline. By detecting aspects and sentiments of input data using the pre-trained BERT model, our study demonstrates its capability to comprehend and analyze customer reviews effectively. These findings can empower product designers and research developers with data-driven insights to shape exceptional products that resonate with customer expectations. Full article