Advances in Artificial Intelligence: Models, Optimization, and Machine Learning, 2nd Edition

A special issue of Mathematics (ISSN 2227-7390). This special issue belongs to the section "Mathematics and Computer Science".

Deadline for manuscript submissions: 31 July 2024 | Viewed by 22970

Special Issue Editors


Prof. Dr. Florin Leon
Guest Editor
Faculty of Automatic Control and Computer Engineering, "Gheorghe Asachi" Technical University of Iași, 700050 Iași, Romania
Interests: artificial intelligence; machine learning; multiagent systems; software design

Dr. Mircea Hulea
Guest Editor
Faculty of Automatic Control and Computer Engineering, "Gheorghe Asachi" Technical University of Iași, 700050 Iași, Romania
Interests: spiking neural networks; artificial intelligence; embedded systems; optical wireless communication

Prof. Dr. Marius Gavrilescu
Guest Editor
Faculty of Automatic Control and Computer Engineering, "Gheorghe Asachi" Technical University of Iași, 700050 Iași, Romania
Interests: machine learning; computer graphics; data analytics; gaming engines; physics simulations

Special Issue Information

Dear Colleagues,

Artificial intelligence is now an integral part of scientific progress. AI methods have been used to solve problems that were previously considered intractable, and AI offers tools for learning, knowledge discovery, and decision-making that can outperform human abilities across a large number of application domains.

This Special Issue focuses on recent theoretical and computational studies of artificial intelligence, with an emphasis on models, optimization, and machine learning. Topics include, but are not limited to, the following:

  1. Deep learning and classic machine learning algorithms;
  2. Neural modeling, architectures, and learning algorithms;
  3. Neuro-symbolic models and explainable artificial intelligence models;
  4. Spiking neural networks: theory and applications;
  5. Hebbian learning and other biologically plausible neural models;
  6. Optical neural networks;
  7. Biologically inspired optimization algorithms;
  8. Algorithms for autonomous driving;
  9. Reinforcement learning and deep reinforcement learning;
  10. Probabilistic models and Bayesian reasoning;
  11. Adaptive systems;
  12. Intelligent agents and multiagent systems.

Prof. Dr. Florin Leon
Dr. Mircea Hulea
Prof. Dr. Marius Gavrilescu
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Mathematics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • neural networks
  • deep learning
  • machine learning
  • optimization algorithms
  • autonomous driving
  • Bayesian networks
  • reinforcement learning
  • multiagent systems

Published Papers (15 papers)


Research

Jump to: Review

18 pages, 1406 KiB  
Article
Hybrid DE-Optimized GPR and NARX/SVR Models for Forecasting Gold Spot Prices: A Case Study of the Global Commodities Market
by Esperanza García-Gonzalo, Paulino José García-Nieto, Gregorio Fidalgo Valverde, Pedro Riesgo Fernández, Fernando Sánchez Lasheras and Sergio Luis Suárez Gómez
Mathematics 2024, 12(7), 1039; https://doi.org/10.3390/math12071039 - 30 Mar 2024
Viewed by 403
Abstract
In this work, we highlight three different techniques for automatically constructing the dataset for a time-series study: the direct multi-step, the recursive multi-step, and the direct–recursive hybrid scheme. The nonlinear autoregressive with exogenous variable support vector regression (NARX SVR) and the Gaussian process regression (GPR), combined with differential evolution (DE) for parameter tuning, are the two novel hybrid methods used in this study. The hyper-parameter settings used in the GPR and SVR training processes, tuned by DE as part of this optimization technique, significantly affect the accuracy of the regression. The accuracy in the prediction of DE/GPR and DE/SVR, with or without NARX, is examined in this article using publicly available data on spot gold prices from the New York Commodities Exchange (COMEX). According to RMSE statistics, the numerical results obtained demonstrate that NARX DE/SVR achieved the best results.
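A minimal sketch of the core idea, differential evolution tuning SVR hyper-parameters via cross-validation, assuming scikit-learn and SciPy on synthetic data; the NARX structure and the COMEX gold-price series are not reproduced here.

```python
# Sketch only: DE tunes (log10 C, log10 gamma) of an SVR by cross-validation.
# Synthetic data stands in for the gold-price series used in the paper.
import numpy as np
from scipy.optimize import differential_evolution
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X).ravel() + 0.1 * rng.standard_normal(200)

def neg_cv_score(params):
    # DE minimizes, so return the negated mean cross-validated R^2.
    C, gamma = 10.0 ** params[0], 10.0 ** params[1]
    return -cross_val_score(SVR(C=C, gamma=gamma), X, y, cv=3).mean()

result = differential_evolution(neg_cv_score, bounds=[(-2, 3), (-4, 1)],
                                maxiter=20, seed=0, polish=False)
print("best (log10 C, log10 gamma):", result.x, "CV R^2:", -result.fun)
```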

17 pages, 970 KiB  
Article
Summary-Sentence Level Hierarchical Supervision for Re-Ranking Model of Two-Stage Abstractive Summarization Framework
by Eunseok Yoo, Gyunyeop Kim and Sangwoo Kang
Mathematics 2024, 12(4), 521; https://doi.org/10.3390/math12040521 - 07 Feb 2024
Viewed by 544
Abstract
Fine-tuning a pre-trained sequence-to-sequence-based language model has significantly advanced the field of abstractive summarization. However, the early models of abstractive summarization were limited by the gap between training and inference, and they did not fully utilize the potential of the language model. Recent studies have introduced a two-stage framework that allows the second-stage model to re-rank the candidate summary generated by the first-stage model, to resolve these limitations. In this study, we point out that the supervision method performed in the existing re-ranking model of the two-stage abstractive summarization framework cannot learn detailed and complex information of the data. In addition, we present the problem of positional bias in the existing encoder–decoder-based re-ranking model. To address these two limitations, this study proposes a hierarchical supervision method that jointly performs summary and sentence-level supervision. For sentence-level supervision, we designed two sentence-level loss functions: intra- and inter-intra-sentence ranking losses. Compared to the existing abstractive summarization model, the proposed method exhibited a performance improvement for both the CNN/DM and XSum datasets. The proposed model outperformed the baseline model under a few-shot setting.
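A hedged sketch of summary-level candidate re-ranking supervision in the spirit of this framework, using a pairwise margin ranking loss over candidate scores; the paper's intra- and inter-intra-sentence losses are not reproduced, and all tensors here are illustrative.

```python
# Sketch only: a pairwise margin loss that pushes the re-ranker to score
# better candidates above worse ones (candidates assumed sorted best-first).
import torch

def candidate_ranking_loss(scores: torch.Tensor, margin: float = 0.01):
    loss = scores.new_zeros(())
    n = scores.size(0)
    for i in range(n):
        for j in range(i + 1, n):
            # Candidate j, ranked behind i, should trail by at least (j - i) * margin.
            loss = loss + torch.clamp(scores[j] - scores[i] + (j - i) * margin, min=0)
    return loss

scores = torch.tensor([0.9, 0.7, 0.8, 0.2], requires_grad=True)  # re-ranker outputs
candidate_ranking_loss(scores).backward()
```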

16 pages, 1462 KiB  
Article
State-Space Compression for Efficient Policy Learning in Crude Oil Scheduling
by Nan Ma, Hongqi Li and Hualin Liu
Mathematics 2024, 12(3), 393; https://doi.org/10.3390/math12030393 - 25 Jan 2024
Viewed by 581
Abstract
The imperative for swift and intelligent decision making in production scheduling has intensified in recent years. Deep reinforcement learning, akin to human cognitive processes, has heralded advancements in complex decision making and has found applicability in the production scheduling domain. Yet, its deployment in industrial settings is marred by large state spaces, protracted training times, and challenging convergence, necessitating a more efficacious approach. Addressing these concerns, this paper introduces an innovative, accelerated deep reinforcement learning framework—VSCS (Variational Autoencoder for State Compression in Soft Actor–Critic). The framework adeptly employs a variational autoencoder (VAE) to condense the expansive high-dimensional state space into a tractable low-dimensional feature space, subsequently leveraging these features to refine policy learning and augment the policy network’s performance and training efficacy. Furthermore, a novel methodology to ascertain the optimal dimensionality of these low-dimensional features is presented, integrating feature reconstruction similarity with visual analysis to facilitate informed dimensionality selection. This approach, rigorously validated within the realm of crude oil scheduling, demonstrates significant improvements over traditional methods. Notably, the convergence rate of the proposed VSCS method shows a remarkable increase of 77.5%, coupled with an 89.3% enhancement in the reward and punishment values. Furthermore, this method substantiates the robustness and appropriateness of the chosen feature dimensions.
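A minimal PyTorch sketch of the state-compression idea, a variational autoencoder whose latent mean replaces the raw state as the policy input; the dimensions and the SAC policy itself are placeholders, not the VSCS implementation.

```python
# Sketch only: a VAE compresses a high-dimensional scheduling state; the SAC
# policy would consume the latent mean instead of the raw state.
import torch
import torch.nn as nn

class StateVAE(nn.Module):
    def __init__(self, state_dim: int, latent_dim: int):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(state_dim, 128), nn.ReLU())
        self.mu = nn.Linear(128, latent_dim)
        self.logvar = nn.Linear(128, latent_dim)
        self.dec = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                                 nn.Linear(128, state_dim))

    def forward(self, s):
        h = self.enc(s)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        return self.dec(z), mu, logvar

vae = StateVAE(state_dim=512, latent_dim=16)            # placeholder dimensions
s = torch.randn(32, 512)
recon, mu, logvar = vae(s)
rec = ((recon - s) ** 2).sum(dim=1).mean()              # reconstruction term
kld = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(dim=1).mean()  # KL term
loss = rec + kld
```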

24 pages, 6294 KiB  
Article
Multi-Target Feature Selection with Adaptive Graph Learning and Target Correlations
by Yujing Zhou and Dubo He
Mathematics 2024, 12(3), 372; https://doi.org/10.3390/math12030372 - 24 Jan 2024
Viewed by 562
Abstract
In this paper, we present a novel multi-target feature selection algorithm that incorporates adaptive graph learning and target correlations. Specifically, our proposed approach introduces the low-rank constraint on the regression matrix, allowing us to model both inter-target and input–output relationships within a unified framework. To preserve the similarity structure of the samples and mitigate the influence of noise and outliers, we learn a graph matrix that captures the induced sample similarity. Furthermore, we introduce a manifold regularizer to maintain the global target correlations, ensuring the preservation of the overall target relationship during subsequent learning processes. To solve the final objective function, we also propose an optimization algorithm. Through extensive experiments on eight real-world datasets, we demonstrate that our proposed method outperforms state-of-the-art multi-target feature selection techniques.
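One plausible reading of the low-rank ingredient, sketched in NumPy: fit a regression matrix, truncate its rank, and rank features by row norms. The adaptive graph learning and manifold regularizer of the actual method are omitted, and all shapes are hypothetical.

```python
# Sketch only: low-rank multi-target regression followed by row-norm scoring.
import numpy as np

rng = np.random.default_rng(0)
X, Y = rng.standard_normal((100, 20)), rng.standard_normal((100, 3))

W = np.linalg.lstsq(X, Y, rcond=None)[0]           # d x t regression matrix
U, s, Vt = np.linalg.svd(W, full_matrices=False)
r = 2
W_lr = (U[:, :r] * s[:r]) @ Vt[:r]                 # rank-r approximation

feature_scores = np.linalg.norm(W_lr, axis=1)      # row norms rank the features
print(np.argsort(feature_scores)[::-1][:10])       # top-10 selected features
```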

19 pages, 2418 KiB  
Article
Efficient Federated Learning with Pre-Trained Large Language Model Using Several Adapter Mechanisms
by Gyunyeop Kim, Joon Yoo and Sangwoo Kang
Mathematics 2023, 11(21), 4479; https://doi.org/10.3390/math11214479 - 29 Oct 2023
Viewed by 1271
Abstract
Recent advancements in deep learning have led to various challenges, one of which is the issue of data privacy in training data. To address this issue, federated learning, a technique that merges models trained by clients on servers, has emerged as an attractive solution. However, federated learning faces challenges related to data heterogeneity and system heterogeneity. Recent observations suggest that incorporating pre-trained models into federated learning can mitigate some of these challenges. Nonetheless, the main drawback of pre-trained models lies in their typically large model size, leading to excessive data transmission when clients send these models to the server. Additionally, federated learning involves multiple global steps, which means transmitting a large language model to multiple clients results in too much data exchange. In this paper, we propose a novel approach to address this challenge using adapters. Adapters demonstrate training efficiency by training a small capacity adapter layer alongside a large language model. This unique characteristic reduces the volume of data transmission, offering a practical solution to the problem. The evaluation results demonstrate that the proposed method achieves a reduction in training time of approximately 20–40% and a transmission speed improvement of over 98% compared to previous approaches.
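A minimal sketch of the adapter idea in a federated setting, assuming PyTorch: each client trains a small bottleneck adapter over a frozen backbone, and only the adapter weights are averaged server-side (FedAvg-style). This is not the paper's exact protocol.

```python
# Sketch only: tiny residual adapters are exchanged; the backbone never moves,
# which is what cuts the transmission volume.
import torch
import torch.nn as nn

class Adapter(nn.Module):
    def __init__(self, dim: int, bottleneck: int = 16):
        super().__init__()
        self.down, self.up = nn.Linear(dim, bottleneck), nn.Linear(bottleneck, dim)

    def forward(self, x):
        return x + self.up(torch.relu(self.down(x)))   # residual bottleneck

backbone = nn.Linear(768, 768)                # stand-in for a frozen PLM layer
for p in backbone.parameters():
    p.requires_grad = False

clients = [Adapter(768) for _ in range(3)]    # one adapter per client

# Server step: average the adapter parameters only.
avg = {k: torch.stack([c.state_dict()[k] for c in clients]).mean(0)
       for k in clients[0].state_dict()}
for c in clients:
    c.load_state_dict(avg)
```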

16 pages, 3985 KiB  
Article
Structure-Aware Low-Rank Adaptation for Parameter-Efficient Fine-Tuning
by Yahao Hu, Yifei Xie, Tianfeng Wang, Man Chen and Zhisong Pan
Mathematics 2023, 11(20), 4317; https://doi.org/10.3390/math11204317 - 17 Oct 2023
Viewed by 1465
Abstract
With the growing scale of pre-trained language models (PLMs), full parameter fine-tuning becomes prohibitively expensive and practically infeasible. Therefore, parameter-efficient adaptation techniques for PLMs have been proposed to learn through incremental updates of pre-trained weights, such as in low-rank adaptation (LoRA). However, LoRA relies on heuristics to select the modules and layers to which it is applied, and assigns them the same rank. As a consequence, any fine-tuning that ignores the structural information between modules and layers is suboptimal. In this work, we propose structure-aware low-rank adaptation (SaLoRA), which adaptively learns the intrinsic rank of each incremental matrix by removing rank-0 components during training. We conduct comprehensive experiments using pre-trained models of different scales in both task-oriented (GLUE) and task-agnostic (Yelp and GYAFC) settings. The experimental results show that SaLoRA effectively captures the structure-aware intrinsic rank. Moreover, our method consistently outperforms LoRA without significantly compromising training efficiency.
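A minimal PyTorch sketch of a plain LoRA layer, the baseline that SaLoRA extends: a frozen weight plus a trainable rank-r update; SaLoRA's adaptive rank learning via rank-0 component removal is only indicated in the closing comment.

```python
# Sketch only: frozen weight W plus trainable low-rank update scale * B @ A.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, in_dim: int, out_dim: int, r: int = 4, alpha: int = 8):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_dim, in_dim), requires_grad=False)
        self.A = nn.Parameter(torch.randn(r, in_dim) * 0.01)  # trainable
        self.B = nn.Parameter(torch.zeros(out_dim, r))        # trainable, starts at 0
        self.scale = alpha / r

    def forward(self, x):
        return x @ (self.weight + self.scale * self.B @ self.A).T

out = LoRALinear(768, 768)(torch.randn(2, 768))
# SaLoRA, as described above, would additionally learn how much of the rank r
# each incremental matrix really needs, removing rank-0 components in training.
```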

25 pages, 2097 KiB  
Article
Breast Cancer Diagnosis Using a Novel Parallel Support Vector Machine with Harris Hawks Optimization
by Sultan Almotairi, Elsayed Badr, Mustafa Abdul Salam and Hagar Ahmed
Mathematics 2023, 11(14), 3251; https://doi.org/10.3390/math11143251 - 24 Jul 2023
Cited by 1 | Viewed by 1567
Abstract
Three contributions are proposed. First, a novel hybrid classifier (HHO-SVM), which combines Harris hawks optimization (HHO) with a support vector machine (SVM), is introduced. Second, the performance of the HHO-SVM is enhanced using the conventional normalization method. The final contribution is to improve the efficiency of the HHO-SVM by adopting a parallel approach that employs data distribution. The proposed models are evaluated using the Wisconsin Diagnosis Breast Cancer (WDBC) dataset. The results show that the HHO-SVM achieves a 98.24% accuracy rate with the normalization scaling technique and a 99.47% accuracy rate with the equilibration scaling technique, outperforming related previous works. Finally, comparing the three effective scaling strategies on four CPU cores, the parallel version of the proposed model provides a speedup of 3.97.
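A hedged sketch of the HHO-SVM idea on the WDBC data (available via scikit-learn): a population search over SVM hyper-parameters with a simplified contraction toward the best candidate; the full HHO update equations and the parallel data-distribution scheme are not reproduced.

```python
# Sketch only: a simplified population search tunes (log10 C, log10 gamma).
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)    # the WDBC dataset
X = StandardScaler().fit_transform(X)         # a stand-in normalization step

def fitness(p):
    C, gamma = 10.0 ** p[0], 10.0 ** p[1]
    return cross_val_score(SVC(C=C, gamma=gamma), X, y, cv=5).mean()

rng = np.random.default_rng(0)
hawks = rng.uniform([-2, -4], [3, 0], size=(8, 2))   # population in log10 space
for _ in range(10):
    scores = np.array([fitness(h) for h in hawks])
    best = hawks[scores.argmax()]
    # Simplified move toward the best hawk; not the full HHO equations.
    hawks = best + (hawks - best) * rng.uniform(0.3, 0.9, size=hawks.shape)
print("best CV accuracy:", max(fitness(h) for h in hawks))
```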

13 pages, 4311 KiB  
Article
Effects of Exploration Weight and Overtuned Kernel Parameters on Gaussian Process-Based Bayesian Optimization Search Performance
by Yuto Omae
Mathematics 2023, 11(14), 3067; https://doi.org/10.3390/math11143067 - 11 Jul 2023
Cited by 1 | Viewed by 877
Abstract
Gaussian process-based Bayesian optimization (GPBO) is used to search parameters in machine learning, material design, etc. It is a method for finding optimal solutions in a search space through the following four procedures. (1) Develop a Gaussian process regression (GPR) model using observed data. (2) The GPR model is used to obtain the estimated mean and estimated variance for the search space. (3) The point where the sum of the estimated mean and the weighted estimated variance (upper confidence bound, UCB) is largest is the next search point (in the case of a maximum search). (4) Repeat the above procedures. Thus, the generalization performance of the GPR is directly related to the search performance of the GPBO. In procedure (1), the kernel parameters (KPs) of the GPR are tuned via gradient descent (GD) using the log-likelihood as the objective function. However, if the number of iterations of the GD is too high, there is a risk that the KPs will overfit the observed data. In this case, because the estimated mean and variance output by the GPR model are inappropriate, the next search point cannot be properly determined. Therefore, overtuned KPs degrade the GPBO search performance. However, this negative effect can be mitigated by changing the parameters of the GPBO. We focus on the weight of the estimated variances (exploration weight) of the UCB as one of these parameters. In a GPBO with a large exploration weight, the observed data appear in various regions in the search space. If the KP is tuned using such data, the GPR model can estimate the diverse regions somewhat correctly, even if the KP overfits the observed data, i.e., the negative effect of overtuned KPs on the GPR is mitigated by setting a larger exploration weight for the UCB. This suggests that the negative effect of overtuned KPs on the GPBO search performance may be related to the UCB exploration weight. In the present study, this hypothesis was tested using simple numerical simulations. Specifically, GPBO was applied to a simple black-box function with two optimal solutions. As parameters of GPBO, we set the number of KP iterations of GD in the range of 0–500 and the exploration weight as {1,5}. The number of KP iterations expresses the degree of overtuning, and the exploration weight expresses the strength of the GPBO search. The results indicate that, in the overtuned KP situation, GPBO with a larger exploration weight has better search performance. This suggests that, when searching for solutions with a small GPBO exploration weight, one must be careful about overtuning KPs. The findings of this study are useful for successful exploration with GPBO in all situations where it is used, e.g., machine learning hyperparameter tuning.
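A minimal sketch of GPBO with a UCB acquisition, assuming scikit-learn, where kappa plays the role of the exploration weight studied above; scikit-learn tunes the kernel parameters internally by maximizing the log-marginal likelihood, standing in for the GD-based KP tuning described.

```python
# Sketch only: procedures (1)-(4) of GPBO with a UCB acquisition.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def f(x):                                  # toy black box with two optima
    return (np.sin(3 * x) + 0.5 * np.sin(5 * x)).ravel()

rng = np.random.default_rng(0)
X = rng.uniform(0, 3, size=(3, 1))         # initial observations
y = f(X)
grid = np.linspace(0, 3, 300).reshape(-1, 1)
kappa = 5.0                                # exploration weight: larger = stronger

for _ in range(15):
    gpr = GaussianProcessRegressor(kernel=RBF(), normalize_y=True).fit(X, y)
    mean, std = gpr.predict(grid, return_std=True)
    x_next = grid[np.argmax(mean + kappa * std)]       # UCB maximizer
    X = np.vstack([X, [x_next]])
    y = np.append(y, f(x_next.reshape(1, 1)))

print("best x, f(x):", X[np.argmax(y)].item(), y.max())
```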

25 pages, 13079 KiB  
Article
Multiagent Multimodal Trajectory Prediction in Urban Traffic Scenarios Using a Neural Network-Based Solution
by Andreea-Iulia Patachi and Florin Leon
Mathematics 2023, 11(8), 1923; https://doi.org/10.3390/math11081923 - 19 Apr 2023
Cited by 1 | Viewed by 1457
Abstract
Trajectory prediction in urban scenarios is critical for high-level automated driving systems. However, this task is associated with many challenges. On the one hand, a scene typically includes different traffic participants, such as vehicles, buses, pedestrians, and cyclists, which may behave differently. On the other hand, an agent may have multiple plausible future trajectories based on complex interactions with the other agents. To address these challenges, we propose a multiagent, multimodal trajectory prediction method based on neural networks, which encodes past motion information, group context, and road context to estimate future trajectories by learning from the interactions of the agents. At inference time, multiple realistic future trajectories are predicted. Our solution is based on an encoder–decoder architecture that can handle a variable number of traffic participants. It uses vectors of agent features as inputs rather than images, and it is designed to run on a physical autonomous car, addressing the real-time operation requirements. We evaluate the method using the inD dataset for each type of traffic participant and provide information about its integration into an actual self-driving car.
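A minimal PyTorch sketch of an encoder-decoder over agent feature vectors producing K candidate future trajectories per agent (the multimodal aspect); group context, road context, and the inD evaluation are omitted, and all dimensions are placeholders.

```python
# Sketch only: encode past per-agent features, decode K (x, y) tracks each.
import torch
import torch.nn as nn

class TrajectoryPredictor(nn.Module):
    def __init__(self, feat_dim=8, hidden=64, horizon=12, modes=3):
        super().__init__()
        self.encoder = nn.GRU(feat_dim, hidden, batch_first=True)
        self.decoder = nn.Linear(hidden, modes * horizon * 2)  # K (x, y) tracks
        self.modes, self.horizon = modes, horizon

    def forward(self, past):              # past: (agents, timesteps, feat_dim)
        _, h = self.encoder(past)
        return self.decoder(h.squeeze(0)).view(-1, self.modes, self.horizon, 2)

futures = TrajectoryPredictor()(torch.randn(5, 20, 8))  # -> (5, 3, 12, 2)
```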

17 pages, 3823 KiB  
Article
Manifold Regularized Principal Component Analysis Method Using L2,p-Norm
by Minghua Wan, Xichen Wang, Hai Tan and Guowei Yang
Mathematics 2022, 10(23), 4603; https://doi.org/10.3390/math10234603 - 05 Dec 2022
Cited by 2 | Viewed by 1266
Abstract
The main idea of principal component analysis (PCA) is to transform a problem from a high-dimensional space into a low-dimensional one, obtaining the output sample set after a series of operations on the samples. However, the accuracy of the traditional principal component analysis method in dimension reduction is not very high, and it is very sensitive to outliers. In order to improve the robustness of image recognition to noise and to exploit the geometric information in a given data space, this paper proposes a new unsupervised feature extraction model based on l2,p-norm PCA and a manifold learning method. To improve robustness, the method adopts the l2,p-norm to reconstruct the distance measure between the error and the original input data. When the image is occluded, the projection direction will not significantly deviate from the expected solution of the model, which minimizes the reconstruction error of the data and improves the recognition accuracy. To verify the robustness of the proposed algorithm, the datasets used in the experiments include the ORL, Yale, FERET, and PolyU palmprint databases. In the experiments on these four databases, the recognition rate of the proposed method is higher than that of the other methods when p=0.5. Finally, the experimental results show that the proposed method is robust and effective.
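A NumPy sketch contrasting the l2,p reconstruction error with the squared (p=2) case on an ordinary PCA basis; the paper's optimization of the projection itself under the l2,p-norm is not reproduced here.

```python
# Sketch only: with p < 2, large (outlier) rows are down-weighted relative to
# the squared error, which is the robustness mechanism described above.
import numpy as np

def l2p_error(E, p=0.5):
    return (np.linalg.norm(E, axis=1) ** p).sum()   # sum of (row L2 norm)^p

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 10))
W = np.linalg.svd(X, full_matrices=False)[2][:3].T  # ordinary PCA basis, k = 3
E = X - X @ W @ W.T                                  # reconstruction error
print(l2p_error(E, p=0.5), l2p_error(E, p=2.0))
```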

16 pages, 1714 KiB  
Article
Hateful Memes Detection Based on Multi-Task Learning
by Zhiyu Ma, Shaowen Yao, Liwen Wu, Song Gao and Yunqi Zhang
Mathematics 2022, 10(23), 4525; https://doi.org/10.3390/math10234525 - 30 Nov 2022
Cited by 3 | Viewed by 2888
Abstract
With the popularity of posting memes on social platforms, the severe negative impact of hateful memes is growing. As existing detection models have lower detection accuracy than humans, hateful memes detection is still a challenge to statistical learning and artificial intelligence. This paper proposes a multi-task learning method consisting of a primary multimodal task and two unimodal auxiliary tasks to address this issue. We introduce a self-supervised generation strategy in the auxiliary tasks to generate unimodal auxiliary labels automatically. Meanwhile, we use BERT and ResNet as the backbones for text and image classification, respectively, and then fuse them with a late fusion method. In the training phase, a backward guidance technique and an adaptive weight adjustment strategy are used to capture the consistency and variability between different modalities, improving the hateful memes detection accuracy as well as the generalization and robustness of the model. The experiment conducted on the Facebook AI multimodal hateful memes dataset shows that the prediction accuracy of our model outperforms that of the comparison models.
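A minimal PyTorch sketch of the multi-task late-fusion setup: one multimodal head over concatenated features plus two unimodal auxiliary heads. The feature extractors (BERT, ResNet), the self-supervised auxiliary labels, and the adaptive weight adjustment are replaced by placeholders here.

```python
# Sketch only: random tensors stand in for BERT and ResNet features.
import torch
import torch.nn as nn

text_feat, image_feat = torch.randn(4, 768), torch.randn(4, 2048)
labels = torch.randint(0, 2, (4,))

text_head = nn.Linear(768, 2)                        # auxiliary: text only
image_head = nn.Linear(2048, 2)                      # auxiliary: image only
fusion_head = nn.Sequential(nn.Linear(768 + 2048, 256), nn.ReLU(),
                            nn.Linear(256, 2))       # primary multimodal task

ce = nn.CrossEntropyLoss()
loss = (ce(fusion_head(torch.cat([text_feat, image_feat], dim=1)), labels)
        + 0.5 * ce(text_head(text_feat), labels)     # fixed auxiliary weights
        + 0.5 * ce(image_head(image_feat), labels))  # stand in for the paper's
loss.backward()                                      # adaptive adjustment
```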

33 pages, 3285 KiB  
Article
Optimization Methods for Redundancy Allocation in Hybrid Structure Large Binary Systems
by Petru Cașcaval and Florin Leon
Mathematics 2022, 10(19), 3698; https://doi.org/10.3390/math10193698 - 09 Oct 2022
Cited by 1 | Viewed by 1394
Abstract
This paper addresses the issue of optimal redundancy allocation in hybrid structure large binary systems. Two aspects of optimization are considered: (1) maximizing the reliability of the system under the cost constraint, and (2) obtaining the necessary reliability at a minimum cost. The complex binary system considered in this work is composed of many subsystems with redundant structure. To cover most of the cases encountered in practice, the following kinds of redundancy are considered: active redundancy, passive redundancy, hybrid standby redundancy with a hot or warm reserve and possibly other cold ones, triple modular redundancy (TMR) structure with control facilities and cold spare components, static redundancy: triple modular redundancy or 5-modular redundancy (5MR), TMR/Simplex with cold standby redundancy, and TMR/Duplex with cold standby redundancy. A classic evolutionary algorithm highlights the complexity of this optimization problem. To master the complexity of this problem, two fundamentally different optimization methods are proposed: an improved evolutionary algorithm and a zero-one integer programming formulation. To speed up the search process, a lower bound is determined first. The paper highlights the difficulty of these optimization problems for large systems and, based on numerical results, shows the effectiveness of zero-one integer programming.
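A toy sketch of optimization aspect (1), maximizing reliability under a cost constraint, restricted to active redundancy in a small series system; the standby and TMR variants and the zero-one integer programming formulation are not reproduced.

```python
# Sketch only: with k active parallel components of reliability r, a
# subsystem's reliability is 1 - (1 - r)^k; the system is a series product.
import itertools
import math

r = [0.90, 0.85, 0.95]        # component reliabilities (hypothetical)
c = [2.0, 3.0, 1.5]           # component costs (hypothetical)
budget = 20.0

best = None
for alloc in itertools.product(range(1, 5), repeat=3):  # components per subsystem
    cost = sum(k * ci for k, ci in zip(alloc, c))
    if cost > budget:
        continue
    rel = math.prod(1 - (1 - ri) ** k for ri, k in zip(r, alloc))
    if best is None or rel > best[0]:
        best = (rel, alloc, cost)
print(best)                   # (reliability, allocation, cost)
```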

29 pages, 3796 KiB  
Article
A Hybrid Competitive Evolutionary Neural Network Optimization Algorithm for a Regression Problem in Chemical Engineering
by Marius Gavrilescu, Sabina-Adriana Floria, Florin Leon and Silvia Curteanu
Mathematics 2022, 10(19), 3581; https://doi.org/10.3390/math10193581 - 30 Sep 2022
Cited by 2 | Viewed by 1468
Abstract
Neural networks have demonstrated their usefulness for solving complex regression problems in circumstances where alternative methods do not provide satisfactory results. Finding a good neural network model is a time-consuming task that involves searching through a complex multidimensional hyperparameter and weight space in order to find the values that provide optimal convergence. We propose a novel neural network optimizer that leverages the advantages of both an improved evolutionary competitive algorithm and gradient-based backpropagation. The method consists of a modified, hybrid variant of the Imperialist Competitive Algorithm (ICA). We analyze multiple strategies for initialization, assimilation, revolution, and competition, in order to find the combination of ICA steps that provides optimal convergence and enhance the algorithm by incorporating a backpropagation step in the ICA loop, which, together with a self-adaptive hyperparameter adjustment strategy, significantly improves on the original algorithm. The resulting hybrid method is used to optimize a neural network to solve a complex problem in the field of chemical engineering: the synthesis and swelling behavior of the semi- and interpenetrated multicomponent crosslinked structures of hydrogels, with the goal of predicting the yield in a crosslinked polymer and the swelling degree based on several reaction-related input parameters. We show that our approach has better performance than other biologically inspired optimization algorithms and generates regression models capable of making predictions that are better correlated with the desired outputs.
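A hedged NumPy skeleton of the hybrid loop, an ICA-style population step (assimilation, revolution) interleaved with a gradient step, on a stand-in quadratic loss; the competition phase and the self-adaptive hyperparameters of the actual method are omitted.

```python
# Sketch only: population search plus a gradient step per candidate.
import numpy as np

def loss(w):  return ((w - 1.5) ** 2).sum()   # stand-in for network training loss
def grad(w):  return 2 * (w - 1.5)

rng = np.random.default_rng(0)
countries = rng.standard_normal((20, 4))      # candidate weight vectors
for _ in range(50):
    costs = np.array([loss(w) for w in countries])
    imperialists = countries[np.argsort(costs)[:4]]
    for i in range(len(countries)):
        target = imperialists[rng.integers(4)]
        countries[i] += 0.5 * (target - countries[i])      # assimilation
        if rng.random() < 0.1:
            countries[i] += 0.3 * rng.standard_normal(4)   # revolution
        countries[i] -= 0.05 * grad(countries[i])          # backpropagation step
print(countries[np.argmin([loss(w) for w in countries])])  # ~ all 1.5
```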

Review

Jump to: Research

42 pages, 8350 KiB  
Review
Analysis of Colorectal and Gastric Cancer Classification: A Mathematical Insight Utilizing Traditional Machine Learning Classifiers
by Hari Mohan Rai and Joon Yoo
Mathematics 2023, 11(24), 4937; https://doi.org/10.3390/math11244937 - 12 Dec 2023
Cited by 1 | Viewed by 918
Abstract
Cancer remains a formidable global health challenge, claiming millions of lives annually. Timely and accurate cancer diagnosis is imperative. While numerous reviews have explored cancer classification using machine learning and deep learning techniques, scant literature focuses on traditional ML methods. In this manuscript, we undertake a comprehensive review of colorectal and gastric cancer detection specifically employing traditional ML classifiers. This review emphasizes the mathematical underpinnings of cancer detection, encompassing preprocessing techniques, feature extraction, machine learning classifiers, and performance assessment metrics. We provide mathematical formulations for these key components. Our analysis is limited to peer-reviewed articles published between 2017 and 2023, exclusively considering medical imaging datasets. Benchmark and publicly available imaging datasets for colorectal and gastric cancers are presented. This review synthesizes findings from 20 articles on colorectal cancer and 16 on gastric cancer, culminating in a total of 36 research articles. A significant focus is placed on mathematical formulations for commonly used preprocessing techniques, features, ML classifiers, and assessment metrics. Crucially, we introduce our optimized methodology for the detection of both colorectal and gastric cancers. Our performance metrics analysis reveals remarkable results: 100% accuracy in both cancer types, but with the lowest sensitivity recorded at 43.1% for gastric cancer.
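For reference, the two headline metrics quoted above have simple formulations from a binary confusion matrix, sketched here with hypothetical counts.

```python
# Accuracy and sensitivity from hypothetical (TP, FN, FP, TN) counts.
tp, fn, fp, tn = 40, 3, 2, 55
accuracy = (tp + tn) / (tp + tn + fp + fn)
sensitivity = tp / (tp + fn)     # also called recall or true-positive rate
print(f"accuracy={accuracy:.3f}, sensitivity={sensitivity:.3f}")
```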

31 pages, 2143 KiB  
Review
A Review of Intelligent Connected Vehicle Cooperative Driving Development
by Biyao Wang, Yi Han, Siyu Wang, Di Tian, Mengjiao Cai, Ming Liu and Lujia Wang
Mathematics 2022, 10(19), 3635; https://doi.org/10.3390/math10193635 - 04 Oct 2022
Cited by 18 | Viewed by 5242
Abstract
With the development and progress of information technology, especially V2X technology, the research focus of intelligent vehicles has gradually shifted from single-vehicle control to multi-vehicle control, and the cooperative control system of intelligent connected vehicles has become an important topic of development. In order to track the research progress of intelligent connected vehicle cooperative driving systems in recent years, this paper discusses the current research on such systems with respect to vehicles, infrastructure, and test sites, and analyzes the current development status, development trends, and limitations for each object. Based on an analysis of the relevant references on cooperative control algorithms, this paper expounds on vehicle collaborative queue control, vehicle collaborative decision making, and vehicle collaborative positioning. Taking the infrastructure as the object, this paper expounds on the communication security, communication delay, and communication optimization algorithms of the vehicle terminal and the road terminal of intelligent connected vehicles. Taking the test site as the object, this paper expounds on the development process and research status of real-vehicle road test platforms, virtual test platforms, test methods, and evaluation mechanisms, and analyzes the problems existing in the intelligent connected vehicle test environment. Finally, the future development trends and limitations of intelligent connected vehicle cooperative control systems are discussed. This paper summarizes the intelligent connected vehicle cooperative control system and puts forward the problems to be solved next and the directions for further exploration. The research results can provide a reference for the cooperative driving of intelligent vehicles.
