Advances in Machine Learning, Optimization, and Control Applications

A special issue of Mathematics (ISSN 2227-7390). This special issue belongs to the section "Engineering Mathematics".

Deadline for manuscript submissions: closed (31 March 2023) | Viewed by 35608

Special Issue Editors


Guest Editor
School of Intelligent Systems Engineering, Sun Yat-sen University, Shenzhen 518107, China
Interests: large-scale pattern recognition; signal processing; machine learning; control systems

Guest Editor
School of Mechatronic Engineering and Automation, Shanghai University, Shanghai 200444, China
Interests: sparse optimization; distributed optimization; deep learning; data-driven fault detection
Guest Editor
School of Intelligent Systems Engineering, Sun Yat-sen University, Guangzhou 510275, China
Interests: data-driven control systems; intelligent control; optimization; robot control

Special Issue Information

Dear Colleagues,

In practice, many systems, such as industrial processes, aerospace systems, transportation systems, and power grids, are becoming increasingly complex. These systems may also suffer from various uncertainties, strong nonlinearities, external disturbances, and stochastic effects, which significantly challenge model-based control and optimization. Meanwhile, with the development of information science and sensing technology, huge amounts of data are constantly being generated, and both academia and industry have put much effort into mining valuable information from these data to facilitate the control and optimization of practical systems.

Over the past few decades, data science and machine learning have demonstrated tremendous success in many areas of science and engineering, such as large-scale pattern recognition, computer vision, multiagent control, and industrial engineering. The connection between machine learning and control theory has become a popular research topic: it may endow control systems with learning ability and thus improve the performance of conventional control approaches. However, coupling a learning algorithm with a control loop requires treating the combination as a dynamic process, which raises fundamental questions about the stability, robustness, and safety of control systems. Conversely, insights from robust control theory may help to enhance the robustness of machine learning algorithms. To leverage the potential of data-based and learning methods for control and optimization, we believe that principled approaches integrating machine learning and control theory are urgently needed, which in turn creates demand for novel mathematical theory, new optimization algorithms, and the statistical techniques behind machine learning.

This Special Issue on “Advances in Machine Learning, Optimization, and Control Applications” aims to present the latest theoretical and technical advances in the broad areas of machine learning, optimization, and control applications, and also to explore open problems and challenges at the intersection of these techniques. Topics of interest include, but are not limited to, machine learning, neural networks, statistical optimization learning, parallel and distributed optimization, sparse optimization, intelligent control via neural networks, and other applications of machine learning.

Prof. Dr. Wanquan Liu
Dr. Xianchao Xiu
Prof. Dr. Xuefang Li
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com after registering and logging in to the website. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers are published continuously in the journal (as soon as accepted) and are listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Mathematics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • machine learning
  • neural networks
  • mathematical models
  • distributed systems
  • optimization methods
  • scientific computing
  • pattern recognition
  • data-driven control systems
  • learning control systems
  • reinforcement learning control and optimization


Published Papers (11 papers)


Research


31 pages, 40502 KiB  
Article
Levy Flight-Based Improved Grey Wolf Optimization: A Solution for Various Engineering Problems
by Bhargav Bhatt, Himanshu Sharma, Krishan Arora, Gyanendra Prasad Joshi and Bhanu Shrestha
Mathematics 2023, 11(7), 1745; https://doi.org/10.3390/math11071745 - 05 Apr 2023
Cited by 4 | Viewed by 2231
Abstract
Optimization is a broad field in which researchers develop new algorithms for solving various types of problems, and many popular techniques are continually being improved. Grey wolf optimization (GWO) is one such algorithm because it is efficient, simple to use, and easy to implement. However, GWO has several drawbacks: it can become stuck in local optima, has a low convergence rate, and exhibits poor exploration. Several attempts have been made recently to overcome these drawbacks. This paper discusses strategies that can be applied to GWO to address them and proposes a novel algorithm to enhance the convergence rate, which is compared with other optimization algorithms. GWO can also become trapped in local optima when applied to complex functions or large search spaces, so these issues are further addressed. Notably, GWO depends strongly on initialization settings such as the population size and the wolves' initial positions. This study demonstrates improved wolf positions by applying the proposed strategies with the same population size. As a result, the novel algorithm has enhanced exploration capability compared with the other algorithms presented, and statistical results are provided to demonstrate its superiority.
(This article belongs to the Special Issue Advances in Machine Learning, Optimization, and Control Applications)
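As a rough illustration of the Levy-flight mechanism commonly grafted onto GWO-style position updates, the sketch below draws a heavy-tailed step via Mantegna's algorithm; the function name, parameters, and step scale are our illustrative choices, not the authors' exact formulation.

```python
import math
import numpy as np

def levy_step(dim, beta=1.5, rng=None):
    """Heavy-tailed random step via Mantegna's algorithm: s = u / |v|**(1/beta)."""
    rng = rng or np.random.default_rng(0)
    sigma_u = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
               / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma_u, dim)   # numerator: scaled Gaussian
    v = rng.normal(0.0, 1.0, dim)       # denominator: standard Gaussian
    return u / np.abs(v) ** (1 / beta)

# A wolf occasionally takes a Levy jump around the current best position,
# which helps escape local optima that plague plain GWO.
best = np.zeros(5)
wolf = best + 0.01 * levy_step(5)
```

Occasional long jumps interleaved with many short steps are what gives Levy-flight variants their improved exploration over purely Gaussian perturbations.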

36 pages, 21841 KiB  
Article
Mountaineering Team-Based Optimization: A Novel Human-Based Metaheuristic Algorithm
by Iman Faridmehr, Moncef L. Nehdi, Iraj Faraji Davoudkhani and Alireza Poolad
Mathematics 2023, 11(5), 1273; https://doi.org/10.3390/math11051273 - 06 Mar 2023
Cited by 16 | Viewed by 2323
Abstract
This paper proposes a novel optimization method for solving real-world optimization problems: the mountaineering team-based optimization (MTBO) algorithm, inspired by cooperative human behavior. Proposed here for the first time, MTBO is mathematically modeled to achieve a robust optimization algorithm based on the social behavior and human cooperation needed, amid natural phenomena, to reach a mountaintop, which represents the globally optimal solution. To solve optimization problems, MTBO captures the phases of the regular and guided movement of climbers based on the leader's experience, the obstacles to reaching the peak and the risk of getting stuck in local optima, and the coordination and social cooperation of the group in saving members from natural hazards. The performance of MTBO was tested on the 30 well-known CEC 2014 test functions as well as on classical engineering design problems, and the results were compared with those of well-known methods. MTBO is shown to be very competitive with state-of-the-art metaheuristic methods, and its superiority is further confirmed by statistical validation and the Wilcoxon signed-rank test against advanced optimization algorithms. Compared with the other algorithms, MTBO is more robust, easier to implement, exhibits effective optimization performance on a wide range of real-world test functions, and converges faster to globally optimal solutions.
(This article belongs to the Special Issue Advances in Machine Learning, Optimization, and Control Applications)

26 pages, 7042 KiB  
Article
Task Scheduling Approach in Cloud Computing Environment Using Hybrid Differential Evolution
by Mohamed Abdel-Basset, Reda Mohamed, Waleed Abd Elkhalik, Marwa Sharawi and Karam M. Sallam
Mathematics 2022, 10(21), 4049; https://doi.org/10.3390/math10214049 - 31 Oct 2022
Cited by 8 | Viewed by 2375
Abstract
Task scheduling is one of the most significant challenges in the cloud computing environment and has attracted the attention of many researchers over recent decades as a means of achieving cost-effective execution and improved resource utilization. Task scheduling is a nondeterministic polynomial time (NP)-hard problem that cannot be tackled with classical methods, due to their inability to find a near-optimal solution within a reasonable time. Metaheuristic algorithms have therefore been employed, but they still suffer from becoming trapped in local minima and from low convergence speed. In this study, a new task scheduler, hybrid differential evolution (HDE), is presented as a solution to the task scheduling challenge in the cloud computing environment. The scheduler is based on two proposed enhancements to traditional differential evolution (DE). The first adapts the scaling factor, using values generated dynamically from the current iteration, to strengthen both the exploration and exploitation operators; the second improves the exploitation operator of classical DE to achieve better results in fewer iterations. Multiple experiments using randomly generated datasets and the CloudSim simulator were conducted to demonstrate the efficacy of HDE. In addition, HDE was compared with a variety of heuristic and metaheuristic algorithms, including the slime mould algorithm (SMA), equilibrium optimizer (EO), sine cosine algorithm (SCA), whale optimization algorithm (WOA), grey wolf optimizer (GWO), classical DE, first come first served (FCFS), round robin (RR), and shortest job first (SJF) schedulers. Makespan and total execution time were measured for task sizes ranging from 100 to 3000. The results indicate that HDE produced superior outcomes, making it the most efficient metaheuristic scheduling algorithm among the methods studied.
(This article belongs to the Special Issue Advances in Machine Learning, Optimization, and Control Applications)
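The first enhancement, an iteration-dependent scaling factor inside the DE mutation step, can be sketched generically as below; the linear decay schedule and parameter names are illustrative assumptions, not the authors' exact formula.

```python
import numpy as np

def de_mutant(pop, t, T, F_min=0.2, F_max=0.9, rng=None):
    """DE/rand/1 mutation with a scaling factor that shrinks over iterations,
    favouring exploration early and exploitation late."""
    rng = rng or np.random.default_rng(0)
    F = F_max - (F_max - F_min) * t / T          # dynamic scaling factor
    r1, r2, r3 = pop[rng.choice(len(pop), 3, replace=False)]
    return r1 + F * (r2 - r3)

pop = np.random.default_rng(1).uniform(-1, 1, (10, 4))
v_early = de_mutant(pop, t=0, T=100)    # large F: wide, exploratory moves
v_late = de_mutant(pop, t=100, T=100)   # small F: fine, exploitative moves
```

Tying F to the iteration counter is a standard way to trade global search for local refinement as the run progresses.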

13 pages, 2811 KiB  
Article
Fractured Elbow Classification Using Hand-Crafted and Deep Feature Fusion and Selection Based on Whale Optimization Approach
by Sarib Malik, Javeria Amin, Muhammad Sharif, Mussarat Yasmin, Seifedine Kadry and Sheraz Anjum
Mathematics 2022, 10(18), 3291; https://doi.org/10.3390/math10183291 - 10 Sep 2022
Cited by 13 | Viewed by 1702
Abstract
Elbow fractures are common in human beings, and the complex structure of the elbow, including its irregular shape and border, makes them difficult to recognize correctly. To address these challenges, a two-phase method is proposed. In Phase I, pre-processing is performed, in which images are converted into RGB. In Phase II, the pre-trained convolutional models Darknet-53 and Xception are used for deep feature extraction, and handcrafted features, namely the histogram of oriented gradients (HOG) and local binary patterns (LBP), are also extracted from the input images. Principal component analysis (PCA) is used to select the best features, which are serially merged into a single feature vector of length N×2125. Furthermore, N×1049 informative features are selected out of the N×2125 using the whale optimization approach (WOA) and supplied to SVM, KNN, and wide neural network (WNN) classifiers. The method's performance is evaluated on 16,984 elbow X-ray radiographs taken from the publicly available musculoskeletal radiology (MURA) dataset. The proposed technique achieves 97.1% accuracy and a kappa score of 0.943 for the classification of elbow fractures, and the results are compared with the most recently published approaches on the same benchmark dataset.
(This article belongs to the Special Issue Advances in Machine Learning, Optimization, and Control Applications)
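The reduce-then-concatenate ("serial fusion") step for deep and handcrafted descriptors can be sketched with an SVD-based PCA; the feature dimensions below are placeholders, not the paper's N×2125/N×1049 split.

```python
import numpy as np

def pca_project(X, k):
    """Project samples (rows of X) onto the top-k principal components."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

rng = np.random.default_rng(0)
deep = rng.random((8, 100))   # stand-in for Darknet-53 / Xception features
hand = rng.random((8, 60))    # stand-in for HOG / LBP features

# Serial fusion: reduce each descriptor, then concatenate per sample.
fused = np.hstack([pca_project(deep, 6), pca_project(hand, 2)])
```

The fused matrix would then go to a selector such as WOA and on to the SVM/KNN/WNN classifiers.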

19 pages, 5099 KiB  
Article
Deep Learning for Vessel Trajectory Prediction Using Clustered AIS Data
by Cheng-Hong Yang, Guan-Cheng Lin, Chih-Hsien Wu, Yen-Hsien Liu, Yi-Chuan Wang and Kuo-Chang Chen
Mathematics 2022, 10(16), 2936; https://doi.org/10.3390/math10162936 - 15 Aug 2022
Cited by 10 | Viewed by 2313
Abstract
Accurate vessel track prediction is key for maritime traffic control and management. Accurate predictions enable collision avoidance and are also useful for planning routes in advance, shortening sailing distance, and improving navigation efficiency. Vessel track prediction using automatic identification system (AIS) data has attracted extensive attention in the maritime traffic community. In this study, a model combining density-based spatial clustering of applications with noise (DBSCAN) with long short-term memory (LSTM), denoted DLSTM, was developed for vessel track prediction. DBSCAN was used to cluster vessel tracks, and LSTM was then used for training and prediction. The performance of DLSTM was compared with that of support vector regression, recurrent neural network, and conventional LSTM models. The results revealed that DLSTM outperformed these models by approximately 2–8%. The proposed model provides better prediction of vessel tracks, which can in turn improve the efficiency and safety of maritime traffic control.
(This article belongs to the Special Issue Advances in Machine Learning, Optimization, and Control Applications)
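Before an LSTM can train on a clustered track, each AIS trajectory is typically cut into fixed-length input windows with a next-point target. A minimal sketch (the window length and random-walk track are illustrative assumptions):

```python
import numpy as np

def make_windows(track, w):
    """Turn a (T, 2) lat/lon track into (T-w, w, 2) inputs and (T-w, 2) targets:
    each window of w consecutive points predicts the point that follows it."""
    X = np.stack([track[i:i + w] for i in range(len(track) - w)])
    y = track[w:]
    return X, y

# A synthetic 50-point track standing in for one DBSCAN cluster of AIS fixes.
track = np.cumsum(np.random.default_rng(0).normal(size=(50, 2)), axis=0)
X, y = make_windows(track, w=10)
```

Clustering first means each LSTM (or each conditioning cluster label) sees tracks with a consistent motion pattern, which is where the reported 2–8% gain over a plain LSTM plausibly comes from.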

20 pages, 2981 KiB  
Article
Proposing a High-Precision Petroleum Pipeline Monitoring System for Identifying the Type and Amount of Oil Products Using Extraction of Frequency Characteristics and a MLP Neural Network
by Abdulilah Mohammad Mayet, Karina Shamilyevna Nurgalieva, Ali Awadh Al-Qahtani, Igor M. Narozhnyy, Hala H. Alhashim, Ehsan Nazemi and Ilya M. Indrupskiy
Mathematics 2022, 10(16), 2916; https://doi.org/10.3390/math10162916 - 13 Aug 2022
Cited by 11 | Viewed by 1340
Abstract
Setting up pipelines in the oil industry is very costly and time-consuming, so a single pipe is usually used to transport various petroleum products, and it is therefore very important to have an accurate and reliable control system that determines the type and amount of each product. In this research, a system based on the gamma-ray attenuation technique and frequency-domain feature extraction, combined with a multilayer perceptron (MLP) neural network, is used to determine the type and amount of four petroleum products. The implemented system consists of a dual-energy gamma source, a test pipe to simulate petroleum products, and a sodium iodide detector. The signals received from the detector were transformed into the frequency domain, and the amplitudes of the first to fourth dominant frequencies were extracted and given to an MLP neural network as inputs. The designed neural network has four outputs, the volume percentage of each product. The proposed system predicts the volume ratios of the products with a maximum root mean square error (RMSE) of 0.69, a strong argument for its use in the oil industry.
(This article belongs to the Special Issue Advances in Machine Learning, Optimization, and Control Applications)
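The frequency-domain features described here, amplitudes of the first few dominant components of the detector signal, can be sketched with an FFT. Keeping four components follows the abstract; the synthetic two-tone signal is purely illustrative.

```python
import numpy as np

def dominant_amplitudes(signal, k=4):
    """Amplitudes of the k strongest frequency components of a detector signal
    (mean removed so the DC offset does not dominate)."""
    spec = np.abs(np.fft.rfft(signal - np.mean(signal)))
    return np.sort(spec)[::-1][:k]

t = np.linspace(0, 1, 512, endpoint=False)
sig = 2.0 * np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)
feats = dominant_amplitudes(sig)   # four scalars fed to the MLP as inputs
```

Compressing the full spectrum to a handful of dominant amplitudes keeps the MLP input small while retaining the attenuation signature of each product mixture.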

16 pages, 1946 KiB  
Article
Sparse and Low-Rank Joint Dictionary Learning for Person Re-Identification
by Jun Sun, Lingchen Kong and Biao Qu
Mathematics 2022, 10(3), 510; https://doi.org/10.3390/math10030510 - 05 Feb 2022
Cited by 1 | Viewed by 1422
Abstract
In the past decade, the scientific community has become increasingly interested in person re-identification. It remains a challenging problem due to low-quality images, occlusion between objects, and large changes in lighting, viewpoint, and posture (even for the same person). We therefore propose a dictionary learning method that divides the appearance characteristics of pedestrians into a shared part, which captures the similarity between different pedestrians, and a specific part, which reflects unique identity information. During re-identification, removing the shared part of a pedestrian's visual characteristics and considering only the unique part of each person reduces the ambiguity of those characteristics. In addition, reflecting the structural characteristics of the shared and specific dictionaries, low-rank, l0-norm, and row-sparsity constraints, rather than their convex relaxations, are introduced into the dictionary learning framework to improve its representation and recognition capabilities, and an alternating-directions method is adopted to solve the resulting problem. Experimental results on several commonly used datasets show the effectiveness of the proposed method.
(This article belongs to the Special Issue Advances in Machine Learning, Optimization, and Control Applications)

17 pages, 1118 KiB  
Article
A Channel-Wise Spatial-Temporal Aggregation Network for Action Recognition
by Huafeng Wang, Tao Xia, Hanlin Li, Xianfeng Gu, Weifeng Lv and Yuehai Wang
Mathematics 2021, 9(24), 3226; https://doi.org/10.3390/math9243226 - 14 Dec 2021
Cited by 1 | Viewed by 2174
Abstract
A very challenging task in action recognition is how to effectively extract and utilize the temporal and spatial information of video (especially temporal information). Many researchers have proposed various spatial-temporal convolution structures. Despite their success, most models are limited, especially on highly time-dependent datasets, because they fail to model the fusion relationship between spatial and temporal features within a convolution channel. In this paper, we propose a lightweight and efficient spatial-temporal extractor, the Channel-Wise Spatial-Temporal Aggregation block (CSTA block), which can be flexibly plugged into existing 2D CNNs (the resulting networks are denoted CSTANet). The CSTA block uses two branches to model spatial and temporal information separately. The temporal branch is equipped with a Motion Attention (MA) module, which enhances the motion regions in a given video. We then introduce a Spatial-Temporal Channel Attention (STCA) module, which aggregates the spatial-temporal features of each block channel-wise in a self-adaptive, trainable way. Experiments demonstrate that CSTANet achieves state-of-the-art results on the EGTEA Gaze++ and Diving48 datasets and competitive results on Something-Something V1&V2 at a lower computational cost.
(This article belongs to the Special Issue Advances in Machine Learning, Optimization, and Control Applications)

Review


39 pages, 1368 KiB  
Review
A Survey on High-Dimensional Subspace Clustering
by Wentao Qu, Xianchao Xiu, Huangyue Chen and Lingchen Kong
Mathematics 2023, 11(2), 436; https://doi.org/10.3390/math11020436 - 13 Jan 2023
Cited by 7 | Viewed by 2363
Abstract
With the rapid development of science and technology, high-dimensional data have come into wide use in various fields. Because of their complex characteristics, high-dimensional data are usually distributed in a union of several low-dimensional subspaces. Over the past several decades, subspace clustering (SC) methods have been widely studied, as they can recover the underlying subspaces of high-dimensional data and perform fast clustering with the help of the data's self-expressiveness property. SC methods construct an affinity matrix from the self-representation coefficients of the data and then obtain clustering results via spectral clustering; the key is designing a self-expressiveness model that reveals the true subspace structure of the data. In this survey, we focus on the development of SC methods over the past two decades and present a new classification criterion that divides them into three categories according to the purpose of clustering: low-rank sparse SC, local-structure-preserving SC, and kernel SC. We further divide these into subcategories according to the strategy used to construct the representation coefficients. In addition, applications of SC in face recognition, motion segmentation, handwritten digit recognition, and speech emotion recognition are introduced. Finally, we discuss several interesting and meaningful future research directions.
(This article belongs to the Special Issue Advances in Machine Learning, Optimization, and Control Applications)
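The self-expressiveness idea at the heart of SC, writing each sample as a combination of the others and turning the coefficients into an affinity for spectral clustering, has a closed form in its simplest ridge-regularized (least-squares) variant; the regularization weight below is an illustrative choice.

```python
import numpy as np

def self_expressive_affinity(X, lam=0.1):
    """Columns of X are samples. Solve min_C ||X - XC||_F^2 + lam ||C||_F^2
    in closed form, zero the diagonal (no self-representation), and
    symmetrize |C| into an affinity matrix for spectral clustering."""
    G = X.T @ X
    C = np.linalg.solve(G + lam * np.eye(G.shape[0]), G)
    np.fill_diagonal(C, 0.0)
    return np.abs(C) + np.abs(C).T

X = np.random.default_rng(0).normal(size=(20, 12))  # 12 samples in R^20
A = self_expressive_affinity(X)
```

Low-rank and sparse SC variants replace the Frobenius penalty on C with nuclear-norm or l1 penalties, which is exactly the axis along which the survey's first category is organized.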

27 pages, 1211 KiB  
Review
A Survey on Deep Transfer Learning and Beyond
by Fuchao Yu, Xianchao Xiu and Yunhui Li
Mathematics 2022, 10(19), 3619; https://doi.org/10.3390/math10193619 - 03 Oct 2022
Cited by 27 | Viewed by 4399
Abstract
Deep transfer learning (DTL), which incorporates new ideas from deep neural networks into transfer learning (TL), has achieved excellent success in computer vision, text classification, behavior recognition, and natural language processing. As a branch of machine learning, DTL applies end-to-end learning to overcome the drawback of traditional machine learning of treating each dataset in isolation. Although some valuable and impressive general surveys on TL exist, focused attention on recent advances in DTL is lacking. In this survey, we first review more than 50 representative DTL approaches from the last decade and systematically summarize them into four categories. In particular, we further divide each category into subcategories according to models, functions, and operation objects. In addition, we discuss recent advances of TL in other fields and unsupervised TL. Finally, we provide some possible and exciting future research directions.
(This article belongs to the Special Issue Advances in Machine Learning, Optimization, and Control Applications)

17 pages, 503 KiB  
Review
Mitigating the Multicollinearity Problem and Its Machine Learning Approach: A Review
by Jireh Yi-Le Chan, Steven Mun Hong Leow, Khean Thye Bea, Wai Khuen Cheng, Seuk Wai Phoong, Zeng-Wei Hong and Yen-Lin Chen
Mathematics 2022, 10(8), 1283; https://doi.org/10.3390/math10081283 - 12 Apr 2022
Cited by 94 | Viewed by 11197
Abstract
Technologies have driven big data collection across many fields, such as genomics and business intelligence. This results in a significant increase in variables and data points (observations) collected and stored. Although this presents opportunities to better model the relationship between predictors and the response variables, it also causes serious problems during data analysis, one of which is the multicollinearity problem. The two main approaches used to mitigate multicollinearity are variable selection methods and modified estimator methods. However, variable selection methods may negate efforts to collect more data, as new data may eventually be dropped from modeling, while recent studies suggest that optimization approaches via machine learning handle data with multicollinearity better than statistical estimators. Therefore, this study details the chronological developments to mitigate the effects of multicollinearity and gives up-to-date recommendations to better mitigate it.
(This article belongs to the Special Issue Advances in Machine Learning, Optimization, and Control Applications)
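A standard diagnostic underlying this literature is the variance inflation factor, VIF_j = 1 / (1 - R_j^2), where R_j^2 is obtained by regressing predictor j on the remaining predictors. A minimal NumPy sketch (the synthetic data are illustrative):

```python
import numpy as np

def vif(X):
    """Variance inflation factor for each column of the design matrix X."""
    out = []
    for j in range(X.shape[1]):
        y = X[:, j]
        Z = np.delete(X, j, axis=1)
        Z1 = np.column_stack([np.ones(len(Z)), Z])     # add intercept
        beta, *_ = np.linalg.lstsq(Z1, y, rcond=None)  # regress x_j on the rest
        r2 = 1.0 - ((y - Z1 @ beta) ** 2).sum() / ((y - y.mean()) ** 2).sum()
        out.append(1.0 / (1.0 - r2))
    return np.array(out)

rng = np.random.default_rng(0)
a = rng.normal(size=200)
b = a + 0.01 * rng.normal(size=200)       # nearly collinear with a
c = rng.normal(size=200)                  # independent predictor
scores = vif(np.column_stack([a, b, c]))  # huge VIF for a and b, ~1 for c
```

Variable selection lowers VIF by dropping columns; modified estimators such as ridge regression instead shrink the coefficients, which is the trade-off the review surveys.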