Application of Machine Learning and Data Mining

A special issue of Mathematics (ISSN 2227-7390). This special issue belongs to the section "Mathematics and Computer Science".

Deadline for manuscript submissions: 20 May 2024

Special Issue Editors


Prof. Dr. Mingbo Zhao
Guest Editor
College of Information Science and Technology, Donghua University, Shanghai 200051, China
Interests: data mining; machine learning; artificial intelligence; fashion AI

Prof. Dr. Haijun Zhang
Guest Editor
Department of Computer Science, Harbin Institute of Technology, Shenzhen 518055, China
Interests: data mining; machine learning; fashion AI; video link learning and optimization; service computing

Prof. Dr. Zhou Wu
Guest Editor
School of Automation, Chongqing University, Chongqing 400044, China
Interests: optimization; artificial intelligence; smart grids; smart buildings and construction

Special Issue Information

Dear Colleagues,

Recent decades have seen a dramatic rise in the application of machine learning and data mining. Recent technologies, e.g., the Internet of Things (IoT), neural networks, deep learning, and smart devices, have brought new developments in machine learning and data mining to areas such as healthcare, manufacturing, automobiles, and agriculture. One important breakthrough in artificial intelligence techniques is deep learning, which comprises a large family of neural computing methods, e.g., convolutional neural networks (CNNs), generative adversarial networks (GANs), and transformers, that employ deep architectures composed of multiple non-linear transformations to model high-level abstractions of raw data. Recent studies have shown that deep neural networks significantly improve the performance of learning tasks such as object detection, image classification, and segmentation. As a consequence, many advanced real-world applications have developed increasingly close relations with machine learning and data mining technologies.

This Special Issue aims to present cutting-edge techniques in the application of machine learning and data mining; this is also the aim of the 2023 International Conference on Neural Computing for Advanced Applications (NCAA 2023), which will be held in Hefei, China. The authors of outstanding papers selected from NCAA 2023 will be invited to submit extended versions of their technical papers for possible publication in this Special Issue after a standard review process.

Prof. Dr. Mingbo Zhao
Prof. Dr. Haijun Zhang
Prof. Dr. Zhou Wu
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website; once registered, proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Mathematics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • supervised, unsupervised, and self-learning methods
  • large-scale data mining
  • applicable neural networks and artificial intelligence
  • neural network-based industrial applications
  • neural models for natural language processing
  • deep learning for health informatics and biomedical engineering
  • graph convolutional neural networks and their applications
  • deep reinforcement learning and its applications
  • deep sparse and low-rank representation
  • computer vision and pattern recognition techniques

Published Papers (12 papers)


Research

20 pages, 1503 KiB  
Article
EFE-LSTM: A Feature Extension, Fusion and Extraction Approach Using Long Short-Term Memory for Navigation Aids State Recognition
by Jingjing Cao, Zhipeng Wen, Liang Huang, Jinshan Dai and Hu Qin
Mathematics 2024, 12(7), 1048; https://doi.org/10.3390/math12071048 - 30 Mar 2024
Abstract
Navigation aids play a crucial role in guiding ship navigation and marking safe water areas. Therefore, ensuring the accurate and efficient recognition of a navigation aid’s state is critical for maritime safety. To address the issue of sparse features in navigation aid data, this paper proposes an approach that involves three distinct processes: the extension of rank entropy space, the fusion of multi-domain features, and the extraction of hidden features (EFE). Based on these processes, this paper introduces a new LSTM model termed EFE-LSTM. Specifically, in the feature extension module, we introduce a rank entropy operator for space extension. This method effectively captures uncertainty in data distribution and the interrelationships among features. The feature fusion module introduces new features in the time domain, frequency domain, and time–frequency domain, capturing the dynamic features of signals across multiple dimensions. Finally, in the feature extraction module, we employ the BiLSTM model to capture the hidden abstract features of navigational signals, enabling the model to more effectively differentiate between various navigation aid states. Extensive experimental results on four real-world navigation aid datasets indicate that the proposed model outperforms other benchmark algorithms, achieving the highest accuracy among all state recognition models at 92.32%.
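The final stage of this pipeline is a bidirectional LSTM over the fused features. Below is a minimal sketch of that stage, assuming PyTorch; the layer sizes, the four-state output, and the 12 fused features are illustrative placeholders rather than the paper's configuration.

```python
import torch
import torch.nn as nn

class BiLSTMRecognizer(nn.Module):
    """Toy stand-in for the EFE-LSTM feature-extraction module."""
    def __init__(self, n_features=12, hidden=64, n_states=4):
        super().__init__()
        # A bidirectional LSTM reads the fused feature sequence in both directions.
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_states)  # forward + backward states

    def forward(self, x):             # x: (batch, time, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])  # classify from the final time step

model = BiLSTMRecognizer()
logits = model(torch.randn(8, 50, 12))  # 8 sequences, 50 steps, 12 fused features
```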

25 pages, 5565 KiB  
Article
Predicting Fan Attendance at Mega Sports Events—A Machine Learning Approach: A Case Study of the FIFA World Cup Qatar 2022
by Ahmad Al-Buenain, Mohamed Haouari and Jithu Reji Jacob
Mathematics 2024, 12(6), 926; https://doi.org/10.3390/math12060926 - 21 Mar 2024
Abstract
Mega sports events generate significant media coverage and have a considerable economic impact on the host cities. Organizing such events is a complex task that requires extensive planning, and their success hinges on the attendees’ satisfaction. Therefore, accurately predicting the number of fans from each country is essential for the organizers to optimize planning and ensure a positive experience. This study introduces a new application of machine learning for accurately predicting the number of attendees. The model is developed using attendance data from the FIFA World Cup (FWC) Russia 2018 to forecast FWC Qatar 2022 attendance. Stochastic gradient descent (SGD) was found to be the top-performing algorithm, achieving an R2 score of 0.633 in an Auto-Sklearn experiment that considered a total of 2523 models. A thorough analysis of the results showed that team qualification has the highest impact on attendance; other factors, such as distance, the number of expatriates in the host country, and socio-geopolitical factors, also have a considerable influence on visitor counts. Although the model produces good results, machine learning generally benefits from more input data, so incorporating data from additional previous tournaments has the potential to further increase accuracy.
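As a rough illustration of this kind of regression setup, the sketch below assumes scikit-learn; the three synthetic features stand in for the paper's attendance factors (e.g., distance, expatriate count, qualification), and the toy data will not reproduce the reported R2 of 0.633.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import SGDRegressor
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))  # stand-ins: distance, expatriates, qualification
y = X @ np.array([0.2, 0.5, 1.5]) + rng.normal(scale=0.5, size=200)

# An SGD-fitted linear regressor with standardized inputs, of the kind
# Auto-Sklearn would evaluate among its candidate models.
model = make_pipeline(StandardScaler(), SGDRegressor(max_iter=1000, random_state=0))
model.fit(X[:150], y[:150])
print("R2:", r2_score(y[150:], model.predict(X[150:])))
```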

21 pages, 1396 KiB  
Article
Improving Adversarial Robustness of Ensemble Classifiers by Diversified Feature Selection and Stochastic Aggregation
by Fuyong Zhang, Kuan Li and Ziliang Ren
Mathematics 2024, 12(6), 834; https://doi.org/10.3390/math12060834 - 12 Mar 2024
Abstract
Learning-based classifiers are known to be vulnerable to attacks by adversarial samples. Some works have suggested that ensemble classifiers tend to be more robust than single classifiers against evasion attacks. However, recent studies have shown that this is not necessarily the case under more realistic settings of black-box attacks. In this paper, we propose a novel ensemble approach that improves the robustness of classifiers against evasion attacks by using diversified feature selection and a stochastic aggregation strategy. Our proposed scheme includes three stages. First, an adversarial feature selection algorithm repeatedly selects a feature that trades off classification accuracy against robustness and adds it to the feature vector bank. Second, each feature vector in the bank is used to train a base classifier, which is added to the base classifier bank. Finally, m classifiers from the classifier bank are randomly selected for decision-making. In this way, each classifier in the base classifier bank performs well in terms of both classification accuracy and robustness, and the random selection makes it difficult to accurately estimate the gradients of the ensemble. Thus, the robustness of the classifiers can be improved without reducing classification accuracy. Experiments performed using both linear and kernel SVMs on genuine datasets for spam filtering, malware detection, and handwritten digit recognition demonstrate that our proposed approach significantly improves the classifiers’ robustness against evasion attacks.
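A minimal sketch of the stochastic aggregation idea, assuming scikit-learn; random feature subsets stand in for the paper's adversarial feature selection, and the dataset is synthetic.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=400, n_features=30, random_state=0)
rng = np.random.default_rng(0)

# Stages 1-2: build a bank of base classifiers on diversified feature subsets.
bank = []
for _ in range(15):
    feats = rng.choice(30, size=10, replace=False)
    clf = LinearSVC(dual=False).fit(X[:300][:, feats], y[:300])
    bank.append((feats, clf))

def predict(x, m=5):
    """Stage 3: randomly pick m classifiers from the bank and majority-vote."""
    chosen = rng.choice(len(bank), size=m, replace=False)
    votes = [bank[i][1].predict(x[:, bank[i][0]])[0] for i in chosen]
    return max(set(votes), key=votes.count)

print(predict(X[300:301]))
```

Because the subset of voters changes on every call, an attacker probing the ensemble sees a shifting decision surface, which is what makes gradient estimation difficult.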

19 pages, 6617 KiB  
Article
Prediction Model of Ammonia Nitrogen Concentration in Aquaculture Based on Improved AdaBoost and LSTM
by Yiyang Wang, Dehao Xu, Xianpeng Li and Wei Wang
Mathematics 2024, 12(5), 627; https://doi.org/10.3390/math12050627 - 20 Feb 2024
Abstract
The concentration of ammonia nitrogen is significant for intensive aquaculture: if it is too high, it seriously affects the survival state of the cultured stock. Therefore, predicting and controlling the ammonia nitrogen concentration in advance is essential. This paper proposes a combined model based on X Adaptive Boosting (XAdaBoost) and the Long Short-Term Memory neural network (LSTM) to predict ammonia nitrogen concentration in mariculture. First, the weight assignment strategy was improved and a number of correction iterations was introduced to mitigate the error accumulation caused by the basic AdaBoost algorithm. Then, the XAdaBoost algorithm generated and combined several LSTM sub-models to predict the ammonia nitrogen concentration. Finally, two experiments were conducted to verify the effectiveness of the proposed prediction model. In the ammonia nitrogen concentration prediction experiment, compared with the LSTM and other comparison models, the RMSE of the XAdaBoost–LSTM model was reduced by about 0.89–2.82%, the MAE by about 0.72–2.47%, and the MAPE by about 8.69–18.39%. In the model stability experiment, the RMSE, MAE, and MAPE of the XAdaBoost–LSTM model decreased by about 1–1.5%, 0.7–1.7%, and 7–14%, respectively. In both experiments, the evaluation indexes of the XAdaBoost–LSTM model were superior to those of the comparison models, which shows that the model has good prediction accuracy and stability and lays a foundation for monitoring and regulating changes in ammonia nitrogen concentration in the future.
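The sketch below illustrates the AdaBoost-style reweighting that drives such an ensemble, assuming scikit-learn; shallow regression trees stand in for the paper's LSTM sub-models, and the update follows the standard AdaBoost.R2 scheme rather than the XAdaBoost modification described above.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = X.sum(axis=1) + rng.normal(scale=0.1, size=200)

w = np.full(len(X), 1 / len(X))            # sample weights
models, alphas = [], []
for _ in range(5):
    m = DecisionTreeRegressor(max_depth=3).fit(X, y, sample_weight=w)
    loss = np.abs(m.predict(X) - y)
    loss /= loss.max() + 1e-12             # linear loss scaled into [0, 1]
    eps = np.sum(w * loss)                 # weighted average loss
    beta = eps / (1 - eps)
    w *= beta ** (1 - loss); w /= w.sum()  # up-weight poorly fitted samples
    models.append(m); alphas.append(np.log(1 / beta))

# Combine sub-models (AdaBoost.R2 proper uses a weighted median; a weighted
# average keeps the sketch short).
pred = sum(a * m.predict(X) for a, m in zip(alphas, models)) / sum(alphas)
print(np.mean((pred - y) ** 2))
```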

24 pages, 8083 KiB  
Article
Developer Assignment Method for Software Defects Based on Related Issue Prediction
by Baochuan Liu, Li Zhang, Zhenwei Liu and Jing Jiang
Mathematics 2024, 12(3), 425; https://doi.org/10.3390/math12030425 - 28 Jan 2024
Abstract
Open-source software platforms host large numbers of software defects, and relying on administrators to manually assign developers is often time-consuming, so it is crucial to determine how to assign software defects to appropriate developers. This paper presents DARIP, a method for assigning developers to software defects. First, the correlation between software defects and issues is exploited: related issues are predicted for each defect, and the textual characteristics of the defect are computed comprehensively using the BERT model. Second, a heterogeneous collaborative network is constructed from three developer behaviors (reporting, commenting, and fixing), and meta-paths are defined over four collaborative relationships between developers: report–comment, report–fix, comment–comment, and comment–fix. The graph-embedding algorithm metapath2vec then extracts developer characteristics from this network. Next, a classifier based on a deep learning model calculates the probability assigned to each developer category, and the assignment list is obtained by ranking these probabilities. Experiments on a dataset of 20,280 defects from 9 popular projects show that DARIP improves the averages of Recall@5, Recall@10, and MRR by 31.13%, 21.40%, and 25.45%, respectively, compared to the state-of-the-art method.
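A minimal sketch of the metapath2vec step, assuming gensim; the toy graph and the single report–comment meta-path are illustrative, not the paper's data or its full set of meta-paths.

```python
import random
from gensim.models import Word2Vec

# Toy heterogeneous network: developer -> issues reported,
# issue -> developers who commented on it.
reported = {"dev_a": ["iss_1", "iss_2"], "dev_b": ["iss_2"], "dev_c": ["iss_1"]}
commented = {"iss_1": ["dev_b"], "iss_2": ["dev_a", "dev_c"]}

def walk(dev, hops=6):
    """Alternate developer/issue steps along the report-comment meta-path."""
    path = [dev]
    for _ in range(hops):
        iss = random.choice(reported.get(path[-1]) or ["iss_1"])
        path.append(iss)
        path.append(random.choice(commented.get(iss) or ["dev_a"]))
    return path

walks = [walk(d) for d in reported for _ in range(20)]
model = Word2Vec(walks, vector_size=16, window=3, min_count=1, sg=1)  # skip-gram
print(model.wv["dev_a"][:4])  # developer embedding fed to the classifier
```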

20 pages, 2763 KiB  
Article
Research on a Non-Intrusive Load Recognition Algorithm Based on High-Frequency Signal Decomposition with Improved VI Trajectory and Background Color Coding
by Jiachuan Shi, Dingrui Zhi and Rao Fu
Mathematics 2024, 12(1), 30; https://doi.org/10.3390/math12010030 - 22 Dec 2023
Abstract
Against the backdrop of China’s carbon peak and carbon neutrality policies, higher requirements have been put forward for the construction and upgrading of smart grids. Non-Intrusive Load Monitoring (NILM) is a key technology for advanced measurement systems at the edge of the power grid: it obtains detailed power information about loads without the hardware deployment that traditional metering requires, and its key steps are load decomposition and identification. This study first utilized a Long Short-Term Memory Denoising Autoencoder (LSTM-DAE) to decompose the mixed current signal of a household busbar into the current signals of the multiple independent loads that constitute it. The independent current signals were then combined with the voltage signals to generate multi-cycle colored Voltage–Current (VI) trajectories, color-coded according to the background, and these color-coded VI trajectories formed a feature library. Because hyperparameters strongly influence the recognition results when a Convolutional Neural Network (CNN) is used for load recognition, the Bayesian Optimization Algorithm (BOA) was used to optimize them, and the optimized CNN was employed for VI trajectory recognition. Finally, the proposed method was validated on the PLAID dataset; the experimental results show that it outperforms existing methods in both load decomposition and identification.
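A minimal sketch of how one cycle of voltage and current becomes a VI-trajectory image, assuming NumPy; the grid size and synthetic waveforms are illustrative, and the background color coding described above is omitted.

```python
import numpy as np

t = np.linspace(0, 2 * np.pi, 500, endpoint=False)
v = np.sin(t)                                     # normalized voltage cycle
c = 0.8 * np.sin(t - 0.6) + 0.2 * np.sin(3 * t)   # distorted load current

def vi_image(v, c, n=32):
    """Rasterize the V-I trajectory onto an n x n binary grid."""
    img = np.zeros((n, n))
    col = ((v - v.min()) / (v.max() - v.min() + 1e-12) * (n - 1)).astype(int)
    row = ((c - c.min()) / (c.max() - c.min() + 1e-12) * (n - 1)).astype(int)
    img[row, col] = 1.0                           # mark cells the trajectory visits
    return img

img = vi_image(v, c)   # one input channel for the CNN classifier
print(img.shape, int(img.sum()))
```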

13 pages, 1819 KiB  
Article
A Visually Inspired Computational Model for Recognition of Optic Flow
by Xiumin Li, Wanyan Lin, Hao Yi, Lei Wang and Jiawei Chen
Mathematics 2023, 11(23), 4777; https://doi.org/10.3390/math11234777 - 27 Nov 2023
Abstract
Foundation models trained on vast quantities of data have demonstrated impressive performance in capturing complex nonlinear relationships and accurately predicting neuronal responses. Because deep neural networks depend on massive numbers of data samples and high energy consumption, foundation models based on spiking neural networks (SNNs) have the potential to significantly reduce computation costs by training on neuromorphic hardware. In this paper, a visually inspired computational model composed of an SNN and an echo state network (ESN) is proposed for the recognition of optic flow. The visually inspired SNN serves as a foundation model that is trained using spike-timing-dependent plasticity (STDP) to extract core features, while the ESN makes readout decisions for recognition tasks using linear regression. The results show that STDP can perform a similar function to non-negative matrix factorization (NMF), i.e., generating sparse, linearly superimposed readouts based on basis flow fields. Once the foundation model is fully trained on enough input samples, it can considerably reduce the training samples required for ESN readout learning. The proposed SNN-based foundation model facilitates efficient and cost-effective task learning and can also be adapted to new stimuli not included in its training. Moreover, compared with the NMF algorithm, the foundation model trained using STDP does not need to be retrained during the testing procedure, contributing to more efficient computational performance.
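A minimal sketch of the pair-based STDP rule used for such training, assuming NumPy; the time constants and learning rates are illustrative, not the paper's values.

```python
import numpy as np

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Weight change for a spike-time difference dt = t_post - t_pre (ms)."""
    if dt > 0:                              # pre fires before post: potentiation
        return a_plus * np.exp(-dt / tau)
    return -a_minus * np.exp(dt / tau)      # post fires before pre: depression

w = 0.5
for t_pre, t_post in [(10, 14), (30, 28), (50, 55)]:   # toy spike pairs
    w = float(np.clip(w + stdp_dw(t_post - t_pre), 0.0, 1.0))
print(w)
```

Repeated over many input spike trains, updates of this form concentrate synaptic weight on correlated inputs, which is how the foundation model comes to encode basis flow fields.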

17 pages, 1483 KiB  
Article
Minimization of Active Power Loss Using Enhanced Particle Swarm Optimization
by Samson Ademola Adegoke, Yanxia Sun and Zenghui Wang
Mathematics 2023, 11(17), 3660; https://doi.org/10.3390/math11173660 - 24 Aug 2023
Cited by 3
Abstract
Identifying the weak buses in power system networks is crucial for planning and operation, since most generators operate close to their operating limits, which can result in generator failures. This work aims to identify the critical/weak nodes and reduce the system’s power loss. The line stability index (Lmn) and fast voltage stability index (FVSI) were used to identify the critical nodes and the lines close to instability in the power system networks. Enhanced particle swarm optimization (EPSO) was chosen because its particles communicate with better-performing individuals, making it more efficient at obtaining a prominent solution. EPSO and other PSO variants were used to minimize the system’s real power losses. Nodes 8 and 14 were identified as the critical nodes of the IEEE 9- and 14-bus systems, respectively. The power loss of the IEEE 9-bus system was reduced from 9.842 MW to 7.543 MW, and for the IEEE 14-bus system the loss was reduced from 13.775 MW in the base case to 12.253 MW with EPSO. EPSO achieves greater active power loss reduction and better node voltage profile improvement than the other PSO variants and algorithms in the literature, suggesting the feasibility and suitability of EPSO for improving grid voltage quality.
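For reference, the two indices are commonly written as follows in the voltage-stability literature (a hedged reconstruction, not taken from the paper): with sending-end voltage V_s, receiving-end reactive power Q_r, line impedance Z, line reactance X, impedance angle θ, and bus-angle difference δ, a line is considered stable while each index stays below 1.

```latex
\mathrm{FVSI} = \frac{4 Z^{2} Q_r}{V_s^{2} X},
\qquad
L_{mn} = \frac{4 X Q_r}{\left[ V_s \sin(\theta - \delta) \right]^{2}}
```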

18 pages, 2511 KiB  
Article
Research on Multi-AGV Task Allocation in Train Unit Maintenance Workshop
by Nan Zhao and Chun Feng
Mathematics 2023, 11(16), 3509; https://doi.org/10.3390/math11163509 - 14 Aug 2023
Abstract
In the context of the continuous development and maturity of intelligent manufacturing and intelligent logistics, the majority of vehicle maintenance for EMU trains still relies on traditional methods characterized by excessive manual intervention and low efficiency. To address these deficiencies, the present study proposes integrating Automatic Guided Vehicles (AGVs) into the traditional maintenance processes, thereby enhancing the efficiency and quality of vehicle maintenance. Specifically, this research focuses on the train unit maintenance workshop and investigates the task allocation problem for multiple AGVs. Taking into consideration factors such as the maximum load capacity of the AGVs, their remaining battery power, and task execution time, a mathematical model is formulated with the objective of minimizing the total distance and time required to complete all tasks, and a multi-population genetic algorithm is designed to solve it. The effectiveness of the proposed model and algorithm is validated through simulation experiments in both small-scale and large-scale scenarios. The results indicate that the multi-population genetic algorithm outperforms the particle swarm algorithm and the standard genetic algorithm in terms of stability, optimization performance, and convergence. This research provides scientific guidance and practical insights for enterprises adopting task allocation strategies using multiple AGVs.
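A minimal sketch of a multi-population genetic algorithm with periodic migration, assuming plain Python; the two-AGV assignment encoding, operators, and rates are illustrative, not the paper's model.

```python
import random

N_TASKS, POPS, SIZE = 8, 3, 20
cost = [[random.random() for _ in range(2)] for _ in range(N_TASKS)]  # task x AGV

def fitness(assign):                  # total cost of a task-to-AGV assignment
    return sum(cost[t][a] for t, a in enumerate(assign))

def evolve(pop):
    pop.sort(key=fitness)
    nxt = pop[:2]                     # elitism: keep the two best
    while len(nxt) < SIZE:
        p1, p2 = random.sample(pop[:10], 2)
        cut = random.randrange(N_TASKS)
        child = p1[:cut] + p2[cut:]   # one-point crossover
        if random.random() < 0.1:     # mutation: reassign one task
            i = random.randrange(N_TASKS)
            child[i] ^= 1
        nxt.append(child)
    return nxt

pops = [[[random.randint(0, 1) for _ in range(N_TASKS)] for _ in range(SIZE)]
        for _ in range(POPS)]
for gen in range(50):
    pops = [evolve(p) for p in pops]
    if gen % 10 == 9:                 # migration: elites rotate between populations
        best = [min(p, key=fitness) for p in pops]
        for k, p in enumerate(pops):
            p[-1] = best[(k + 1) % POPS]
print(min(fitness(min(p, key=fitness)) for p in pops))
```

The migration step is what distinguishes the multi-population variant: separate populations explore independently, and the occasional exchange of elites keeps them from converging prematurely.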

13 pages, 1919 KiB  
Article
Analysis of Psychological Factors Influencing Mathematical Achievement and Machine Learning Classification
by Juhyung Park, Sungtae Kim and Beakcheol Jang
Mathematics 2023, 11(15), 3380; https://doi.org/10.3390/math11153380 - 2 Aug 2023
Abstract
This study analyzed the psychological factors that influence mathematical achievement in order to classify students’ mathematical achievement. We employed linear regression to investigate the variables that contribute to mathematical achievement and found that self-efficacy, math-efficacy, learning approach motivation, and reliance on academies affect mathematical achievement. These variables are derived from the Test of Learning Psychology (TLP), a psychological test developed by Able Edutech Inc. specifically to measure students’ learning psychology in the field of mathematics. We then performed machine learning classification with the identified variables. The random forest model demonstrated the best performance, achieving accuracies of 73% (Test 1) and 81% (Test 2) and F1-scores of 79% (Test 1) and 82% (Test 2). Finally, students’ skills were classified according to the TLP items. The results demonstrate that students’ academic abilities can be identified using a psychological test in the field of mathematics; the TLP results can thus serve as a valuable resource for developing personalized learning programs and enhancing students’ mathematical skills.
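A minimal sketch of the classification step, assuming scikit-learn; the four synthetic features stand in for the TLP variables named above, and the toy labels are not real student data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, f1_score

rng = np.random.default_rng(0)
# Stand-ins: self-efficacy, math-efficacy, motivation, reliance on academies
X = rng.normal(size=(300, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # toy high/low achievement label

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X[:240], y[:240])
pred = clf.predict(X[240:])
print("accuracy:", accuracy_score(y[240:], pred), "F1:", f1_score(y[240:], pred))
```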

16 pages, 2443 KiB  
Article
A Moth–Flame Optimized Echo State Network and Triplet Feature Extractor for Epilepsy Electro-Encephalography Signals
by Xue-song Tang, Luchao Jiang, Kuangrong Hao, Tong Wang and Xiaoyan Liu
Mathematics 2023, 11(6), 1438; https://doi.org/10.3390/math11061438 - 16 Mar 2023
Cited by 2
Abstract
The analysis of epilepsy electro-encephalography (EEG) signals is of great significance for the diagnosis of epilepsy, one of the most common neurological diseases across all age groups. With the development of machine learning, many data-driven models have achieved great performance in EEG signal classification. However, it is difficult to select appropriate hyperparameters for these models to fit a specific task. This paper proposes an evolutionary-algorithm-enhanced model that uses moth–flame optimization (MFO) to optimize the fixed weights of the reservoir layer of an echo state network (ESN) for the task at hand. Because evaluating a feature extractor relies heavily on the downstream classifier, a new feature distribution evaluation function (FDEF) using the label information of the EEG signals is defined as the fitness function; it evaluates a feature extractor objectively by considering not only the degree of dispersion but also the relations amongst triplets. The performance of the proposed method is verified on the Bonn University dataset with an accuracy of 98.16% and on the CHB-MIT dataset with a highest sensitivity of 96.14%. The proposed method outperforms previous EEG methods, as it automatically optimizes the hyperparameters of the ESN to adjust its structure and initial parameters for a specific classification task. Furthermore, with FDEF as the MFO fitness, the optimization direction no longer relies on the performance of the classifier but on the relative separability amongst classes.
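A minimal sketch of the echo state network at the core of this method, assuming NumPy; the reservoir size, spectral radius, and ridge readout are illustrative hyperparameters of the kind MFO would tune, and the input is a toy signal.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 100, 500
W_in = rng.uniform(-0.5, 0.5, (N, 1))           # input weights (fixed)
W = rng.normal(size=(N, N))                      # reservoir weights (fixed)
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # scale spectral radius to 0.9

u = np.sin(np.linspace(0, 20, T))[:, None]       # toy input signal
x = np.zeros(N)
states = []
for t in range(T):                               # reservoir state update
    x = np.tanh(W_in @ u[t] + W @ x)
    states.append(x.copy())
S = np.array(states)

y = np.roll(u[:, 0], -1)                         # target: one-step-ahead input
ridge = 1e-6                                     # only the readout is trained
W_out = np.linalg.solve(S.T @ S + ridge * np.eye(N), S.T @ y)
print("train MSE:", np.mean((S @ W_out - y) ** 2))
```

Only the linear readout is trained; the fixed reservoir weights are exactly the quantities that the paper's MFO search tunes for a given task.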

Review

19 pages, 878 KiB  
Review
Sustainable Rail/Road Unimodal Transportation of Bulk Cargo in Zambia: A Review of Algorithm-Based Optimization Techniques
by Fines Miyoba, Egbert Mujuni, Musa Ndiaye, Hastings M. Libati and Adnan M. Abu-Mahfouz
Mathematics 2024, 12(2), 348; https://doi.org/10.3390/math12020348 - 21 Jan 2024
Abstract
Modern rail/road transportation systems are critical to global travel and commercial transportation, but improving transport systems for efficient cargo movement poses further challenges. For instance, diesel-powered trucks and goods trains are widely used in the long-haul unimodal transportation of heavy cargo in most landlocked and developing countries, a situation that raises concerns about greenhouse gas (GHG) emissions such as the carbon dioxide produced by diesel fuel combustion. In this context, it is critical to understand how parameters, variables, and constraints are used in the formulation of mathematical models, optimization techniques, and algorithms that contribute directly to sustainable transportation solutions. In seeking sustainable solutions to the bulk cargo long-haul transportation problems in Zambia, we conducted a systematic review of transportation modes, related mathematical models, and optimization approaches. This paper provides an updated survey of transport models for bulk cargo and their associated optimized combinations, identifying key research challenges and notable issues for further studies in transport system optimization, especially for long-haul unimodal (single-mode) heavy cargo movement in countries that have yet to implement intermodal and multimodal systems.
