Advanced Artificial Intelligence Models and Its Applications

A special issue of Mathematics (ISSN 2227-7390). This special issue belongs to the section "Mathematics and Computer Science".

Deadline for manuscript submissions: closed (31 August 2023) | Viewed by 31311

Printed Edition Available!
A printed edition of this Special Issue is available here.

Special Issue Editor

Prof. Dr. Tao Zhou
Guest Editor
School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing 210094, China
Interests: computer vision; machine learning; medical image analysis; AI in healthcare

Special Issue Information

Dear Colleagues,

The field of Artificial Intelligence (AI) has experienced tremendous growth since the mid-20th century, as evidenced by its application in a wide range of engineering and science problems. Over the last decade, AI has seen a breakthrough owing to the introduction of deep learning, which has allowed the use of various AI models in a diverse range of domains.

This Special Issue intends to provide a forum for researchers developing and reviewing new AI models for various fields, including science, engineering, industry, education, health, and transportation. We are inviting authors to submit relevant original results, literature reviews, theoretical studies, or papers addressing AI’s real-world applications.

Prof. Dr. Tao Zhou
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Mathematics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • machine learning
  • deep learning
  • pattern recognition
  • computer vision
  • multimedia retrieval and analysis
  • multimodal representation learning
  • statistical learning
  • medical image analysis
  • security applications
  • big data and analysis
  • benchmark dataset


Published Papers (11 papers)


Research

24 pages, 7480 KiB  
Article
HVAC Load Forecasting Based on the CEEMDAN-Conv1D-BiLSTM-AM Model
by Zhicheng Xiao, Lijuan Yu, Huajun Zhang, Xuetao Zhang and Yixin Su
Mathematics 2023, 11(22), 4630; https://doi.org/10.3390/math11224630 - 13 Nov 2023
Cited by 1 | Viewed by 11155
Abstract
Heating, ventilation, and air-conditioning (HVAC) systems consume approximately 60% of the total energy used in public buildings, and an effective way to reduce HVAC energy consumption is accurate load forecasting. This paper proposes a load forecasting model, CEEMDAN-Conv1D-BiLSTM-AM, which combines empirical mode decomposition and neural networks. The load data are decomposed into fifteen sub-sequences using complete ensemble empirical mode decomposition with adaptive noise (CEEMDAN). The neural network inputs consist of the decomposition results and five exogenous variables. The network contains a one-dimensional convolutional (Conv1D) layer, a BiLSTM layer, and an attention mechanism layer. The Conv1D layer extracts deep features from each input variable, while the BiLSTM and attention layers learn the characteristics of the load time series. The five exogenous variables are selected through correlation analysis between external factors and the load series, and the number of input steps for the model is determined through autocorrelation analysis of the load series. The performance of CEEMDAN-Conv1D-BiLSTM-AM is compared with that of five other models; the results show that the proposed model achieves higher prediction accuracy than the others. Full article
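The data preparation the abstract describes, framing each decomposed sub-sequence and exogenous variable into fixed-length input windows, can be sketched in plain Python. Function names and shapes here are illustrative, not taken from the paper:

```python
def make_windows(series, n_steps):
    """Frame a univariate series into (input window, next value) pairs.
    In the paper, n_steps would come from autocorrelation analysis of
    the load series."""
    X, y = [], []
    for i in range(len(series) - n_steps):
        X.append(series[i:i + n_steps])
        y.append(series[i + n_steps])
    return X, y

def stack_channels(subseries, exogenous, n_steps):
    """Build multi-channel model inputs: one channel per CEEMDAN
    sub-sequence plus one per exogenous variable (15 + 5 = 20 channels
    in the paper). Returns one tuple of per-channel windows per sample."""
    channels = subseries + exogenous
    windows = [make_windows(c, n_steps)[0] for c in channels]
    return list(zip(*windows))
```

Each resulting sample would then be fed to the Conv1D layer, one convolution per channel.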
(This article belongs to the Special Issue Advanced Artificial Intelligence Models and Its Applications)

12 pages, 639 KiB  
Article
Deep Learning Architecture for Detecting SQL Injection Attacks Based on RNN Autoencoder Model
by Maha Alghawazi, Daniyal Alghazzawi and Suaad Alarifi
Mathematics 2023, 11(15), 3286; https://doi.org/10.3390/math11153286 - 26 Jul 2023
Cited by 6 | Viewed by 3195
Abstract
SQL injection attacks are among the most common attacks on Web applications. These attacks exploit vulnerabilities in an application’s database access mechanisms, allowing attackers to execute unauthorized SQL queries. In this study, we propose an architecture for detecting SQL injection attacks using a recurrent neural network autoencoder. The proposed architecture was trained on a publicly available dataset of SQL injection attacks and then compared with several other machine learning models, including ANN, CNN, decision tree, naive Bayes, SVM, random forest, and logistic regression models. The experimental results showed that the proposed approach achieved an accuracy of 94% and an F1-score of 92%, demonstrating its effectiveness in detecting SQL injection attacks with high accuracy in comparison to the other models covered in the study. Full article
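Before a query can reach an RNN autoencoder, it must be turned into a fixed-length integer sequence. A minimal sketch of that front end, with hypothetical token and padding conventions not specified in the paper:

```python
import re

def tokenize_sql(query):
    """Split a query into lowercase word, number, and symbol tokens."""
    return re.findall(r"[a-zA-Z_]+|\d+|[^\sa-zA-Z_\d]", query.lower())

def build_vocab(corpus):
    """Assign an integer id to every token seen in the training corpus."""
    vocab = {"<pad>": 0, "<unk>": 1}
    for q in corpus:
        for tok in tokenize_sql(q):
            vocab.setdefault(tok, len(vocab))
    return vocab

def encode(query, vocab, max_len):
    """Map a query to a padded/truncated sequence of token ids."""
    ids = [vocab.get(t, vocab["<unk>"]) for t in tokenize_sql(query)]
    ids = ids[:max_len]
    return ids + [vocab["<pad>"]] * (max_len - len(ids))
```

The autoencoder would then be trained to reconstruct such sequences; a high reconstruction error (or, in a supervised setting, the learned latent code) signals a suspicious query.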
(This article belongs to the Special Issue Advanced Artificial Intelligence Models and Its Applications)

12 pages, 1117 KiB  
Article
Alleviating Long-Tailed Image Classification via Dynamical Classwise Splitting
by Ye Yuan, Jiaqi Wang, Xin Xu, Ruoshi Li, Yongtong Zhu, Lihong Wan, Qingdu Li and Na Liu
Mathematics 2023, 11(13), 2996; https://doi.org/10.3390/math11132996 - 05 Jul 2023
Viewed by 1001
Abstract
With the rapid increase in data scale, real-world datasets tend to exhibit long-tailed class distributions (i.e., a few classes account for most of the data, while most classes contain only a few data points). General solutions typically exploit class rebalancing strategies involving resampling and reweighting based on the number of samples in each class. In this work, we explore an orthogonal direction, category splitting, motivated by the empirical observation that naively splitting majority classes can alleviate the heavy imbalance between majority and minority classes. To this end, we propose a novel classwise splitting (CWS) method built upon dynamic clustering, where classwise prototypes are updated using a moving-average technique. CWS generates intra-class pseudo labels for splitting intra-class samples based on the point-to-point distance, and a group mapping module recovers the ground truth of the training samples. CWS can be plugged into any existing method as a complement. Comprehensive experiments were conducted on artificially induced long-tailed image classification datasets, such as CIFAR-10-LT, CIFAR-100-LT, and OCTMNIST. Our results show that when trained with the proposed class-balanced loss, the network achieves significant performance gains on long-tailed datasets. Full article
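The two core mechanics the abstract mentions, moving-average prototype updates and distance-based intra-class pseudo labels, can be sketched as follows; the momentum value and two-way split are illustrative assumptions, not the paper's exact formulation:

```python
def update_prototype(proto, feature, momentum=0.9):
    """Moving-average update of a classwise prototype vector."""
    return [momentum * p + (1 - momentum) * f for p, f in zip(proto, feature)]

def split_class(features, proto, radius):
    """Assign intra-class pseudo labels by point-to-prototype distance:
    0 = near the prototype, 1 = far (the split-off sub-class)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return [0 if dist(f, proto) <= radius else 1 for f in features]
```

A group mapping module would later merge the pseudo sub-classes back to the original labels at evaluation time.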
(This article belongs to the Special Issue Advanced Artificial Intelligence Models and Its Applications)

17 pages, 2225 KiB  
Article
Broad Embedded Logistic Regression Classifier for Prediction of Air Pressure Systems Failure
by Adegoke A. Muideen, Carman Ka Man Lee, Jeffery Chan, Brandon Pang and Hafiz Alaka
Mathematics 2023, 11(4), 1014; https://doi.org/10.3390/math11041014 - 16 Feb 2023
Cited by 3 | Viewed by 1420
Abstract
In recent years, maintenance modelling techniques that adopt data-based methods, such as machine learning (ML), have brought about a broad range of useful applications. One of the major challenges in the automotive industry is the early detection of component failure, enabling quick response, proper action, and minimal maintenance costs. A vital component of an automobile system is the air pressure system (APS). Failure of the APS without an adequate and quick response may lead to high maintenance costs, loss of life, and component damage. This paper addresses a binary classification problem: detecting whether or not a fault belongs to the APS. If a failure occurs in the APS, it is classified as the positive class; otherwise, it is classified as the negative class. We propose broad embedded logistic regression (BELR), a fusion model that combines a broad learning system (BLS) with a logistic regression (LogR) classifier, and apply it to predict APS failure. The approach capitalizes on the strengths of both BLS and LogR for better APS failure prediction. We employ the BLS's feature-mapped nodes to extract features from the input data and its enhancement nodes to enhance those features, yielding features that help LogR classify well even when the data are skewed toward the positive or negative class. Furthermore, to prevent the curse of dimensionality, a common problem with high-dimensional data sets, we use principal component analysis (PCA) to reduce the data dimension. We validate the proposed BELR on the APS data set and compare the results with other robust machine learning classifiers, using the common evaluation metrics of recall, precision, and F1-score. The results confirm the performance of the proposed BELR. Full article
(This article belongs to the Special Issue Advanced Artificial Intelligence Models and Its Applications)

19 pages, 3856 KiB  
Article
Vehicle Routing Optimization with Cross-Docking Based on an Artificial Immune System in Logistics Management
by Shih-Che Lo and Ying-Lin Chuang
Mathematics 2023, 11(4), 811; https://doi.org/10.3390/math11040811 - 05 Feb 2023
Cited by 3 | Viewed by 2189
Abstract
Background: Manufacturing companies optimize logistics network routing to reduce transportation and operational costs in order to make profits in an extremely competitive environment. The efficiency of logistics management in the supply chain and quick response to customers’ demands are therefore treated as an additional source of profit. One warehouse operation for intelligent logistics network design, cross-docking (CD), is used to reduce inventory levels and improve responsiveness to customers’ requirements. Accordingly, optimizing the vehicle dispatch schedule is imperative in order to produce a routing plan with minimum transport cost while meeting demand allocation. Methods: This paper develops a two-phase algorithm, called sAIS, to solve the vehicle routing problem (VRP) with CD facilities in logistics operations. The sAIS algorithm follows a cluster-first, route-later approach: the sweep method clusters customers into truck routes as the initial solution, and the second phase optimizes the routing with an Artificial Immune System. Results: To examine the performance of the proposed sAIS approach, we compared it with the Genetic Algorithm (GA) on VRP with pickup and delivery benchmark problems, observing average improvements of 7.26%. Conclusions: In this study, we proposed a novel sAIS algorithm for solving the VRP with CD by simulating human immune reactions. The experimental results showed that the proposed sAIS algorithm is robustly competitive with the GA in average solution quality, as measured by a two-sample t-test. Full article
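The first phase, sweep clustering, is a classic construction heuristic: sort customers by polar angle around the depot and fill trucks in that order. A minimal sketch, assuming a depot at the origin and a simple capacity rule (the paper's exact variant may differ):

```python
import math

def sweep_cluster(customers, capacity):
    """Sweep method: sort customers by polar angle around the depot
    (placed at the origin) and open a new truck route whenever the
    current one would exceed capacity."""
    order = sorted(customers, key=lambda c: math.atan2(c["y"], c["x"]))
    routes, route, load = [], [], 0
    for c in order:
        if route and load + c["demand"] > capacity:
            routes.append(route)
            route, load = [], 0
        route.append(c["id"])
        load += c["demand"]
    if route:
        routes.append(route)
    return routes
```

These initial routes then seed the Artificial Immune System phase, which mutates and reorders them to minimize total transport cost.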
(This article belongs to the Special Issue Advanced Artificial Intelligence Models and Its Applications)

14 pages, 7401 KiB  
Article
MRBERT: Pre-Training of Melody and Rhythm for Automatic Music Generation
by Shuyu Li and Yunsick Sung
Mathematics 2023, 11(4), 798; https://doi.org/10.3390/math11040798 - 04 Feb 2023
Cited by 3 | Viewed by 2894
Abstract
Deep learning technology has been extensively studied for its potential in music, notably for creative music generation research. Traditional music generation approaches based on recurrent neural networks cannot provide satisfactory long-distance dependencies, and they are typically designed for specific tasks, such as melody or chord generation, so they cannot generate diverse music simultaneously. In natural language processing, pre-training is used to accomplish various tasks and overcome the limitation of long-distance dependencies, but it is not yet widely used in automatic music generation. Because of the differences between the attributes of language and music, traditional pre-trained models for language modeling cannot be directly applied to music. This paper proposes a pre-trained model, MRBERT, for multitask-based music generation that learns melody and rhythm representations. After fine-tuning, the pre-trained model can be applied to music generation applications such as web-based music composers that include melody and rhythm generation, modification, completion, and chord matching. Ablation experiments on the proposed model revealed that, under the HITS@k evaluation metric, the pre-trained MRBERT considerably improved the performance of the generation tasks, by 0.09–13.10% and 0.02–7.37% compared to RNNs and the original BERT, respectively. Full article
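BERT-style pre-training rests on masked prediction: hide some tokens (here, melody or rhythm events) and train the model to recover them. A generic sketch of the masking step, with rate and mask symbol chosen for illustration rather than taken from MRBERT:

```python
import random

def mask_tokens(tokens, mask_rate=0.15, mask_token="[MASK]", seed=0):
    """Randomly replace tokens with a mask symbol for masked-prediction
    pre-training. Returns the masked sequence and a position -> original
    token map that serves as the training target."""
    rng = random.Random(seed)
    masked, targets = [], {}
    for i, tok in enumerate(tokens):
        if rng.random() < mask_rate:
            masked.append(mask_token)
            targets[i] = tok
        else:
            masked.append(tok)
    return masked, targets
```

During fine-tuning, the same backbone can serve completion (predict masked positions), modification, and chord matching by swapping the prediction head.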
(This article belongs to the Special Issue Advanced Artificial Intelligence Models and Its Applications)

17 pages, 3041 KiB  
Article
Application of Artificial Intelligence for Better Investment in Human Capital
by Mohammed Abdullah Ammer, Zeyad A. T. Ahmed, Saleh Nagi Alsubari, Theyazn H. H. Aldhyani and Shahab Ahmad Almaaytah
Mathematics 2023, 11(3), 612; https://doi.org/10.3390/math11030612 - 26 Jan 2023
Cited by 1 | Viewed by 2217
Abstract
Selecting candidates for a specific job or nominating a person for a specific position takes time and effort due to the need to search through individuals’ files, and ultimately the hiring decision may not be successful. Artificial intelligence, however, helps organizations or companies choose the right person for the right job, and it contributes to the selection of harmonious working teams capable of achieving an organization’s strategy and goals. This study aimed to contribute to the development of machine-learning models that analyze and cluster personality traits and classify applicants, supporting correct hiring decisions for particular jobs and identifying applicants’ weaknesses and strengths. Helping applicants succeed, while managing work and training employees with weaknesses, is necessary for achieving an organization’s goals. Applying the proposed methodology, we used a publicly available Big Five personality traits test dataset to conduct the analyses. Preprocessing techniques were adopted to clean the dataset, and hypothesis testing was performed using Pearson’s correlation approach. Based on the testing results, we concluded that a positive relationship exists among four personality traits (agreeableness, conscientiousness, extraversion, and openness), while neuroticism correlates negatively with the other four traits. Because the dataset was unlabeled, we applied the K-means clustering algorithm for the data-labeling task. Furthermore, various supervised machine-learning models, such as random forest (RF), support vector machine (SVM), K-nearest neighbor (KNN), and AdaBoost, were used for classification. The experimental results revealed that the SVM attained the highest accuracy at 98%, outperforming the other classification models. This study adds to the current literature and body of knowledge by examining the extent of the application of artificial intelligence in the present and, potentially, the future of human-resource management. Our results may be of significance to companies, organizations, their leaders, and human-resource executives and professionals. Full article
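The labeling step above, K-means over unlabeled trait vectors, can be sketched in plain Python. The deterministic initialization and fixed iteration count are simplifications for illustration; production pipelines would use a vetted implementation:

```python
def kmeans(points, k, iters=20):
    """Plain k-means, used here to assign cluster labels to unlabeled
    personality-trait vectors."""
    centers = points[:k]  # simple deterministic initialization
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            groups[j].append(p)
        centers = [
            [sum(col) / len(g) for col in zip(*g)] if g else centers[j]
            for j, g in enumerate(groups)
        ]
    labels = [min(range(k),
                  key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
              for p in points]
    return labels, centers
```

The resulting cluster labels then act as class labels for training the supervised models (RF, SVM, KNN, AdaBoost).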
(This article belongs to the Special Issue Advanced Artificial Intelligence Models and Its Applications)

20 pages, 5429 KiB  
Article
Self-Writer: Clusterable Embedding Based Self-Supervised Writer Recognition from Unlabeled Data
by Zabir Mohammad, Muhammad Mohsin Kabir, Muhammad Mostafa Monowar, Md Abdul Hamid and Muhammad Firoz Mridha
Mathematics 2022, 10(24), 4796; https://doi.org/10.3390/math10244796 - 16 Dec 2022
Viewed by 2056
Abstract
Writer recognition based on a small amount of handwritten text is one of the most challenging deep learning problems because of the implicit characteristics of handwriting styles. Writer recognition based on supervised learning with deep convolutional neural networks has shown great success, but these supervised methods typically require a lot of annotated data, which is expensive to collect. Although unsupervised writer recognition methods may largely avoid data annotation, they often fail to capture sufficient feature relationships and usually perform less efficiently than supervised methods. Self-supervised learning can address the unlabeled-data issue by training on unlabeled datasets in a supervised manner. This paper introduces Self-Writer, a self-supervised writer recognition approach that deals with unlabeled data. The proposed scheme generates clusterable embeddings from small fixed-length image frames, such as text blocks. The training strategy presumes that a small image frame of handwritten text includes the writer’s handwriting characteristics; based on this assumption, we construct pairwise constraints and nongenerative augmentation to train a Siamese architecture to generate embeddings. Self-Writer is evaluated on the two most widely used datasets, IAM and CVL, with pairwise and triplet architectures, and achieves convincing performance with the pairwise architecture. Full article
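The pairwise-constraint construction for Siamese training can be sketched as follows. Here a `writer` field stands in for whatever grouping signal the self-supervised pipeline derives (e.g., frames cut from the same page); the dict layout is an assumption for illustration:

```python
def pairwise_constraints(frames):
    """Build (anchor, other, target) training pairs for a Siamese network:
    frames assumed to share a writer get target 1, others get 0."""
    pairs = []
    for i in range(len(frames)):
        for j in range(i + 1, len(frames)):
            same = 1 if frames[i]["writer"] == frames[j]["writer"] else 0
            pairs.append((frames[i]["img"], frames[j]["img"], same))
    return pairs
```

A contrastive loss over these pairs pulls same-writer embeddings together and pushes different-writer embeddings apart, producing the clusterable embeddings the paper targets.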
(This article belongs to the Special Issue Advanced Artificial Intelligence Models and Its Applications)

15 pages, 1013 KiB  
Article
Fully Connected Hashing Neural Networks for Indexing Large-Scale Remote Sensing Images
by Na Liu, Haiming Mou, Jun Tang, Lihong Wan, Qingdu Li and Ye Yuan
Mathematics 2022, 10(24), 4716; https://doi.org/10.3390/math10244716 - 12 Dec 2022
Cited by 1 | Viewed by 1227
Abstract
With the emergence of big data, the efficiency of data querying and data storage has become a critical bottleneck in the remote sensing community. In this paper, we explore hash learning for indexing large-scale remote sensing images (RSIs) with a supervised pairwise neural network, with the aim of improving RSI retrieval performance using a few binary bits. First, a fully connected hashing neural network (FCHNN) is proposed to map RSI features into binary (feature-to-binary) codes. Compared with pixel-to-binary frameworks such as DPSH (deep pairwise-supervised hashing), the FCHNN contains only three fully connected layers and incorporates an additional new constraint, so it can be significantly accelerated while obtaining desirable performance. Second, five types of image features, including mid-level and deep features, were investigated in training the FCHNN to achieve state-of-the-art performance. The mid-level features were based on Fisher encoding with affine-invariant local descriptors, and the deep features were extracted by pretrained or fine-tuned CNNs (e.g., CaffeNet and VGG-VD16). Experiments on five recently released large-scale RSI datasets (AID, NWPU45, PatternNet, RSI-CB128, and RSI-CB256) demonstrated the effectiveness of the proposed method in comparison with existing handcrafted and deep hashing methods. Full article
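The retrieval side of feature-to-binary hashing is simple to sketch: threshold a projected feature into bits, then rank the database by Hamming distance. The thresholding scheme here is a stand-in for the FCHNN's learned output layer:

```python
def hash_code(feature, thresholds):
    """Feature-to-binary: threshold each projected component to one bit.
    In the FCHNN these projections would be learned, not fixed."""
    return tuple(1 if f > t else 0 for f, t in zip(feature, thresholds))

def hamming(a, b):
    """Number of differing bits between two binary codes."""
    return sum(x != y for x, y in zip(a, b))

def retrieve(query_code, db_codes, top_k=2):
    """Rank database images by Hamming distance to the query code."""
    ranked = sorted(range(len(db_codes)),
                    key=lambda i: hamming(query_code, db_codes[i]))
    return ranked[:top_k]
```

Because Hamming distance is a few bit operations per comparison, even large RSI archives can be scanned quickly, which is the efficiency argument for hashing-based indexing.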
(This article belongs to the Special Issue Advanced Artificial Intelligence Models and Its Applications)

21 pages, 607 KiB  
Article
Employing Quantum Fruit Fly Optimization Algorithm for Solving Three-Dimensional Chaotic Equations
by Qasim M. Zainel, Saad M. Darwish and Murad B. Khorsheed
Mathematics 2022, 10(21), 4147; https://doi.org/10.3390/math10214147 - 06 Nov 2022
Cited by 9 | Viewed by 1388
Abstract
In a chaotic system, deterministic, nonlinear, irregular, and initial-condition-sensitive features are desired. Due to this chaotic nature, it is difficult to quantify a chaotic system’s parameters. Parameter estimation is a major issue because it depends on the stability analysis of the chaotic system, and chaos-based communication systems make it difficult to obtain accurate estimates or a fast rate of convergence. Several nature-inspired metaheuristic algorithms have been used to estimate chaotic system parameters; however, many are unable to balance exploration and exploitation. The fruit fly optimization algorithm (FOA) is not only efficient in solving difficult optimization problems, but also simpler and easier to construct than other currently available population-based algorithms. In this study, the quantum fruit fly optimization algorithm (QFOA) is suggested to find the optimum values of chaotic parameters, helping the algorithm converge faster and avoid local optima. The recommended technique uses quantum-theoretic probability and uncertainty to overcome the classic FOA’s premature convergence and local optimum trapping. QFOA modifies the basic Newtonian search technique of the FOA by including a quantum-behavior-based searching mechanism used to pinpoint the position of the fruit fly swarm. The suggested model has been assessed using the well-known Lorenz system with a specified set of parameter values and benchmarked signals. The results showed a considerable improvement in the accuracy of parameter estimates and better estimation power than state-of-the-art parameter estimation approaches. Full article
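The classic FOA that QFOA builds on is easy to sketch: a swarm of flies samples "smell" positions around the current best location, and the swarm relocates to any improvement. This is a deliberately minimal version (no quantum behavior); swarm size, step range, and bounds are illustrative choices:

```python
import random

def foa_minimize(objective, dim, iters=200, seed=0):
    """Minimal fruit fly optimization: flies take random steps around the
    swarm's best-known position, which moves to the best fly found."""
    rng = random.Random(seed)
    best = [rng.uniform(-5, 5) for _ in range(dim)]
    best_val = objective(best)
    for _ in range(iters):
        for _ in range(10):  # a swarm of 10 flies per iteration
            cand = [b + rng.uniform(-1, 1) for b in best]
            val = objective(cand)
            if val < best_val:
                best, best_val = cand, val
    return best, best_val
```

For parameter estimation, `objective` would measure the mismatch between the observed chaotic signal and a simulation of the system (e.g., Lorenz) under the candidate parameters; QFOA replaces the fixed-radius step with a quantum-behavior-based position update to escape local optima.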
(This article belongs to the Special Issue Advanced Artificial Intelligence Models and Its Applications)

18 pages, 3714 KiB  
Article
Lightweight Target-Aware Attention Learning Network-Based Target Tracking Method
by Yanchun Zhao, Jiapeng Zhang, Rui Duan, Fusheng Li and Huanlong Zhang
Mathematics 2022, 10(13), 2299; https://doi.org/10.3390/math10132299 - 30 Jun 2022
Cited by 2 | Viewed by 1254
Abstract
Siamese network trackers based on pre-trained depth features have achieved good performance in recent years. However, pre-trained depth features are trained in advance on large-scale datasets that contain feature information for a large number of objects, so they may carry interfering and redundant information for a single tracking target. To learn more accurate target feature information, this paper proposes a lightweight target-aware attention learning network that learns the most effective channel features of the target online. The lightweight network uses a designed attention learning loss function to learn a series of weighted channel features online, without complex parameters. Compared with the pre-trained features, the weighted channel features represent the target more accurately. Finally, the lightweight target-aware attention learning network is unified into a Siamese tracking framework to implement target tracking effectively. Experiments on several datasets demonstrate that the proposed tracker performs well. Full article
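The channel-reweighting idea can be sketched in a few lines: turn per-channel importance scores (in the paper, learned online via the attention loss) into normalized weights, then scale each channel's feature map. The softmax normalization is an illustrative choice:

```python
import math

def channel_weights(importance_scores):
    """Softmax over per-channel importance scores; in the paper these
    scores would be learned online by the attention loss."""
    m = max(importance_scores)
    exps = [math.exp(s - m) for s in importance_scores]
    total = sum(exps)
    return [e / total for e in exps]

def reweight(features, weights):
    """Scale each channel's feature map by its weight, emphasizing
    target-relevant channels and suppressing interfering ones."""
    return [[w * v for v in ch] for ch, w in zip(features, weights)]
```

The reweighted features then replace the raw pre-trained features in the Siamese matching step, so correlation responses focus on channels that actually describe the target.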
(This article belongs to the Special Issue Advanced Artificial Intelligence Models and Its Applications)
