Emerging Topics in Machine Learning, Image Processing and Pattern Recognition for AI-Related Applications

A special issue of Mathematics (ISSN 2227-7390). This special issue belongs to the section "Mathematics and Computer Science".

Deadline for manuscript submissions: closed (31 December 2023) | Viewed by 16108

Special Issue Editors


Prof. Dr. Yiu-ming Cheung
Guest Editor
Department of Computer Science, Hong Kong Baptist University, Kowloon Tong, Hong Kong, China
Interests: machine learning; visual computing; pattern recognition; optimization

Prof. Dr. Yuping Wang
Guest Editor
School of Computer Science and Technology, Xidian University, Xi'an 710071, China
Interests: network modelling; task scheduling and resource allocation; artificial intelligence; machine learning

Prof. Dr. Xin Liu
Guest Editor
Department of Computer Science, Huaqiao University, Xiamen 361021, China
Interests: artificial neural networks; deep learning; pattern recognition; computer vision; image processing

Special Issue Information

Dear Colleagues, 

Recent advances in machine learning, image processing and pattern recognition have sparked a rapid revolution in artificial intelligence (AI) and its cross-disciplinary research areas, where phenomenal progress has been made. In particular, there has been increasing interest in combining computational intelligence with machine learning, image processing and pattern recognition techniques to build efficient learning models that solve complicated problems in AI-related fields. Mathematical models are a key factor in the success of these strategies because they enable a quantitative understanding of the underlying learning processes and provide a principled, solid foundation for evaluating these learning models. Today, innovative methodologies are emerging around the world and adapting to unexpected conditions, which increases their usefulness for real-world problems.

The purpose of this Special Issue is to collate the latest methodologies, models, algorithms and findings, as well as to discuss the current challenges of machine learning, image processing and pattern recognition solutions for a broad range of AI-related applications. Topics include but are not limited to:  

  • Deep learning;
  • Interpretable machine learning algorithms;
  • Statistical learning theory for data mining;
  • Semi-supervised, weakly supervised and unsupervised learning systems;
  • Multi-modal data analysis;
  • Optimization of sustainable computational intelligence;
  • Innovative methodology related to image processing and computer vision;
  • Mathematical solutions for pattern recognition;
  • Real-time pattern recognition applications;
  • Secure AI.

Prof. Dr. Yiu-ming Cheung
Prof. Dr. Yuping Wang
Prof. Dr. Xin Liu
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Mathematics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • machine learning
  • image processing
  • pattern recognition
  • deep learning
  • data analytical techniques
  • innovative theory and methodology

Published Papers (7 papers)


Research

12 pages, 1444 KiB  
Article
Extension of Divisible-Load Theory from Scheduling Fine-Grained to Coarse-Grained Divisible Workloads on Networked Computing Systems
by Xiaoli Wang, Bharadwaj Veeravalli, Kangjian Wu and Xiaobo Song
Mathematics 2023, 11(7), 1752; https://doi.org/10.3390/math11071752 - 06 Apr 2023
Viewed by 1215
Abstract
The big data explosion has sparked a strong demand for high-performance data processing. Meanwhile, the rapid development of networked computing systems, coupled with the growth of Divisible-Load Theory (DLT) as an innovative technology with competent scheduling strategies, provides a practical way of conducting parallel processing with big data. Existing studies in the area of DLT usually consider the scheduling problem for fine-grained divisible workloads. However, numerous big data loads nowadays can only be abstracted as coarse-grained workloads, such as large-scale image classification, context-dependent emotional analysis and so on. In view of this, this paper extends DLT from fine-grained to coarse-grained divisible loads by establishing a new multi-installment scheduling model. With this model, a heuristic algorithm is proposed to find a feasible load-partitioning scheme that minimizes the makespan of the entire workload. Simulation results show that the proposed algorithm outperforms the state-of-the-art multi-installment scheduling strategy, achieving a shorter makespan when dealing with coarse-grained divisible loads. Full article
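For readers unfamiliar with DLT, the classic single-installment result can be sketched in a few lines: ignoring communication costs, the makespan is minimized when all workers finish simultaneously, so each worker's load fraction is proportional to its speed. This is only a textbook illustration of divisible-load partitioning, not the paper's multi-installment algorithm; the function name and speed values are illustrative.

```python
import numpy as np

def dlt_partition(speeds):
    """Classic single-installment divisible-load partition (communication
    ignored): all workers must finish simultaneously, so each worker's
    load fraction is proportional to its processing speed."""
    speeds = np.asarray(speeds, dtype=float)
    fractions = speeds / speeds.sum()
    # With equal finish times, makespan = fraction_i / speed_i for any i.
    makespan = fractions[0] / speeds[0]
    return fractions, makespan

# Three workers with speeds 4, 2, 1 receive 4/7, 2/7 and 1/7 of the load.
fracs, T = dlt_partition([4.0, 2.0, 1.0])
```

The coarse-grained, multi-installment setting studied in the paper is harder precisely because this simple equal-finish-time argument no longer applies directly.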

14 pages, 91023 KiB  
Article
Traffic Accident Detection Method Using Trajectory Tracking and Influence Maps
by Yihang Zhang and Yunsick Sung
Mathematics 2023, 11(7), 1743; https://doi.org/10.3390/math11071743 - 05 Apr 2023
Cited by 2 | Viewed by 3091
Abstract
With the development of artificial intelligence, techniques such as machine learning, object detection, and trajectory tracking have been applied to various traffic fields to detect accidents and analyze their causes. However, detecting traffic accidents from closed-circuit television (CCTV), an emerging subject in machine learning, remains challenging because of complex traffic environments and limited vision. Traditional research has limitations in deducing the trajectories of accident-related objects and extracting the spatiotemporal relationships among objects. This paper proposes a traffic accident detection method that helps to determine whether each frame shows an accident by generating and considering object trajectories using influence maps and a convolutional neural network (CNN). The influence maps with spatiotemporal relationships were enhanced to improve the detection of traffic accidents. A CNN is utilized to extract latent representations from the influence maps produced by object trajectories. The Car Accident Detection and Prediction (CADP) dataset was utilized in the experiments to train our model, which achieved a traffic accident detection accuracy of approximately 95%. Thus, the proposed method attained remarkable results in terms of performance improvement compared to methods that rely only on CNN-based detection. Full article
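As a rough illustration of the influence-map idea (not the paper's exact construction), an object trajectory can be rasterized into a 2D map in which every visited point deposits a Gaussian bump, so grid cells near the path receive high influence values. The function name, grid size and `sigma` below are assumptions made for this sketch.

```python
import numpy as np

def influence_map(trajectory, grid=(32, 32), sigma=2.0):
    """Rasterize a trajectory of (x, y) points into a 2D influence map:
    each visited point adds a Gaussian bump, then the map is normalized
    to [0, 1]. A simplified stand-in for trajectory-based influence maps."""
    h, w = grid
    ys, xs = np.mgrid[0:h, 0:w]
    m = np.zeros(grid)
    for (x, y) in trajectory:
        m += np.exp(-((xs - x) ** 2 + (ys - y) ** 2) / (2 * sigma ** 2))
    return m / m.max()

# A short diagonal trajectory; cells near it get high influence.
traj = [(5, 5), (10, 10), (15, 15)]
m = influence_map(traj)
```

Stacking such maps over consecutive frames would give a CNN the kind of spatiotemporal input the abstract describes.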

27 pages, 10213 KiB  
Article
LCAM: Low-Complexity Attention Module for Lightweight Face Recognition Networks
by Seng Chun Hoo, Haidi Ibrahim, Shahrel Azmin Suandi and Theam Foo Ng
Mathematics 2023, 11(7), 1694; https://doi.org/10.3390/math11071694 - 01 Apr 2023
Cited by 1 | Viewed by 1152
Abstract
Inspired by the human visual system's ability to concentrate on the important regions of a scene, attention modules recalibrate the weights of either the channel features alone or of both channel and spatial features to prioritize informative regions while suppressing unimportant information. However, the floating-point operations (FLOPs) and parameter counts are considerably high when incorporating these modules, especially those with both channel and spatial attention, into a baseline model. Despite the success of attention modules in general ImageNet classification tasks, emphasis should be given to incorporating these modules in face recognition tasks. Hence, a novel attention mechanism with three parallel branches, known as the Low-Complexity Attention Module (LCAM), is proposed. Note that there is only one convolution operation in each branch. Therefore, LCAM is lightweight, yet it still achieves better performance. Experiments on face verification tasks indicate that LCAM achieves similar or even better results compared with previous modules that incorporate both channel and spatial attention. Moreover, compared to the baseline model with no attention modules, LCAM achieves performance gains of 0.84% on ConvFaceNeXt, 1.15% on MobileFaceNet, and 0.86% on ProxylessFaceNAS with respect to the average accuracy over seven image-based face recognition datasets. Full article
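The general channel-attention recipe that such modules build on can be sketched in a few lines of NumPy: global average pooling squeezes each channel to a scalar, a small bottleneck produces a sigmoid gate in (0, 1), and the gate rescales the channels. This is a generic squeeze-and-excitation-style gate, not the three-branch LCAM itself; all names, shapes and weights are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def channel_attention(x, w1, w2):
    """Minimal squeeze-and-excitation-style channel attention:
    squeeze (global average pool), excite (two-layer bottleneck with
    ReLU and sigmoid), then recalibrate each channel of x."""
    z = x.mean(axis=(1, 2))                    # squeeze: (C,)
    h = np.maximum(w1 @ z, 0.0)                # bottleneck + ReLU
    s = 1.0 / (1.0 + np.exp(-(w2 @ h)))        # sigmoid gate in (0, 1)
    return x * s[:, None, None]                # channel-wise rescaling

C = 8
x = rng.standard_normal((C, 4, 4))             # a (C, H, W) feature map
w1 = rng.standard_normal((C // 2, C))          # reduction weights
w2 = rng.standard_normal((C, C // 2))          # expansion weights
y = channel_attention(x, w1, w2)
```

Because the gate lies strictly in (0, 1), the module can only attenuate channels; LCAM's contribution is achieving this kind of recalibration with fewer FLOPs and parameters.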

17 pages, 833 KiB  
Article
Nonconvex Tensor Relative Total Variation for Image Completion
by Yunqing Bai, Jihong Pei and Min Li
Mathematics 2023, 11(7), 1682; https://doi.org/10.3390/math11071682 - 31 Mar 2023
Cited by 1 | Viewed by 958
Abstract
Image completion, which is a special type of inverse problem, is an important but challenging task. The difficulties lie in that (i) the datasets are usually multi-dimensional, and (ii) the unavailable or corrupted data entries are randomly distributed. Recently, low-rank priors have gained importance in matrix completion problems and signal separation; however, due to the complexity of multi-dimensional data, a low-rank prior by itself is often insufficient to achieve desirable completion, which requires a more comprehensive approach. In this paper, different from currently available approaches, we develop a new approach, called tensor relative total variation (TRTV), under the tensor framework, to effectively integrate local and global image information for data processing. Based on our proposed framework, a completion model embedded with TRTV and tensor p-shrinkage nuclear norm minimization with suitable regularization is established. An alternating direction method of multipliers (ADMM)-based algorithm under our framework is presented. Extensive experiments on denoising and completion tasks demonstrate that our proposed method is not only effective but also superior to existing approaches in the literature. Full article
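One ingredient of such models can be shown concretely: at p = 1, p-shrinkage of the singular values reduces to singular-value soft-thresholding, the proximal operator of the nuclear norm that typically appears as one step inside each ADMM iteration. The sketch below shows only that generic step on a matrix, not the paper's full tensor TRTV model; the test matrix and threshold are illustrative.

```python
import numpy as np

def svt(M, tau):
    """Singular-value soft-thresholding: shrink every singular value of M
    toward zero by tau (the p = 1 case of p-shrinkage). This is the
    proximal operator of the nuclear norm used in low-rank recovery."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)
    return U @ np.diag(s_shrunk) @ Vt

rng = np.random.default_rng(1)
# A rank-3 matrix corrupted by small Gaussian noise.
low_rank = rng.standard_normal((20, 3)) @ rng.standard_normal((3, 20))
noisy = low_rank + 0.1 * rng.standard_normal((20, 20))
denoised = svt(noisy, tau=1.0)
```

Shrinking the singular values suppresses the noise-dominated directions while keeping the dominant low-rank structure, which is why such steps pair naturally with local regularizers like total variation.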

22 pages, 1492 KiB  
Article
Price Prediction of Bitcoin Based on Adaptive Feature Selection and Model Optimization
by Yingjie Zhu, Jiageng Ma, Fangqing Gu, Jie Wang, Zhijuan Li, Youyao Zhang, Jiani Xu, Yifan Li, Yiwen Wang and Xiangqun Yang
Mathematics 2023, 11(6), 1335; https://doi.org/10.3390/math11061335 - 09 Mar 2023
Cited by 2 | Viewed by 4352
Abstract
Bitcoin is one of the most successful cryptocurrencies, and research on price predictions is receiving more attention. To predict Bitcoin price fluctuations better and more effectively, it is necessary to establish a more abundant index system and prediction model with a better prediction effect. In this study, a combined prediction model with twin support vector regression was used as the main model. Twenty-seven factors related to Bitcoin prices were collected. Some of the factors that have the greatest impact on Bitcoin prices were selected by using the XGBoost algorithm and random forest algorithm. The combined prediction model with support vector regression (SVR), least-squares support vector regression (LSSVR), and twin support vector regression (TWSVR) was used to predict the Bitcoin price. Since the model’s hyperparameters have a great impact on prediction accuracy and algorithm performance, we used the whale optimization algorithm (WOA) and particle swarm optimization algorithm (PSO) to optimize the hyperparameters of the model. The experimental results show that the combined model, XGBoost-WOA-TWSVR, has the best prediction effect, and the EVS score of this model is significantly better than that of the traditional statistical model. In addition, our study verifies that twin support vector regression has advantages in both prediction effect and computation speed. Full article
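The least-squares SVR component can be illustrated compactly: in its dual form, with the bias term omitted for brevity (which makes it coincide with kernel ridge regression), training reduces to a single linear solve of (K + I/C) alpha = y. The kernel, data and parameter values below are illustrative, not the paper's setup.

```python
import numpy as np

def rbf(A, B, gamma=1.0):
    """Gaussian (RBF) kernel matrix between row-vector sets A and B."""
    d = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d)

def lssvr_fit(X, y, C=10.0, gamma=1.0):
    """Bias-free least-squares SVR in dual form: solve
    (K + I/C) alpha = y; predictions are f(x) = sum_i alpha_i k(x, x_i)."""
    K = rbf(X, X, gamma)
    return np.linalg.solve(K + np.eye(len(X)) / C, y)

X = np.linspace(0.0, 1.0, 30)[:, None]
y = np.sin(2 * np.pi * X[:, 0])
alpha = lssvr_fit(X, y, C=100.0)
pred = rbf(X, X) @ alpha   # fitted values on the training inputs
```

Replacing the equality-constrained least-squares loss with epsilon-insensitive or twin formulations yields the SVR and TWSVR variants the paper compares; the hyperparameters C and gamma are what WOA/PSO would tune.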

20 pages, 595 KiB  
Article
5G Multi-Slices Bi-Level Resource Allocation by Reinforcement Learning
by Zhipeng Yu, Fangqing Gu, Hailin Liu and Yutao Lai
Mathematics 2023, 11(3), 760; https://doi.org/10.3390/math11030760 - 02 Feb 2023
Cited by 1 | Viewed by 1457
Abstract
With the centralized unit (CU)-distributed unit (DU) separation in the fifth-generation mobile network (5G), multi-slice and multi-scenario services can be better supported in wireless communication. The extension of the 5G network to vertical industries gives its resource allocation an obviously hierarchical structure. In this paper, we propose a bi-level resource allocation model. The upper-level objective in this model is the profit the 5G operator obtains by having base stations allocate resources to slices. The lower-level objective is for each slice to allocate its resources fairly to its users. The resource allocation problem is a complex optimization problem with mixed discrete variables, so whether a resource allocation algorithm can quickly and accurately produce an allocation scheme is key to its practical application. According to the characteristics of the problem, we select the multi-agent twin delayed deep deterministic policy gradient (MATD3) to solve the upper-level slice resource allocation and the discrete and continuous twin delayed deep deterministic policy gradient (DCTD3) to solve the lower-level user resource allocation. It is crucial to accurately characterize the state, environment, and reward of reinforcement learning for solving practical problems. Thus, we provide an effective definition of the environment, state, action, and reward of MATD3 and DCTD3 for solving the bi-level resource allocation problem. We conduct simulation experiments and compare the proposed algorithm with the multi-agent deep deterministic policy gradient (MADDPG) algorithm and the nested bi-level evolutionary algorithm (NBLEA). The experimental results show that the proposed algorithm quickly provides a better resource allocation scheme. Full article
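The two levels of such a model can be illustrated with a deliberately simple stand-in: a greedy upper level that gives each resource block to the highest-paying slice with unmet demand, and a lower level that splits a slice's blocks as evenly as possible among its users. The paper solves both levels with MATD3 and DCTD3; the greedy and even-split rules, function names and numbers here are purely illustrative.

```python
def upper_allocate(total_rb, slice_prices, slice_demand):
    """Toy upper level: assign resource blocks one at a time to the
    slice with the highest per-block price that still has unmet demand
    (a greedy stand-in for the operator's profit maximization)."""
    alloc = [0] * len(slice_prices)
    for _ in range(total_rb):
        candidates = [i for i in range(len(slice_prices)) if alloc[i] < slice_demand[i]]
        if not candidates:
            break
        best = max(candidates, key=lambda i: slice_prices[i])
        alloc[best] += 1
    return alloc

def lower_allocate(rb, n_users):
    """Toy lower level: split a slice's blocks as evenly as possible
    among its users (max-min fair for identical users)."""
    base, extra = divmod(rb, n_users)
    return [base + (1 if u < extra else 0) for u in range(n_users)]

slices = upper_allocate(10, slice_prices=[3.0, 2.0, 1.0], slice_demand=[4, 4, 4])
users = lower_allocate(slices[0], n_users=3)
```

The point of the bi-level structure is that the upper decision (blocks per slice) constrains the lower one (blocks per user), which is why the paper trains separate agents for the two levels.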

24 pages, 830 KiB  
Article
Similarity Feature Construction for Matching Ontologies through Adaptively Aggregating Artificial Neural Networks
by Xingsi Xue, Jianhua Guo, Miao Ye and Jianhui Lv
Mathematics 2023, 11(2), 485; https://doi.org/10.3390/math11020485 - 16 Jan 2023
Cited by 4 | Viewed by 1956
Abstract
Ontology is the kernel technique of the Semantic Web (SW), enabling interaction and cooperation among different intelligent applications. However, with the rapid development of ontologies, their heterogeneity issue becomes more and more serious, which hampers communication among the intelligent systems built upon them. Finding the heterogeneous entities between two ontologies, i.e., ontology matching, is an effective method of solving ontology heterogeneity problems. When matching two ontologies, it is critical to construct each entity pair's similarity feature by comprehensively taking various similarity features into consideration, so that identical entities can be distinguished. Due to its ability to learn complex computing models, the Artificial Neural Network (ANN) has recently become a popular method of constructing similarity features for matching ontologies. Existing ANNs construct the similarity feature from a single perspective, which cannot ensure its effectiveness under diverse heterogeneous contexts. To construct an accurate similarity feature for each entity pair, in this work, we propose an adaptive aggregating method that combines different ANNs. In particular, we first propose a context-based ANN and a syntax-based ANN to respectively construct two similarity feature matrices, which are then adaptively integrated into a final similarity feature matrix through Ordered Weighted Averaging (OWA) and the Analytic Hierarchy Process (AHP). The Ontology Alignment Evaluation Initiative (OAEI)'s benchmark and anatomy track are used to verify the effectiveness of our method. The experimental results show that our approach outperforms single ANN-based ontology matching techniques and state-of-the-art ontology matching techniques. Full article
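The OWA aggregation step can be shown in isolation: the operator sorts its inputs in descending order and applies position-based weights, so an element-wise fusion of two similarity matrices can favour, say, the higher of the two scores for each entity pair. The matrices and weights below are illustrative, not taken from the paper.

```python
import numpy as np

def owa(values, weights):
    """Ordered Weighted Averaging: sort the inputs in descending order,
    then take the weighted sum with position-based OWA weights."""
    v = np.sort(np.asarray(values, dtype=float))[::-1]
    return float(np.dot(v, weights))

# Two similarity matrices for the same entity pairs, e.g. one from a
# context-based and one from a syntax-based model (illustrative numbers).
sim_context = np.array([[0.9, 0.2], [0.1, 0.8]])
sim_syntax = np.array([[0.7, 0.4], [0.3, 0.6]])

# Element-wise OWA fusion; weights (0.7, 0.3) favour the larger score,
# regardless of which model produced it.
w = [0.7, 0.3]
fused = np.vectorize(lambda a, b: owa([a, b], w))(sim_context, sim_syntax)
```

Because OWA weights attach to rank positions rather than to sources, the fusion adapts per entity pair, which is the behaviour the adaptive aggregation in the paper exploits (with AHP used to choose the weights).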
