Applications of Artificial Intelligence, Machine Learning, Deep Learning, and Explainable AI (XAI)

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Artificial Intelligence".

Deadline for manuscript submissions: 16 May 2024 | Viewed by 8138

Special Issue Editors


Guest Editor
School of Computing, Engineering & Intelligent Systems, Ulster University, Coleraine BT52 1SA, UK
Interests: computer vision; machine learning; deep learning; medical imaging; explainable AI (XAI)

Guest Editor
School of Information and Data Sciences, Nagasaki University, Nagasaki City 852-8521, Japan
Interests: image processing; machine learning; neural network algorithms

Guest Editor
Faculty of Engineering, Misr International University, Cairo 11828, Egypt
Interests: medical image & signal processing; affective computing; machine & deep learning; explainable AI (XAI)

Guest Editor
School of Computer Science and Engineering, Vellore Institute of Technology, Chennai 632014, India
Interests: deep learning; computer vision; image processing; artificial intelligence

Special Issue Information

Dear Colleagues,

Artificial intelligence (AI), together with machine learning and deep learning models, has been widely applied in numerous domains, including medical imaging, healthcare, industrial manufacturing, sports, and many more. In recent years, considerable effort has gone into improving the interpretability of the decisions made by machine learning and deep learning algorithms. Explainable Artificial Intelligence (XAI) has been established as a new research area that aims to provide methodologies and algorithms which enhance the transparency and reliability of the decisions made by predictive algorithms and clarify the contribution and importance of individual features to the outcome.
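
As a simple illustration of the feature-attribution side of XAI described above, the sketch below uses scikit-learn's permutation importance to estimate how much each input feature contributes to a trained classifier's predictions. It is a generic example on a public dataset, not a method from any paper in this Special Issue.

    # Minimal feature-attribution sketch: permutation importance with scikit-learn.
    # The dataset and model below are placeholders, not taken from any paper in this issue.
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    X, y = load_breast_cancer(return_X_y=True, as_frame=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

    # Shuffle each feature in turn and measure the drop in test accuracy;
    # larger drops indicate features the model relies on more heavily.
    result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
    ranked = sorted(zip(X.columns, result.importances_mean), key=lambda t: -t[1])
    for name, score in ranked[:5]:
        print(f"{name}: {score:.4f}")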

The purpose of this Special Issue is to report on advances in state-of-the-art research on artificial intelligence, machine learning, deep learning, and explainable AI applications. Advances in the state of the art that address real-world AI applications are of particular interest.

The research domains may include (but are not limited to):

  • Medical image and signal processing
  • Uncertainty detection, synthesis, and registration of images
  • Detection from noisy labels and limited data
  • Semi-supervised/self-supervised detection
  • Affective computing
  • Computer vision-based intelligent applications
  • Intelligent applications using augmented and virtual reality
  • Healthcare intelligence
  • Advanced security using AI
  • AI-enabled decision support systems
  • Image processing applications
  • XAI methods for deep learning (e.g., medical domain, industrial applications, security, surveillance)
  • Multimodal XAI approaches

Dr. Pratheepan Yogarajah
Dr. Muthu Subash Kavitha
Dr. Lamiaa Abdel-Hamid
Dr. Ananthakrishnan Balasundaram
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • medical image and signal processing
  • uncertainty detection, synthesis, and registration of images
  • detection from noisy labels and limited data
  • semi-supervised/self-supervised detection
  • affective computing
  • computer vision-based intelligent applications
  • intelligent applications using augmented and virtual reality
  • healthcare intelligence
  • advanced security using AI
  • AI-enabled decision support systems
  • image processing applications
  • XAI methods for deep learning (e.g., medical domain, industrial applications, security, surveillance)
  • multimodal XAI approaches

Published Papers (8 papers)


Research

14 pages, 4239 KiB  
Article
Design and Evaluation of CPU-, GPU-, and FPGA-Based Deployment of a CNN for Motor Imagery Classification in Brain-Computer Interfaces
by Federico Pacini, Tommaso Pacini, Giuseppe Lai, Alessandro Michele Zocco and Luca Fanucci
Electronics 2024, 13(9), 1646; https://doi.org/10.3390/electronics13091646 - 25 Apr 2024
Viewed by 153
Abstract
Brain–computer interfaces (BCIs) have gained popularity in recent years. Among noninvasive BCIs, EEG-based systems stand out as the primary approach, utilizing the motor imagery (MI) paradigm to discern movement intentions. Initially, BCIs were predominantly focused on nonembedded systems. However, there is now a growing momentum towards shifting computation to the edge, offering advantages such as enhanced privacy, reduced transmission bandwidth, and real-time responsiveness. Despite this trend, achieving the desired target remains a work in progress. To illustrate the feasibility of this shift and quantify the potential benefits, this paper presents a comparison of deploying a CNN for MI classification across different computing platforms, namely, CPU-, embedded GPU-, and FPGA-based. For our case study, we utilized data from 29 participants included in a dataset acquired using an EEG cap for training the models. The FPGA solution emerged as the most efficient in terms of the power consumption–inference time product. Specifically, it delivers an impressive reduction of up to 89% in power consumption compared to the CPU and 71% compared to the GPU and up to a 98% reduction in memory footprint for model inference, albeit at the cost of a 39% increase in inference time compared to the GPU. Both the embedded GPU and FPGA outperform the CPU in terms of inference time.
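
To make the ranking metric concrete, the following minimal sketch computes the power consumption–inference time product for three platforms; the wattage and latency figures are illustrative placeholders, not the measurements reported in the paper.

    # Sketch of ranking deployment targets by the power consumption-inference time product.
    # The figures below are illustrative placeholders, not the paper's measurements.
    platforms = {
        # name: (average power in watts, inference time in milliseconds)
        "CPU": (15.0, 4.0),
        "embedded GPU": (10.0, 2.0),
        "FPGA": (3.0, 2.8),
    }

    for name, (power_w, latency_ms) in sorted(
        platforms.items(), key=lambda kv: kv[1][0] * kv[1][1]
    ):
        product = power_w * latency_ms  # lower is better: an energy-like cost per inference
        print(f"{name}: {power_w} W x {latency_ms} ms = {product:.1f} W*ms")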

30 pages, 17457 KiB  
Article
Melanoma Skin Cancer Identification with Explainability Utilizing Mask Guided Technique
by Lahiru Gamage, Uditha Isuranga, Dulani Meedeniya, Senuri De Silva and Pratheepan Yogarajah
Electronics 2024, 13(4), 680; https://doi.org/10.3390/electronics13040680 - 06 Feb 2024
Viewed by 972
Abstract
Melanoma is a highly prevalent and lethal form of skin cancer, which has a significant impact globally. The chances of recovery for melanoma patients substantially improve with early detection. Currently, deep learning (DL) methods are gaining popularity in assisting with the identification of diseases using medical imaging. The paper introduces a computational model for classifying melanoma skin cancer images using convolutional neural networks (CNNs) and vision transformers (ViT) with the HAM10000 dataset. Both approaches utilize mask-guided techniques, employing a specialized U2-Net segmentation module to generate masks. The CNN-based approach utilizes ResNet50, VGG16, and Xception with transfer learning. The training process is enhanced using a Bayesian hyperparameter tuner. Moreover, this study applies gradient-weighted class activation mapping (Grad-CAM) and Grad-CAM++ to generate heatmaps to explain the classification models. These visual heatmaps elucidate the contribution of each input region to the classification outcome. The CNN-based approach achieved the highest accuracy of 98.37% with the Xception model, with a sensitivity and specificity of 95.92% and 99.01%, respectively. The ViT-based approach achieved high accuracy, sensitivity, and specificity of 92.79%, 91.09%, and 93.54%, respectively. Furthermore, the performance of the model was assessed through intersection over union (IoU) and other qualitative evaluations. Finally, we developed the proposed model as a web application that can be used as a support tool for medical practitioners in real time. A system usability study score of 86.87% was reported, which shows the usefulness of the proposed solution.
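
For readers unfamiliar with Grad-CAM, the following minimal PyTorch sketch reproduces the basic heatmap computation described above. It uses an ImageNet-pretrained torchvision ResNet-50 as a stand-in for the paper's trained skin-lesion classifiers, and "lesion.jpg" is a hypothetical input image; the mask-guided segmentation and Grad-CAM++ steps are not shown.

    # Minimal Grad-CAM sketch (PyTorch). A pretrained ResNet-50 stands in for the
    # paper's models; "lesion.jpg" is a hypothetical input image.
    import torch
    import torch.nn.functional as F
    from torchvision import models, transforms
    from PIL import Image

    model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2).eval()
    target_layer = model.layer4[-1]          # last convolutional block

    activations, gradients = {}, {}
    target_layer.register_forward_hook(lambda m, i, o: activations.update(a=o))
    target_layer.register_full_backward_hook(lambda m, gi, go: gradients.update(g=go[0]))

    preprocess = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
    ])
    x = preprocess(Image.open("lesion.jpg").convert("RGB")).unsqueeze(0)

    scores = model(x)
    scores[0, scores.argmax()].backward()    # gradient of the predicted class score

    weights = gradients["g"].mean(dim=(2, 3), keepdim=True)   # global-average-pooled grads
    cam = F.relu((weights * activations["a"]).sum(dim=1))     # weighted sum of channels
    cam = F.interpolate(cam.unsqueeze(1), size=(224, 224), mode="bilinear")
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalise to [0, 1]
    print(cam.shape)  # heatmap highlighting regions that drove the prediction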

20 pages, 1873 KiB  
Article
Entity Matching by Pool-Based Active Learning
by Youfang Han and Chunping Li
Electronics 2024, 13(3), 559; https://doi.org/10.3390/electronics13030559 - 30 Jan 2024
Viewed by 529
Abstract
The goal of entity matching is to find the corresponding records representing the same entity from different data sources. At present, among the mainstream methods, rule-based entity matching methods require tremendous domain knowledge, while machine-learning-based or deep-learning-based entity matching methods need a large number of labeled samples to build the model, which is difficult to achieve in some applications. In addition, learning-based methods are more likely to overfit, so the quality requirements of training samples are very high. In this paper, we present an active learning method for entity matching tasks. This method requires manually labeling only a small number of valuable samples and uses these labeled samples to build a high-quality model. We propose hybrid uncertainty as a query strategy to find those valuable samples for labeling, which can minimize the number of labeled training samples while meeting the requirements of entity matching tasks. The proposed method is validated on seven datasets from different fields. The experiments show that the proposed method uses only a small number of labeled samples and achieves better results than existing approaches.
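
The abstract does not spell out the hybrid uncertainty measure, so the sketch below illustrates the general pool-based active learning loop with a plain least-confidence query as a placeholder; the record-pair features and labels are synthetic.

    # Pool-based active learning sketch. The paper's "hybrid uncertainty" strategy is
    # not detailed in the abstract, so least-confidence sampling is used here instead;
    # the features and labels are synthetic placeholders.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression

    X_pool, y_pool = make_classification(n_samples=2000, n_features=20, random_state=0)
    # Start with a tiny labeled seed set that contains both classes.
    labeled = list(np.where(y_pool == 0)[0][:5]) + list(np.where(y_pool == 1)[0][:5])
    unlabeled = [i for i in range(len(X_pool)) if i not in set(labeled)]

    for round_ in range(5):
        clf = LogisticRegression(max_iter=1000).fit(X_pool[labeled], y_pool[labeled])
        proba = clf.predict_proba(X_pool[unlabeled])
        uncertainty = 1.0 - proba.max(axis=1)          # least-confidence score
        query = [unlabeled[i] for i in np.argsort(-uncertainty)[:20]]
        labeled.extend(query)                          # the "oracle" labels come from y_pool
        unlabeled = [i for i in unlabeled if i not in set(query)]
        print(f"round {round_}: {len(labeled)} labeled, "
              f"accuracy on full pool {clf.score(X_pool, y_pool):.3f}")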

16 pages, 2408 KiB  
Article
Random Convolutional Kernels for Space-Detector Based Gravitational Wave Signals
by Ruben Poghosyan and Yuan Luo
Electronics 2023, 12(20), 4360; https://doi.org/10.3390/electronics12204360 - 20 Oct 2023
Viewed by 1256
Abstract
Neural network models have entered the realm of gravitational wave detection, proving their effectiveness in identifying synthetic gravitational waves. However, these models rely on learned parameters, which necessitates time-consuming computations and expensive hardware resources. To address this challenge, we propose a gravitational wave detection model tailored specifically for binary black hole mergers, inspired by the Random Convolutional Kernel Transform (ROCKET) family of models. We conduct a rigorous analysis by factoring in realistic signal-to-noise ratios in our datasets, demonstrating that conventional techniques lose predictive accuracy when applied to ground-based detector signals. In contrast, for space-based detectors with high signal-to-noise ratios, our method not only detects signals effectively but also enhances inference speed due to its streamlined complexity, a notable achievement. Compared to previous gravitational wave models, we observe a significant acceleration in training time while maintaining acceptable performance metrics for ground-based detector signals and achieving equal or even superior metrics for space-based detector signals. Our experiments on synthetic data yield impressive results, with the model achieving an AUC score of 96.1% and a perfect recall rate of 100% on a dataset with a 1:3 class imbalance for ground-based detectors. For high signal-to-noise ratio signals, we achieve flawless precision and recall of 100% without losing precision on datasets with low-class ratios. Additionally, our approach reduces inference time by a factor of 1.88.
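
As a rough illustration of the ROCKET idea mentioned above, the sketch below draws random 1-D convolutional kernels, summarizes each convolution output by its maximum and the proportion of positive values (PPV), and feeds the features to a linear classifier. The time series and labels are random placeholders, not detector strain data, and the kernel settings are simplified relative to the full ROCKET transform.

    # ROCKET-style features: random 1-D kernels whose outputs are summarised by the
    # max value and proportion of positive values (PPV), then fed to a linear model.
    # The "signals" here are synthetic placeholders, not real detector strain data.
    import numpy as np
    from sklearn.linear_model import RidgeClassifier

    rng = np.random.default_rng(0)
    n_series, length, n_kernels = 200, 512, 100
    X = rng.normal(size=(n_series, length))          # placeholder time series
    y = rng.integers(0, 2, size=n_series)            # placeholder labels

    kernels = [rng.normal(size=rng.choice([7, 9, 11])) for _ in range(n_kernels)]
    biases = rng.uniform(-1, 1, size=n_kernels)

    def transform(series):
        feats = []
        for k, b in zip(kernels, biases):
            conv = np.convolve(series, k, mode="valid") + b
            feats.extend([conv.max(), (conv > 0).mean()])   # max and PPV per kernel
        return feats

    features = np.array([transform(s) for s in X])
    clf = RidgeClassifier().fit(features, y)
    print("training accuracy:", clf.score(features, y))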

22 pages, 5485 KiB  
Article
A Hierarchical Resource Scheduling Method for Satellite Control System Based on Deep Reinforcement Learning
by Yang Li, Xiye Guo, Zhijun Meng, Junxiang Qin, Xuan Li, Xiaotian Ma, Sichuang Ren and Jun Yang
Electronics 2023, 12(19), 3991; https://doi.org/10.3390/electronics12193991 - 22 Sep 2023
Viewed by 996
Abstract
Space-based systems providing remote sensing, communication, and navigation services are essential to the economy and national defense. Users' demand for satellites has increased sharply in recent years, but resources such as storage, energy, and computation are limited. Therefore, an efficient resource scheduling strategy is urgently needed to satisfy users' demands maximally and achieve high task execution benefits. A hierarchical scheduling method is proposed in this work, which combines improved ant colony optimization with an improved deep Q network. The proposed method considers both the quality of current task execution and resource load balance. The entire resource scheduling process consists of two steps: task allocation and resource scheduling on the timeline. The former mainly implements load balance through improved ant colony optimization, while the latter mainly achieves a high task completion rate through an improved deep Q network. Compared with several other heuristic algorithms, the proposed approach is shown to have advantages in terms of CPU runtime, task completion rate, and resource variance between satellites. In the simulation scenarios, the proposed method achieves up to a 97.3% task completion rate with almost 50% of the CPU runtime required by HAW and HADRT. Furthermore, this method successfully implements load balance.
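
The abstract does not describe the improved deep Q network in detail, so the sketch below shows only the generic one-step DQN target computation that such a scheduler would build on; the state/action dimensions and network sizes are placeholders, not the authors' design.

    # Generic one-step DQN target update, sketching the learning signal behind a deep
    # Q network scheduler. Sizes and the state/action encoding are placeholders.
    import torch
    import torch.nn as nn

    state_dim, n_actions, gamma = 16, 8, 0.99
    q_net = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(), nn.Linear(64, n_actions))
    target_net = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(), nn.Linear(64, n_actions))
    target_net.load_state_dict(q_net.state_dict())
    optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)

    # A dummy batch of transitions (state, action, reward, next_state, done).
    s = torch.randn(32, state_dim)
    a = torch.randint(0, n_actions, (32,))
    r = torch.randn(32)
    s_next = torch.randn(32, state_dim)
    done = torch.zeros(32)

    with torch.no_grad():
        # Bootstrapped target: reward plus discounted best next-state value.
        target = r + gamma * (1 - done) * target_net(s_next).max(dim=1).values

    q_sa = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)   # Q(s, a) for taken actions
    loss = nn.functional.mse_loss(q_sa, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    print("TD loss:", loss.item())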

12 pages, 9865 KiB  
Article
Evo-MAML: Meta-Learning with Evolving Gradient
by Jiaxing Chen, Weilin Yuan, Shaofei Chen, Zhenzhen Hu and Peng Li
Electronics 2023, 12(18), 3865; https://doi.org/10.3390/electronics12183865 - 13 Sep 2023
Cited by 2 | Viewed by 1091
Abstract
How to rapidly adapt to new tasks and improve model generalization through few-shot learning remains a significant challenge in meta-learning. Model-Agnostic Meta-Learning (MAML) has become a powerful approach, offering a simple framework with excellent generality. However, the requirement to compute second-order derivatives and retain a lengthy calculation graph poses considerable computational and memory burdens, limiting the practicality of MAML. To address this issue, we propose Evolving MAML (Evo-MAML), an optimization-based meta-learning method that incorporates an evolving gradient within the inner loop. Evo-MAML avoids second-order information, resulting in reduced computational complexity. Experimental results show that Evo-MAML exhibits higher generality and competitive performance when compared to existing first-order approximation approaches, making it suitable for both few-shot learning and meta-reinforcement learning settings.
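
Since the evolving-gradient update itself is not given in the abstract, the sketch below shows the first-order MAML-style baseline that Evo-MAML is compared against: one inner-loop adaptation step per task, followed by an outer update that reuses the query-set gradients at the adapted parameters without second-order terms. The sine-regression tasks are toy placeholders.

    # First-order MAML-style sketch (the family of approximations Evo-MAML is compared
    # against), on toy sine-regression tasks. Not the authors' Evo-MAML update.
    import copy
    import torch
    import torch.nn as nn

    model = nn.Sequential(nn.Linear(1, 40), nn.ReLU(), nn.Linear(40, 1))
    meta_opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    inner_lr, meta_batch = 0.01, 4

    def sample_task():
        # A toy sine-regression task defined by a random amplitude and phase.
        amp, phase = 1 + 4 * torch.rand(1), 3.1416 * torch.rand(1)
        def draw(n=10):
            x = 10 * torch.rand(n, 1) - 5
            return x, amp * torch.sin(x + phase)
        return draw

    for step in range(200):
        start_state = copy.deepcopy(model.state_dict())
        meta_grads = [torch.zeros_like(p) for p in model.parameters()]
        for _ in range(meta_batch):
            model.load_state_dict(start_state)
            draw = sample_task()
            x_s, y_s = draw()                      # support set for the inner step
            x_q, y_q = draw()                      # query set for the outer update
            loss_s = nn.functional.mse_loss(model(x_s), y_s)
            grads = torch.autograd.grad(loss_s, model.parameters())
            with torch.no_grad():                  # inner-loop SGD step
                for p, g in zip(model.parameters(), grads):
                    p -= inner_lr * g
            loss_q = nn.functional.mse_loss(model(x_q), y_q)
            # First-order trick: reuse the query gradients at the adapted parameters
            # directly, without differentiating through the inner step.
            for mg, g in zip(meta_grads, torch.autograd.grad(loss_q, model.parameters())):
                mg += g / meta_batch
        model.load_state_dict(start_state)         # restore theta before the meta update
        for p, mg in zip(model.parameters(), meta_grads):
            p.grad = mg
        meta_opt.step()
        if step % 50 == 0:
            print(f"meta step {step}, last query loss {loss_q.item():.3f}")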

19 pages, 8208 KiB  
Article
A Bi-Directional Two-Dimensional Deep Subspace Learning Network with Sparse Representation for Object Recognition
by Xiaoxue Li, Weijia Feng, Xiaofeng Wang, Jia Guo, Yuanxu Chen, Yumeng Yang, Chao Wang, Xinyu Zuo and Manlu Xu
Electronics 2023, 12(18), 3745; https://doi.org/10.3390/electronics12183745 - 05 Sep 2023
Viewed by 780
Abstract
A principal component analysis network (PCANet), as one of the representative deep subspace learning networks, utilizes principal component analysis (PCA) to learn filters that represent the dominant structural features of objects. However, the filters used in PCANet are linear combinations of all the original variables and contain complex and redundant principal components, which hinders the interpretability of the results. To address this problem, we introduce sparse constraints into a subspace learning network and propose three sparse bi-directional two-dimensional PCANet algorithms, including sparse row 2D2PCANet (SR2D2PCANet), sparse column 2D2PCANet (SC2D2PCANet), and sparse row–column 2D2PCANet (SRC2D2PCANet). These algorithms perform sparse operations on the projection matrices in the row, column, and row–column directions, respectively. Sparsity is achieved by utilizing the elastic net to shrink the loadings of the non-primary elements in the principal components to zero and to reduce the redundancy in the projection matrices, thus improving the learning efficiency of the networks. Finally, a variety of experimental results on the ORL, COIL-100, NEC, and AR datasets demonstrate that the proposed algorithms learn filters with more discriminative information and outperform other subspace learning networks and traditional deep learning networks in terms of classification and run-time performance, especially when learning from fewer samples.
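
To illustrate the effect of sparsifying subspace filters, the sketch below compares ordinary PCA filters with sparse ones learned from image patches. scikit-learn's SparsePCA (an L1-penalised approximation) stands in for the elastic-net sparsification used by the authors, and the random images and patch size are placeholders rather than the 2D2PCANet pipeline.

    # Sparse vs. dense PCA filters learned from image patches. SparsePCA is a stand-in
    # for the elastic-net sparsification described above; the data are placeholders.
    import numpy as np
    from sklearn.decomposition import PCA, SparsePCA
    from sklearn.feature_extraction.image import extract_patches_2d

    rng = np.random.default_rng(0)
    images = rng.random((20, 32, 32))                     # placeholder grayscale images
    patches = np.vstack([
        extract_patches_2d(img, (7, 7), max_patches=100, random_state=0).reshape(-1, 49)
        for img in images
    ])
    patches -= patches.mean(axis=0)                       # zero-mean patches

    dense = PCA(n_components=8).fit(patches)
    sparse = SparsePCA(n_components=8, alpha=1.0, random_state=0).fit(patches)

    # Sparse filters zero out most loadings, which makes each filter easier to interpret.
    print("nonzero loadings per dense filter :", (np.abs(dense.components_) > 1e-8).sum(axis=1))
    print("nonzero loadings per sparse filter:", (sparse.components_ != 0).sum(axis=1))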

16 pages, 7047 KiB  
Article
BoT2L-Net: Appearance-Based Gaze Estimation Using Bottleneck Transformer Block and Two Identical Losses in Unconstrained Environments
by Xiaohan Wang, Jian Zhou, Lin Wang, Yong Yin, Yu Wang and Zhongjun Ding
Electronics 2023, 12(7), 1704; https://doi.org/10.3390/electronics12071704 - 04 Apr 2023
Cited by 2 | Viewed by 1489
Abstract
As a nonverbal cue, gaze plays a critical role in communication, expressing emotions and reflecting mental activity, and it has widespread applications in various fields. Recently, appearance-based gaze estimation methods, which utilize CNNs (convolutional neural networks), have rapidly improved the accuracy and robustness of gaze estimation algorithms. Due to their insufficient ability to capture global relationships, the accuracy of current gaze estimation methods in unconstrained environments still has room for improvement. To address this challenge, the focus of this paper is to enhance the accuracy of gaze estimation, which is typically measured by the mean angular error. In light of the Transformer's breakthroughs in image classification and target detection tasks, and the need for an efficient network, a Transformer-enhanced CNN is a suitable choice. This paper proposes a novel model for 3D gaze estimation in unconstrained environments, based on the Bottleneck Transformer block and multi-loss methods. Our designed network (BoT2L-Net) incorporates self-attention through the BoT block and utilizes two identical loss functions to predict the two gaze angles. Additionally, the back-propagation network was combined with classification and regression losses to improve the network's accuracy and robustness. Our model was evaluated on two commonly used gaze datasets: Gaze360 and MPIIGaze, achieving mean angular errors of 11.53° and 9.59° for front 180° and front-facing gaze angles, respectively, on the Gaze360 testing set, and a mean angular error of 3.97° on the MPIIGaze testing set, outperforming CNN-based gaze estimation methods. The BoT2L-Net model proposed in this paper performs well on two publicly available datasets, demonstrating the effectiveness of our approach.
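
Since the headline metric above is the mean angular error, the sketch below shows how yaw/pitch predictions are typically converted to 3-D gaze vectors and compared with the ground truth; the spherical-to-Cartesian convention is the one commonly used for gaze datasets, and the predictions are random placeholders rather than BoT2L-Net outputs.

    # Mean angular error sketch: convert (yaw, pitch) angles to 3-D gaze vectors and
    # average the angle between prediction and ground truth. Predictions are placeholders.
    import numpy as np

    def gaze_to_vector(yaw, pitch):
        # Spherical-to-Cartesian convention commonly used for gaze (angles in radians).
        return np.stack([
            -np.cos(pitch) * np.sin(yaw),
            -np.sin(pitch),
            -np.cos(pitch) * np.cos(yaw),
        ], axis=-1)

    def mean_angular_error(pred, true):
        v_pred = gaze_to_vector(pred[:, 0], pred[:, 1])
        v_true = gaze_to_vector(true[:, 0], true[:, 1])
        cos = np.clip((v_pred * v_true).sum(axis=1), -1.0, 1.0)   # unit vectors
        return np.degrees(np.arccos(cos)).mean()

    rng = np.random.default_rng(0)
    true = rng.uniform(-0.5, 0.5, size=(100, 2))        # (yaw, pitch) in radians
    pred = true + rng.normal(scale=0.05, size=true.shape)
    print(f"mean angular error: {mean_angular_error(pred, true):.2f} degrees")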
