Recent Applications of Explainable AI (XAI)

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: 20 August 2024 | Viewed by 2679

Special Issue Editors


Guest Editor
Faculty of Information Technology, University of Jyväskylä, P.O. Box 35, FI-40014 Jyväskylä, Finland
Interests: data mining; machine learning; cognitive computing; explainable artificial intelligence

Guest Editor
School of Mathematical & Computer Sciences, Heriot Watt University, Edinburgh EH14 4AS, UK
Interests: knowledge and data representation; XAI; data mining

Guest Editor
Faculty of Electrical Engineering and Computer Science, University of Maribor, 2000 Maribor, Slovenia
Interests: data analytics; data science; intelligent systems; systems theory; machine learning; data mining; knowledge discovery; semantic technologies; knowledge management; complexity; complex systems; complexity metrics; software engineering; medical informatics

Special Issue Information

Dear Colleagues,

It is well known that current nonlinear machine learning methods often outperform traditional linear statistical methods in several domains. However, actual applications of these better-performing models remain scarce, as nonlinear machine learning techniques, such as the currently popular deep learning methods with their many kinds of processing layers, are effectively black boxes. In other words, their great predictive power often comes at the price of explainability.

Explainable artificial intelligence (XAI) refers to machine learning techniques, or artificial intelligence methods more generally, whose underlying decision logic and outcomes can be explained. It addresses the tradeoff between predictive power and transparency by shedding light into these black boxes. Thus, XAI is applicable not only for verifying that algorithms work as intended, but also for gaining actionable information, generating new hypotheses, enabling developers, end users, and domain experts to trust the models, deepening our understanding of hidden (causal) relationships, and ensuring algorithmic fairness.
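As a concrete illustration of the kind of technique in scope, below is a minimal, hypothetical Python sketch of one widely used model-agnostic XAI method, permutation importance: an opaque model is probed by shuffling one feature at a time and measuring how much its score drops. The dataset, model, and parameters are illustrative only and are not drawn from this Special Issue.

```python
# Minimal sketch of model-agnostic explanation via permutation importance:
# shuffle one feature at a time and measure the resulting drop in score.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)  # opaque model
result = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)

# Features whose shuffling hurts accuracy most are the model's main drivers.
top_features = np.argsort(result.importances_mean)[::-1][:5]
```

The same importances can then be inspected by domain experts to check that the model relies on plausible features rather than artifacts.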

This Special Issue will be dedicated to the latest applications of XAI that make powerful automated decision-making models explainable. All recent and novel applications of XAI are of interest, including but not limited to models in agriculture, astrophysics, biomedicine, crime prevention, digital forensics, eHealth, energy, finances, food production, games, information systems, learning analytics, material science, nanoscience, neuroscience, retail, smart cities, social media, and sports.

Dr. Mirka Saarela
Dr. Lilia Georgieva
Prof. Dr. Vili Podgorelec
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • machine learning
  • deep learning
  • explainability
  • interpretability
  • explainable artificial intelligence
  • performance-understandability trade-off
  • novel applications

Published Papers (2 papers)


Research

17 pages, 2473 KiB  
Article
Remote Sensing Image Segmentation for Aircraft Recognition Using U-Net as Deep Learning Architecture
by Fadi Shaar, Arif Yılmaz, Ahmet Ercan Topcu and Yehia Ibrahim Alzoubi
Appl. Sci. 2024, 14(6), 2639; https://doi.org/10.3390/app14062639 - 21 Mar 2024
Viewed by 554
Abstract
Recognizing aircraft automatically by using satellite images has different applications in both the civil and military sectors. However, due to the complexity and variety of the foreground and background of the analyzed images, it remains challenging to obtain a suitable representation of aircraft for identification. Many studies and solutions have been presented in the literature, but only a few studies have suggested handling the issue using semantic image segmentation techniques due to the lack of publicly labeled datasets. With the advancement of CNNs, researchers have presented some CNN architectures, such as U-Net, which has the ability to obtain very good performance using a small training dataset. The U-Net architecture has received much attention for segmenting 2D and 3D biomedical images and has been recognized to be highly successful for pixel-wise satellite image classification. In this paper, we propose a binary image segmentation model to recognize aircraft by exploiting and adopting the U-Net architecture for remote sensing satellite images. The proposed model does not require a significant amount of labeled data and alleviates the need for manual aircraft feature extraction. The public dense labeling remote sensing dataset is used to perform the experiments and measure the robustness and performance of the proposed model. The mean IoU and pixel accuracy are adopted as metrics to assess the obtained results. The results in the testing dataset indicate that the proposed model can achieve a 95.08% mean IoU and a pixel accuracy of 98.24%.
(This article belongs to the Special Issue Recent Applications of Explainable AI (XAI))
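The two evaluation metrics the abstract reports, mean IoU and pixel accuracy, can be sketched for binary masks as follows. This is a hypothetical NumPy illustration of the standard metric definitions, not the authors' code.

```python
# Mean IoU and pixel accuracy for binary (background = 0, aircraft = 1) masks.
import numpy as np

def pixel_accuracy(pred, target):
    """Fraction of pixels whose predicted class matches the label."""
    return float(np.mean(pred == target))

def mean_iou(pred, target):
    """Intersection-over-union averaged over the two classes."""
    ious = []
    for cls in (0, 1):
        inter = np.logical_and(pred == cls, target == cls).sum()
        union = np.logical_or(pred == cls, target == cls).sum()
        ious.append(inter / union if union else 1.0)  # empty class counts as perfect
    return float(np.mean(ious))

# Tiny toy masks to exercise the metrics.
pred   = np.array([[1, 1, 0], [0, 1, 0]])
target = np.array([[1, 0, 0], [0, 1, 0]])
```

On these toy masks, pixel accuracy counts 5 of 6 matching pixels, while mean IoU averages the per-class overlaps (3/4 for background, 2/3 for aircraft).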

15 pages, 483 KiB  
Article
A Feature Selection Algorithm Based on Differential Evolution for English Speech Emotion Recognition
by Liya Yue, Pei Hu, Shu-Chuan Chu and Jeng-Shyang Pan
Appl. Sci. 2023, 13(22), 12410; https://doi.org/10.3390/app132212410 - 16 Nov 2023
Viewed by 849
Abstract
The automatic identification of emotions from speech holds significance in facilitating interactions between humans and machines. To improve the recognition accuracy of speech emotion, we extract mel-frequency cepstral coefficients (MFCCs) and pitch features from raw signals, and an improved differential evolution (DE) algorithm is utilized for feature selection based on K-nearest neighbor (KNN) and random forest (RF) classifiers. The proposed multivariate DE (MDE) adopts three mutation strategies to solve the slow convergence of the classical DE and maintain population diversity, and employs a jumping method to avoid falling into local traps. The simulations are conducted on four public English speech emotion datasets: eNTERFACE05, Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS), Surrey Audio-Visual Expressed Emotion (SAVEE), and Toronto Emotional Speech Set (TESS), and they cover a diverse range of emotions. The MDE algorithm is compared with PSO-assisted biogeography-based optimization (BBO_PSO), DE, and the sine cosine algorithm (SCA) on emotion recognition error, number of selected features, and running time. From the results obtained, MDE obtains the errors of 0.5270, 0.5044, 0.4490, and 0.0420 in eNTERFACE05, RAVDESS, SAVEE, and TESS based on the KNN classifier, and the errors of 0.4721, 0.4264, 0.3283, and 0.0114 based on the RF classifier. The proposed algorithm demonstrates excellent performance in emotion recognition accuracy, and it finds meaningful acoustic features from MFCCs and pitch.
(This article belongs to the Special Issue Recent Applications of Explainable AI (XAI))
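The wrapper-style feature selection the abstract describes, DE evolving feature subsets scored by a classifier, can be sketched as follows. This is a much-simplified, hypothetical illustration using the classical DE/rand/1/bin scheme on a toy dataset; it is not the authors' MDE algorithm.

```python
# Simplified DE-based feature selection: each individual is a real vector,
# thresholded at 0.5 into a feature mask; fitness is KNN cross-validation error.
import numpy as np
from sklearn.datasets import load_wine
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X, y = load_wine(return_X_y=True)
n_feat, pop_size, gens, F, CR = X.shape[1], 12, 15, 0.5, 0.9

def fitness(vec):
    mask = vec > 0.5
    if not mask.any():
        return 1.0  # empty feature set: worst possible error
    acc = cross_val_score(KNeighborsClassifier(3), X[:, mask], y, cv=3).mean()
    return 1.0 - acc  # DE minimizes the classification error

pop = rng.random((pop_size, n_feat))
errs = np.array([fitness(p) for p in pop])
for _ in range(gens):  # classical DE loop (mutation + binomial crossover)
    for i in range(pop_size):
        idx = rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)
        a, b, c = pop[idx]
        trial = np.clip(np.where(rng.random(n_feat) < CR, a + F * (b - c), pop[i]), 0, 1)
        e = fitness(trial)
        if e <= errs[i]:  # greedy selection: keep the better of target and trial
            pop[i], errs[i] = trial, e

best_mask = pop[errs.argmin()] > 0.5  # selected feature subset
```

The paper's MDE extends this basic loop with three mutation strategies and a jumping method; this sketch only shows the underlying wrapper idea.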
