
Artificial Intelligence and Deep Learning in Sensors and Applications

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Physical Sensors".

Deadline for manuscript submissions: closed (20 April 2023) | Viewed by 35,433

Special Issue Editors


Prof. Dr. Shyan-Ming Yuan
Guest Editor
Department of Computer Science, National Yang Ming Chiao Tung University, Hsinchu 300, Taiwan
Interests: distributed systems; middleware kernels; applications; platform solutions; web and wireless technologies and applications; applied education systems

Dr. Zeng-Wei Hong
Guest Editor
Department of Information Engineering and Computer Science, Feng Chia University, Taichung 40724, Taiwan
Interests: distributed systems; middleware kernels; applications; platform solutions; web and wireless technologies and applications; applied education systems

Dr. Wai-Khuen Cheng
Guest Editor
Faculty of Information and Communication Technology, Department of Computer Science, Universiti Tunku Abdul Rahman, Jalan Universiti, Bandar Barat, 31900 Kampar, Perak, Malaysia
Interests: artificial intelligence; cloud computing; distributed and high-performance computing; Internet of Things (IoT); recommender systems; intelligent agent systems; financial technology

Special Issue Information

Dear Colleagues,

To effectively solve the increasingly complex problems that people face, the latest development trend is to deploy large numbers of different types of sensors to collect data, and then to build effective solutions on these data using deep learning and artificial intelligence.

This not only creates huge demand for sensors, and with it business opportunities, but also poses new challenges for the development of sensor devices and their related applications. Technological developments that combine AI and sensors are being actively applied in fields such as healthcare, manufacturing, agriculture and fisheries, transportation, construction, and environmental monitoring.

In this Special Issue, we aim to solicit high-quality original research papers and surveys that explore new developments in AI (deep learning) and sensor technology in various fields as well as to share ideas, designs, data-driven applications, and production and deployment experiences and challenges.

Topics of interest include, but are not limited to, the following:

  • Applications and sensors for manufacturing, machinery, semiconductors, and related industries, including quality inspection, defect detection, predictive maintenance, and yield control.
  • Smart applications and sensors for architecture, construction, buildings, e-learning, and recommendation systems.
  • Applications and sensors for autonomous vehicles, surveillance systems, traffic monitoring, suspicious tracking, and transportation.
  • Object recognition, image classification, object detection, speech processing, human behavior analysis, and related sensing applications.
  • Safety in nuclear power plants, drone-based delivery, medical systems, automation systems, security systems, smart farming, sensor performance optimization, thermal imaging (infection detection).
  • Sensor-group-based communication for collective task operations, e.g., vehicle platooning, AI drones, and manufacturing synchronization.
  • Autonomous sensor devices in edge networks performing AI-based applications.
  • All other applications related to AI and sensors.

Prof. Dr. Shyan-Ming Yuan
Dr. Zeng-Wei Hong
Dr. Wai-Khuen Cheng
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • sensors
  • deep learning
  • big data
  • reinforcement learning
  • artificial intelligence


Published Papers (12 papers)


Research


11 pages, 438 KiB  
Article
Single Modality vs. Multimodality: What Works Best for Lung Cancer Screening?
by Joana Vale Sousa, Pedro Matos, Francisco Silva, Pedro Freitas, Hélder P. Oliveira and Tania Pereira
Sensors 2023, 23(12), 5597; https://doi.org/10.3390/s23125597 - 15 Jun 2023
Viewed by 1218
Abstract
In a clinical context, physicians usually take into account information from more than one data modality when making decisions regarding cancer diagnosis and treatment planning. Artificial intelligence-based methods should mimic the clinical method and take into consideration different sources of data that allow a more comprehensive analysis of the patient and, as a consequence, a more accurate diagnosis. Lung cancer evaluation, in particular, can benefit from this approach since this pathology presents high mortality rates due to its late diagnosis. However, many related works make use of a single data source, namely imaging data. Therefore, this work aims to study the prediction of lung cancer when using more than one data modality. The National Lung Screening Trial dataset that contains data from different sources, specifically, computed tomography (CT) scans and clinical data, was used for the study, the development and comparison of single-modality and multimodality models, that may explore the predictive capability of these two types of data to their full potential. A ResNet18 network was trained to classify 3D CT nodule regions of interest (ROI), whereas a random forest algorithm was used to classify the clinical data, with the former achieving an area under the ROC curve (AUC) of 0.7897 and the latter 0.5241. Regarding the multimodality approaches, three strategies, based on intermediate and late fusion, were implemented to combine the information from the 3D CT nodule ROIs and the clinical data. From those, the best model—a fully connected layer that receives as input a combination of clinical data and deep imaging features, given by a ResNet18 inference model—presented an AUC of 0.8021. Lung cancer is a complex disease, characterized by a multitude of biological and physiological phenomena and influenced by multiple factors. It is thus imperative that the models are capable of responding to that need. 
The results obtained showed that combining different types of data may allow the models to produce more comprehensive analyses of the disease. Full article
(This article belongs to the Special Issue Artificial Intelligence and Deep Learning in Sensors and Applications)
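The fusion strategy that performed best above, a fully connected layer over concatenated imaging and clinical features, can be sketched in a few lines. This is a minimal NumPy illustration with made-up feature sizes and random weights, not the authors' trained model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative stand-ins for the two modalities: a 512-d deep feature
# vector from an imaging backbone and a small clinical feature vector.
img_feat = rng.standard_normal(512)      # e.g., pooled CNN features
clin_feat = rng.standard_normal(8)       # e.g., age, smoking history, ...

# Intermediate fusion: concatenate both modalities and apply one
# fully connected layer followed by a sigmoid to get a malignancy score.
fused = np.concatenate([img_feat, clin_feat])        # shape (520,)
W = rng.standard_normal((1, fused.size)) * 0.01      # FC layer weights
b = np.zeros(1)

logit = W @ fused + b
prob = 1.0 / (1.0 + np.exp(-logit))                  # value in (0, 1)
print(fused.shape, prob[0])
```

In the paper the imaging features come from a ResNet18 inference model; here random vectors stand in for both modalities.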

11 pages, 2533 KiB  
Communication
C2RL: Convolutional-Contrastive Learning for Reinforcement Learning Based on Self-Pretraining for Strong Augmentation
by Sanghoon Park, Jihun Kim, Han-You Jeong, Tae-Kyoung Kim and Jinwoo Yoo
Sensors 2023, 23(10), 4946; https://doi.org/10.3390/s23104946 - 21 May 2023
Cited by 2 | Viewed by 1788
Abstract
Reinforcement learning agents that have not been seen during training must be robust in test environments. However, the generalization problem is challenging to solve in reinforcement learning using high-dimensional images as the input. The addition of a self-supervised learning framework with data augmentation in the reinforcement learning architecture can promote generalization to a certain extent. However, excessively large changes in the input images may disturb reinforcement learning. Therefore, we propose a contrastive learning method that can help manage the trade-off relationship between the performance of reinforcement learning and auxiliary tasks against the data augmentation strength. In this framework, strong augmentation does not disturb reinforcement learning and instead maximizes the auxiliary effect for generalization. Results of experiments on the DeepMind Control suite demonstrate that the proposed method effectively uses strong data augmentation and achieves a higher generalization than the existing methods. Full article
(This article belongs to the Special Issue Artificial Intelligence and Deep Learning in Sensors and Applications)
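A common way to implement the contrastive auxiliary task mentioned above is an InfoNCE-style loss over pairs of differently augmented observations. The sketch below is a generic NumPy version with illustrative shapes, not the exact C2RL objective:

```python
import numpy as np

def info_nce(anchors, positives, temperature=0.1):
    """InfoNCE loss: each anchor should match its own positive
    (same observation, different augmentation) against all others."""
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature            # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))       # diagonal = correct pairs

rng = np.random.default_rng(1)
z = rng.standard_normal((16, 32))             # toy embedding batch
loss_matched = info_nce(z, z + 0.01 * rng.standard_normal((16, 32)))
loss_random = info_nce(z, rng.standard_normal((16, 32)))
print(loss_matched, loss_random)  # matched pairs give a far lower loss
```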

12 pages, 5006 KiB  
Article
TransNet: Transformer-Based Point Cloud Sampling Network
by Hookyung Lee, Jaeseung Jeon, Seokjin Hong, Jeesu Kim and Jinwoo Yoo
Sensors 2023, 23(10), 4675; https://doi.org/10.3390/s23104675 - 11 May 2023
Cited by 1 | Viewed by 2010
Abstract
As interest in point cloud processing has gradually increased in the industry, point cloud sampling techniques have been researched to improve deep learning networks. As many conventional models use point clouds directly, the consideration of computational complexity has become critical for practicality. One of the representative ways to decrease computations is downsampling, which also affects the performance in terms of precision. Existing classic sampling methods have adopted a standardized way regardless of the task-model property in learning. However, this limits the improvement of the point cloud sampling network’s performance. That is, the performance of such task-agnostic methods is too low when the sampling ratio is high. Therefore, this paper proposes a novel downsampling model based on the transformer-based point cloud sampling network (TransNet) to efficiently perform downsampling tasks. The proposed TransNet utilizes self-attention and fully connected layers to extract meaningful features from input sequences and perform downsampling. By introducing attention techniques into downsampling, the proposed network can learn about the relationships between point clouds and generate a task-oriented sampling methodology. The proposed TransNet outperforms several state-of-the-art models in terms of accuracy. It has a particular advantage in generating points from sparse data when the sampling ratio is high. We expect that our approach can provide a promising solution for downsampling tasks in various point cloud applications. Full article
(This article belongs to the Special Issue Artificial Intelligence and Deep Learning in Sensors and Applications)
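The core idea, using attention to decide which points to keep, can be illustrated with a toy NumPy sketch. The scoring rule here (keep the points that receive the most attention from the others) is a simplification for illustration, not the exact TransNet architecture:

```python
import numpy as np

def attention_downsample(points, k):
    """Score each point by scaled dot-product self-attention and keep
    the k points that receive the most attention overall."""
    d = points.shape[1]
    scores = points @ points.T / np.sqrt(d)          # (N, N)
    scores -= scores.max(axis=1, keepdims=True)      # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=1, keepdims=True)          # row-wise softmax
    importance = attn.sum(axis=0)                    # attention received
    keep = np.argsort(importance)[-k:]
    return points[keep]

rng = np.random.default_rng(2)
cloud = rng.standard_normal((1024, 3))               # toy point cloud
sampled = attention_downsample(cloud, 32)
print(sampled.shape)  # (32, 3)
```

A task-oriented sampler like TransNet would learn such scores end-to-end with the downstream task loss rather than compute them from raw coordinates.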

20 pages, 912 KiB  
Article
DCFF-MTAD: A Multivariate Time-Series Anomaly Detection Model Based on Dual-Channel Feature Fusion
by Zheng Xu, Yumeng Yang, Xinwen Gao and Min Hu
Sensors 2023, 23(8), 3910; https://doi.org/10.3390/s23083910 - 12 Apr 2023
Cited by 4 | Viewed by 3365
Abstract
The detection of anomalies in multivariate time-series data is becoming increasingly important in the automated and continuous monitoring of complex systems and devices due to the rapid increase in data volume and dimension. To address this challenge, we present a multivariate time-series anomaly detection model based on a dual-channel feature extraction module. The module focuses on the spatial and time features of the multivariate data using spatial short-time Fourier transform (STFT) and a graph attention network, respectively. The two features are then fused to significantly improve the model’s anomaly detection performance. In addition, the model incorporates the Huber loss function to enhance its robustness. A comparative study with existing state-of-the-art models on three public datasets demonstrates the effectiveness of the proposed model. Furthermore, by applying the model in shield tunneling applications, we verify its effectiveness and practicality. Full article
(This article belongs to the Special Issue Artificial Intelligence and Deep Learning in Sensors and Applications)
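The Huber loss the model uses for robustness is easy to state concretely: quadratic for small residuals, linear for large ones, so occasional outliers do not dominate the anomaly score. A small NumPy sketch:

```python
import numpy as np

def huber(residual, delta=1.0):
    """Huber loss: quadratic near zero, linear for large residuals,
    which keeps occasional outliers from dominating training."""
    r = np.abs(residual)
    return np.where(r <= delta,
                    0.5 * r ** 2,
                    delta * (r - delta / 2))

res = np.array([0.1, 0.5, 2.0, 10.0])
print(huber(res))  # [0.005 0.125 1.5 9.5]
# small residuals are squared; large ones grow only linearly
```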

13 pages, 2344 KiB  
Article
MTGEA: A Multimodal Two-Stream GNN Framework for Efficient Point Cloud and Skeleton Data Alignment
by Gawon Lee and Jihie Kim
Sensors 2023, 23(5), 2787; https://doi.org/10.3390/s23052787 - 03 Mar 2023
Cited by 1 | Viewed by 2061
Abstract
Because of societal changes, human activity recognition, part of home care systems, has become increasingly important. Camera-based recognition is mainstream but has privacy concerns and is less accurate under dim lighting. In contrast, radar sensors do not record sensitive information, avoid the invasion of privacy, and work in poor lighting. However, the collected data are often sparse. To address this issue, we propose a novel Multimodal Two-stream GNN Framework for Efficient Point Cloud and Skeleton Data Alignment (MTGEA), which improves recognition accuracy through accurate skeletal features from Kinect models. We first collected two datasets using the mmWave radar and Kinect v4 sensors. Then, we used zero-padding, Gaussian Noise (GN), and Agglomerative Hierarchical Clustering (AHC) to increase the number of collected point clouds to 25 per frame to match the skeleton data. Second, we used Spatial Temporal Graph Convolutional Network (ST-GCN) architecture to acquire multimodal representations in the spatio-temporal domain focusing on skeletal features. Finally, we implemented an attention mechanism aligning the two multimodal features to capture the correlation between point clouds and skeleton data. The resulting model was evaluated empirically on human activity data and shown to improve human activity recognition with radar data only. All datasets and codes are available in our GitHub. Full article
(This article belongs to the Special Issue Artificial Intelligence and Deep Learning in Sensors and Applications)
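The first preprocessing step, bringing each sparse radar frame to a fixed 25 points so it aligns with the skeleton data, might look like the following NumPy sketch (zero-padding only; the paper also uses Gaussian noise and agglomerative hierarchical clustering):

```python
import numpy as np

def pad_frame(points, target=25):
    """Zero-pad (or truncate) a sparse radar point cloud so every
    frame holds exactly `target` points, matching the skeleton data."""
    n = points.shape[0]
    if n >= target:
        return points[:target]
    pad = np.zeros((target - n, points.shape[1]))
    return np.vstack([points, pad])

rng = np.random.default_rng(3)
sparse = rng.standard_normal((7, 3))     # a frame with only 7 points
frame = pad_frame(sparse)
print(frame.shape)  # (25, 3)
```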

26 pages, 9577 KiB  
Article
Revisiting Consistency for Semi-Supervised Semantic Segmentation
by Ivan Grubišić, Marin Oršić and Siniša Šegvić
Sensors 2023, 23(2), 940; https://doi.org/10.3390/s23020940 - 13 Jan 2023
Viewed by 1715
Abstract
Semi-supervised learning is an attractive technique in practical deployments of deep models since it relaxes the dependence on labeled data. It is especially important in the scope of dense prediction because pixel-level annotation requires substantial effort. This paper considers semi-supervised algorithms that enforce consistent predictions over perturbed unlabeled inputs. We study the advantages of perturbing only one of the two model instances and preventing the backward pass through the unperturbed instance. We also propose a competitive perturbation model as a composition of geometric warp and photometric jittering. We experiment with efficient models due to their importance for real-time and low-power applications. Our experiments show clear advantages of (1) one-way consistency, (2) perturbing only the student branch, and (3) strong photometric and geometric perturbations. Our perturbation model outperforms recent work and most of the contribution comes from the photometric component. Experiments with additional data from the large coarsely annotated subset of Cityscapes suggest that semi-supervised training can outperform supervised training with coarse labels. Our source code is available at https://github.com/Ivan1248/semisup-seg-efficient. Full article
(This article belongs to the Special Issue Artificial Intelligence and Deep Learning in Sensors and Applications)
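One-way consistency with a perturbed student branch can be sketched as a KL divergence in which the clean-branch predictions are treated as fixed targets (no backward pass through the unperturbed instance). The NumPy version below only computes the loss, with toy logit maps standing in for real segmentation outputs:

```python
import numpy as np

def kl_consistency(teacher_logits, student_logits):
    """One-way consistency: KL(teacher || student) per pixel.
    Only the student branch sees the perturbed input; the teacher's
    predictions act as fixed targets (no gradient flows through them)."""
    def softmax(x):
        e = np.exp(x - x.max(axis=-1, keepdims=True))
        return e / e.sum(axis=-1, keepdims=True)
    t = softmax(teacher_logits)           # clean input, frozen targets
    s = softmax(student_logits)           # perturbed input
    return np.mean(np.sum(t * (np.log(t) - np.log(s)), axis=-1))

rng = np.random.default_rng(4)
clean = rng.standard_normal((8, 8, 5))            # toy 8x8 logit map, 5 classes
perturbed = clean + 0.1 * rng.standard_normal((8, 8, 5))
print(kl_consistency(clean, perturbed))  # small when predictions agree
```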

29 pages, 5375 KiB  
Article
Adversarial Patch Attacks on Deep-Learning-Based Face Recognition Systems Using Generative Adversarial Networks
by Ren-Hung Hwang, Jia-You Lin, Sun-Ying Hsieh, Hsuan-Yu Lin and Chia-Liang Lin
Sensors 2023, 23(2), 853; https://doi.org/10.3390/s23020853 - 11 Jan 2023
Cited by 7 | Viewed by 3062
Abstract
Deep learning technology has developed rapidly in recent years and has been successfully applied in many fields, including face recognition. Face recognition is used in many scenarios nowadays, including security control systems, access control management, health and safety management, employee attendance monitoring, automatic border control, and face scan payment. However, deep learning models are vulnerable to adversarial attacks conducted by perturbing probe images to generate adversarial examples, or using adversarial patches to generate well-designed perturbations in specific regions of the image. Most previous studies on adversarial attacks assume that the attacker hacks into the system and knows the architecture and parameters behind the deep learning model. In other words, the attacked model is a white box. However, this scenario is unrepresentative of most real-world adversarial attacks. Consequently, the present study assumes the face recognition system to be a black box, over which the attacker has no control. A Generative Adversarial Network method is proposed for generating adversarial patches to carry out dodging and impersonation attacks on the targeted face recognition system. The experimental results show that the proposed method yields a higher attack success rate than previous works. Full article
(This article belongs to the Special Issue Artificial Intelligence and Deep Learning in Sensors and Applications)
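Mechanically, a patch attack replaces a region of the probe image with generated pixels; the rest of the image is untouched. A minimal NumPy sketch of the pasting step (the GAN that produces the patch and the black-box query loop are omitted):

```python
import numpy as np

def apply_patch(image, patch, top, left):
    """Paste an adversarial patch into a region of the probe image.
    In a black-box attack the patch pixels would come from a generator
    refined using only the system's accept/reject responses."""
    out = image.copy()
    h, w = patch.shape[:2]
    out[top:top + h, left:left + w] = patch
    return out

rng = np.random.default_rng(5)
face = rng.random((112, 112, 3))         # toy probe image
patch = rng.random((24, 24, 3))          # candidate adversarial patch
attacked = apply_patch(face, patch, top=40, left=44)
print(attacked.shape)  # (112, 112, 3); only the patch region differs
```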

19 pages, 10523 KiB  
Article
Enabling Real-Time On-Chip Audio Super Resolution for Bone-Conduction Microphones
by Yuang Li, Yuntao Wang, Xin Liu, Yuanchun Shi, Shwetak Patel and Shao-Fu Shih
Sensors 2023, 23(1), 35; https://doi.org/10.3390/s23010035 - 20 Dec 2022
Cited by 4 | Viewed by 2753
Abstract
Voice communication using an air-conduction microphone in noisy environments suffers from the degradation of speech audibility. Bone-conduction microphones (BCM) are robust against ambient noises but suffer from limited effective bandwidth due to their sensing mechanism. Although existing audio super-resolution algorithms can recover the high-frequency loss to achieve high-fidelity audio, they require considerably more computational resources than is available in low-power hearable devices. This paper proposes the first-ever real-time on-chip speech audio super-resolution system for BCM. To accomplish this, we built and compared a series of lightweight audio super-resolution deep-learning models. Among all these models, ATS-UNet was the most cost-efficient because the proposed novel Audio Temporal Shift Module (ATSM) reduces the network’s dimensionality while maintaining sufficient temporal features from speech audio. Then, we quantized and deployed the ATS-UNet to low-end ARM micro-controller units for a real-time embedded prototype. The evaluation results show that our system achieved real-time inference speed on Cortex-M7 and higher quality compared with the baseline audio super-resolution method. Finally, we conducted a user study with ten experts and ten amateur listeners to evaluate our method’s effectiveness to human ears. Both groups perceived a significantly higher speech quality with our method when compared to the solutions with the original BCM or air-conduction microphone with cutting-edge noise-reduction algorithms. Full article
(This article belongs to the Special Issue Artificial Intelligence and Deep Learning in Sensors and Applications)
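The temporal-shift idea behind ATSM can be shown in a few lines: a fraction of the channels is shifted one frame forward, another fraction one frame backward, and the rest left alone, mixing temporal context at zero parameter cost. A NumPy sketch with illustrative shapes, not the exact module from the paper:

```python
import numpy as np

def temporal_shift(x, shift_div=4):
    """Temporal shift: move some channels one step forward/backward in
    time so later layers see temporal context without extra parameters.
    x has shape (time, channels)."""
    t, c = x.shape
    fold = c // shift_div
    out = np.zeros_like(x)
    out[1:, :fold] = x[:-1, :fold]                    # shift forward
    out[:-1, fold:2 * fold] = x[1:, fold:2 * fold]    # shift backward
    out[:, 2 * fold:] = x[:, 2 * fold:]               # leave unchanged
    return out

rng = np.random.default_rng(6)
feat = rng.standard_normal((100, 32))     # 100 frames, 32 channels
shifted = temporal_shift(feat)
print(shifted.shape)  # (100, 32)
```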

14 pages, 4335 KiB  
Article
Deep Learning Anomaly Classification Using Multi-Attention Residual Blocks for Industrial Control Systems
by Jehn-Ruey Jiang and Yan-Ting Lin
Sensors 2022, 22(23), 9084; https://doi.org/10.3390/s22239084 - 23 Nov 2022
Cited by 2 | Viewed by 1363
Abstract
This paper proposes a novel method that monitors network packets to classify anomalies in industrial control systems (ICSs). The proposed method combines several mechanisms. It is flow-based, as it obtains new features by aggregating packets of the same flow. It then builds a deep neural network (DNN) with multi-attention blocks for spotting core features, and with residual blocks for avoiding the gradient vanishing problem. The DNN is trained with the Ranger (RAdam + Lookahead) optimizer to prevent the training from being stuck in local minima, and with the focal loss to address the data imbalance problem. The Electra Modbus dataset is used to evaluate the performance impacts of the different mechanisms on the proposed method. The proposed method is compared with related methods in terms of precision, recall, and F1-score to show its superiority. Full article
(This article belongs to the Special Issue Artificial Intelligence and Deep Learning in Sensors and Applications)
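The focal loss used for the data-imbalance problem can be written down directly: it scales the cross-entropy of each example by (1 − p_t)^γ, so confidently classified examples contribute almost nothing and training focuses on the rare anomaly classes. A NumPy sketch of the binary case:

```python
import numpy as np

def focal_loss(probs, labels, gamma=2.0):
    """Binary focal loss: down-weights easy examples so training
    focuses on hard, rare anomaly classes."""
    p_t = np.where(labels == 1, probs, 1.0 - probs)   # prob of true class
    return -np.mean((1.0 - p_t) ** gamma * np.log(p_t))

easy = focal_loss(np.array([0.99]), np.array([1]))    # well classified
hard = focal_loss(np.array([0.6]), np.array([1]))     # uncertain
print(easy, hard)  # the easy example contributes far less loss
```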

15 pages, 1089 KiB  
Article
Convolutional Long-Short Term Memory Network with Multi-Head Attention Mechanism for Traffic Flow Prediction
by Yupeng Wei and Hongrui Liu
Sensors 2022, 22(20), 7994; https://doi.org/10.3390/s22207994 - 20 Oct 2022
Cited by 8 | Viewed by 1532
Abstract
Accurate predictive modeling of traffic flow is critically important as it allows transportation users to make wise decisions to circumvent traffic congestion regions. The advanced development of sensing technology makes big data more affordable and accessible, meaning that data-driven methods have been increasingly adopted for traffic flow prediction. Although numerous data-driven methods have been introduced for traffic flow predictions, existing data-driven methods cannot consider the correlation of the extracted high-dimensional features and cannot use the most relevant part of the traffic flow data to make predictions. To address these issues, this work proposes a decoder convolutional LSTM network, where the convolutional operation is used to consider the correlation of the high-dimensional features, and the LSTM network is used to consider the temporal correlation of traffic flow data. Moreover, the multi-head attention mechanism is introduced to use the most relevant portion of the traffic data to make predictions so that the prediction performance can be improved. A traffic flow dataset collected from the Caltrans Performance Measurement System (PeMS) database is used to demonstrate the effectiveness of the proposed method. Full article
(This article belongs to the Special Issue Artificial Intelligence and Deep Learning in Sensors and Applications)
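Before any such model can be trained, the raw flow series has to be cut into (past window, future target) pairs. A minimal NumPy sketch of that windowing step, with an illustrative window length rather than the paper's exact setup:

```python
import numpy as np

def make_windows(series, window=12, horizon=1):
    """Turn a traffic-flow series into (input window, target) pairs:
    the model sees `window` past readings and predicts `horizon` ahead."""
    X, y = [], []
    for i in range(len(series) - window - horizon + 1):
        X.append(series[i:i + window])
        y.append(series[i + window + horizon - 1])
    return np.array(X), np.array(y)

flow = np.sin(np.linspace(0, 20, 300))    # toy flow measurements
X, y = make_windows(flow)
print(X.shape, y.shape)  # (288, 12) (288,)
```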

Review


45 pages, 9741 KiB  
Review
Weed Detection Using Deep Learning: A Systematic Literature Review
by Nafeesa Yousuf Murad, Tariq Mahmood, Abdur Rahim Mohammad Forkan, Ahsan Morshed, Prem Prakash Jayaraman and Muhammad Shoaib Siddiqui
Sensors 2023, 23(7), 3670; https://doi.org/10.3390/s23073670 - 31 Mar 2023
Cited by 6 | Viewed by 4429
Abstract
Weeds are one of the most harmful agricultural pests, with a significant impact on crops. Weeds are responsible for higher production costs due to crop waste and significantly affect the global agricultural economy. The importance of this problem has prompted the research community to explore the use of technology to support farmers in the early detection of weeds. Artificial intelligence (AI)-driven image analysis for weed detection and, in particular, machine learning (ML) and deep learning (DL) using images from crop fields have been widely used in the literature for detecting various types of weeds that grow alongside crops. In this paper, we present a systematic literature review (SLR) on current state-of-the-art DL techniques for weed detection. Our SLR identified a rapid growth in research related to weed detection using DL since 2015 and filtered 52 application papers and 8 survey papers for further analysis. The pooled results from these papers yielded the detection of 34 unique weed types, 16 image processing techniques, and 11 DL algorithms with 19 different variants of CNNs. Moreover, we include a literature survey on popular vanilla ML techniques (e.g., SVM, random forest) that were widely used prior to the dominance of DL. Our study presents a detailed thematic analysis of the ML/DL algorithms used for detecting weeds and crops and provides a unique contribution to the analysis and assessment of the performance of these ML/DL techniques. Our study also details the crops associated with weeds, such as sugar beet, one of the most commonly studied crops for detecting various types of weeds, and discusses the imaging modalities used, with RGB the most frequent. Crop images were typically captured using robots, drones, and cell phones. It also discusses algorithm accuracy: SVM outperformed the other classic ML algorithms in many cases, with a highest accuracy of 99 percent, and CNN variants also performed well, with a highest accuracy of 99 percent and only VGGNet providing a lower accuracy of 84 percent. Finally, the study will serve as a starting point for researchers who wish to undertake further research in this area. Full article
(This article belongs to the Special Issue Artificial Intelligence and Deep Learning in Sensors and Applications)

42 pages, 6895 KiB  
Review
A Survey on Medical Explainable AI (XAI): Recent Progress, Explainability Approach, Human Interaction and Scoring System
by Ruey-Kai Sheu and Mayuresh Sunil Pardeshi
Sensors 2022, 22(20), 8068; https://doi.org/10.3390/s22208068 - 21 Oct 2022
Cited by 16 | Viewed by 8820
Abstract
The emerging field of eXplainable AI (XAI) in the medical domain is considered to be of utmost importance. Incorporating explanations in the medical domain with respect to legal and ethical AI is necessary to understand detailed decisions, results, and the current status of a patient’s condition. We present a detailed survey of medical XAI covering model enhancements, evaluation methods, a significant overview of case studies with open-box architectures, medical open datasets, and future improvements. Potential differences between AI and XAI methods are provided, with recent XAI methods categorized as (i) local and global methods for preprocessing, (ii) knowledge-base and distillation algorithms, and (iii) interpretable machine learning. Details of XAI characteristics, together with future healthcare explainability, are included prominently, and the prerequisites provide insights for brainstorming sessions before beginning a medical XAI project. A practical case study traces recent XAI progress leading to advanced developments within the medical field. Ultimately, this survey proposes critical ideas surrounding a user-in-the-loop approach, with an emphasis on human–machine collaboration, to better produce explainable solutions. The details of the XAI feedback system for human rating-based evaluation provide intelligible insights into a constructive method for producing human-enforced explanation feedback. Limitations of existing XAI ratings, scores, and grading have long been present; therefore, a novel XAI recommendation system and XAI scoring system are designed in this work. Additionally, this paper highlights the importance of implementing explainable solutions in the high-impact medical field. Full article
(This article belongs to the Special Issue Artificial Intelligence and Deep Learning in Sensors and Applications)
