The Future of Artificial Intelligence (AI): Emerging Topics for AI and Its Applications

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Artificial Intelligence".

Deadline for manuscript submissions: closed (30 August 2023) | Viewed by 15325

Special Issue Editors

Special Issue Information

Dear Colleagues,

Artificial Intelligence (AI) is a technological innovation that is transforming the world and changing the relationship between humans and machines. The past decade has seen tremendous progress in AI, with giant leaps in machine learning and deep learning. In the near future, most technology applications will harness or incorporate the output of some form of AI. AI is turning autonomous vehicles and robots into reality, enabling them to sense their environments, learn, adapt, and respond on their own. AI is transforming healthcare and becoming a critical part of the healthcare industry. AI will disrupt business models, create new ways of working, and facilitate digital transformation across many applications.

The techniques underpinning the AI of the future are also changing to meet new requirements and applications. In the traditional approach, training a machine learning model requires building a large dataset and keeping it on a local machine or in a data center; this is a centralized approach in which data are gathered on a central server and machine learning models are trained over them. One future trend is distributed AI and performing AI on edge devices. Edge computing is a distributed computing paradigm that brings computation and data storage closer to the devices where data are generated. Edge AI is growing in popularity and is the next frontier of development for the Intelligent Internet of Things, also called the Artificial Intelligence of Things (AIoT).
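
To make the contrast concrete, the short sketch below simulates one way a model can be trained without centralizing the raw data: a minimal federated-averaging round in NumPy, where each simulated edge device trains locally and only model weights are aggregated. The model, data, and function names are illustrative assumptions, not a reference implementation.

```python
# Minimal federated-averaging (FedAvg) round: illustrative sketch only.
# Clients train a tiny linear model locally; only weights leave the device.
import numpy as np

def local_update(w, X, y, lr=0.1, epochs=5):
    """One client's local gradient descent on a linear regression model."""
    w = w.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)   # MSE gradient
        w -= lr * grad
    return w

def fedavg_round(w_global, client_data):
    """Average locally updated weights, weighted by client sample counts."""
    updates, sizes = [], []
    for X, y in client_data:
        updates.append(local_update(w_global, X, y))
        sizes.append(len(y))
    return np.average(updates, axis=0, weights=np.array(sizes, dtype=float))

rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0])
clients = []
for _ in range(3):                               # three simulated edge devices
    X = rng.normal(size=(50, 2))
    y = X @ w_true + 0.1 * rng.normal(size=50)
    clients.append((X, y))

w = np.zeros(2)
for _ in range(20):                              # communication rounds
    w = fedavg_round(w, clients)
print("estimated weights:", w)
```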

This Special Issue aims to collect the latest research on emerging topics in AI, machine learning, and deep learning. Specific topics include, but are not limited to:

- AI and edge computing or edge intelligence;

- Distributed AI or distributed deep learning;

- On-device machine learning/deep learning/AI;

- AI and Internet of Things for smart cities;

- Embedded intelligence on GPU/FPGA/SoC;

- AI in software engineering;

- Scalable AI and big data;

- Trusted AI or trustworthy AI;

- Future AI threats and security;

- Future AI and digital health;

- Future AI data-driven technology;

- The future of AI in transportation;

- The future of mobile AI;

- The future of human-centered AI.

Prof. Dr. Kah Phooi Seng
Prof. Dr. Li-minn Ang
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Artificial Intelligence (AI)
  • embedded intelligence
  • edge intelligence
  • distributed AI
  • edge AI and AI IoT
  • intelligent systems
  • emerging technologies
  • mobile AI
  • AI applications and services

Published Papers (5 papers)


Research

18 pages, 7694 KiB  
Article
Out-of-Distribution (OOD) Detection and Generalization Improved by Augmenting Adversarial Mixup Samples
by Kyungpil Gwon and Joonhyuk Yoo
Electronics 2023, 12(6), 1421; https://doi.org/10.3390/electronics12061421 - 16 Mar 2023
Cited by 1 | Viewed by 1721
Abstract
Deep neural network (DNN) models are usually built on the i.i.d. (independent and identically distributed) assumption, also known as in-distribution (ID), for the training samples and test data. However, when models are deployed in a real-world scenario with distributional shifts, test data can be out-of-distribution (OOD), and both OOD detection and OOD generalization must be addressed simultaneously to ensure the reliability and safety of applied AI systems. Most existing OOD detectors pursue these two goals separately and are therefore sensitive to covariate shift rather than semantic shift. To alleviate this problem, this paper proposes a novel adversarial mixup (AM) training method, which performs OOD data augmentation to synthesize differently distributed data and designs a new AM loss function to learn how to handle OOD data. The proposed AM generates OOD samples that diverge significantly from the support of the training data distribution without being completely disjoint from it, increasing the generalization capability of the OOD detector. In addition, the AM is combined with a distributional-distance-aware OOD detector at inference to detect semantic OOD samples more efficiently while remaining robust to covariate shift due to data tampering. Experimental evaluation validates that the designed AM is effective on both OOD detection and OOD generalization tasks compared to previous OOD detectors and data mixup methods. Full article
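
For readers unfamiliar with mixup-style augmentation, which the adversarial mixup (AM) method builds on, the sketch below shows a plain mixup step in NumPy that interpolates pairs of samples and their labels. It is a generic illustration on assumed toy data, not the paper's AM loss or detector.

```python
# Plain mixup-style augmentation: interpolate pairs of training samples to
# synthesize data lying between (and slightly outside) the ID distribution.
# Generic sketch only; not the paper's adversarial mixup (AM) loss.
import numpy as np

def mixup_batch(X, y_onehot, alpha=0.4, rng=None):
    """Return convex combinations of shuffled sample pairs and their labels."""
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha, size=(len(X), 1))   # per-sample mixing weights
    perm = rng.permutation(len(X))
    X_mix = lam * X + (1 - lam) * X[perm]
    y_mix = lam * y_onehot + (1 - lam) * y_onehot[perm]
    return X_mix, y_mix

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 4))                          # toy feature batch
y = np.eye(3)[rng.integers(0, 3, size=8)]            # one-hot labels
X_mix, y_mix = mixup_batch(X, y, rng=rng)
print(X_mix.shape, y_mix.shape)
```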

20 pages, 4309 KiB  
Article
Impact of Retinal Vessel Image Coherence on Retinal Blood Vessel Segmentation
by Alqahtani Saeed S, Toufique A. Soomro, Nisar Ahmed Jandan, Ahmed Ali, Muhammad Irfan, Saifur Rahman, Waleed A. Aldhabaan, Abdulrahman Samir Khairallah and Ismail Abuallut
Electronics 2023, 12(2), 396; https://doi.org/10.3390/electronics12020396 - 12 Jan 2023
Cited by 1 | Viewed by 1798
Abstract
Retinal vessel segmentation is critical in detecting retinal blood vessels for a variety of eye disorders, and a consistent computerized method is required for automatic eye disorder screening. Many retinal blood vessel segmentation methods have been implemented, but they yield good accuracy while lacking sensitivity because of the coherence of the retinal blood vessels. Another main factor behind low sensitivity is the need for a proper technique to handle the low-varying contrast problem. In this study, we proposed a five-step technique for assessing the impact of retinal blood vessel coherence on retinal blood vessel segmentation. The first four steps form the preprocessing module: the first stage prepares the retinal image; the second handles uneven illumination and noise using morphological operations; the third converts the image to grayscale using principal component analysis (PCA); and the fourth, central step improves the coherence of the retinal blood vessels using anisotropic diffusion filtering, testing its different schemes and obtaining the most coherent image with optimized anisotropic diffusion filtering. The last step applies double thresholding with morphological image reconstruction to produce the segmented vessel image. The performance of the proposed method is validated on the publicly available DRIVE and STARE databases. Sensitivity values of 0.811 and 0.821 on STARE and DRIVE, respectively, meet and surpass those of existing methods, with accuracy values of 0.961 and 0.954 on STARE and DRIVE comparable to existing methods. This proposed method for retinal blood vessel segmentation can help medical experts diagnose eye disease and recommend treatment in a timely manner. Full article
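
The coherence step relies on anisotropic diffusion filtering; the sketch below implements the textbook Perona-Malik scheme in NumPy to illustrate how such a filter smooths noise while preserving edges such as vessel boundaries. It is a minimal illustration on synthetic data, not the paper's optimized diffusion scheme or full pipeline.

```python
# Textbook Perona-Malik anisotropic diffusion on a grayscale image:
# smooths noise while preserving edges. Illustrative only.
import numpy as np

def anisotropic_diffusion(img, n_iter=20, kappa=30.0, gamma=0.2):
    img = img.astype(float).copy()
    for _ in range(n_iter):
        # Finite-difference gradients toward the four neighbours
        # (np.roll wraps at the borders, acceptable for a sketch).
        dN = np.roll(img, -1, axis=0) - img
        dS = np.roll(img, 1, axis=0) - img
        dE = np.roll(img, -1, axis=1) - img
        dW = np.roll(img, 1, axis=1) - img
        # Edge-stopping conduction coefficients (exponential variant).
        cN, cS = np.exp(-(dN / kappa) ** 2), np.exp(-(dS / kappa) ** 2)
        cE, cW = np.exp(-(dE / kappa) ** 2), np.exp(-(dW / kappa) ** 2)
        img += gamma * (cN * dN + cS * dS + cE * dE + cW * dW)
    return img

noisy = np.random.default_rng(0).normal(loc=128, scale=20, size=(64, 64))
smoothed = anisotropic_diffusion(noisy)
print(smoothed.shape)
```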

17 pages, 3036 KiB  
Article
Towards QoS-Based Embedded Machine Learning
by Tom Springer, Erik Linstead, Peiyi Zhao and Chelsea Parlett-Pelleriti
Electronics 2022, 11(19), 3204; https://doi.org/10.3390/electronics11193204 - 06 Oct 2022
Cited by 2 | Viewed by 1327
Abstract
Due to various breakthroughs and advancements in machine learning and computer architectures, machine learning models are beginning to proliferate through embedded platforms. These machine learning models cover a range of applications, including computer vision, speech recognition, healthcare efficiency, industrial IoT, robotics, and many more. However, there is a critical limitation to implementing ML algorithms efficiently on embedded platforms: the computational and memory expense of many machine learning models can make them unsuitable for resource-constrained environments. Therefore, to efficiently implement these memory-intensive and computationally expensive algorithms in an embedded computing environment, innovative resource management techniques are required at the hardware, software, and system levels. To this end, we present a novel quality-of-service-based resource allocation scheme that uses feedback control to adjust compute resources dynamically, coping with the varying and unpredictable workloads of ML applications while still maintaining an acceptable level of service to the user. To evaluate the feasibility of our approach, we implemented a feedback control scheduling simulator that was used to analyze our framework under various simulated workloads. We also implemented our framework as a Linux kernel module running on a virtual machine as well as on a Raspberry Pi 4 single-board computer. Results illustrate that our approach maintained a sufficient level of service without overloading the processor while providing energy savings of almost 20% compared to the native resource management in Linux. Full article
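
As a rough illustration of the feedback-control idea behind such QoS-based resource allocation, the sketch below runs a minimal proportional controller that adjusts a CPU-time budget toward a target utilization. The workload model, gains, and names are hypothetical and do not reproduce the paper's Linux kernel module.

```python
# Minimal feedback-control loop for a CPU budget: a proportional controller
# nudges the allocation toward a target utilization. Illustrative sketch of
# the general QoS feedback idea, not the paper's scheduler.
import random

TARGET_UTIL = 0.75      # desired fraction of the allocated budget in use
KP = 0.5                # proportional gain

def measured_utilization(budget):
    """Stand-in for a real workload measurement (hypothetical)."""
    demand = random.uniform(20, 80)          # ms of CPU demanded this period
    return min(demand / budget, 1.0), demand

budget = 50.0                                # ms of CPU per period
for step in range(10):
    util, demand = measured_utilization(budget)
    error = util - TARGET_UTIL
    budget = max(10.0, budget * (1 + KP * error))   # grow when over-utilized
    print(f"step {step}: demand={demand:5.1f} util={util:.2f} budget={budget:5.1f}")
```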

13 pages, 701 KiB  
Article
Training Vision Transformers in Federated Learning with Limited Edge-Device Resources
by Jiang Tao, Zhen Gao and Zhaohui Guo
Electronics 2022, 11(17), 2638; https://doi.org/10.3390/electronics11172638 - 23 Aug 2022
Cited by 5 | Viewed by 2702
Abstract
Vision transformers (ViTs) demonstrate exceptional performance in numerous computer vision tasks owing to their self-attention modules. Despite improved network performance, transformers frequently require significant computational resources. The increasing need for data privacy has encouraged the development of federated learning (FL). Traditional FL places a computing burden on edge devices. However, ViTs cannot be directly applied through FL on resource-constrained edge devices. To utilize the powerful ViT structure, we reformulated FL as a federated knowledge distillation training algorithm called FedVKD. FedVKD uses an alternating minimization strategy to train small convolutional neural networks on edge nodes and periodically transfers their knowledge to a large server-side transformer encoder via knowledge distillation. FedVKD affords the benefits of reduced edge-computing load and improved performance for vision tasks, while preserving FedGKT-like asynchronous training. We used four datasets and their non-IID variations to test the proposed FedVKD. When utilizing a larger dataset, FedVKD achieved higher accuracy than FedGKT and FedAvg. Full article
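
The knowledge-transfer step that schemes such as FedVKD rely on is typically a soft-label distillation loss; the sketch below computes a generic temperature-scaled KL-divergence distillation loss in NumPy on toy logits. It illustrates the general mechanism only and is not the FedVKD algorithm.

```python
# Generic knowledge-distillation loss: KL divergence between temperature-
# softened teacher and student predictions. Illustrates the knowledge-transfer
# step that federated distillation schemes build on; toy logits only.
import numpy as np

def softmax(z, T=1.0):
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)     # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """Mean KL(teacher || student) over the batch, scaled by T^2."""
    p = softmax(teacher_logits, T)             # soft teacher targets
    q = softmax(student_logits, T)
    kl = np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12)), axis=-1)
    return (T ** 2) * kl.mean()

rng = np.random.default_rng(0)
teacher = rng.normal(size=(4, 10))             # toy logits for 4 samples
student = rng.normal(size=(4, 10))
print("KD loss:", distillation_loss(student, teacher))
```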

Review

21 pages, 1728 KiB  
Review
Machine Learning and AI Technologies for Smart Wearables
by Kah Phooi Seng, Li-Minn Ang, Eno Peter and Anthony Mmonyi
Electronics 2023, 12(7), 1509; https://doi.org/10.3390/electronics12071509 - 23 Mar 2023
Cited by 8 | Viewed by 6220
Abstract
The recent progress in computational, communications, and artificial intelligence (AI) technologies, the widespread availability of smartphones, and the growing trends in multimedia data and edge computation devices have led to new models and paradigms for wearable devices. This paper presents a comprehensive survey and classification of smart wearables and research prototypes using machine learning and AI technologies. The paper surveys these emerging paradigms for machine learning and AI in wearables from several technological perspectives, including: (1) smart wearables empowered by machine learning and AI; (2) data collection architectures and information processing models for AI smart wearables; and (3) applications of AI smart wearables. The review covers a wide range of enabling technologies for AI and machine learning for wearables and research prototypes. The main finding of the review is that AI smart wearables face significant technical challenges in networking and communication (such as routing and communication overheads), information processing and computation (such as computational complexity and storage), and algorithmic and application-dependent aspects (such as training and inference). The paper concludes with some future directions for the smart wearable market and potential research. Full article
