Deep Learning Techniques for Manned and Unmanned Ground, Aerial and Marine Vehicles

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Computer Science & Engineering".

Deadline for manuscript submissions: closed (30 April 2022) | Viewed by 43597

Special Issue Editors


Prof. Dr. Ahmad Taher Azar
Guest Editor
1. College of Computer and Information Sciences, Prince Sultan University, Riyadh 11586, Saudi Arabia
2. Automated Systems & Soft Computing Lab (ASSCL), Prince Sultan University, Riyadh 12435, Saudi Arabia
3. Faculty of Computers and Artificial Intelligence, Benha University, Benha 13518, Egypt
Interests: control theory and applications; robotics; process control; artificial intelligence; machine learning; computational intelligence; dynamic system modeling

Prof. Dr. Anis Koubaa
Guest Editor
PSU Faculty, Prince Sultan University, Riyadh 12435, Saudi Arabia
Interests: Internet of Things; unmanned aerial vehicles; wireless sensor networks; mobile robots; remote sensing

Prof. Dr. Alaa Khamis
Guest Editor
General Motors Canada, 500 Wentworth St W, Oshawa, ON L1J 6J2, Canada
Interests: smart mobility; autonomous and connected vehicles; cognitive IoT; machine learning; combinatorial optimization

Prof. Dr. Ibrahim A. Hameed
Guest Editor
Department of ICT and Natural Sciences, Norwegian University of Science and Technology, Larsgårdsvegen 2, 6009 Ålesund, Norway
Interests: artificial intelligence; field robotics; autonomous navigation; path planning; automation and control

Dr. Gabriella Casalino
Guest Editor
Department of Computer Science, University of Bari Aldo Moro, Via Orabona 4, 70125 Bari, Italy
Interests: computational intelligence; knowledge discovery from data; intelligent data analysis; matrix factorizations

Special Issue Information

Dear Colleagues,

Manned and unmanned ground, aerial, and marine vehicles enable many promising and revolutionary civilian and military applications that will change our lives in the near future. These applications include, but are not limited to, surveillance, search and rescue, environment monitoring, infrastructure monitoring, self-driving cars, contactless last-mile delivery vehicles, autonomous ships, precision agriculture, and transmission line inspection. These vehicles will benefit from advances in deep learning, a subfield of machine learning that can endow them with capabilities such as perception, situation awareness, planning, and intelligent control. Deep learning models are also able to generate actionable insights from the complex structures of large data sets.

In recent years, deep learning research has received increasing attention from researchers in academia, government laboratories, and industry. These research activities have borne some fruit in tackling some of the remaining challenging problems of manned and unmanned ground, aerial, and marine vehicles. Moreover, deep learning methods have recently been actively developed in other areas of machine learning, such as reinforcement learning and transfer/meta-learning, in addition to standard deep learning architectures such as recurrent neural networks (RNNs) and convolutional neural networks (CNNs).

The purpose of this Special Issue is to report recent applications of deep learning approaches in manned and unmanned ground, aerial, and marine vehicles. Topics include but are not limited to:

  • Cognitive data collection;
  • Data cleansing;
  • Data compression;
  • Multisensor data fusion;
  • Vehicle localization;
  • Perception systems;
  • AI for automation systems;
  • Object detection, localization, and tracking;
  • Situation awareness;
  • Vehicle control;
  • Autonomous vehicles;
  • Connected vehicles;
  • Self-driving cars;
  • Generative adversarial networks (GANs);
  • Collective intelligence;
  • Multiagent systems;
  • Platooning, flocking, and self-organization;
  • Applications: unmanned aerial vehicles (UAVs), unmanned ground vehicles (UGVs), unmanned underwater vehicles (UUVs), and unmanned surface vehicles (USVs), self-driving cars, delivery robots, search and rescue, reconnaissance, surveillance, swarm robotics, etc.

Prof. Dr. Ahmad Taher Azar
Prof. Dr. Anis Koubaa
Prof. Dr. Alaa Khamis
Prof. Dr. Ibrahim A. Hameed
Dr. Gabriella Casalino
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (7 papers)


Research


20 pages, 4289 KiB  
Article
Pothole Detection Using Image Enhancement GAN and Object Detection Network
by Habeeb Salaudeen and Erbuğ Çelebi
Electronics 2022, 11(12), 1882; https://doi.org/10.3390/electronics11121882 - 15 Jun 2022
Cited by 11 | Viewed by 4119
Abstract
Many datasets used to train artificial intelligence systems to recognize potholes, such as the challenging sequences for autonomous driving (CCSAD) and the Pacific Northwest road (PNW) datasets, do not produce satisfactory results. This is because these datasets present more complex but realistic pothole detection scenarios than the popularly used datasets, which achieve better results but do not effectively represent realistic pothole detection tasks. In remote sensing, super-resolution generative adversarial networks (GANs), such as the enhanced super-resolution generative adversarial network (ESRGAN), have been employed to mitigate the issues of small-object detection and have shown remarkable performance in detecting small objects from low-quality images. Inspired by this success in remote sensing, we apply similar techniques with an ESRGAN super-resolution network to improve the image quality of road surfaces, and we use different object detection networks in the same pipeline to detect instances of potholes in the images. The architecture we propose consists of two main components: ESRGAN and a detection network. For the detection network, we employ both you only look once (YOLOv5) and EfficientDet networks. Comprehensive experiments on different pothole detection datasets show better performance for our method compared to similar state-of-the-art methods for pothole detection. Full article
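The two-stage pipeline described in this abstract can be sketched as follows, assuming a pretrained ESRGAN generator and the public YOLOv5 hub model; the checkpoint file esrgan_generator.pt and the sample image path are hypothetical placeholders, not the authors' artifacts.

```python
# Sketch of a super-resolve-then-detect pipeline: enhance the road image with an
# ESRGAN generator, then run an off-the-shelf detector on the enhanced image.
import torch
from PIL import Image
import torchvision.transforms.functional as TF

device = "cuda" if torch.cuda.is_available() else "cpu"

# Hypothetical TorchScript export of an ESRGAN generator (not the authors' weights).
esrgan = torch.jit.load("esrgan_generator.pt", map_location=device).eval()

# Public YOLOv5 small model via torch.hub; EfficientDet could be swapped in here.
detector = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)

def detect_potholes(image_path: str):
    """Super-resolve a road image, then return pothole candidate boxes."""
    lr = TF.to_tensor(Image.open(image_path).convert("RGB")).unsqueeze(0).to(device)
    with torch.no_grad():
        sr = esrgan(lr).clamp(0, 1)            # enhanced, higher-resolution image
    sr_img = TF.to_pil_image(sr.squeeze(0).cpu())
    results = detector(sr_img)                 # YOLOv5 inference on the SR image
    return results.pandas().xyxy[0]            # boxes, scores, class labels

print(detect_potholes("road_sample.jpg"))      # hypothetical sample image
```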

11 pages, 1190 KiB  
Article
Monocular Depth Estimation from a Single Infrared Image
by Daechan Han and Yukyung Choi
Electronics 2022, 11(11), 1729; https://doi.org/10.3390/electronics11111729 - 30 May 2022
Cited by 1 | Viewed by 2728
Abstract
Thermal infrared imaging is attracting much attention due to its strength against illuminance variation. However, because of the spectral difference between thermal infrared images and RGB images, the existing research on self-supervised monocular depth estimation has performance limitations. Therefore, in this study, we propose a novel Self-Guided Framework using a Pseudolabel predicted from RGB images. Our proposed framework, which solves the problem of appearance matching loss in the existing framework, transfers the high accuracy of Pseudolabel to the thermal depth estimation network by comparing low- and high-level pixels. Furthermore, we propose Patch-NetVLAD Loss, which strengthens local detail and global context information in the depth map from thermal infrared imaging by comparing locally global patch-level descriptors. Finally, we introduce an Image Matching Loss to estimate a more accurate depth map in a thermal depth network by enhancing the performance of the Pseudolabel. We demonstrate that the proposed framework shows significant performance improvement even when applied to various depth networks in the KAIST Multispectral Dataset. Full article
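As an illustration of the pseudo-label idea in this abstract, the following is a minimal sketch of a multi-scale L1 pseudo-label loss in which a depth map predicted from the RGB image supervises the thermal depth network; the scale weights are assumptions, and the paper's Patch-NetVLAD and Image Matching losses are not reproduced here.

```python
# Minimal pseudo-label supervision sketch: compare thermal depth predictions at
# several scales against a depth map predicted from the paired RGB image.
import torch
import torch.nn.functional as F

def pseudo_label_loss(thermal_depths, rgb_pseudo_depth, scale_weights=(1.0, 0.5, 0.25)):
    """L1 loss between multi-scale thermal depth predictions and the RGB pseudo-label."""
    loss = 0.0
    for pred, w in zip(thermal_depths, scale_weights):
        # Resize the pseudo-label to the prediction's resolution before comparison.
        target = F.interpolate(rgb_pseudo_depth, size=pred.shape[-2:],
                               mode="bilinear", align_corners=False)
        loss = loss + w * F.l1_loss(pred, target)
    return loss
```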

12 pages, 2524 KiB  
Article
Decision Making for Self-Driving Vehicles in Unexpected Environments Using Efficient Reinforcement Learning Methods
by Min-Seong Kim, Gyuho Eoh and Tae-Hyoung Park
Electronics 2022, 11(11), 1685; https://doi.org/10.3390/electronics11111685 - 25 May 2022
Cited by 1 | Viewed by 1892
Abstract
Deep reinforcement learning (DRL) enables autonomous vehicles to perform complex decision making using neural networks. However, previous DRL networks only output decisions, so there is no way to determine whether a decision is proper. Reinforcement learning agents may continue to produce wrong decisions in unexpected environments not encountered during the learning process. In particular, one wrong decision can lead to an accident in autonomous driving. Therefore, it is necessary to indicate whether an action is a reasonable decision. As one such method, uncertainty can inform whether the agent’s decision is appropriate for practical applications where safety must be guaranteed. Therefore, this paper provides uncertainty in the decision by proposing DeepSet-Q with Gaussian mixture (DwGM-Q), which combines the existing DeepSet-Q and a mixture density network (MDN). Calculating uncertainty with the Gaussian mixture model (GMM) produced by the MDN is faster than the existing ensemble method. Moreover, we verified how the agent responds to unlearned situations using the Simulation of Urban MObility (SUMO) simulator and compared the uncertainty of decisions between learned and untrained situations. Full article
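The uncertainty signal described here can be illustrated with a minimal sketch that reads mixture weights, means, and variances from an MDN head and applies the law of total variance; the tensor shapes and this particular decomposition are assumptions, not the paper's exact formulation.

```python
# Minimal sketch: per-action Q-value mean and variance from MDN mixture parameters.
import torch

def gmm_uncertainty(pi, mu, sigma):
    """pi, mu, sigma: (batch, actions, components) mixture parameters."""
    mean = (pi * mu).sum(dim=-1)                               # E[Q] per action
    second_moment = (pi * (sigma ** 2 + mu ** 2)).sum(dim=-1)  # E[Q^2] per action
    total_var = second_moment - mean ** 2                      # law of total variance
    return mean, total_var

# Usage: pick the action with the best mean Q, but flag it if its variance is high.
pi = torch.softmax(torch.randn(1, 3, 5), dim=-1)
mu, sigma = torch.randn(1, 3, 5), torch.rand(1, 3, 5) + 0.1
q_mean, q_var = gmm_uncertainty(pi, mu, sigma)
action = q_mean.argmax(dim=-1)
print(action, q_var[0, action])
```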

15 pages, 9085 KiB  
Article
MarsExplorer: Exploration of Unknown Terrains via Deep Reinforcement Learning and Procedurally Generated Environments
by Dimitrios I. Koutras, Athanasios C. Kapoutsis, Angelos A. Amanatiadis and Elias B. Kosmatopoulos
Electronics 2021, 10(22), 2751; https://doi.org/10.3390/electronics10222751 - 11 Nov 2021
Cited by 6 | Viewed by 2724
Abstract
This paper is an initial endeavor to bridge the gap between powerful Deep Reinforcement Learning methodologies and the problem of exploration/coverage of unknown terrains. Within this scope, MarsExplorer, an openai-gym compatible environment tailored to exploration/coverage of unknown areas, is presented. MarsExplorer translates the original robotics problem into a Reinforcement Learning setup that various off-the-shelf algorithms can tackle. Any learned policy can be straightforwardly applied to a robotic platform without an elaborate simulation model of the robot’s dynamics or an additional learning/adaptation phase. One of its core features is the controllable multi-dimensional procedural generation of terrains, which is the key for producing policies with strong generalization capabilities. Four different state-of-the-art RL algorithms (A3C, PPO, Rainbow, and SAC) are trained on the MarsExplorer environment, and a proper evaluation of their results compared to the average human-level performance is reported. In the follow-up experimental analysis, the effect of the multi-dimensional difficulty setting on the learning capabilities of the best-performing algorithm (PPO) is analyzed. A milestone result is the generation of an exploration policy that follows the Hilbert curve without providing this information to the environment or directly or indirectly rewarding Hilbert-curve-like trajectories. The experimental analysis is concluded by evaluating the PPO-learned policy side by side with frontier-based exploration strategies. A study of the performance curves revealed that the PPO-based policy was capable of performing adaptive-to-the-unknown-terrain sweeping without leaving expensive-to-revisit areas uncovered, underlining the capability of RL-based methodologies to tackle exploration tasks efficiently. Full article
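A minimal sketch of the reported workflow follows, assuming an environment id of "MarsExplorer-v0" and Stable-Baselines3 PPO purely for illustration; the paper trains A3C, PPO, Rainbow, and SAC, and the actual environment registration and training stack may differ.

```python
# Treat exploration/coverage as a standard gym environment and train an
# off-the-shelf RL algorithm on it, then roll out the learned policy.
import gym
from stable_baselines3 import PPO

env = gym.make("MarsExplorer-v0")        # hypothetical id; procedurally generated terrain
model = PPO("MlpPolicy", env, verbose=1)
model.learn(total_timesteps=1_000_000)

# Roll out the learned exploration policy on a fresh, unseen terrain.
obs = env.reset()
done = False
while not done:
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
```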

29 pages, 46424 KiB  
Article
Development of Air Conditioner Robot Prototype That Follows Humans in Outdoor Applications
by Mohamed Zied Chaari, Mohamed Abdelfatah, Christopher Loreno and Rashid Al-Rahimi
Electronics 2021, 10(14), 1700; https://doi.org/10.3390/electronics10141700 - 15 Jul 2021
Cited by 4 | Viewed by 4061
Abstract
According to Robert McSweeney, in light of a new study: “Conditions in the GCC could become so hot and humid in the coming years that staying outside for more than six hours will become difficult”. He is a climate analyst at CARBON BRIEF, a nonprofit temperature and climate analysis group. He also states that changes there can give us an idea of what the rest of the world can expect if we do not reduce the emissions from homes and factories. Because of the high temperatures in GCC countries, the effect of heat stress is very high, which discourages shoppers and pedestrians from visiting open-air markets due to the physical exertion and high risks faced by people and workers. Heat stress peaks in most Arab Gulf countries from 11:00 a.m. to 4:00 p.m. during the summer season and is increasingly an obstacle to economic efficiency in these countries. This work designs and develops a robot that tracks shoppers and provides a cool stream of air directly around them during shopping in open areas to reduce the effect of heat stress. The robot enables us to cool the air around customers in the market to increase comfort. In this project, a robot was designed and manufactured to track a specific person and cool the air around them through a cool stream of air generated by the air conditioner installed inside the robot. We used a Raspberry Pi camera sensor to detect the target person and a single-board computer (Raspberry Pi 3) to accomplish this design and the prototype. The Raspberry Pi controls the air-conditioning robot to follow the movement of the target person. We used image processing to detect the target shopper and a control system to guide the robot. In the meantime, the robot must also bypass any potential obstacles that could prevent its movement and cause a collision. We made a highly efficient design that synchronizes the software algorithm with the mechanical platform of the robot. This work is, in essence, the combination of a cool stream of air and a robot that follows a human. Full article
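The person-following behaviour can be sketched as a simple detect-and-steer loop; the OpenCV HOG person detector, the gains, and the box-width distance proxy below are illustrative assumptions and not the prototype's actual control software, which runs on a Raspberry Pi 3 with a Pi camera.

```python
# Minimal detect-and-steer sketch: keep the detected person centred in the frame
# and at a target apparent size, which stands in for a target following distance.
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

def follow_step(frame, target_width_frac=0.35, k_turn=1.5, k_speed=1.0):
    """Return (forward_speed, turn_rate) commands from one camera frame."""
    boxes, _ = hog.detectMultiScale(frame, winStride=(8, 8))
    if len(boxes) == 0:
        return 0.0, 0.0                                      # no person found: stop
    x, y, w, h = max(boxes, key=lambda b: b[2] * b[3])       # largest detection
    frame_w = frame.shape[1]
    err_x = ((x + w / 2) - frame_w / 2) / (frame_w / 2)      # -1..1 horizontal error
    err_d = target_width_frac - w / frame_w                  # >0 means too far away
    return k_speed * err_d, k_turn * err_x                   # feed to motor driver
```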

31 pages, 8950 KiB  
Article
Vehicle Detection from Aerial Images Using Deep Learning: A Comparative Study
by Adel Ammar, Anis Koubaa, Mohanned Ahmed, Abdulrahman Saad and Bilel Benjdira
Electronics 2021, 10(7), 820; https://doi.org/10.3390/electronics10070820 - 30 Mar 2021
Cited by 49 | Viewed by 6721
Abstract
This paper addresses the problem of car detection from aerial images using Convolutional Neural Networks (CNNs). This problem presents additional challenges as compared to car (or any object) detection from ground images because the features of vehicles from aerial images are more difficult to discern. To investigate this issue, we assess the performance of three state-of-the-art CNN algorithms, namely Faster R-CNN, which is the most popular region-based algorithm, as well as YOLOv3 and YOLOv4, which are known to be the fastest detection algorithms. We analyze two datasets with different characteristics to check the impact of various factors, such as the UAV’s (unmanned aerial vehicle) altitude, camera resolution, and object size. A total of 52 training experiments were conducted to account for the effect of different hyperparameter values. The objective of this work is to conduct the most robust and exhaustive comparison between these three cutting-edge algorithms on the specific domain of aerial images. By using a variety of metrics, we show that the difference between YOLOv4 and YOLOv3 on the two datasets is statistically insignificant in terms of Average Precision (AP) (contrary to what was obtained on the COCO dataset). However, both of them yield markedly better performance than Faster R-CNN in most configurations. The only exception is that both of them exhibit a lower recall when object sizes and scales in the testing dataset differ largely from those in the training dataset. Full article
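The Average Precision comparison reported here rests on intersection-over-union (IoU) matching between predicted and ground-truth boxes; a minimal sketch of that computation, assuming (x1, y1, x2, y2) box coordinates, is shown below.

```python
# Minimal IoU sketch: the building block of the AP metric used in the comparison.
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes in (x1, y1, x2, y2) form."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # ~0.143
```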

Review


30 pages, 1401 KiB  
Review
Drone Deep Reinforcement Learning: A Review
by Ahmad Taher Azar, Anis Koubaa, Nada Ali Mohamed, Habiba A. Ibrahim, Zahra Fathy Ibrahim, Muhammad Kazim, Adel Ammar, Bilel Benjdira, Alaa M. Khamis, Ibrahim A. Hameed and Gabriella Casalino
Electronics 2021, 10(9), 999; https://doi.org/10.3390/electronics10090999 - 22 Apr 2021
Cited by 129 | Viewed by 19219
Abstract
Unmanned Aerial Vehicles (UAVs) are increasingly being used in many challenging and diversified applications. These applications belong to the civilian and the military fields. To name a few: infrastructure inspection, traffic patrolling, remote sensing, mapping, surveillance, rescuing humans and animals, environment monitoring, and Intelligence, Surveillance, Target Acquisition, and Reconnaissance (ISTAR) operations. However, the use of UAVs in these applications needs a substantial level of autonomy. In other words, UAVs should have the ability to accomplish planned missions in unexpected situations without requiring human intervention. To ensure this level of autonomy, many artificial intelligence algorithms have been designed. These algorithms target the guidance, navigation, and control (GNC) of UAVs. In this paper, we describe the state of the art of one subset of these algorithms: deep reinforcement learning (DRL) techniques. We provide a detailed description of them and deduce the current limitations in this area. We note that most of these DRL methods were designed to ensure stable and smooth UAV navigation by training in computer-simulated environments. We conclude that further research efforts are needed to address the challenges that restrain their deployment in real-life scenarios. Full article
