
Environment Perception for Industrial Robotics, Connected and Autonomous Vehicles and Beyond

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Intelligent Sensors".

Deadline for manuscript submissions: 31 July 2024 | Viewed by 10512

Special Issue Information

Dear Colleagues,

Environment perception using multiple sensor technologies is currently a key enabler in various applications, ranging from industrial robotics to mobility aids for visually impaired and blind people, as well as connected and autonomous vehicles (CAVs). Several environmental perception technologies already exist, including point cloud analysis and occupancy grid calculation. They differ not only in the sensor modalities used to build digital models of the environment but also in the computational techniques used to extract information from the sensors.

Recently, machine learning (ML) has been used for classification and training in many aspects of CAVs. Both supervised and unsupervised ML algorithms are being used to solve different problems in CAVs. Certainly, a large research gap remains, and there are many opportunities to develop and improve ML-based techniques to address the existing challenges in CAVs. Some of the challenges facing ML-based techniques for CAVs are efficient computation, neural architecture design, reward function design, adaptability, generalization, verification and validation, and safety.

This Special Issue encourages authors from both academia and industry to submit new research results regarding technological innovations and novel ideas for environmental perception, considering both hardware and software, with a special interest in artificial intelligence and advanced sensors and sensing systems. This Special Issue welcomes submissions regarding, but not limited to, the following topics:

  • Autonomous vehicle contexts;
  • Efficient ML (deep learning, reinforcement learning, etc.)-based CAV computation and architecture;
  • Environmental perception;
  • Object detection;
  • Occupancy grids;
  • Point clouds;
  • Bird's-eye view;
  • Deep learning;
  • Datasets for environment perception;
  • Embedded environmental perception;
  • Advanced sensors for environmental perception;
  • Perception systems and devices.

Dr. Suzanne Lesecq
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • machine learning
  • environmental perception
  • perception systems and devices

Published Papers (6 papers)


Research

20 pages, 5928 KiB  
Article
A Versatile Approach to Polygonal Object Avoidance in Indoor Environments with Hardware Schemes Using an FPGA-Based Multi-Robot
by Mudasar Basha, Munuswamy Siva Kumar, Mangali Chinna Chinnaiah, Siew-Kei Lam, Thambipillai Srikanthan, Narambhatla Janardhan, Dodde Hari Krishna and Sanjay Dubey
Sensors 2023, 23(23), 9480; https://doi.org/10.3390/s23239480 - 28 Nov 2023
Cited by 1 | Viewed by 613
Abstract
Service robots perform versatile functions in indoor environments. This study focuses on obstacle avoidance using flock-type indoor multi-robots. Each robot was developed with rendezvous behavior and distributed intelligence to perform obstacle avoidance. The hardware scheme-based obstacle-avoidance algorithm follows a bio-inspired flock approach organized in three stages. First, the algorithm estimates polygonal obstacles and their orientations. The second stage performs avoidance at the different obstacle orientations using a heuristic-based Bug2 algorithm. The final stage performs a flock rendezvous with distributed approaches and linear movements using a behavioral control mechanism. VLSI architectures were developed for the multi-robot obstacle-avoidance algorithms and coded in Verilog HDL. The novel design of this article integrates the multi-robot obstacle-avoidance approaches with behavioral control and a hardware scheme-based partial reconfiguration (PR) flow. The experiments were validated on FPGA-based multi-robots.
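
For readers unfamiliar with the Bug2 strategy the abstract builds on, a minimal Python sketch of the idea follows. This is illustrative only; the paper's actual implementation is a hardware scheme coded in Verilog HDL, and the callbacks `blocked`, `follow_boundary`, and `step_toward` are hypothetical placeholders for robot-specific sensing and motion primitives.

```python
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def on_m_line(p, start, goal, tol=0.05):
    # Perpendicular distance from p to the start->goal line (the "m-line").
    (x0, y0), (x1, y1) = start, goal
    num = abs((y1 - y0) * p[0] - (x1 - x0) * p[1] + x1 * y0 - y1 * x0)
    return num / max(dist(start, goal), 1e-9) < tol

def bug2(start, goal, blocked, follow_boundary, step_toward, max_steps=10000):
    """Minimal Bug2: head straight for the goal along the m-line; when an
    obstacle blocks the way, follow its boundary until the m-line is
    re-encountered at a point closer to the goal, then resume."""
    pos, mode, hit = start, "to_goal", None
    for _ in range(max_steps):
        if dist(pos, goal) < 0.05:
            return True                        # goal reached
        if mode == "to_goal":
            if blocked(pos):
                mode, hit = "boundary", pos    # remember the hit point
            else:
                pos = step_toward(pos, goal)
        else:
            pos = follow_boundary(pos)         # circumnavigate the obstacle
            if on_m_line(pos, start, goal) and dist(pos, goal) < dist(hit, goal):
                mode = "to_goal"               # valid leave point found
    return False                               # step budget exhausted
```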

20 pages, 1958 KiB  
Article
Grand Theft Auto-Based Cycling Simulator for Cognitive Enhancement Technologies in Dangerous Traffic Situations
by Julius Schöning, Jan Kettler, Milena I. Jäger and Artur Gunia
Sensors 2023, 23(7), 3672; https://doi.org/10.3390/s23073672 - 31 Mar 2023
Viewed by 1589
Abstract
While developing traffic-based cognitive enhancement technologies (CETs), such as bike accident prevention systems, it can be challenging to test and evaluate them properly. After all, the real-world scenario could endanger the subjects' health and safety. Therefore, a simulator is needed, preferably one that is realistic yet low cost. This paper introduces a way to use the video game Grand Theft Auto V (GTA V) and its sophisticated traffic system as a base to create such a simulator, allowing for the safe and realistic testing of dangerous traffic situations involving cyclists, cars, and trucks. The open world of GTA V, which can be explored on foot and via various vehicles, serves as an immersive stand-in for the real world. Custom modification scripts of the game give the researchers control over the experiment scenario and the output data to be evaluated. An off-the-shelf bicycle equipped with three sensors serves as a realistic input device for the subject's movement direction and speed. The simulator was used to test two early-stage CET concepts enabling cyclists to sense dangerous traffic situations, such as trucks approaching from behind the cyclist. Thus, this paper also presents the user evaluation of the cycling simulator and the CET used by the subjects to sense dangerous traffic situations. With the knowledge of the first iteration of the user-centered design (UCD) process, this paper concludes by naming improvements for the cycling simulator and discussing further research directions for CET that enable users to sense dangerous situations better.
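
As a rough illustration of how an instrumented bicycle can drive such a simulator, the sketch below converts wheel-revolution pulses into a speed estimate that a game modification script could consume. This is an assumption-laden outline, not the authors' setup; the class, constant, and method names are hypothetical.

```python
import time

WHEEL_CIRCUMFERENCE_M = 2.1  # roughly a 700c road wheel; measure the real bike

class WheelSpeedSensor:
    """Turns wheel-revolution pulses (e.g., from a reed or Hall sensor on the
    fork) into a ground-speed estimate for the simulator's input layer."""

    def __init__(self):
        self.last_pulse = None
        self.speed_mps = 0.0

    def on_pulse(self, t=None):
        # Called once per wheel revolution by the sensor interrupt.
        t = time.monotonic() if t is None else t
        if self.last_pulse is not None:
            dt = t - self.last_pulse
            if dt > 1e-3:                      # debounce spurious double pulses
                self.speed_mps = WHEEL_CIRCUMFERENCE_M / dt
        self.last_pulse = t
```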

18 pages, 4232 KiB  
Article
Vehicle Detection on Occupancy Grid Maps: Comparison of Five Detectors Regarding Real-Time Performance
by Nils Defauw, Marielle Malfante, Olivier Antoni, Tiana Rakotovao and Suzanne Lesecq
Sensors 2023, 23(3), 1613; https://doi.org/10.3390/s23031613 - 02 Feb 2023
Cited by 1 | Viewed by 2437
Abstract
Occupancy grid maps are widely used as an environment model that allows the fusion of different range sensor technologies in real-time for robotics applications. In an autonomous vehicle setting, occupancy grid maps are especially useful for their ability to accurately represent the position of surrounding obstacles while being robust to discrepancies between the fused sensors through the use of occupancy probabilities representing uncertainty. In this article, we evaluate the applicability of real-time vehicle detection on occupancy grid maps. State-of-the-art detectors in sensor-specific domains, such as YOLOv2/YOLOv3 for images or PIXOR for LiDAR point clouds, are modified to use occupancy grid maps as input and produce oriented bounding boxes enclosing vehicles as output. The five proposed detectors are trained on the Waymo Open automotive dataset and compared regarding the quality of their detections, measured in terms of Average Precision (AP), and their real-time capabilities, measured in Frames Per Second (FPS). Of the five detectors presented, one inspired by the PIXOR backbone reaches the highest AP0.7 of 0.82 and runs at 20 FPS. Comparatively, two other detectors inspired by YOLOv2 achieve an almost as good AP0.7 of 0.79 while running at 91 FPS. These results validate the feasibility of real-time vehicle detection on occupancy grids.
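
The fusion property the abstract relies on is easiest to see in log-odds form, where independent sensor measurements combine by simple addition. The following minimal sketch shows this standard textbook construction; it is not the grid pipeline used in the paper, and the inverse-sensor-model probability `p_meas` is a placeholder.

```python
import numpy as np

def log_odds(p):
    return np.log(p / (1.0 - p))

class OccupancyGrid:
    """Log-odds occupancy grid: each cell stores log(p/(1-p)), so independent
    range measurements from different sensors fuse by addition, and the
    recovered probabilities carry the uncertainty the abstract refers to."""

    def __init__(self, shape, p_prior=0.5):
        self.grid = np.full(shape, log_odds(p_prior))

    def update(self, cells, p_meas):
        # cells: index arrays of the grid cells covered by one measurement;
        # p_meas: occupancy probability from that sensor's inverse model.
        self.grid[cells] += log_odds(p_meas)

    def probabilities(self):
        # Invert the log-odds transform: p = 1 / (1 + exp(-l)).
        return 1.0 / (1.0 + np.exp(-self.grid))
```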

25 pages, 9981 KiB  
Article
ConcentrateNet: Multi-Scale Object Detection Model for Advanced Driving Assistance System Using Real-Time Distant Region Locating Technique
by Bo-Xun Wu, Vinay M. Shivanna, Hsiang-Hsuan Hung and Jiun-In Guo
Sensors 2022, 22(19), 7371; https://doi.org/10.3390/s22197371 - 28 Sep 2022
Viewed by 1521
Abstract
This paper proposes a deep learning-based object detection method to locate a distant region in an image in real-time. It concentrates on distant objects from a vehicular front camcorder perspective, trying to solve one of the common problems in Advanced Driver Assistance Systems (ADAS) applications: detecting smaller, faraway objects with the same confidence as bigger, closer objects. This paper presents an efficient multi-scale object detection network, termed ConcentrateNet, that detects a vanishing point and concentrates on the near-distant region. Initially, the object detection model produces a first round of detection results at a larger receptive-field scale and predicts a potential vanishing point location, that is, the farthest location in the frame. Then, the image is cropped near the vanishing point location and processed with the object detection model for a second inference to obtain distant object detection results. Finally, the two inference results are merged with a specific Non-Maximum Suppression (NMS) method. The proposed architecture can be employed in most object detection models; it is implemented in several state-of-the-art object detection models to check feasibility. Compared with the original models using a higher-resolution input size, ConcentrateNet models use a lower-resolution input size with less model complexity, yet achieve significant precision and recall improvements. Moreover, the proposed ConcentrateNet model is successfully ported onto a low-powered embedded system, NVIDIA Jetson AGX Xavier, making it suitable for real-time autonomous machines.
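
The merging step can be pictured with a short sketch: second-pass detections from the crop around the vanishing point are shifted back into full-frame coordinates and de-duplicated against the first pass with a plain greedy NMS. This is a simplified stand-in for the paper's specific NMS method, assuming axis-aligned boxes and hypothetical data structures.

```python
def iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def merge_detections(full_dets, crop_dets, crop_xy, iou_thr=0.5):
    """Merge first-pass (full frame) and second-pass (vanishing-point crop)
    detections. Each detection is (box, score); crop_xy is the crop's
    top-left corner in full-frame coordinates."""
    ox, oy = crop_xy
    shifted = [((b[0] + ox, b[1] + oy, b[2] + ox, b[3] + oy), s)
               for b, s in crop_dets]            # back to frame coordinates
    dets = sorted(full_dets + shifted, key=lambda d: d[1], reverse=True)
    kept = []
    for box, score in dets:                      # greedy NMS over both passes
        if all(iou(box, k[0]) < iou_thr for k in kept):
            kept.append((box, score))
    return kept
```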

19 pages, 4663 KiB  
Article
Ultra-Wideband Communication and Sensor Fusion Platform for the Purpose of Multi-Perspective Localization
by Chunxu Li, Henry Bulman, Toby Whitley and Shaoxiang Li
Sensors 2022, 22(18), 6880; https://doi.org/10.3390/s22186880 - 12 Sep 2022
Viewed by 1825
Abstract
Localization is a keystone for a robot to work within its environment and with other robots. Many methods have been used to solve this problem. This paper deals with the use of beacon-based localization to answer the research question: can ultra-wideband technology be used to effectively localize a robot with sensor fusion? It develops a sensor fusion platform that uses ultra-wideband communication as a localization method, allowing an environment to be perceived and inspected in three dimensions from multiple perspectives simultaneously. A series of contributions is presented, supported by an in-depth literature review of topics in this field. The proposed method was designed, built, and tested successfully in two different environments, exceeding its required tolerances. The results of the testing and the ideas formulated throughout the paper are discussed, and future work is outlined on how to build upon this work in subsequent academic papers and projects.
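
The core geometric step in beacon-based localization can be sketched as least-squares multilateration: squaring the range equations and subtracting the first from the others yields a linear system in the unknown position. The snippet below is a generic illustration under idealized (noise-free, well-conditioned) assumptions, not the paper's fusion platform; at least four non-coplanar anchors are needed for a 3D fix.

```python
import numpy as np

def trilaterate(anchors, ranges):
    """Least-squares position from UWB anchor ranges.

    anchors: (n, 3) float array of beacon positions, n >= 4;
    ranges:  (n,) measured distances to each beacon.
    Subtracting the first equation linearizes ||p - a_i||^2 = d_i^2
    into A p = b, solved with numpy's lstsq."""
    a0, d0 = anchors[0], ranges[0]
    A = 2.0 * (anchors[1:] - a0)
    b = (d0**2 - ranges[1:]**2
         + np.sum(anchors[1:]**2, axis=1) - np.sum(a0**2))
    p, *_ = np.linalg.lstsq(A, b, rcond=None)
    return p
```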

26 pages, 18395 KiB  
Article
Reconstructing Superquadrics from Intensity and Color Images
by Darian Tomašević, Peter Peer, Franc Solina, Aleš Jaklič and Vitomir Štruc
Sensors 2022, 22(14), 5332; https://doi.org/10.3390/s22145332 - 16 Jul 2022
Cited by 1 | Viewed by 1730
Abstract
The task of reconstructing 3D scenes based on visual data represents a longstanding problem in computer vision. Common reconstruction approaches rely on the use of multiple volumetric primitives to describe complex objects. Superquadrics (a class of volumetric primitives) have shown great promise due to their ability to describe various shapes with only a few parameters. Recent research has shown that deep learning methods can be used to accurately reconstruct random superquadrics from both 3D point cloud data and simple depth images. In this paper, we extend these reconstruction methods to intensity and color images. Specifically, we use a dedicated convolutional neural network (CNN) model to reconstruct a single superquadric from a given input image. We analyze the results in a qualitative and quantitative manner by visualizing the reconstructed superquadrics and observing the error and accuracy distributions of the predictions. We show that a CNN model designed around a simple ResNet backbone can accurately reconstruct superquadrics from images containing one object, but only if one of the spatial parameters is fixed or can be determined from other image characteristics, e.g., shadows. Furthermore, we experiment with images of increasing complexity, for example, by adding textures, and observe that the results degrade only slightly. In addition, we show that our model outperforms the current state-of-the-art method on the studied task. Our final result is a highly accurate superquadric reconstruction model that can also reconstruct superquadrics from real images of simple objects, without additional training.
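
The primitive being regressed here is defined by the classical superquadric inside-outside function, which a short function can evaluate. The sketch below uses the canonical (untransformed) frame and is independent of the paper's CNN; it only shows what the few recovered parameters describe.

```python
import numpy as np

def superquadric_F(points, a1, a2, a3, e1, e2):
    """Inside-outside function of a superquadric in its canonical frame:
    F < 1 inside, F = 1 on the surface, F > 1 outside.

    a1..a3 are the axis sizes; e1, e2 the shape exponents;
    points: (..., 3) array of query coordinates."""
    x = np.abs(points[..., 0]) / a1
    y = np.abs(points[..., 1]) / a2
    z = np.abs(points[..., 2]) / a3
    xy = x ** (2.0 / e2) + y ** (2.0 / e2)
    return xy ** (e2 / e1) + z ** (2.0 / e1)
```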
