
Applications of Multisensory Fusion for Automation and Control of Robotic Systems

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Sensors and Robotics".

Deadline for manuscript submissions: closed (15 March 2023) | Viewed by 39628

Special Issue Editors


Guest Editor
Versailles Engineering Systems Laboratory, University of Versailles Saint-Quentin, 78000 Versailles, France
Interests: multi-sensor fusion; control systems; robotics and mechatronic systems; autonomous vehicles; UAVs; aerial manipulation; humanoids

Guest Editor
Versailles Engineering Systems Laboratory, University of Versailles Saint-Quentin, 78000 Versailles, France
Interests: multi-sensor fusion; systems modeling and analysis; control systems; robotics and mechatronic systems; autonomous vehicles; UAVs; aerial manipulation

Special Issue Information

Dear Colleagues,

For decades, the tasks assigned to robots have constantly evolved, giving rise to today's robots, which participate more and more in the daily lives of humans. The robots of yesteryear were heavy, static manipulators confined to factories, performing repetitive tasks in familiar and static environments. Today's robots are increasingly mobile, independent, intelligent, and autonomous, providing services that, not so long ago, were pure science fiction. The continuous improvement of their autonomy and efficiency has enabled their deployment in many fields, such as transportation, medicine, construction, agriculture, and human services. Advances in sensor technologies and their integration into intelligent devices and systems have accelerated the increase in the autonomy and efficiency of robotic systems, enabling them to perform ever more complex tasks.

In this Special Issue, we invite original review and research papers addressing multi-sensor integration and fusion for the control of robotic systems such as UAVs, humanoid robots, collaborative robots, robotic manipulators, aerial manipulators, automated vehicles, etc.

Topics of interest include, but are not limited to, the following:

  • Advanced robotics
  • Advances in robotic applications
  • Advances in perception and control
  • Multi-sensor system design
  • Integration of sensors in control systems
  • Position and localization systems
  • Sensor techniques in robotic applications
  • Sensor-based robot navigation, task, and motion planning
  • Sensor integration and fusion for positioning and navigation
  • Flight control and surveillance systems
  • Guidance control systems
  • Intelligent transportation systems
  • Locomotion and manipulation in robot systems

Prof. Dr. Abdelaziz Benallegue
Dr. A. El Hadri
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, authors can proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers are published continuously in the journal (as soon as accepted) and listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Robot sensing and data fusion
  • Perception systems
  • Robust control systems
  • Nonlinear control systems
  • Smart sensors
  • Fusion techniques
  • Multisensory integration and fusion
  • Vision-based sensing

Published Papers (11 papers)


Research


22 pages, 1037 KiB  
Article
Guidance, Navigation and Control for Autonomous Quadrotor Flight in an Agricultural Field: The Case of Vineyards
by Adel Mokrane, Abdelaziz Benallegue, Amal Choukchou-Braham, Abdelhafid El Hadri and Brahim Cherki
Sensors 2022, 22(22), 8865; https://doi.org/10.3390/s22228865 - 16 Nov 2022
Cited by 1 | Viewed by 1686
Abstract
In this paper, we present a complete and efficient guidance, navigation, and control solution enabling a quadrotor platform to accomplish 3D coverage flight missions over mapped vineyard terrains. First, an occupancy grid map of the terrain is used to generate a safe guiding coverage path with an Iterative Structured Orientation planning algorithm. Second, way-points are extracted from the generated path and augmented with velocity and acceleration constraints on the trajectory. The constrained way-points are fed into a Linear Quadratic Regulator algorithm to generate a globally minimum-snap optimal trajectory that satisfies both the pointing and the corridor constraints. Then, when facing unexpected obstacles, the quadrotor re-plans its path locally in real time using an Improved Artificial Potential Field algorithm. Finally, a geometric trajectory tracking controller is developed on the Special Euclidean group SE(3). The aim of this controller is to track the generated trajectory while pointing towards a predetermined direction using the vector measurements provided by the inertial unit. The performance of the proposed method is demonstrated through several simulation results. In particular, safe guiding paths are achieved, and obstacle-free optimal trajectories that satisfy the way-point position, pointing direction, and corridor constraints are successfully generated with optimized platform snap. Moreover, the implemented geometric controller achieves high trajectory tracking accuracy, with an absolute maximum error on the order of 10⁻³ m.
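
For readers who want a concrete picture of the geometric tracking controller mentioned above, the following is a minimal sketch of the attitude-error terms standard in geometric control on SO(3)/SE(3) (a Lee-style formulation). It is an illustrative assumption, not the authors' implementation; the gain names in the closing comment are hypothetical.

```python
import numpy as np

def vee(S):
    """Map a 3x3 skew-symmetric matrix to its 3-vector."""
    return np.array([S[2, 1], S[0, 2], S[1, 0]])

def attitude_errors(R, R_d, omega, omega_d):
    """Attitude and angular-rate errors used in geometric tracking on SO(3).

    R, R_d         : current / desired rotation matrices (3x3)
    omega, omega_d : current / desired body angular rates (3,)
    """
    e_R = 0.5 * vee(R_d.T @ R - R.T @ R_d)  # attitude error on SO(3)
    e_omega = omega - R.T @ R_d @ omega_d   # rate error in the body frame
    return e_R, e_omega

# A moment command would then take the (hypothetical-gain) form
# M = -k_R * e_R - k_omega * e_omega + feedforward terms.
```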

20 pages, 5486 KiB  
Article
Determining Exception Context in Assembly Operations from Multimodal Data
by Mihael Simonič, Matevž Majcen Hrovat, Sašo Džeroski, Aleš Ude and Bojan Nemec
Sensors 2022, 22(20), 7962; https://doi.org/10.3390/s22207962 - 19 Oct 2022
Cited by 1 | Viewed by 1541
Abstract
Robot assembly tasks can fail due to unpredictable errors and can often only continue after the manual intervention of a human operator. Recently, we proposed an exception strategy learning framework based on statistical learning and context determination, which can successfully resolve such situations. This paper deals with context determination from multimodal data, which is the key component of our framework. We propose a novel approach to generating unified low-dimensional context descriptions based on image and force-torque data. For this purpose, we combine a state-of-the-art neural network model for image segmentation with contact point estimation from force-torque measurements. An ensemble of decision trees is used to combine features from the two modalities. To validate the proposed approach, we collected datasets of deliberately induced insertion failures, both for the classic peg-in-hole insertion task and for an industrially relevant car starter assembly task. We demonstrate that the proposed approach generates reliable low-dimensional descriptors, suitable as the queries required in statistical learning.
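
The modality-fusion step described above (an ensemble of decision trees over image and force-torque features) can be sketched as follows; the feature dimensions, class count, and random-forest choice are assumptions for illustration, not the paper's exact configuration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 200                                     # recorded insertion attempts (placeholder)
img_features = rng.random((n, 16))          # e.g., from the image segmentation network
ft_features = rng.random((n, 6))            # e.g., contact-point estimates from F/T data
labels = rng.integers(0, 4, n)              # failure-context classes (hypothetical)

X = np.hstack([img_features, ft_features])  # early fusion of both modalities
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)
context = clf.predict(X[:1])                # low-dimensional context query
```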

18 pages, 3618 KiB  
Article
Artificial Neural Network Approach to Guarantee the Positioning Accuracy of Moving Robots by Using the Integration of IMU/UWB with Motion Capture System Data Fusion
by Ahmed M. M. Almassri, Natsuki Shirasawa, Amarbold Purev, Kaito Uehara, Wataru Oshiumi, Satoru Mishima and Hiroaki Wagatsuma
Sensors 2022, 22(15), 5737; https://doi.org/10.3390/s22155737 - 31 Jul 2022
Cited by 9 | Viewed by 2697
Abstract
This study presents an effective artificial neural network (ANN) approach that combines measurements from inertial measurement units (IMUs) and time-of-flight (TOF) measurements from an ultra-wideband (UWB) system with OptiTrack Motion Capture System (OptiT-MCS) data to guarantee the positioning accuracy of motion tracking in indoor environments. The proposed fusion approach unifies the advantages of both technologies: high data rates from the MCS and global translational precision from the IMU/UWB localization system. Consequently, it yields accurate position estimates when compared with data from the IMU/UWB system relative to the OptiT-MCS reference system. The IMU/UWB and MCS positioning systems are calibrated during real-time movement over a diverse set of motion recordings using a mobile robot. The proposed neural network (NN) approach experimentally yielded accurate position estimates, improving the mean absolute percentage error (MAPE) by 17.56% and 7.48% in the X and Y coordinates, respectively, with a correlation coefficient R greater than 99%. Moreover, the experimental results show that the proposed NN fusion maintains high accuracy in position estimates while preventing drift errors from growing unboundedly, implying that the proposed approach is more effective than the compared approaches.
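
A hedged sketch of the kind of ANN correction described above: a small regression network maps IMU/UWB position estimates to MCS reference positions, and MAPE is computed per axis. The network size, data, and library choice are assumptions, not the authors' setup.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
xy_uwb = rng.random((1000, 2))                           # IMU/UWB (x, y) estimates
xy_mcs = xy_uwb + 0.05 * rng.standard_normal((1000, 2))  # MCS reference positions

net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
net.fit(xy_uwb, xy_mcs)                                  # learn the systematic offset
xy_corrected = net.predict(xy_uwb)

# Per-axis mean absolute percentage error, as used to report the improvement.
mape = np.mean(np.abs((xy_mcs - xy_corrected) / (np.abs(xy_mcs) + 1e-9)), axis=0) * 100
```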

28 pages, 3740 KiB  
Article
Reliable Control Applications with Wireless Communication Technologies: Application to Robotic Systems
by Isidro Calvo, Eneko Villar, Cristian Napole, Aitor Fernández, Oscar Barambones and José Miguel Gil-García
Sensors 2021, 21(21), 7107; https://doi.org/10.3390/s21217107 - 26 Oct 2021
Cited by 5 | Viewed by 2442
Abstract
The nature of wireless propagation may reduce the QoS of applications, such that some packets can be delayed or lost. For this reason, the design of wireless control applications must be approached holistically to avoid degrading the performance of the control algorithms. This paper aims to improve the reliability of wireless control applications in the event of communication degradation or temporary loss of the wireless links. Two controller levels are used: sophisticated algorithms providing better performance are executed in a central node, whereas local independent controllers, implemented as back-up controllers, are executed next to the process in case of QoS degradation. This work presents a reliable strategy for switching between central and local controllers that prevents plants from becoming uncontrolled. For validation purposes, the presented approach was used to control a planar robot. A fuzzy logic control algorithm was implemented as the main controller on a high-performance computing platform, and a back-up controller was implemented on an edge device. This approach keeps the robot under control in case of communication failure. Although a planar robot was chosen for this work, the presented approach may be extended to other processes. XBee 900 MHz communication technology was selected for the control tasks, leaving the 2.4 GHz band for integration with cloud services. Several experiments are presented to analyze the behavior of the control application under different circumstances. The results prove that our approach allows the use of wireless communications even in critical control applications.
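
The central/local switching strategy can be illustrated with a simple watchdog pattern: if the latest command from the central controller is older than a deadline, the back-up controller next to the process takes over. This is a generic sketch under assumed names and timing, not the paper's exact protocol.

```python
import time

DEADLINE = 0.05  # s; max tolerated age of a central-controller command (assumed)

class SwitchingController:
    """Run the remote (central) command while the link is healthy; otherwise
    fall back to the local back-up controller so the plant never runs open-loop."""

    def __init__(self, local_controller):
        self.local = local_controller
        self.last_remote_cmd = None
        self.last_rx_time = float("-inf")

    def on_remote_command(self, cmd):       # called by the wireless receive path
        self.last_remote_cmd = cmd
        self.last_rx_time = time.monotonic()

    def compute(self, state):
        if time.monotonic() - self.last_rx_time <= DEADLINE:
            return self.last_remote_cmd     # central (e.g., fuzzy) controller active
        return self.local.compute(state)    # QoS degraded: back-up takes over
```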

23 pages, 18302 KiB  
Article
Social Robot Navigation Tasks: Combining Machine Learning Techniques and Social Force Model
by Óscar Gil, Anaís Garrell and Alberto Sanfeliu
Sensors 2021, 21(21), 7087; https://doi.org/10.3390/s21217087 - 26 Oct 2021
Cited by 10 | Viewed by 3440
Abstract
Social robot navigation in public spaces, buildings, or private houses is a difficult problem that is not well solved due to environmental constraints (buildings, static objects, etc.), pedestrians, and other mobile vehicles. Moreover, robots have to move in a human-aware manner; that is, they have to navigate in such a way that people feel safe and comfortable. In this work, we present two navigation tasks, social robot navigation and robot accompaniment, which combine machine learning techniques with the Social Force Model (SFM) to enable human-aware social navigation. In both approaches, the robots use data from different sensors to capture knowledge of the environment as well as information on pedestrian motion. Both navigation tasks build on the SFM, a general framework in which human motion behaviors can be expressed through a set of functions depending on the pedestrians' relative and absolute positions and velocities. Additionally, in both tasks the robot's motion behavior is learned using machine learning: in the first case using supervised deep learning and, in the second, using Reinforcement Learning (RL). The machine learning techniques are combined with the SFM to create navigation models that behave socially when the robot navigates in an environment with pedestrians or accompanies a person. The systems were validated with a large set of simulations and real-life experiments with a new humanoid robot named IVO and with an aerial robot. The experiments show that the combination of SFM and machine learning can solve human-aware robot navigation in complex dynamic environments.
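
As an illustration of the SFM ingredient used in both tasks, the following sketch computes the classic exponential repulsive force that pedestrians exert on the robot (Helbing-style). The parameter values are hypothetical; in the paper, the motion behaviors are learned rather than hand-tuned.

```python
import numpy as np

def sfm_pedestrian_force(robot_pos, ped_positions, A=2.0, B=0.3, radius=0.6):
    """Sum of exponential repulsive forces that pedestrians exert on the robot.

    A (strength), B (range), and radius (combined body radii) are hypothetical
    SFM parameters.
    """
    force = np.zeros(2)
    for p in ped_positions:
        diff = np.asarray(robot_pos) - np.asarray(p)
        d = np.linalg.norm(diff)
        if d > 1e-6:
            force += A * np.exp((radius - d) / B) * (diff / d)
    return force
```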

19 pages, 2645 KiB  
Article
Robust Tightly Coupled Pose Measurement Based on Multi-Sensor Fusion in Mobile Robot System
by Gang Peng, Zezao Lu, Jiaxi Peng, Dingxin He, Xinde Li and Bin Hu
Sensors 2021, 21(16), 5522; https://doi.org/10.3390/s21165522 - 17 Aug 2021
Cited by 4 | Viewed by 2423
Abstract
Currently, simultaneous localization and mapping (SLAM) is one of the main research topics in the robotics field. Visual-inertial SLAM, which fuses a camera with an inertial measurement unit (IMU), can significantly improve robustness and renders scale weakly observable, whereas in monocular visual SLAM scale is unobservable. For ground mobile robots, introducing a wheel speed sensor can solve the weak scale observability problem and improve robustness under abnormal conditions. In this paper, a multi-sensor fusion SLAM algorithm using monocular vision, inertial, and wheel speed measurements is proposed. The sensor measurements are combined in a tightly coupled manner, and a nonlinear optimization method is used to maximize the posterior probability and solve for the optimal state estimate. Loop detection and back-end optimization help reduce, or even eliminate, the cumulative error of the estimated poses, ensuring global consistency of the trajectory and map. The outstanding contributions of this paper are twofold: the wheel odometer pre-integration algorithm, which combines the chassis speed and IMU angular rate, avoids the repeated integration caused by linearization point changes during iterative optimization; and state initialization based on the wheel odometer and IMU enables a quick and reliable computation of the initial state values required by the state estimator in both stationary and moving states. Comparative experiments were conducted in room-scale scenes, building-scale scenes, and visual loss scenarios. The results show that the proposed algorithm is highly accurate (2.2 m of cumulative error after moving 812 m, i.e., 0.28%, with loop closure optimization disabled), robust, and retains effective localization capability even in the event of sensor loss, including visual loss. The accuracy and robustness of the proposed method are superior to those of monocular visual-inertial SLAM and traditional wheel odometry.
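
The wheel odometer pre-integration idea can be sketched as follows: wheel speed and IMU yaw rate are integrated once between keyframes into a relative pose increment, so iterative optimization does not have to re-integrate raw measurements when the linearization point changes. This planar sketch makes simplifying assumptions (2D motion, Euler integration) and is not the paper's exact formulation.

```python
import numpy as np

def preintegrate_wheel_odometry(v_wheel, omega_z, dt):
    """Integrate wheel forward speed and IMU yaw rate once between keyframes.

    Returns the relative pose increment (dx, dy, dtheta) expressed in the
    first keyframe, which remains valid when the keyframe's linearization
    point changes during iterative optimization.
    """
    dx = dy = dtheta = 0.0
    for v, w in zip(v_wheel, omega_z):
        dx += v * np.cos(dtheta) * dt       # planar Euler integration
        dy += v * np.sin(dtheta) * dt
        dtheta += w * dt
    return dx, dy, dtheta
```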

19 pages, 823 KiB  
Article
Optimal Deployment of Charging Stations for Aerial Surveillance by UAVs with the Assistance of Public Transportation Vehicles
by Hailong Huang and Andrey V. Savkin
Sensors 2021, 21(16), 5320; https://doi.org/10.3390/s21165320 - 06 Aug 2021
Cited by 5 | Viewed by 1991
Abstract
To overcome the limitation in flight time and enable unmanned aerial vehicles (UAVs) to survey remote sites of interest, this paper investigates an approach that combines collaboration with public transportation vehicles (PTVs) and the deployment of charging stations. The focus of this paper is on the deployment of the charging stations. In this approach, a UAV first travels with some PTVs and then flies through a sequence of charging stations to reach remote sites. While the travel time with PTVs can be estimated by the Monte Carlo method to accommodate various uncertainties, we propose a new coverage model to compute the time UAVs take to reach the sites. With this model, we formulate the optimal deployment problem with the goal of minimising the average travel time of UAVs from the depot to the sites, which can be regarded as a reflection of the quality of surveillance (QoS): the shorter, the better. We then propose an iterative algorithm to place the charging stations and show that any movement of a charging station under this algorithm decreases the average travel time of UAVs. To demonstrate the effectiveness of the proposed method, we compare it with a baseline method. The results show that the proposed model estimates the travel time more accurately than the most commonly used model, and the proposed algorithm relocates the charging stations to achieve a shorter flight distance than the baseline method.
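
For intuition, an iterative relocation of charging stations can be sketched with a Lloyd-style update: assign each site to its nearest station, then move each station to the centroid of its assigned sites. This is a simplified stand-in for the paper's algorithm, which minimizes the average UAV travel time under its coverage model rather than straight-line distance.

```python
import numpy as np

def relocate_stations(stations, sites, iters=50):
    """Lloyd-style relocation: assign each site to its nearest charging
    station, then move each station to the centroid of its assigned sites.
    Each move can only shorten the average straight-line travel distance."""
    stations = stations.copy()
    for _ in range(iters):
        d = np.linalg.norm(sites[:, None, :] - stations[None, :, :], axis=2)
        nearest = d.argmin(axis=1)           # site-to-station assignment
        for k in range(len(stations)):
            assigned = sites[nearest == k]
            if len(assigned):
                stations[k] = assigned.mean(axis=0)
    return stations
```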

17 pages, 7938 KiB  
Communication
Trajectory Planner CDT-RRT* for Car-Like Mobile Robots toward Narrow and Cluttered Environments
by Hyunki Kwon, Donggeun Cha, Jihoon Seong, Jinwon Lee and Woojin Chung
Sensors 2021, 21(14), 4828; https://doi.org/10.3390/s21144828 - 15 Jul 2021
Cited by 10 | Viewed by 3321
Abstract
To achieve the safe and efficient navigation of mobile robots, it is essential to consider both the environmental geometry and the kinodynamic constraints of the robots. We propose a trajectory planner for car-like robots based on the Dual-Tree RRT (DT-RRT). DT-RRT utilizes two tree structures to generate fast-growing trajectories under the kinodynamic constraints of robots. A local trajectory generator has been newly designed for car-like robots. The proposed parent-node search scheme enables the efficient generation of safe trajectories in cluttered environments. The presented simulation results clearly show the usefulness and advantages of the proposed trajectory planner in various environments.
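
The parent-node search at the heart of RRT*-style planners can be sketched as below: among existing nodes near a new state, pick the parent that minimizes the total cost-to-come. This generic sketch omits the kinodynamic steering and dual-tree structure specific to CDT-RRT*.

```python
import numpy as np

def choose_parent(nodes, costs, new_state, steer_cost, radius=2.0):
    """Among nodes within `radius` of the new state, pick the parent that
    minimizes total cost-to-come (generic RRT*-style step).

    steer_cost(a, b) should return the cost of a feasible local trajectory
    from a to b, or np.inf if none exists.
    """
    best_parent, best_cost = None, np.inf
    for i, n in enumerate(nodes):
        if np.linalg.norm(np.asarray(n) - np.asarray(new_state)) <= radius:
            c = costs[i] + steer_cost(n, new_state)
            if c < best_cost:
                best_parent, best_cost = i, c
    return best_parent, best_cost
```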

17 pages, 2309 KiB  
Article
General Purpose Low-Level Reinforcement Learning Control for Multi-Axis Rotor Aerial Vehicles
by Chen-Huan Pi, Yi-Wei Dai, Kai-Chun Hu and Stone Cheng
Sensors 2021, 21(13), 4560; https://doi.org/10.3390/s21134560 - 02 Jul 2021
Cited by 6 | Viewed by 2625
Abstract
This paper proposes a multipurpose reinforcement-learning-based low-level control structure for multirotor unmanned aerial vehicles, constructed from neural networks with model-free training. Low-level reinforcement learning controllers developed in other studies have been applicable only to a multirotor with specific model and physical parameters, and time-consuming retraining is required when switching to a different vehicle. We use a 6-degree-of-freedom dynamic model combined with acceleration-based control from the policy neural network to overcome these problems. The UAV automatically learns the maneuver through an end-to-end neural network mapping fused states to acceleration commands. State estimation is performed using data from on-board sensors and motion capture: the motion capture system provides spatial position information, and a multisensory fusion framework fuses measurements from the onboard inertial measurement units to compensate for the time delay and low update frequency of the capture system. Without requiring expert demonstration, the control policy trained with an improved algorithm can be applied to various multirotors, with the output mapped directly to the actuators. The algorithm's ability to control multirotors is evaluated in hovering and tracking tasks. Through simulations and real experiments, we demonstrate flight control of a quadrotor and a hexrotor using the trained policy. With the same policy, we verify that both the quadrotor and the hexrotor can be stabilized in the air from random initial states.
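
One reason a single policy can transfer across airframes is that the acceleration-level output is mapped to actuators through an airframe-specific allocation step. The following sketch builds a generic allocation matrix for evenly spaced rotors and solves for rotor thrusts; the geometry, sign conventions, and coefficients are assumptions for illustration, not the paper's model.

```python
import numpy as np

def allocation_matrix(n_rotors, arm=0.25, k_yaw=0.02):
    """Thrust-to-wrench map [T, Mx, My, Mz] for n evenly spaced rotors at
    radius `arm` with alternating spin directions (assumed geometry)."""
    ang = 2 * np.pi * np.arange(n_rotors) / n_rotors
    return np.vstack([
        np.ones(n_rotors),                      # total thrust
        arm * np.sin(ang),                      # roll moment
        -arm * np.cos(ang),                     # pitch moment
        k_yaw * (-1.0) ** np.arange(n_rotors),  # yaw from alternating drag
    ])

wrench = np.array([10.0, 0.1, -0.1, 0.02])      # one policy-level command
for n in (4, 6):                                # quadrotor and hexrotor
    thrusts = np.linalg.pinv(allocation_matrix(n)) @ wrench
    print(n, thrusts)
```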

18 pages, 3183 KiB  
Article
Cascaded Complementary Filter Architecture for Sensor Fusion in Attitude Estimation
by Parag Narkhede, Shashi Poddar, Rahee Walambe, George Ghinea and Ketan Kotecha
Sensors 2021, 21(6), 1937; https://doi.org/10.3390/s21061937 - 10 Mar 2021
Cited by 26 | Viewed by 7020
Abstract
Attitude estimation is the process of computing the orientation angles of an object with respect to a fixed frame of reference. Gyroscopes, accelerometers, and magnetometers are among the fundamental sensors used in attitude estimation. The orientation angles computed from these sensors are combined using sensor fusion methodologies to obtain accurate estimates. The complementary filter is one of the most widely adopted techniques, and its performance is highly dependent on the appropriate selection of its gain parameters. This paper presents a novel cascaded architecture of the complementary filter that employs a nonlinear and a linear version of the filter within one framework. The nonlinear version is used to correct the gyroscope bias, while the linear version estimates the attitude angle. The significant advantage of the proposed architecture is its independence from the filter parameters, thereby avoiding the tuning of the filter's gain parameters. The proposed architecture requires no mathematical modeling of the system and is computationally inexpensive. The proposed methodology is applied to real-world datasets, and the estimation results are promising compared with other state-of-the-art algorithms.
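
A hedged sketch of the cascaded idea for a single attitude angle: a first stage estimates gyroscope bias from the discrepancy with the accelerometer-derived angle, and a second, linear stage blends the bias-corrected gyro integration with that angle. The gains here are illustrative only; the paper's architecture is specifically designed to avoid such tuning.

```python
import numpy as np

def cascaded_complementary_filter(gyro, acc_angle, dt, k_bias=0.1, alpha=0.98):
    """Two-stage filter for one attitude angle.

    Stage 1 estimates gyro bias from the drift between the fused angle and
    the accelerometer angle; stage 2 is a linear complementary blend.
    Gains k_bias and alpha are illustrative only.
    """
    angle, bias = acc_angle[0], 0.0
    out = []
    for w, a in zip(gyro, acc_angle):
        bias += k_bias * (angle - a) * dt                            # stage 1: bias update
        angle = alpha * (angle + (w - bias) * dt) + (1 - alpha) * a  # stage 2: blend
        out.append(angle)
    return np.array(out)
```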

Review


43 pages, 28107 KiB  
Review
Biosignal-Based Human–Machine Interfaces for Assistance and Rehabilitation: A Survey
by Daniele Esposito, Jessica Centracchio, Emilio Andreozzi, Gaetano D. Gargiulo, Ganesh R. Naik and Paolo Bifulco
Sensors 2021, 21(20), 6863; https://doi.org/10.3390/s21206863 - 15 Oct 2021
Cited by 30 | Viewed by 8165
Abstract
By definition, a Human–Machine Interface (HMI) enables a person to interact with a device. Starting from elementary equipment, the recent development of novel techniques and unobtrusive devices for biosignal monitoring has paved the way for a new class of HMIs, which take such biosignals as inputs to control various applications. This survey reviews the large literature of the last two decades on biosignal-based HMIs for assistance and rehabilitation, outlining the state of the art and identifying emerging technologies and potential future research trends. PubMed and other databases were surveyed using specific keywords. The retrieved studies were further screened at three levels (title, abstract, full text), and eventually 144 journal papers and 37 conference papers were included. Four macrocategories were considered to classify the different biosignals used for HMI control: biopotential, muscle mechanical motion, body motion, and their combinations (hybrid systems). The HMIs were also classified according to their target application, considering six categories: prosthetic control, robotic control, virtual reality control, gesture recognition, communication, and smart environment control. An ever-growing number of publications has been observed over recent years. Most of the studies (about 67%) pertain to the assistive field, while 20% relate to rehabilitation and 13% to both assistance and rehabilitation. A moderate increase can be observed in studies focusing on robotic control, prosthetic control, and gesture recognition in the last decade; studies on the other targets experienced only a small increase. Biopotentials are no longer the leading control signals, and the use of muscle mechanical motion signals has risen considerably, especially in prosthetic control. Hybrid technologies are promising, as they could lead to higher performance; however, they also increase the complexity of HMIs, so their usefulness should be carefully evaluated for each specific application.
