Sensing Applications in Robotics

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Sensors and Robotics".

Deadline for manuscript submissions: closed (31 October 2021) | Viewed by 46308

Special Issue Editors

System Engineering and Automation Department, Miguel Hernandez University, 03202 Elche, Spain
Interests: computer vision; omnidirectional imaging; appearance descriptors; image processing; mobile robotics; environment modeling; visual localization

Special Issue Information

Dear Colleagues,

The presence of robots in a variety of scenarios has increased substantially in recent years, as their ability to solve diverse tasks has improved. Currently, a broad range of research lines remain very active in the field of robotics, with the common goal of increasing robots' safety, autonomy, and adaptability to unexpected circumstances. Additionally, improving collaborative mechanisms between robots and users constitutes an important line of research in this field. In all cases, sensing technologies play a crucial role in capturing the necessary information from the environment, the robot, and/or the user.

To address any specific task, a robot has to be equipped with different kinds of sensors to perceive its surroundings, such as touch sensors, laser rangefinders, GPS, visual sensors, or combined vision-depth platforms. In some applications, a combination of these is used, and data-fusion algorithms must be implemented. Currently, machine learning and deep learning approaches may play an important role in data analysis, interpretation, and fusion. Additionally, some specific tasks can be performed more efficiently by a team of robots, in which case an optimal combination of the information captured by the different sensors is crucial. In this sense, Internet of Things (IoT) approaches may facilitate this task. Finally, in some cases, robots must operate in social environments, and it is necessary to implement interfaces that permit easy and intuitive interaction between the users and the data captured by the sensors (either pre- or post-processed).

The aim of this Special Issue is to present current applications of sensing technologies in robotics. To this end, this Special Issue invites contributions on the following topics (but is not limited to them):

  • Sensing technologies in mobile robots;
  • Sensing technologies in industrial robots;
  • Design of new sensors for robots;
  • Processing and interpretation of sensory data;
  • Machine learning and deep learning for data treatment in robotics;
  • Sensing technologies in multirobot systems;
  • Interfaces for robot/user interaction;
  • Sensing technologies in collaborative robot/user applications;
  • Applications of sensing technologies in robotics.

Prof. Dr. Oscar Reinoso García
Prof. Dr. Luis Payá
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Vision sensors
  • Range sensors
  • Touch sensors
  • Sensor networks
  • Intelligent sensors
  • Mobile robots
  • Industrial robots
  • Data interpretation
  • Data fusion
  • Sensor applications

Published Papers (11 papers)

Research

17 pages, 35287 KiB  
Article
Autonomous Thermal Vision Robotic System for Victims Recognition in Search and Rescue Missions
by Christyan Cruz Ulloa, Guillermo Prieto Sánchez, Antonio Barrientos and Jaime Del Cerro
Sensors 2021, 21(21), 7346; https://doi.org/10.3390/s21217346 - 04 Nov 2021
Cited by 22 | Viewed by 4149
Abstract
Technological breakthroughs in recent years have led to a revolution in fields such as Machine Vision and Search and Rescue Robotics (SAR), thanks to the application and development of new and improved neural networks for vision models, together with modern optical sensors that incorporate thermal cameras capable of capturing data in post-disaster environments (PDE) with harsh conditions (low luminosity, suspended particles, obstructive materials). Due to the high risk posed by PDE because of the potential collapse of structures, electrical hazards, gas leakage, etc., primary intervention tasks such as victim identification are carried out by robotic teams equipped with specific sensors such as thermal cameras, RGB cameras, and lasers. The application of Convolutional Neural Networks (CNN) to computer vision is a breakthrough for detection algorithms. Conventional methods for victim identification in these environments use RGB image processing or trained dogs, but detection with RGB images is inefficient in the absence of light or in the presence of debris; on the other hand, developments with thermal images have been limited to the field of surveillance. This paper's main contribution is the implementation of a novel automatic method based on thermal image processing and CNN for victim identification in PDE, using a robotic system in which a quadruped robot captures data and transmits it to the central station. The robot's automatic data processing and control have been carried out through the Robot Operating System (ROS). Several tests have been carried out in different environments to validate the proposed method, recreating PDE with varying light conditions, from which datasets have been generated for the training of three neural network models (Fast R-CNN, SSD, and YOLO). The method's efficiency has been tested against another CNN- and RGB-based method for the same task, showing greater effectiveness in PDE; the main results show that the proposed method achieves an efficiency greater than 90%.
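
As a loose illustration of the kind of thermal pre-processing such a pipeline can use before a CNN detector (this is not the authors' method; the 16-bit frame format and the raw-count temperature band are assumptions), a candidate-region extraction might look like this:

```python
# Hypothetical sketch: isolate body-temperature-like regions in a
# 16-bit thermal frame and return their bounding boxes, which a CNN
# detector (Fast R-CNN, SSD, YOLO, ...) could then classify.
import cv2
import numpy as np

def warm_candidates(thermal_16u, lo=30_000, hi=34_000, min_area=400):
    # lo/hi are raw sensor counts standing in for a body-temperature
    # band; the real mapping is camera-specific (assumption).
    mask = (((thermal_16u >= lo) & (thermal_16u <= hi)) * 255).astype(np.uint8)
    # remove speckle from suspended particles before contour extraction
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours
            if cv2.contourArea(c) >= min_area]

frame = np.random.randint(28_000, 35_000, (480, 640), dtype=np.uint16)
print(warm_candidates(frame)[:3])  # up to three candidate boxes
```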

17 pages, 6508 KiB  
Communication
Use of Force Feedback Device in a Hybrid Brain-Computer Interface Based on SSVEP, EOG and Eye Tracking for Sorting Items
by Arkadiusz Kubacki
Sensors 2021, 21(21), 7244; https://doi.org/10.3390/s21217244 - 30 Oct 2021
Cited by 6 | Viewed by 2017
Abstract
Research focused on signals derived from the human organism is becoming increasingly popular. In this field, a special role is played by brain-computer interfaces based on brainwaves, which are becoming increasingly popular due to the downsizing of EEG signal recording devices and the ever-lower prices of the sets. Unfortunately, such systems are substantially limited in terms of the number of generated commands. This especially applies to sets that are not medical devices. This article proposes a hybrid brain-computer system based on the Steady-State Visual Evoked Potential (SSVEP), EOG, eye tracking, and a force feedback system. Such an expanded system eliminates many of the shortcomings of the individual systems and provides much better results. The first part of the paper presents information on the methods applied in the hybrid brain-computer system. The presented system was tested in terms of the ability of the operator to place the robot's tip at a designated position. A virtual model of an industrial robot was proposed and used in the testing, and the tests were then repeated on a real-life industrial robot. The positioning accuracy of the system was verified with the force feedback system both enabled and disabled. The results of tests conducted both on the model and on the real object clearly demonstrate that force feedback improves the positioning accuracy of the robot's tip when controlled by the operator. In addition, the results for the model and the real-life industrial robot are very similar. In the next stage, research was carried out on the possibility of sorting items using the BCI system, again on both the model and the real robot. The results show that sorting using biosignals from the human body is possible.
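
The SSVEP component of such hybrid interfaces is classically decoded by comparing spectral power at the candidate flicker frequencies. Below is a minimal sketch of that canonical approach (not necessarily the exact detection method used in the paper):

```python
# Pick the stimulus frequency whose power (plus harmonics) dominates
# the spectrum of an occipital EEG epoch.
import numpy as np

def detect_ssvep(eeg, fs, stim_freqs, harmonics=2):
    freqs = np.fft.rfftfreq(eeg.size, d=1.0 / fs)
    power = np.abs(np.fft.rfft(eeg * np.hanning(eeg.size))) ** 2
    scores = []
    for f0 in stim_freqs:
        bins = [np.argmin(np.abs(freqs - h * f0))
                for h in range(1, harmonics + 1)]
        scores.append(sum(power[b] for b in bins))
    return stim_freqs[int(np.argmax(scores))]

fs = 250
t = np.arange(0, 4, 1.0 / fs)
eeg = np.sin(2 * np.pi * 15 * t) + 0.5 * np.random.randn(t.size)  # toy epoch
print(detect_ssvep(eeg, fs, stim_freqs=[10, 12, 15]))  # -> 15
```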

17 pages, 3680 KiB  
Article
A Smart Capacitive Sensor Skin with Embedded Data Quality Indication for Enhanced Safety in Human–Robot Interaction
by Christoph Scholl, Andreas Tobola, Klaus Ludwig, Dario Zanca and Bjoern M. Eskofier
Sensors 2021, 21(21), 7210; https://doi.org/10.3390/s21217210 - 29 Oct 2021
Cited by 8 | Viewed by 3334
Abstract
Smart sensors are an integral part of the Fourth Industrial Revolution and are widely used to add safety measures to human–robot interaction applications. With the advancement of machine learning methods in resource-constrained environments, smart sensor systems have become increasingly powerful. As more data-driven approaches are deployed on the sensors, it is of growing importance to monitor data quality at all times of system operation. We introduce a smart capacitive sensor system with an embedded data quality monitoring algorithm to enhance the safety of human–robot interaction scenarios. The smart capacitive skin sensor is capable of detecting the distance and angle of nearby objects by utilizing consumer-grade sensor electronics. To further address the safety aspect of the sensor, a dedicated layer to monitor data quality in real time is added to the embedded software of the sensor. Two learning algorithms are used to implement the sensor functionality: (1) a fully connected neural network to infer the position and angle of nearby objects and (2) a one-class SVM to perform the data quality assessment based on out-of-distribution detection. We show that the sensor performs well under normal operating conditions within a range of 200 mm and also successfully detects abnormal operating conditions in terms of poor data quality. A mean absolute distance error of 11.6 mm was achieved without data quality indication; this could be further reduced to 7.5 mm by monitoring the data quality, adding an additional layer of safety for human–robot interaction.
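
The out-of-distribution idea described in the abstract maps directly onto scikit-learn's one-class SVM; the sketch below (with synthetic stand-in features, not the sensor's real capacitance data) shows the train-on-normal, flag-abnormal pattern:

```python
# Fit a one-class SVM on features from normal operation, then flag
# out-of-distribution samples as poor-quality data at runtime.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
normal = rng.normal(0.0, 1.0, size=(500, 8))   # stand-in capacitance features
drifted = rng.normal(4.0, 1.0, size=(20, 8))   # stand-in degraded data

ood = make_pipeline(StandardScaler(), OneClassSVM(nu=0.05, gamma="scale"))
ood.fit(normal)
print((ood.predict(drifted) == -1).mean())  # fraction flagged as poor quality
```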

15 pages, 2261 KiB  
Article
Collaborative Complete Coverage and Path Planning for Multi-Robot Exploration
by Huei-Yung Lin and Yi-Chun Huang
Sensors 2021, 21(11), 3709; https://doi.org/10.3390/s21113709 - 26 May 2021
Cited by 23 | Viewed by 4046
Abstract
In mobile robotics research, the exploration of unknown environments has always been an important topic due to its practical uses in consumer and military applications. One specific interest of recent investigation is the field of complete coverage and path planning (CCPP) techniques for mobile robot navigation. In this paper, we present collaborative CCPP algorithms for single-robot and multi-robot systems. The incremental coverage from the robot's movement is maximized by evaluating a new cost function. A goal selection function is then designed to facilitate collaborative exploration in a multi-robot system. By considering the local gains of the individual robots as well as the global gain from the goal selection, the proposed method is able to optimize the overall coverage efficiency. In the experiments, our CCPP algorithms are carried out on various unknown and complex environment maps. The simulation results and performance evaluation demonstrate the effectiveness of the proposed collaborative CCPP technique.
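
The paper's cost and goal-selection functions are not reproduced here; as a purely illustrative stand-in, the sketch below scores candidate goals on a boolean coverage grid by the new cells a circular sensor footprint would cover, minus a weighted travel cost:

```python
# Hypothetical goal selection: trade incremental coverage against
# straight-line travel distance on a coverage grid.
import numpy as np

def select_goal(covered, candidates, robot_xy, sensor_radius=3, w_dist=0.5):
    ys, xs = np.mgrid[0:covered.shape[0], 0:covered.shape[1]]
    best, best_score = None, -np.inf
    for gx, gy in candidates:
        footprint = (xs - gx) ** 2 + (ys - gy) ** 2 <= sensor_radius ** 2
        gain = np.count_nonzero(footprint & ~covered)        # newly covered cells
        cost = np.hypot(gx - robot_xy[0], gy - robot_xy[1])  # travel distance
        score = gain - w_dist * cost
        if score > best_score:
            best, best_score = (gx, gy), score
    return best

covered = np.zeros((20, 20), dtype=bool)
covered[:10, :10] = True                       # area explored so far
print(select_goal(covered, [(5, 5), (15, 15)], robot_xy=(0, 0)))  # -> (15, 15)
```

In a multi-robot setting, the same scoring could be evaluated per robot and arbitrated globally, which is the role the abstract assigns to the goal selection function.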

22 pages, 4291 KiB  
Article
Visual Features Assisted Robot Localization in Symmetrical Environment Using Laser SLAM
by Gengyu Ge, Yi Zhang, Qin Jiang and Wei Wang
Sensors 2021, 21(5), 1772; https://doi.org/10.3390/s21051772 - 04 Mar 2021
Cited by 10 | Viewed by 2367
Abstract
Localization, i.e., estimating the position and orientation of a robot, has been solved in asymmetrical environments by various 2D laser rangefinder simultaneous localization and mapping (SLAM) approaches. Laser-based SLAM generates an occupancy grid map, and the most popular Monte Carlo Localization (MCL) method then spreads particles on the map and calculates the position of the robot by a probabilistic algorithm. However, this can be difficult, especially in symmetrical environments, because landmarks or features may not be sufficient to determine the robot's orientation; sometimes, the position is not even unique if the robot does not stay at the geometric center. This paper presents a novel approach to solving the robot localization problem in a symmetrical environment using a visual-feature-assisted method. Laser range measurements are used to estimate the robot's position, while visual features determine its orientation. Firstly, we convert the raw laser range scans into coordinate data and calculate the geometric center. Secondly, we calculate the distances from the geometric center to all end points and find the longest ones. Then, we compare those distances, fit lines, extract corner points, and calculate the distance between adjacent corner points to determine whether the environment is symmetrical. Finally, if the environment is symmetrical, visual features based on the ORB keypoint detector and descriptor are added to the system to determine the orientation of the robot. The experimental results show that our approach can successfully determine the position of the robot in a symmetrical environment, while ordinary MCL and its extended localization methods always fail.
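
The geometric steps the abstract lists (scan to coordinates, geometric center, center-to-endpoint distances) translate naturally into a few lines of code; the symmetry test below is a simplified stand-in for the paper's corner-based check:

```python
# Convert a 2D laser scan to points, find the geometric center, and
# compare the center-to-endpoint distance profile with its reverse.
import numpy as np

def scan_to_xy(ranges, angle_min, angle_inc):
    angles = angle_min + angle_inc * np.arange(len(ranges))
    r = np.asarray(ranges, dtype=float)
    return np.c_[r * np.cos(angles), r * np.sin(angles)]

def looks_symmetric(ranges, angle_min, angle_inc, tol=0.05):
    pts = scan_to_xy(ranges, angle_min, angle_inc)
    center = pts.mean(axis=0)                    # geometric center
    d = np.linalg.norm(pts - center, axis=1)     # center-to-endpoint distances
    return float(np.mean(np.abs(d - d[::-1]))) < tol

circular_room = [1.0] * 360                      # idealized, fully symmetric scan
print(looks_symmetric(circular_room, -np.pi, 2 * np.pi / 360))  # -> True
```

When such a test fires, the method falls back on ORB visual features to disambiguate the orientation, as described above.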

18 pages, 10837 KiB  
Article
Modeling of a Soft-Rigid Gripper Actuated by a Linear-Extension Soft Pneumatic Actuator
by Peilin Cheng, Jiangming Jia, Yuze Ye and Chuanyu Wu
Sensors 2021, 21(2), 493; https://doi.org/10.3390/s21020493 - 12 Jan 2021
Cited by 13 | Viewed by 5023
Abstract
Soft robotics has been a significant research area in recent decades, and soft grippers are one of its most popular research directions. In a static gripping system, excessive gripping force and large deformation are the main causes of damage to the object during the gripping process. To achieve low-damage gripping in a static gripping system, we propose a soft-rigid gripper actuated by a linear-extension soft pneumatic actuator in this study. The characteristics of the gripper under a no-load state were measured: when the pressure was >70 kPa, there was an approximately linear relation between the pressure and the extension length of the soft actuator. To achieve gripping force and fingertip displacement control of the gripper without sensors integrated on the finger, we present a non-contact sensing method for gripping state estimation. To analyze the gripping force and fingertip displacement, the relationship between the pressure and extension length of the soft actuator in the loaded state was compared with the relationship under the no-load state. The experimental results showed that the relative error between the analytical gripping force and the measured gripping force of the gripper was ≤2.1%, and the relative error between the analytical fingertip displacement and the theoretical fingertip displacement was ≤7.4%. Furthermore, low-damage gripping of fragile and soft objects in static and dynamic gripping tests showed the good performance of the gripper. Overall, the results indicate the potential application of the gripper in pick-and-place operations.
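
A minimal sketch of the sensorless estimation idea, assuming only the abstract's statement that pressure and extension are approximately linear above 70 kPa (the numbers below are invented for illustration):

```python
# Fit the no-load pressure-extension line, then read the loaded
# actuator's deviation from that line as a proxy for gripping state.
import numpy as np

pressure = np.array([70, 80, 90, 100, 110, 120])        # kPa
ext_noload = np.array([4.1, 5.0, 5.9, 6.8, 7.7, 8.6])   # mm (made-up data)

k, b = np.polyfit(pressure, ext_noload, 1)              # extension = k*p + b

def extension_deviation(p_kpa, ext_measured_mm):
    # Deviation from the no-load line; converting this to a force
    # requires a separately calibrated model (assumption).
    return (k * p_kpa + b) - ext_measured_mm

print(extension_deviation(100, 6.2))  # shortfall suggests object contact
```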

18 pages, 1322 KiB  
Article
An Improved Near-Field Computer Vision for Jet Trajectory Falling Position Prediction of Intelligent Fire Robot
by Jinsong Zhu, Lu Pan and Ge Zhao
Sensors 2020, 20(24), 7029; https://doi.org/10.3390/s20247029 - 08 Dec 2020
Cited by 6 | Viewed by 2411
Abstract
In this paper, an improved Near-Field Computer Vision (NFCV) system for intelligent fire robots is proposed, building on our previous work, with the aim of predicting the falling position of the jet trajectory during fire extinguishing. Firstly, previous studies on the NFCV system are briefly reviewed, and several issues encountered during application testing are analyzed and summarized. The improvements mainly focus on the segmentation and discrimination of the jet trajectory under complex lighting environments and interference scenes. They include parameter adjustment of the variance threshold and background update rate of the Gaussian mixture background method, jet trajectory discrimination based on length and area proportion parameters, and parameterization and feature extraction of the jet trajectory based on a superimposed radial centroid method. Compared with our previous work, the proposed method reduces the average error of the prediction results from 1.36 m to 0.1 m, and the error variance from 1.58 m to 0.13 m. The experimental results suggest that every part plays an important role in improving the functionality and reliability of the NFCV system, especially the background subtraction and radial centroid methods. In general, the improved NFCV system for jet trajectory falling position prediction has great potential for intelligent fire extinguishing by fire-fighting robots.
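
The background-subtraction stage described above has a close analogue in OpenCV's Gaussian mixture implementation, whose varThreshold and learningRate parameters correspond to the variance threshold and background update rate tuned in the paper (the values below are placeholders, not the paper's):

```python
# Segment moving foreground, then keep only long, thin blobs as
# jet-trajectory candidates (length/area-style discrimination).
import cv2
import numpy as np

mog2 = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=25,
                                          detectShadows=False)

def jet_candidates(frame_bgr, learning_rate=0.01, min_area=150, min_elong=3.0):
    fg = mog2.apply(frame_bgr, learningRate=learning_rate)
    contours, _ = cv2.findContours(fg, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    keep = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if cv2.contourArea(c) >= min_area and \
                max(w, h) / max(1, min(w, h)) >= min_elong:
            keep.append(c)  # elongated foreground blob: likely jet segment
    return keep

frame = np.zeros((480, 640, 3), dtype=np.uint8)  # stand-in video frame
print(len(jet_candidates(frame)))
```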

23 pages, 4165 KiB  
Article
kRadar++: Coarse-to-Fine FMCW Scanning Radar Localisation
by Daniele De Martini, Matthew Gadd and Paul Newman
Sensors 2020, 20(21), 6002; https://doi.org/10.3390/s20216002 - 22 Oct 2020
Cited by 18 | Viewed by 4480
Abstract
This paper presents a novel two-stage system which integrates topological localisation candidates from a radar-only place recognition system with precise pose estimation using spectral landmark-based techniques. We prove that the recently available, seminal radar place recognition (RPR) and scan matching sub-systems are complementary, in a style reminiscent of the mapping and localisation systems underpinning visual teach-and-repeat (VTR) systems, which have been exhibited robustly over the last decade. Offline experiments are conducted on the most extensive radar-focused urban autonomy dataset available to the community, with performance comparing favourably with and even rivalling alternative state-of-the-art radar localisation systems. Specifically, we show the long-term durability of the approach, and of the sensing technology itself, for autonomous navigation. We suggest a range of sensible methods of tuning the system, all of which are suitable for online operation. For both tuning regimes, we achieve, over the course of a month of localisation trials against a single static map, high recall at high precision and much reduced variance in erroneous metric pose estimation. As such, this work is a necessary first step towards a radar teach-and-repeat (RTR) system and the enablement of autonomy across extreme changes in appearance or inclement conditions.
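
Conceptually, the coarse stage reduces to nearest-neighbour retrieval over place embeddings before pose refinement; the sketch below shows only that retrieval step with random stand-in embeddings (it is not kRadar++ itself, whose descriptors and matcher are the paper's contribution):

```python
# Stage 1 of a coarse-to-fine localiser: rank map places by cosine
# similarity to the query embedding; stage 2 (not shown) would refine
# each candidate with landmark-based scan matching to a metric pose.
import numpy as np

def coarse_candidates(query_emb, map_embs, k=3):
    q = query_emb / np.linalg.norm(query_emb)
    m = map_embs / np.linalg.norm(map_embs, axis=1, keepdims=True)
    return np.argsort(m @ q)[::-1][:k]      # indices of top-k map places

map_embs = np.random.randn(1000, 128)       # stand-in place embeddings
query = map_embs[42] + 0.05 * np.random.randn(128)
print(coarse_candidates(query, map_embs))   # 42 should rank first
```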

20 pages, 7300 KiB  
Article
Applying a 6 DoF Robotic Arm and Digital Twin to Automate Fan-Blade Reconditioning for Aerospace Maintenance, Repair, and Overhaul
by John Oyekan, Michael Farnsworth, Windo Hutabarat, David Miller and Ashutosh Tiwari
Sensors 2020, 20(16), 4637; https://doi.org/10.3390/s20164637 - 18 Aug 2020
Cited by 33 | Viewed by 6524
Abstract
The UK is home to several major commercial air and transport hubs. As a result, there is a high demand for Maintenance, Repair, and Overhaul (MRO) services to ensure that fleets of aircraft are in airworthy condition. MRO services currently involve heavy manual labor, which creates bottlenecks, low repeatability, and low productivity. Presented in this paper is an investigation into creating an automation cell for the fan-blade reconditioning component of MRO. The design and prototype of the automation cell are presented. Furthermore, a digital twin of the grinding process is developed and used as a tool to explore the grinding force parameters required to effectively remove surface material. An integration of a 6-DoF industrial robot with an end-effector grinder and a computer vision system was undertaken. The computer vision system was used for the digitization of the fan-blade surface as well as for tracking and guiding material removal. Our findings reveal that the proposed system can perform material removal and track the state of the fan blade during the reconditioning process, all within a closed-loop automated robotic work cell.
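
The paper's digital-twin model is not reproduced here; as a first-order stand-in, Preston's equation (removal proportional to contact pressure times relative velocity) illustrates the kind of grinding-force exploration the abstract describes:

```python
# Preston-style material removal: depth = k * pressure * velocity * time.
# k, the contact area, and all values below are illustrative assumptions.
def material_removed_depth(k_preston, pressure_pa, velocity_mps, dwell_s):
    return k_preston * pressure_pa * velocity_mps * dwell_s

CONTACT_AREA_M2 = 1e-4  # assumed 1 cm^2 grinding contact patch
for force_n in (5, 10, 20):
    depth = material_removed_depth(1e-13, force_n / CONTACT_AREA_M2, 2.0, 10.0)
    print(f"{force_n} N -> {depth * 1e6:.2f} um removed")
```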

20 pages, 5929 KiB  
Article
Optimization Design and Flexible Detection Method of Wall-Climbing Robot System with Multiple Sensors Integration for Magnetic Particle Testing
by Xiaojun Zhang, Xuan Zhang, Minglu Zhang, Lingyu Sun and Manhong Li
Sensors 2020, 20(16), 4582; https://doi.org/10.3390/s20164582 - 15 Aug 2020
Cited by 15 | Viewed by 3534
Abstract
Weld detection is vital to the quality of ship construction and navigation safety, and numerous detection robots have been developed and widely applied. Addressing the current bottlenecks in robot safety, efficiency, and intelligent detection, this paper presents a wall-climbing robot that integrates multiple sensors and uses fluorescent magnetic powder for nondestructive testing. We designed a moving mechanism that can travel safely on a curved surface and a serial-parallel hybrid flexible detection mechanism incorporating a force sensor, which together solve the robot's safe adsorption and the flexible inspection of the curved surface needed to complete the flaw detection operation. We optimized the system structure and improved the overall performance of the robot by establishing a unified mechanical model for the different operating conditions. Based on the collected sensor information, a multi-degree-of-freedom collaborative flexible detection method with a standardized detection process was developed to achieve efficient, high-quality detection. Results showed that the developed wall-climbing robot can move safely and steadily on a complex facade and can complete the flaw detection of wall welds.
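
A flexible, force-sensed probe of this kind is often regulated with a simple feedback loop; the sketch below is a generic PI contact-force controller under assumed gains, not the paper's control law:

```python
# Keep the probe's measured normal force near a setpoint by commanding
# small position corrections along the surface normal.
class PIForceController:
    def __init__(self, kp=0.002, ki=0.0005, f_ref=5.0):  # gains are assumptions
        self.kp, self.ki, self.f_ref = kp, ki, f_ref
        self.integral = 0.0

    def update(self, f_measured, dt):
        err = self.f_ref - f_measured
        self.integral += err * dt
        return self.kp * err + self.ki * self.integral  # correction in metres

ctrl = PIForceController()
print(ctrl.update(f_measured=4.2, dt=0.01))  # too little force: move toward wall
```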

14 pages, 5285 KiB  
Article
Absolute Positioning Accuracy Improvement in an Industrial Robot
by Yizhou Jiang, Liandong Yu, Huakun Jia, Huining Zhao and Haojie Xia
Sensors 2020, 20(16), 4354; https://doi.org/10.3390/s20164354 - 05 Aug 2020
Cited by 43 | Viewed by 5464
Abstract
The absolute positioning accuracy of a robot is an important specification that determines its performance, but it is affected by several error sources. Typical calibration methods only consider kinematic errors and neglect complex non-kinematic errors, thus limiting the absolute positioning accuracy. To further improve the absolute positioning accuracy, we propose an artificial neural network optimized by the differential evolution algorithm. Specifically, the structure and parameters of the network are iteratively updated by differential evolution to improve both accuracy and efficiency. The absolute positioning deviation caused by kinematic and non-kinematic errors is then compensated using the trained network. To verify the performance of the proposed network, simulations and experiments were conducted using a six-degree-of-freedom robot and a laser tracker. The average positioning error of the robot was reduced from 0.8497 mm before calibration to 0.0490 mm. The results demonstrate the substantial improvement in absolute positioning accuracy achieved by the proposed network on an industrial robot.
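
As a simplified sketch of the calibration idea (the paper also evolves the network structure; here only the weights of a fixed tiny MLP are optimized by SciPy's differential evolution, on synthetic deviation data):

```python
# Learn a pose -> positioning-deviation compensation map by letting
# differential evolution search the weight space of a small MLP.
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, (50, 3))                            # toy commanded poses
Y = 0.1 * np.sin(3 * X) + 0.02 * rng.normal(size=X.shape)  # toy deviations

def unpack(w):  # 3-5-3 MLP: 38 parameters in total
    return (w[:15].reshape(3, 5), w[15:20],
            w[20:35].reshape(5, 3), w[35:38])

def mse(w):
    W1, b1, W2, b2 = unpack(w)
    pred = np.tanh(X @ W1 + b1) @ W2 + b2
    return float(np.mean((pred - Y) ** 2))

res = differential_evolution(mse, bounds=[(-2, 2)] * 38, maxiter=40, seed=1)
print(res.fun)  # residual error of the evolved compensation network
```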
