
Sensors for Robotic Applications in Europe

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Sensors and Robotics".

Deadline for manuscript submissions: closed (31 January 2023) | Viewed by 17904

Special Issue Editors


Dr. Marcelo Petry
Guest Editor
INESC-TEC—Institute for Systems and Computer Engineering, Technology and Science, CRIIS—Centre for Robotics in Industry and Intelligent Systems, 4200-465 Porto, Portugal
Interests: robotics; computer vision; assistive robotics

Prof. Dr. Manuel F. Silva
Guest Editor
Department of Electrical Engineering, Institute of Engineering of the Polytechnic Institute of Porto, 4249-015 Porto, Portugal
Interests: modelling; simulation; robotics; biologically inspired robots; control; education in robotics and control

Special Issue Information

Dear Colleagues,

Historically, robots have long been used in industry to perform a vast array of tasks, with the objective of removing humans from dirty, dull, and dangerous tasks and environments. Today, robots play an increasing role in our daily lives and in civil society. They are called upon to do housework, to support and extend the social lives of the elderly and the infirm, to protect the environment, to monitor worrying climatic events, and to assist workers and people in a variety of tasks. They are also increasingly used in industry, particularly in tasks that require close collaboration between humans and robots. In recent years, we have witnessed significant applications of personal and household service robots, including vacuum and cleaning robots, educational robots, toy robots, and assistive and elderly care robots.

In order to perform their tasks adequately, irrespective of the application sector, robots need to sense their environment; for this reason, the development of robotics has been accompanied by the continuous development of sensors and sensor applications.

Given this, and the fact that Europe is home to many outstanding robotics laboratories and industry leaders, the aim of this Special Issue is to collect experimental and theoretical papers covering different aspects and modalities of sensing applications across the broad industrial and service robotics domain in Europe.

Dr. Marcelo Petry
Prof. Dr. Manuel F. Silva
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website; once registered, proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Europe
  • sensors
  • robotics

Published Papers (6 papers)


Research


16 pages, 16555 KiB  
Article
Comparison of 3D Sensors for Automating Bolt-Tightening Operations in the Automotive Industry
by Joana Dias, Pedro Simões, Nuno Soares, Carlos M. Costa, Marcelo R. Petry, Germano Veiga and Luís F. Rocha
Sensors 2023, 23(9), 4310; https://doi.org/10.3390/s23094310 - 27 Apr 2023
Cited by 1 | Viewed by 1460
Abstract
Machine vision systems are widely used in assembly lines to provide robots with the sensing abilities needed to handle dynamic environments. This paper presents a comparison of 3D sensors to evaluate which is best suited for use in a machine vision system for robotic fastening operations within an automotive assembly line. The perception system is necessary to account for the position uncertainty that arises from the vehicles being transported on an aerial conveyor. Three sensors with different working principles were compared, namely laser triangulation (SICK TriSpector1030), structured light with sequential stripe patterns (Photoneo PhoXi S), and structured light with an infrared speckle pattern (Asus Xtion Pro Live). The accuracy of the sensors was measured by computing the root mean square error (RMSE) of the point cloud registrations between their scans and two types of reference point clouds, namely CAD files and 3D sensor scans. Overall, the RMSE was lower when using sensor scans, with the SICK TriSpector1030 achieving the best results (0.25 mm ± 0.03 mm), the Photoneo PhoXi S showing intermediate performance (0.49 mm ± 0.14 mm), and the Asus Xtion Pro Live obtaining the highest RMSE (1.01 mm ± 0.11 mm). Considering the use-case requirements, the final machine vision system relied on the SICK TriSpector1030 sensor and was integrated with a collaborative robot, which was successfully deployed in a vehicle assembly line, achieving 94% success in 53,400 screwing operations.
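
As a rough, hedged illustration of the accuracy metric described in this abstract (a sketch, not the authors' pipeline), the following Python snippet computes the RMSE of an already-registered scan against a reference point cloud via nearest-neighbour correspondences; the cloud sizes, noise level, and units are toy assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def registration_rmse(scan: np.ndarray, reference: np.ndarray) -> float:
    """RMSE between an already-registered scan and a reference cloud.

    Both arrays are (N, 3); each scan point is matched to its nearest
    neighbour in the reference (e.g. a sampled CAD model or another scan).
    """
    tree = cKDTree(reference)
    distances, _ = tree.query(scan)  # nearest-neighbour distance per scan point
    return float(np.sqrt(np.mean(distances ** 2)))

# Toy usage: a scan simulated as the reference plus ~0.5 mm of noise (units: m).
rng = np.random.default_rng(0)
reference = rng.uniform(-1.0, 1.0, size=(10_000, 3))
scan = reference + rng.normal(scale=0.0005, size=reference.shape)
print(f"RMSE: {registration_rmse(scan, reference) * 1000:.2f} mm")
```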

21 pages, 16994 KiB  
Article
The Measure of Motion Similarity for Robotics Application
by Teresa Zielinska and Gabriel Coba
Sensors 2023, 23(3), 1643; https://doi.org/10.3390/s23031643 - 02 Feb 2023
Viewed by 1330
Abstract
A new measure of motion similarity has been proposed. The formulation of this measure is presented and its logical basis is described. Unlike most other methods, the measure enables easy determination of the instantaneous synergies of the motion of body parts. To demonstrate how to use the measure, data describing human movement, recorded with a professional motion capture system, are used. Two different cases of non-periodic movement are discussed: stepping forward and backward, and returning to a stable posture after an unexpected thrust to the side (hands free or tied). This choice enables the identification of synergies in slow dynamics (stepping) and in fast dynamics (push recovery). The trajectories of the motion similarity measure are obtained for point masses of the human body, and their interpretation in relation to motion events is discussed. In addition, ordinary motion trajectories and footprints are shown to better illustrate the specificity of the discussed examples. The article ends with a discussion and conclusions.
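
The measure itself is defined in the paper; purely as a generic stand-in, the sketch below scores the instantaneous similarity of two body-part trajectories by the cosine similarity of their velocity vectors, one simple way of exposing motion synergies. The trajectories, sampling rate, and body parts are illustrative assumptions.

```python
import numpy as np

def velocity(traj: np.ndarray, dt: float) -> np.ndarray:
    """Finite-difference velocities for a (T, 3) point-mass trajectory."""
    return np.diff(traj, axis=0) / dt

def instantaneous_similarity(traj_a: np.ndarray, traj_b: np.ndarray,
                             dt: float) -> np.ndarray:
    """Cosine similarity between the velocity vectors of two body parts at
    each time step; values near +1 indicate synergistic motion."""
    va, vb = velocity(traj_a, dt), velocity(traj_b, dt)
    dot = np.sum(va * vb, axis=1)
    norms = np.linalg.norm(va, axis=1) * np.linalg.norm(vb, axis=1)
    return dot / np.maximum(norms, 1e-9)  # guard against division by zero at rest

# Toy usage: two point masses moving in phase along x, sampled at ~100 Hz.
t = np.linspace(0.0, 2.0, 200)
zeros = np.zeros_like(t)
hand = np.stack([np.sin(t), zeros, zeros], axis=1)
elbow = np.stack([0.5 * np.sin(t), zeros, zeros], axis=1)
print(instantaneous_similarity(hand, elbow, dt=0.01)[:5])  # ~1.0 throughout
```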

18 pages, 3816 KiB  
Article
Deep Learning Framework for Controlling Work Sequence in Collaborative Human–Robot Assembly Processes
by Pedro P. Garcia, Telmo G. Santos, Miguel A. Machado and Nuno Mendes
Sensors 2023, 23(1), 553; https://doi.org/10.3390/s23010553 - 03 Jan 2023
Cited by 7 | Viewed by 2708
Abstract
The human–robot collaboration (HRC) solutions presented so far have the disadvantage that the interaction between humans and robots is based on the human's state or on specific gestures purposely performed by the human, thus increasing the time required to perform a task, slowing down the pace of human labor, and making such solutions unattractive. In this study, a different HRC concept is introduced: an HRC framework for managing assembly processes that are executed simultaneously or individually by humans and robots. This deep-learning-based HRC framework uses only one type of data, RGB camera images, to make predictions about the collaborative workspace and human actions, and consequently to manage the assembly process. To validate the framework, an industrial HRC demonstrator was built to assemble a mechanical component. Four variants of the framework were created based on different convolutional neural network (CNN) structures: Faster R-CNN with ResNet-50 and ResNet-101 backbones, YOLOv2, and YOLOv3. The framework with the YOLOv3 structure performed best, achieving a mean average precision of 72.26% and allowing the industrial demonstrator to successfully complete all assembly tasks within the desired time window. The HRC framework has proven effective for industrial assembly applications.
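
As a hedged sketch of the general mechanism rather than the authors' framework: an RGB-based detector (standing in for a trained CNN such as YOLOv3) reports which parts are visible in the workspace, and a simple state machine advances the assembly sequence accordingly. The class names and detector interface below are hypothetical.

```python
from typing import Callable, Iterable, List

# Hypothetical assembly sequence: each step is confirmed by a detected label.
ASSEMBLY_SEQUENCE: List[str] = ["base_plate", "bearing", "shaft", "cover"]

def run_assembly(detect: Callable[[object], List[str]],
                 frames: Iterable[object]) -> None:
    """Advance through the assembly as the detector confirms each part.

    `detect` stands in for a trained model mapping an RGB frame to a list
    of detected class labels; `frames` is any source of camera frames.
    """
    step = 0
    for frame in frames:
        expected = ASSEMBLY_SEQUENCE[step]
        if expected in detect(frame):
            print(f"Step {step + 1}: detected '{expected}', advancing.")
            step += 1
            if step == len(ASSEMBLY_SEQUENCE):
                print("Assembly complete.")
                return
        # Otherwise keep waiting: the human or robot is still on this step.

# Toy usage with a fake detector that "sees" one more part per frame.
fake_detect = lambda i: ASSEMBLY_SEQUENCE[: i + 1]
run_assembly(fake_detect, range(len(ASSEMBLY_SEQUENCE)))
```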

16 pages, 28468 KiB  
Article
A Mixed-Reality Tele-Operation Method for High-Level Control of a Legged-Manipulator Robot
by Christyan Cruz Ulloa, David Domínguez, Jaime Del Cerro and Antonio Barrientos
Sensors 2022, 22(21), 8146; https://doi.org/10.3390/s22218146 - 24 Oct 2022
Cited by 10 | Viewed by 2630
Abstract
In recent years, legged (quadruped) robots have been the subject of technological study and continuous development. These robots play a leading role in applications that require high mobility in complex terrain, as is the case in Search and Rescue (SAR), and stand out for their ability to adapt to different terrains, overcome obstacles, and move within unstructured environments. Most recently developed implementations focus on data collection with sensors such as LiDAR or cameras. This work integrates a 6-DoF manipulator arm with the quadruped robot ARTU-R (A1 Rescue Tasks UPM Robot, by Unitree) to perform manipulation tasks in SAR environments. The main contribution of this work is the high-level control of the robotic set (legged robot plus manipulator) using Mixed Reality (MR). For the implementation, an optimization phase of the robotic set's workspace was first carried out in Matlab, followed by a simulation phase in Gazebo to verify the dynamic functionality of the set in reconstructed environments. To develop the MR control part of the proposed method, the first and second generations of HoloLens glasses were used and contrasted with a conventional interface. Manipulation of first-aid equipment was carried out to evaluate the proposed method. The main results show that the proposed method allows better control of the robotic set than conventional interfaces, improving operator efficiency in robotic handling tasks and increasing confidence in decision-making. In addition, HoloLens 2 provided a better user experience in terms of graphics and latency.
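
A minimal sketch of the high-level control idea, under loudly stated assumptions: the gesture names, command vocabulary, and interfaces below are invented for illustration, and the paper's actual HoloLens/ROS/Gazebo plumbing is not reproduced.

```python
from dataclasses import dataclass

@dataclass
class Command:
    subsystem: str  # "base" (legged platform) or "arm" (6-DoF manipulator)
    action: str
    target: tuple   # e.g. a goal pose or grasp point in the map frame

# Assumed mapping from MR gestures to high-level commands for the robotic set.
GESTURE_MAP = {
    "air_tap_floor":  lambda p: Command("base", "walk_to", p),
    "air_tap_object": lambda p: Command("arm", "grasp", p),
    "palm_up":        lambda p: Command("base", "stand", p),
}

def translate_gesture(gesture: str, point: tuple) -> Command:
    """Map an MR gesture plus a 3D point (from a gaze or hand ray) to a command."""
    if gesture not in GESTURE_MAP:
        raise ValueError(f"Unmapped gesture: {gesture}")
    return GESTURE_MAP[gesture](point)

# Toy usage: the operator air-taps a first-aid kit seen through the headset.
cmd = translate_gesture("air_tap_object", (1.2, 0.4, 0.3))
print(cmd)  # Command(subsystem='arm', action='grasp', target=(1.2, 0.4, 0.3))
```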

12 pages, 2376 KiB  
Article
Point Cloud Compression: Impact on Object Detection in Outdoor Contexts
by Luís Garrote, João Perdiz, Luís A. da Silva Cruz and Urbano J. Nunes
Sensors 2022, 22(15), 5767; https://doi.org/10.3390/s22155767 - 02 Aug 2022
Cited by 3 | Viewed by 1952
Abstract
Increasing demand for more reliable and safe autonomous driving means that data involved in the various aspects of perception, such as object detection, will become more granular as the number and resolution of sensors progress. Using these data for on-the-fly object detection causes problems related to the computational complexity of onboard processing in autonomous vehicles, leading to a desire to offload computation to roadside infrastructure using vehicle-to-infrastructure communication links. The need to transmit sensor data also arises in the context of vehicle fleets exchanging sensor data over vehicle-to-vehicle communication links. Some sensor data modalities, such as Light Detection and Ranging (LiDAR) point clouds, are so voluminous that their transmission is impractical without data compression. With most emerging autonomous driving implementations anchored on point cloud data, we propose to evaluate the impact of point cloud compression on object detection. To that end, two different object detection architectures are evaluated on point clouds from the KITTI object dataset, both raw and compressed with a state-of-the-art encoder at three different compression levels. The analysis is extended to the impact of compression on depth maps generated from images projected from the point clouds, with two conversion methods tested. Results show that low-to-medium levels of compression do not have a major impact on object detection performance, especially for larger objects. Results also show that the impact of point cloud compression is lower when detecting objects using depth maps, placing this particular method of point cloud data representation on a competitive footing compared with raw point cloud data.
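
To make the evaluation protocol concrete, here is a hedged sketch that substitutes Open3D voxel downsampling for the state-of-the-art codec used in the paper and a dummy detector for a trained model; only the compare-raw-versus-compressed structure carries over.

```python
import numpy as np
import open3d as o3d

def compress(points: np.ndarray, voxel_size: float) -> np.ndarray:
    """Lossy 'compression' by voxel downsampling, a simple stand-in for a
    real point cloud codec; larger voxels mean stronger compression."""
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(points)
    return np.asarray(pcd.voxel_down_sample(voxel_size=voxel_size).points)

def evaluate(detect, points: np.ndarray, levels=(0.2, 0.5, 1.0)) -> None:
    """Report a detector's output on the raw cloud vs. compressed versions
    (voxel sizes in metres)."""
    print(f"raw: {len(points)} pts, {detect(points)} detections")
    for voxel in levels:
        compressed = compress(points, voxel)
        print(f"voxel {voxel:.1f} m: {len(compressed)} pts, "
              f"{detect(compressed)} detections")

# Toy usage: a random cloud and a dummy "detector" counting high points;
# a real study would use KITTI clouds and a trained detection network.
rng = np.random.default_rng(1)
cloud = rng.uniform([-20, -20, -2], [20, 20, 2], size=(100_000, 3))
dummy_detect = lambda pts: int(np.sum(pts[:, 2] > 1.5) // 100)
evaluate(dummy_detect, cloud)
```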

Other


52 pages, 4421 KiB  
Systematic Review
Augmented Reality for Human–Robot Collaboration and Cooperation in Industrial Applications: A Systematic Literature Review
by Gabriel de Moura Costa, Marcelo Roberto Petry and António Paulo Moreira
Sensors 2022, 22(7), 2725; https://doi.org/10.3390/s22072725 - 01 Apr 2022
Cited by 29 | Viewed by 6722
Abstract
With the continuously growing use of collaborative robots in industry, the need for seamless human–robot interaction has also increased, as it is a key factor in reaching a more flexible, effective, and efficient production line. As a prominent and promising tool for helping the human operator understand and interact with robots, Augmented Reality (AR) has been employed in numerous human–robot collaborative and cooperative industrial applications. This systematic literature review therefore critically appraises 32 papers published between 2016 and 2021 to identify the main AR technologies employed, outline the current state of the art of augmented reality for human–robot collaboration and cooperation, and point out future developments for this research field. Results suggest that this is still an expanding research field, especially with the advent of recent advancements in head-mounted displays (HMDs). Moreover, projector-based and HMD-based approaches show promising positive influences on operator-related aspects such as performance, task awareness, and perceived safety, even though HMDs need further maturation in ergonomic terms. Further research should focus on large-scale assessment of the proposed solutions in industrial environments, involving the solutions' target audience, and on establishing standards and guidelines for developing AR assistance systems.
