Intelligent Robotics

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: closed (31 August 2023) | Viewed by 12799

Special Issue Editors


Guest Editor
Autonomous University of Madrid, Francisco Tomás y Valiente 11, 28049 Madrid, Spain
Interests: robotics; multirobot systems; swarms; artificial intelligence; machine learning; adaptive and immersive interfaces; robotics in agriculture

Guest Editor
Delft University of Technology, Mekelweg 2, 2628 CD Delft, The Netherlands

Special Issue Information

Dear Colleagues,

Robots have moved beyond industrial settings and spread across many areas, such as agriculture, environmental protection, and services. These systems are especially useful in scenarios that are too remote or dangerous for humans to work in. However, robots must still overcome challenges related to autonomy in order to perform in these scenarios and to reduce the workload of their operators.

The rise of machine learning in recent years promises to address these issues. Deep learning techniques can be applied throughout robot missions: convolutional neural networks are providing robots with improved perception skills, while reinforcement learning is producing promising results in guidance, navigation, and control. Researchers are applying further techniques to equip robots with decision-making capabilities and to enhance human–robot interaction.

This Special Issue aims to collect high-quality works that apply machine learning techniques to different types of robots (ground, aerial, marine, and underwater robots) and fleets (multirobot systems and robot swarms). We welcome manuscripts presenting state-of-the-art reviews, original research, and real-world applications.

Please contact us if you have any doubts regarding your submission.

Dr. Juan Jesús Roldán-Gómez
Dr. Mario Andrei Garzón Oviedo
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Robotics
  • Machine learning
  • Deep learning
  • Reinforcement learning
  • Imitation learning
  • Perception
  • Decision-making
  • Guidance, navigation, and control
  • Manipulation and grasping
  • Human–robot interaction
  • Ground, aerial, and marine robots
  • Multirobot systems
  • Swarms

Published Papers (6 papers)


Research

15 pages, 5658 KiB  
Article
Long-Range Navigation in Complex and Dynamic Environments with Full-Stack S-DOVS
by Diego Martinez-Baselga, Luis Riazuelo and Luis Montano
Appl. Sci. 2023, 13(15), 8925; https://doi.org/10.3390/app13158925 - 03 Aug 2023
Viewed by 864
Abstract
Robotic autonomous navigation in dynamic environments is a complex problem, as traditional planners may fail to take dynamic obstacles and their variables into account. The Strategy-based Dynamic Object Velocity Space (S-DOVS) planner has been proposed as a solution for navigating in such scenarios. However, it has a number of limitations, such as the inability to reach a goal in a large known map, avoid convex objects, or handle trap situations. In this article, we present a modified version of the S-DOVS planner that is integrated into a full navigation stack, which includes a localization system, an obstacle tracker, and a novel waypoint generator. The complete system takes robot kinodynamic constraints into account and is capable of navigating through large scenarios with known map information in the presence of dynamic obstacles. Extensive simulation and ground robot experiments demonstrate the effectiveness of our system, even in environments with dynamic obstacles and replanning requirements, and show that our waypoint generator outperforms other approaches in terms of success rate and time to reach the goal when combined with the S-DOVS planner. Overall, our work represents a step forward in the development of robust and reliable autonomous navigation systems for real-world scenarios.
(This article belongs to the Special Issue Intelligent Robotics)

20 pages, 1261 KiB  
Article
Real-Time Tension Distribution Design for Cable-Driven Parallel Robot
by Sheng Cao, Zhiwei Luo and Changqin Quan
Appl. Sci. 2023, 13(1), 10; https://doi.org/10.3390/app13010010 - 20 Dec 2022
Viewed by 1583
Abstract
In this study, we investigated dynamic control strategies for over-constrained cable-driven robots. To control a cable-driven robot, it is essential to address the issues that arise from the restriction of cable tension, as well as the redundancy issues that arise from an over-constrained cable-driven system. In contrast to previous research, which required considering the relationship between tension constraints and the computed control wrench in tension distribution problems, we developed a tension function that incorporates the hyperbolic tangent, so that the tension always satisfies its constraints and the constraints need not be checked at each step. The gradient descent method was applied to this tension function to determine an appropriate distribution of tension for the computed wrench. To manage tension distribution optimization for objectives such as energy conservation, we provide a practical method to simultaneously realize the necessary wrench and an appropriate tension distribution. Compared with studies that focus on complex analysis of the structure matrix to solve the tension distribution problem, our method handles the tension distribution issue in a straightforward manner and provides solutions to other problems, such as discontinuity in the calculated wrench and the requirement of changing the cables' force level during movement. The simulation results and comparisons with other methods show the effectiveness of the method.
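The abstract's key trick, reparametrizing each cable tension through a hyperbolic tangent so that the tension limits hold by construction, can be illustrated with a minimal sketch. The structure matrix, wrench, tension limits, and step sizes below are illustrative assumptions, not values from the paper:

```python
import numpy as np

T_MIN, T_MAX = 5.0, 100.0  # illustrative cable tension limits [N]

def tension(x):
    """Map unconstrained variables x to tensions that always lie in (T_MIN, T_MAX)."""
    return T_MIN + (T_MAX - T_MIN) * (1.0 + np.tanh(x)) / 2.0

def solve_tensions(A, w, steps=5000, lr=2e-4):
    """Gradient descent on x so that A @ tension(x) matches the desired wrench w."""
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        t = tension(x)
        residual = A @ t - w                             # wrench error
        dt_dx = (T_MAX - T_MIN) / 2.0 * (1.0 - np.tanh(x) ** 2)
        grad = (A.T @ residual) * dt_dx                  # chain rule through tanh
        x -= lr * grad
    return tension(x)

# Toy planar example: three cables acting on a point, 2D force balance.
A = np.array([[1.0, -1.0, 0.0],
              [0.5,  0.5, 1.0]])
w = np.array([10.0, 40.0])
t = solve_tensions(A, w)
print(t, A @ t)
```

Because tanh maps the reals into (−1, 1), every iterate respects the tension limits, so the constraints never have to be checked during the descent, which matches the motivation described in the abstract.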
(This article belongs to the Special Issue Intelligent Robotics)

17 pages, 4117 KiB  
Article
Solving a Simple Geduldspiele Cube with a Robotic Gripper via Sim-to-Real Transfer
by Ji-Hyeon Yoo, Ho-Jin Jung, Jang-Hyeon Kim, Dae-Han Sim and Han-Ul Yoon
Appl. Sci. 2022, 12(19), 10124; https://doi.org/10.3390/app121910124 - 09 Oct 2022
Cited by 1 | Viewed by 1489
Abstract
Geduldspiele cubes (known as patience cubes in English) are interesting problems to solve with robotic systems using machine learning approaches, as solving them generally requires highly dexterous hand and finger movement. In this paper, we propose a reinforcement-learning-based approach to solve simple Geduldspiele cubes with a flat plane, a convex plane, and a concave plane. The key idea of the proposed approach is a sim-to-real framework: a robotic agent is first trained in a simulation environment via reinforcement learning, and the virtually trained agent is then deployed onto a physical robotic system and evaluated on the tasks in the real world. We developed a test bed consisting of a dual-arm robot holding a patience cube in a gripper and the virtual avatar system trained in the simulation world. The experimental results showed that the virtually trained agent was able to solve the simple patience cubes in the real world as well. Based on these results, we expect that more complex patience cubes could be solved by augmenting the proposed approach with versatile reinforcement learning algorithms.
(This article belongs to the Special Issue Intelligent Robotics)

30 pages, 10023 KiB  
Article
Using Deep Reinforcement Learning with Automatic Curriculum Learning for Mapless Navigation in Intralogistics
by Honghu Xue, Benedikt Hein, Mohamed Bakr, Georg Schildbach, Bengt Abel and Elmar Rueckert
Appl. Sci. 2022, 12(6), 3153; https://doi.org/10.3390/app12063153 - 19 Mar 2022
Cited by 10 | Viewed by 3362
Abstract
We propose a deep reinforcement learning approach for solving a mapless navigation problem in warehouse scenarios. In our approach, an automated guided vehicle is equipped with two LiDAR sensors and one frontal RGB camera and learns to perform a targeted navigation task. The challenges reside in the sparseness of positive samples for learning, multi-modal sensor perception with partial observability, the demand for accurate steering maneuvers, and long training cycles. To address these points, we propose NavACL-Q, an automatic curriculum learning method, in combination with a distributed version of the soft actor-critic algorithm. The performance of the learning algorithm is evaluated exhaustively in a different warehouse environment to validate both the robustness and generalizability of the learned policy. Results in NVIDIA Isaac Sim demonstrate that our trained agent significantly outperforms the map-based navigation pipeline provided by NVIDIA Isaac Sim, with an increased agent–goal distance of 3 m and a wider initial relative agent–goal rotation of approximately 45°. The ablation studies also suggest that NavACL-Q greatly facilitates the whole learning process, with a performance gain of roughly 40% compared to training with random starts, and that a pre-trained feature extractor manifestly boosts the performance by approximately 60%.
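The automatic-curriculum idea can be sketched independently of the paper's NavACL-Q specifics: start the agent close to the goal and widen the initial agent–goal distance as the recent success rate improves. The thresholds, distance range, and toy success model below are assumptions for illustration only:

```python
import random

random.seed(0)  # reproducible toy run

class DistanceCurriculum:
    """Adapt the initial agent-goal distance to the agent's recent success rate."""

    def __init__(self, d_min=0.5, d_max=3.0, window=20):
        self.d = d_min                    # current curriculum distance [m]
        self.d_min, self.d_max = d_min, d_max
        self.window = window              # episodes used to estimate success rate
        self.results = []

    def sample_start_distance(self):
        # Uniformly sample a start distance up to the current curriculum level.
        return random.uniform(self.d_min, self.d)

    def report(self, success):
        self.results.append(success)
        recent = self.results[-self.window:]
        rate = sum(recent) / len(recent)
        if rate > 0.8 and self.d < self.d_max:
            self.d = min(self.d_max, self.d * 1.2)   # task is easy: harden it
        elif rate < 0.3 and self.d > self.d_min:
            self.d = max(self.d_min, self.d * 0.8)   # task is hard: ease it

curriculum = DistanceCurriculum()
for episode in range(200):
    d = curriculum.sample_start_distance()
    # Toy stand-in for a trained policy: nearer starts succeed more often.
    success = random.random() < 1.0 - 0.2 * d
    curriculum.report(success)
print(f"final curriculum distance: {curriculum.d:.2f} m")
```

In the paper, the curriculum additionally uses learned estimates of task difficulty; this sketch only shows the success-rate-driven widening of the start distribution.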
(This article belongs to the Special Issue Intelligent Robotics)

19 pages, 8970 KiB  
Article
Artificial Subjectivity: Personal Semantic Memory Model for Cognitive Agents
by Aumm-e-hani Munir and Wajahat Mahmood Qazi
Appl. Sci. 2022, 12(4), 1903; https://doi.org/10.3390/app12041903 - 11 Feb 2022
Viewed by 2076
Abstract
Personal semantic memory is a way of inducing subjectivity in intelligent agents. In humans, personal semantic memory holds knowledge related to personal beliefs, self-knowledge, preferences, and perspectives. Modeling this cognitive feature in an intelligent agent can help it in perception, learning, reasoning, and judgment. This paper presents a methodology for the development of personal semantic memory in response to external information. The main contribution of this work is to propose and implement a computational version of personal semantic memory. The proposed model has modules for perception, learning, sentiment analysis, knowledge representation, and personal semantic construction. These modules work in synergy for personal semantic knowledge formulation, learning, and storage. Personal semantics are added to the existing body of knowledge qualitatively and quantitatively. We performed multiple experiments in which the agent held conversations with humans. The results show an increase in personal semantic knowledge in the agent's memory during conversations, with an F1 score of 0.86. These personal semantics evolved qualitatively and quantitatively over the course of the experiments. The results demonstrate that agents with the given architecture possess personal semantics that can help them produce some form of subjectivity in the future.
(This article belongs to the Special Issue Intelligent Robotics)

14 pages, 4555 KiB  
Article
Improving Robot Perception Skills Using a Fast Image-Labelling Method with Minimal Human Intervention
by Carlos Ricolfe-Viala and Carlos Blanes
Appl. Sci. 2022, 12(3), 1557; https://doi.org/10.3390/app12031557 - 31 Jan 2022
Cited by 4 | Viewed by 1826
Abstract
Robot perception skills contribute to natural interfaces that enhance human–robot interactions, and they can be notably improved by using convolutional neural networks. To train a convolutional neural network, the labelling process is the crucial first stage, in which image objects are marked with rectangles or masks. There are many image-labelling tools, but all require human interaction to achieve good results. Manual image labelling with rectangles or masks is labor-intensive work that can take months to complete. This paper proposes a fast method to create labelled images with minimal human intervention, which is tested on a robot perception task. Images of objects taken against specific backgrounds are quickly and accurately labelled with rectangles or masks. In a second step, the detected objects can be composited with different backgrounds to improve the training capabilities of the image set. Experimental results show the effectiveness of this method with an example of human–robot interaction using hand fingers. The labelling method generates a database for training convolutional networks to detect hand fingers with minimal labelling work, and it can be applied to new image sets or used to add new samples to existing labelled image sets of any application. The proposed method noticeably improves the labelling process and reduces the time required to start training a convolutional neural network model.
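The two-step labelling idea described in the abstract, segmenting an object against a known background and then compositing it onto new backgrounds, can be sketched with plain NumPy. The threshold, image sizes, and synthetic object are illustrative assumptions, not details from the paper:

```python
import numpy as np

def auto_label(image, background, threshold=30):
    """Return a boolean mask and bounding box (x0, y0, x1, y1) for the object."""
    # Per-pixel color difference against the known background.
    diff = np.abs(image.astype(int) - background.astype(int)).sum(axis=2)
    mask = diff > threshold
    ys, xs = np.nonzero(mask)
    box = (int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max()))
    return mask, box

def composite(image, mask, new_background):
    """Paste the masked object onto a different background to enlarge the set."""
    out = new_background.copy()
    out[mask] = image[mask]
    return out

# Synthetic example: grey background with a bright 20x20 "object".
bg = np.full((100, 100, 3), 50, dtype=np.uint8)
img = bg.copy()
img[40:60, 30:50] = 200
mask, box = auto_label(img, bg)
new_bg = np.full((100, 100, 3), 120, dtype=np.uint8)
sample = composite(img, mask, new_bg)
print(box)  # → (30, 40, 49, 59)
```

A real pipeline would add morphological clean-up and handle shadows and lighting changes, but the sketch shows why a controlled background makes the rectangle and mask labels nearly free.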
(This article belongs to the Special Issue Intelligent Robotics)
