Deep Learning for “Intelligent” Robots

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Systems & Control Engineering".

Deadline for manuscript submissions: 15 September 2024 | Viewed by 3979

Special Issue Editors


Guest Editor
School of Information Science and Engineering, Shandong University, Qingdao 266237, China
Interests: computer vision; machine learning; neural network; pattern recognition; visual robot

Guest Editor
School of Intelligent Engineering, Shandong Management University, Jinan 250357, China
Interests: industrial robot; maintenance and inspection robot; signal processing; industrial IoT; intelligent control

Guest Editor
School of Information Science and Engineering, Shandong University, Qingdao 266237, China
Interests: big data; cognitive computing; deep learning; pattern recognition; computer vision

Guest Editor
National Deep Sea Center, Qingdao 266237, China
Interests: deep sea exploration technology; underwater robot; underwater image processing

Special Issue Information

Dear Colleagues,

The continuous development of robots brings significant convenience to our lives, but it also imposes more stringent requirements on the technologies involved. Meeting the needs of this rapid development calls for continuous innovation and exploration in robotic technology.

In recent years, deep learning models such as convolutional neural networks (CNNs), long short-term memory (LSTM) networks, and graph neural networks (GNNs) have achieved remarkable progress in many fields. These networks offer strong nonlinear fitting, feature extraction and representation, flexible architecture design, and cross-scenario generalization. On the one hand, the deployment of an efficient sensor network is the cornerstone of intelligent robots: it senses and supervises the entire task, provides closed-loop feedback, and improves the whole process. On the other hand, neural networks cope well with the massive, heterogeneous data generated during sensing and adapt readily to diverse tasks. However, the design of the model architecture, the choice of objective function, the setting of hyperparameters, the construction of the optimization algorithm, and the edge deployment of trained deep learning models still hinder practical application. In addition, the development of edge computing platforms; the power and storage limitations of mobile devices; fast data collection and processing methods; cloud–edge communication efficiency; and data privacy protection are all key factors restricting progress in this field. These development directions involve many active research areas, including wireless communication, signal processing, machine learning, and automatic control.

This Special Issue aims to provide a platform for researchers and application developers to discuss the opportunities, issues, challenges, and possible solutions for integrating deep learning and edge computing in intelligent robots. It offers an opportunity to present technical strategies and empirical evidence, and to propose new tools, methodologies, approaches, frameworks, and techniques that address the deep learning-based intelligent computing challenges faced in the design and application of autonomous robots. Original research articles and reviews are welcome. Research areas may include (but are not limited to) the following:

  • Intelligent robot for underwater exploration;
  • Visual robot for navigation;
  • Deep learning-based intelligent control system;
  • Reinforcement learning-based path planning;
  • Development and application of inspection and maintenance robot;
  • Image processing driven by generative adversarial networks (GANs);
  • Optimization and generalization in deep learning;
  • Hyper-parameters setting in deep learning for robot control;
  • Architecture design in deep learning for robot operation;
  • Object detection;
  • 3D reconstruction;
  • Image classification.

Dr. Mingqiang Yang
Prof. Zhiguo Yu
Dr. Qinghe Zheng
Prof. Dr. Zhongjun Ding
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website, then completing the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • intelligent robot
  • machine learning
  • deep learning
  • neural network
  • automatic obstacle avoidance
  • path planning
  • 3D reconstruction
  • object detection
  • instance segmentation

Published Papers (3 papers)


Research

13 pages, 4418 KiB  
Article
Deep Learning-Based Ensemble Approach for Autonomous Object Manipulation with an Anthropomorphic Soft Robot Hand
by Edwin Valarezo Añazco, Sara Guerrero, Patricio Rivera Lopez, Ji-Heon Oh, Ga-Hyeon Ryu and Tae-Seong Kim
Electronics 2024, 13(2), 379; https://doi.org/10.3390/electronics13020379 - 17 Jan 2024
Viewed by 964
Abstract
Autonomous object manipulation is a challenging task in robotics because it requires an essential understanding of the object’s parameters such as position, 3D shape, grasping (i.e., touching) areas, and orientation. This work presents an autonomous object manipulation system using an anthropomorphic soft robot hand with deep learning (DL) vision intelligence for object detection, 3D shape reconstruction, and object grasping area generation. Object detection is performed using Faster R-CNN and an RGB-D sensor to produce a partial depth view of the objects randomly located in the working space. Three-dimensional object shape reconstruction is performed using a U-Net based on 3D convolutions with bottleneck layers and skip connections, generating a complete 3D shape of the object from the sensed single depth view. The grasping position and orientation are then computed from the reconstructed 3D object information (e.g., object shape and size) using a U-Net based on 3D convolutions and Principal Component Analysis (PCA), respectively. The proposed system is evaluated by grasping and relocating twelve objects not included in the training database, achieving an average of 95% successful object grasps and 93% successful object relocations.
(This article belongs to the Special Issue Deep Learning for “Intelligent” Robots)
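The abstract above computes grasp orientation from the reconstructed 3D shape via PCA. A minimal sketch of that idea, assuming the shape is available as a point cloud (the function name and toy data are invented for illustration):

```python
import numpy as np

def grasp_orientation(points: np.ndarray) -> np.ndarray:
    """Estimate an object's principal axes from an (N, 3) point cloud via PCA.

    Returns a 3x3 matrix whose rows are the principal axes, ordered from
    largest to smallest variance. The first axis approximates the object's
    elongation direction; a gripper can be aligned across it.
    """
    centered = points - points.mean(axis=0)       # remove the centroid
    cov = centered.T @ centered / len(points)     # 3x3 covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)        # eigenvalues in ascending order
    order = np.argsort(eigvals)[::-1]             # largest variance first
    return eigvecs[:, order].T                    # rows = principal axes

# Toy example: a point cloud stretched along the x-axis
rng = np.random.default_rng(0)
cloud = rng.normal(size=(500, 3)) * np.array([10.0, 1.0, 0.1])
axes = grasp_orientation(cloud)
# The dominant axis should be (close to) +/- x
```

In a real pipeline the point cloud would come from the reconstructed voxel grid, and the axes would be combined with the predicted grasping areas to pick a final pose.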

20 pages, 23576 KiB  
Article
Contingency Planning of Visual Contamination for Wheeled Mobile Robots with Chameleon-Inspired Visual System
by Yan Xu, Hongpeng Yu, Liyan Wu, Yuqiu Song and Cuihong Liu
Electronics 2023, 12(11), 2365; https://doi.org/10.3390/electronics12112365 - 24 May 2023
Viewed by 1063
Abstract
To enable mobile robots to deal effectively with the emergency of visual contamination, contingency planning based on case-based reasoning (CBR) was performed in this paper. First, for a wheeled mobile robot (WMR) equipped with a chameleon-inspired visual system, a target search model in chameleon-inspired binocular negative correlation movement (CIBNCM) mode was established. Second, a CBR-based contingency planning model of visual contamination for WMRs was established, and the reasoning process using CBR for visual contamination was analyzed in detail. Third, through an analysis of environment perception when visual contamination occurs, a chameleon-inspired visual contamination perception model for WMRs was built. Finally, to validate the proposed approach, a contingency planning experiment scheme for visual contamination was designed based on the robot’s general target-tracking planning, and the experimental results are discussed. The proposed CBR-based contingency planning approach can reason out effective solutions corresponding to the contamination situations, and its rationality was verified by experiments with satisfactory results. Moreover, compared with a contingency planning method based on rule-based reasoning, the CBR-based method used in this paper achieves significantly higher target re-tracking accuracy after the robot’s visual system is contaminated.
(This article belongs to the Special Issue Deep Learning for “Intelligent” Robots)
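The core CBR step the abstract describes is retrieving the stored case most similar to the current contamination situation and reusing its solution. A minimal sketch of that retrieve step, with the case features and solutions invented purely for illustration:

```python
from dataclasses import dataclass

@dataclass
class Case:
    """A stored contamination case: a feature vector plus the action that worked."""
    features: tuple  # e.g. (contaminated-area ratio, contamination centre x, y)
    solution: str

def retrieve(case_base: list, query: tuple) -> Case:
    """Nearest-neighbour retrieval: return the case whose feature vector has
    the smallest Euclidean distance to the query (the CBR 'retrieve' step)."""
    def dist(c: Case) -> float:
        return sum((a - b) ** 2 for a, b in zip(c.features, query)) ** 0.5
    return min(case_base, key=dist)

# Tiny hypothetical case base: (area ratio, centre x, centre y)
cases = [
    Case((0.1, 0.2, 0.8), "ignore and keep tracking"),
    Case((0.5, 0.5, 0.5), "switch to the uncontaminated camera"),
    Case((0.9, 0.5, 0.5), "stop and request cleaning"),
]
best = retrieve(cases, (0.55, 0.4, 0.6))
# best.solution -> "switch to the uncontaminated camera"
```

A full CBR cycle would follow retrieval with reuse, revision, and retention of the adapted case; the paper's feature design for contamination situations is of course richer than this toy vector.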

13 pages, 1330 KiB  
Article
Surface Defect Detection of Hot Rolled Steel Based on Attention Mechanism and Dilated Convolution for Industrial Robots
by Yuanfan Yu, Sixian Chan, Tinglong Tang, Xiaolong Zhou, Yuan Yao and Hongkai Zhang
Electronics 2023, 12(8), 1856; https://doi.org/10.3390/electronics12081856 - 14 Apr 2023
Cited by 3 | Viewed by 1468
Abstract
In the manufacturing process of industrial robots, defect detection of raw materials involves two types of tasks, which makes it difficult to guarantee detection accuracy and makes the task challenging in practical work. After analyzing the disadvantages of existing defect detection methods, such as low precision and poor generalization ability, a detection method based on an attention mechanism and a dilated convolution module is proposed. To extract features effectively, a two-stage detection framework is adopted, with ResNet50 as the pre-trained backbone of the model. On this foundation, the attention mechanism and dilated convolution are applied: the attention mechanism lets the network focus on the features of effective regions and suppress invalid regions during detection, while dilated convolution enlarges the model’s receptive field without increasing its computational cost, yielding denser features and improving the detection of small target defects. Finally, extensive experiments are conducted on the NEU-DET dataset; compared with the baseline network, the proposed method achieves 81.79% mAP.
(This article belongs to the Special Issue Deep Learning for “Intelligent” Robots)
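The abstract's claim that dilated convolution enlarges the receptive field without extra computation follows from simple arithmetic: a k×k kernel with dilation d spans k + (k − 1)(d − 1) input positions while still using only k×k weights. A small sketch of that calculation for stride-1 layers (function names are for illustration):

```python
def effective_kernel(k: int, d: int) -> int:
    """Effective span of a k-tap convolution with dilation d:
    the taps cover k + (k - 1) * (d - 1) input positions."""
    return k + (k - 1) * (d - 1)

def receptive_field(layers: list) -> int:
    """Receptive field of stacked stride-1 convolutions, given
    (kernel, dilation) pairs: each layer adds (effective span - 1)."""
    rf = 1
    for k, d in layers:
        rf += effective_kernel(k, d) - 1
    return rf

# Three 3x3 layers: plain stacking vs. dilations 1, 2, 4
plain = receptive_field([(3, 1), (3, 1), (3, 1)])    # -> 7
dilated = receptive_field([(3, 1), (3, 2), (3, 4)])  # -> 15
```

Both stacks cost the same number of multiply-accumulates per output, but the dilated stack sees more than twice the context, which is what helps with small-defect detection.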
