Recent Advances in Industrial Robots

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Systems & Control Engineering".

Deadline for manuscript submissions: closed (30 April 2023) | Viewed by 7112

Special Issue Editors


Guest Editor
Department of Computer Science, University of Nebraska at Omaha, Omaha, NE 68182, USA
Interests: cyber-physical systems; real-time systems; robotics; machine learning; wireless communication/networking systems

Guest Editor
School of Automation, Beijing University of Posts and Telecommunications, Beijing 100876, China
Interests: robotic vision; robotic sensing and navigation; industrial augmented/mixed reality

Special Issue Information

Dear Colleagues,

Today’s industrial robots serve a wide range of industries and applications, from metal forging and plastic forming to semiconductor and automobile production, and they continue to evolve rapidly. This growth is driven largely by manufacturers that plan to use robots to address the looming shortage of skilled labor. Recent advances in industrial robotics therefore feature advanced actuators, improved sensing and perception, better batteries, lightweight body materials, and sophisticated control algorithms, as well as data-processing capabilities built on artificial intelligence (AI), machine learning, and Internet of Things technologies. Together, these capabilities give rise to a broad range of spectacular developments and a new generation of industrial robots: to perform tasks in unstructured environments, robots can intelligently coordinate distinct actions to carry out unseen and long-horizon tasks.

In this Special Issue, we are particularly interested in emerging trends in the development of industrial robots that will have a significant impact on the manufacturing, construction, and industrial sectors. This Special Issue aims to continue to highlight recent advances in industrial robots. Topics include, but are not limited to:

  • Intelligent industrial robotics;
  • AI-enabled robotics;
  • Cloud robots;
  • Collaborative robots;
  • Industrial Internet of Things (IIoT);
  • Robotics and automation;
  • Computer architecture for robotics;
  • Computer vision for automation and manufacturing;
  • Advanced sensing and navigation;
  • Motion and path planning;
  • Planning, scheduling and coordination;
  • Human–robot collaborations;
  • AR/MR in robotics;
  • Digital twin for industrial robotics.

Dr. Pei-Chi Huang
Dr. Wei Fang
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • intelligent industrial robotics
  • AI-enabled robotics
  • cloud robots
  • collaborative robots
  • Industrial Internet of Things (IIoT)
  • robotics and automation
  • computer architecture for robotics
  • computer vision for automation and manufacturing
  • advanced sensing and navigation
  • motion and path planning
  • planning, scheduling and coordination
  • human–robot collaborations
  • AR/MR in robotics
  • digital twin for industrial robotics

Published Papers (4 papers)


Research

16 pages, 5072 KiB  
Article
HTC-Grasp: A Hybrid Transformer-CNN Architecture for Robotic Grasp Detection
by Qiang Zhang, Jianwei Zhu, Xueying Sun and Mingmin Liu
Electronics 2023, 12(6), 1505; https://doi.org/10.3390/electronics12061505 - 22 Mar 2023
Cited by 1 | Viewed by 1635
Abstract
Accurately detecting suitable grasp areas for unknown objects through visual information remains a challenging task. Drawing inspiration from the success of the Vision Transformer in vision detection, a hybrid Transformer-CNN architecture for robotic grasp detection, known as HTC-Grasp, is developed to improve the accuracy of grasping unknown objects. The architecture employs an external-attention-based hierarchical Transformer as an encoder to effectively capture global context and correlation features across the entire dataset. Furthermore, a channel-wise attention-based CNN decoder is presented to adaptively adjust the channel weights, resulting in more efficient feature aggregation. The proposed method is validated on the Cornell and Jacquard datasets, achieving image-wise detection accuracies of 98.3% and 95.8%, respectively. Additionally, object-wise detection accuracies of 96.9% and 92.4% are achieved on the same datasets. A physical experiment is also performed using the Elite 6Dof robot, with a grasping success rate of 93.3%, demonstrating the proposed method’s ability to grasp unknown objects in real scenarios. The results of this study indicate that the proposed method outperforms other state-of-the-art methods. Full article
(This article belongs to the Special Issue Recent Advances in Industrial Robots)
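The accuracies quoted in the abstract follow the rectangle metric commonly used on the Cornell and Jacquard benchmarks: a predicted grasp counts as correct when its rectangle overlaps a ground-truth rectangle with IoU above 0.25 and their orientations differ by less than 30°. A minimal sketch of that check, using an axis-aligned IoU as a simplification (the full metric intersects rotated rectangles):

```python
def iou(box_a, box_b):
    # Axis-aligned intersection-over-union as a simplification;
    # the benchmark metric overlaps the full rotated rectangles.
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union else 0.0

def grasp_success(pred, truth, iou_thresh=0.25, angle_thresh=30.0):
    # pred/truth: ((x1, y1, x2, y2), angle_in_degrees).
    # Correct when IoU > 0.25 and orientation differs by < 30 degrees,
    # with angles compared modulo 180 (a grasp is symmetric).
    box_p, ang_p = pred
    box_t, ang_t = truth
    d = abs(ang_p - ang_t) % 180
    d = min(d, 180 - d)
    return iou(box_p, box_t) > iou_thresh and d < angle_thresh
```

In practice a prediction is scored against every ground-truth rectangle for the object and counts as correct if it matches any of them.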

17 pages, 5209 KiB  
Article
Mobile Robot Gas Source Localization Using SLAM-GDM with a Graphene-Based Gas Sensor
by Wan Abdul Syaqur Norzam, Huzein Fahmi Hawari, Kamarulzaman Kamarudin, Zaffry Hadi Mohd Juffry, Nurul Athirah Abu Hussein, Monika Gupta and Abdulnasser Nabil Abdullah
Electronics 2023, 12(1), 171; https://doi.org/10.3390/electronics12010171 - 30 Dec 2022
Cited by 3 | Viewed by 1907
Abstract
Mobile olfaction is one of the applications of mobile robots. Metal oxide sensors (MOX) are mobile robots’ most popular gas sensors. However, the sensor has drawbacks, such as high-power consumption, high operating temperature, and long recovery time. This research compares a reduced graphene oxide (RGO) sensor with the traditionally used MOX in a mobile robot. The method uses a map created from simultaneous localization and mapping (SLAM) combined with gas distribution mapping (GDM) to draw the gas distribution in the map and locate the gas source. RGO and MOX are tested in the lab for their response to 100 and 300 ppm ethanol. Both sensors’ response and recovery times show that RGO resulted in 56% and 54% faster response times, with 33% and 57% shorter recovery times than MOX. In the experiment, one gas source, 95% ethanol solution, is placed in the lab, and the mobile robot runs through the map in 7 min and 12 min after the source is set, with five repetitions. The results show the average distance error of the predicted source from the actual location was 19.52 cm and 30.28 cm using MOX and 25.24 cm and 30.60 cm using the RGO gas sensor for the 7th and 12th min trials, respectively. The errors show that the predicted gas source location based on MOX is 1.0% (12th min), much closer to the actual site than that predicted with RGO. However, RGO also shows a larger gas sensing area than MOX by 0.35–8.33% based on the binary image of the SLAM-GDM map, which indicates that RGO is much more sensitive than MOX in the trial run. Regarding power consumption, RGO consumes an average of 294.605 mW, 56.33% less than MOX, with an average consumption of 674.565 mW. The experiment shows that RGO can perform as well as MOX in mobile olfaction applications but with lower power consumption and operating temperature. Full article
(This article belongs to the Special Issue Recent Advances in Industrial Robots)
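The SLAM-GDM pipeline described above fuses robot poses from SLAM with gas-sensor readings into a gas distribution map, and the source is predicted at the concentration maximum. A hypothetical kernel-based GDM sketch in the spirit of Kernel DM (Gaussian weighting of readings onto a grid; the grid size, cell size, and kernel width below are illustrative assumptions, not the paper's parameters):

```python
import numpy as np

def kernel_dm(positions, readings, grid_shape, cell_size, sigma):
    """Gaussian-kernel gas distribution mapping sketch: each reading is
    spread onto nearby grid cells with a Gaussian weight, and each cell
    stores the weighted mean concentration."""
    weight = np.zeros(grid_shape)
    conc = np.zeros(grid_shape)
    ys, xs = np.indices(grid_shape)
    centers_x = (xs + 0.5) * cell_size
    centers_y = (ys + 0.5) * cell_size
    for (px, py), r in zip(positions, readings):
        d2 = (centers_x - px) ** 2 + (centers_y - py) ** 2
        w = np.exp(-d2 / (2 * sigma ** 2))
        weight += w
        conc += w * r
    return np.where(weight > 0, conc / np.maximum(weight, 1e-12), 0.0)

def predicted_source(gdm, cell_size):
    # The predicted source is the center of the cell with the
    # highest mapped concentration.
    iy, ix = np.unravel_index(np.argmax(gdm), gdm.shape)
    return ((ix + 0.5) * cell_size, (iy + 0.5) * cell_size)
```

The distance errors reported in the abstract are then simply the Euclidean distance between this predicted cell center and the true source location.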

10 pages, 2813 KiB  
Article
A Non-Local Tensor Completion Algorithm Based on Weighted Tensor Nuclear Norm
by Wenzhe Wang, Jingjing Zheng, Li Zhao, Huiling Chen and Xiaoqin Zhang
Electronics 2022, 11(19), 3250; https://doi.org/10.3390/electronics11193250 - 9 Oct 2022
Cited by 4 | Viewed by 1402
Abstract
In this paper, we propose an image inpainting algorithm consisting of an interpolation step and a non-local tensor completion step based on a weighted tensor nuclear norm. Specifically, the proposed algorithm first adopts a triangulation-based linear interpolation algorithm to pre-fill the observed image. Second, we extract non-local similar patches from the image using the patch-match algorithm and rearrange them into similarity tensors. Then, we use the tensor completion algorithm based on the weighted tensor nuclear norm to recover the similar tensors. Finally, we regroup all the recovered tensors to obtain the final recovered image. Image inpainting experiments on color RGB images show that the performance of the proposed algorithm in terms of peak signal-to-noise ratio (PSNR) and relative standard error (RSE) is significantly better than that of other image inpainting methods. Full article
(This article belongs to the Special Issue Recent Advances in Industrial Robots)

16 pages, 2696 KiB  
Article
A Fusion Model for Saliency Detection Based on Semantic Soft Segmentation
by Jie Tao, Yaocai Wu, Xiaolong Zhou, Qike Shao and Sixian Chan
Electronics 2022, 11(17), 2712; https://doi.org/10.3390/electronics11172712 - 29 Aug 2022
Viewed by 1207
Abstract
With the rapid development of neural networks in recent years, saliency detection based on deep learning has made great breakthroughs. Most deep saliency detection algorithms are based on convolutional neural networks, which still leave considerable room for improvement in the edge accuracy of salient object recognition and may produce fuzzy results in practical applications such as image matting. To improve detection accuracy, a saliency detection model based on semantic soft segmentation is proposed in this paper. First, the semantic segmentation module combines spectral extinction and a residual network model to obtain low-level color features and high-level semantic features, which can clearly segment all kinds of objects in the image. Then, the saliency detection module locates the position and contour of the main object, and edge-accurate results are obtained after processing by the two modules. Finally, compared with 11 other algorithms on the DUTS-TEST dataset, the weighted F-measure of the proposed algorithm ranked first, 5.8% higher than that of the original saliency detection algorithm, significantly improving accuracy. Full article
(This article belongs to the Special Issue Recent Advances in Industrial Robots)
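The weighted F-measure used for ranking above extends the standard F-measure common in saliency benchmarks. A minimal sketch of the plain (unweighted) variant for a binarized saliency map, with the conventional β² = 0.3 that emphasizes precision over recall:

```python
import numpy as np

def f_measure(pred, gt, beta2=0.3):
    """Standard saliency F-measure on binary maps. beta2 is beta squared;
    the usual choice 0.3 weights precision more heavily than recall."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    precision = tp / max(pred.sum(), 1)
    recall = tp / max(gt.sum(), 1)
    if precision + recall == 0:
        return 0.0
    return (1 + beta2) * precision * recall / (beta2 * precision + recall)
```

The weighted variant replaces the binary true/false-positive counts with pixel weights that account for location and dependency between errors, which penalizes blurry or misplaced boundaries more faithfully.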
