Autonomous Intelligent Robots and Systems

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Robotics and Automation".

Deadline for manuscript submissions: closed (10 March 2024) | Viewed by 4626

Special Issue Editors


Prof. Dr. Nanfeng Xiao
Guest Editor
School of Computer Science & Engineering, South China University of Technology, Guangzhou 510641, China
Interests: intelligent robot research and development; artificial intelligence and computer application technology

Prof. Dr. Guangcun Shan
Guest Editor
School of Instrumentation Science and Opto-electronics Engineering, Beijing Advanced Innovation Center for Big Data-based Precision Medicine, Beihang University, Beijing 100191, China
Interests: plasmons; terahertz; photodetectors; object detection; CNN; IoU; electromagnetic shielding; effective bandwidth; electric network analyzers

Special Issue Information

Dear Colleagues,

At present, research into autonomous intelligent robots and systems must address the following key problems:

1. Traditional audio-visual perception and task-control methods are tied to the control of a given task and cannot cope with complex environments.
2. Existing AI methods cannot effectively and comprehensively solve the main problems faced in visual perception, task control, and deep learning.
3. Existing audio-visual perception methods need improvement in feature extraction and learning efficiency, which high-performance computing can effectively address.
4. In existing visual perception research, convolutional neural networks (CNNs), the residual neural network (ResNet), and the dense connection network (DenseNet) are commonly used for image recognition, face recognition, image semantic segmentation, etc. As convolutional network parameters expand and network structures are updated, training costs also increase greatly.
5. Existing dexterous-hand control methods essentially regulate the position, force feedback, or impedance of fingers equipped with independent force/torque sensors at the fingertips; none of these methods considers or controls the contact forces at the many tactile points on the surfaces of the fingers and palm.

The existing control methods of autonomous robots and systems cannot effectively solve the above problems. It is therefore urgent to study the characteristics, requirements, and limitations of audio-visual perception and task-control methods for autonomous robots and systems, so as to further improve the ability of these robots and systems to complete various tasks.

Prof. Dr. Nanfeng Xiao
Prof. Dr. Guangcun Shan
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • autonomous intelligent robots and systems
  • audio-visual perception
  • task control methods
  • deep learning
  • high-performance computing
  • dexterous hands
  • force/torque control methods

Published Papers (2 papers)


Research

19 pages, 1383 KiB  
Article
Research on Applying the “Shift” Concept to Deep Attention Matching
by Kai Hu and Nanfeng Xiao
Appl. Sci. 2023, 13(6), 3934; https://doi.org/10.3390/app13063934 - 20 Mar 2023
Viewed by 875
Abstract
The main purpose of chat robots is to realize intelligent interaction between human beings and machines. A complete conversation generally involves chat contexts: before responding, human beings extract information from the chat context, and they are very good at doing so. How to make chatbots extract appropriate contextual information more completely has therefore become a key issue for chatbots in multi-round dialogues. Most research in recent years has paid attention to point-to-point matching of responses and utterances at reciprocal granularity, such as SMN (Sequential Matching Network) and DAM (Deep Attention Matching Network), and these parsing methods affect the effectiveness of such networks to some extent. For example, DAM introduces an attention mechanism that obtains good results by parsing five levels of granularity of response and utterance through a self-attention module and then matching the same levels of granularity to mine similar spatial information. Based on the structure of DAM, this paper proposes an information-mining idea for improving DAM that applies to models with a multi-layer matching structure, and accordingly presents two improved methods for DAM to raise the accuracy of information extraction from the contexts of multi-round chats. The experiments show that the improved DAM performs better than the unimproved DAM: R_2@1 increased by 0.425%, R_10@1 by 0.515%, R_10@2 by 0.341%, and MAP (Mean Average Precision) by 0.358% on the Douban data set, while P@1 increased by 1.16% and R_10@1 by 1.17%. These results are better than those of state-of-the-art methods, and the improved methods presented in this paper can be used not only in DAM but also in other models with similar point-to-point matching structures.
(This article belongs to the Special Issue Autonomous Intelligent Robots and Systems)
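The multi-granularity representations that the abstract attributes to DAM's self-attention module can be sketched in a few lines of NumPy. This is an illustrative reconstruction under stated assumptions, not the authors' code: the function names are hypothetical, and the sketch shows only single-head scaled dot-product self-attention stacked to produce successively deeper granularities.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X):
    # X: (seq_len, d) token embeddings; queries, keys, and values
    # all come from X, as in a self-attention module.
    d = X.shape[-1]
    scores = X @ X.T / np.sqrt(d)       # pairwise scaled similarities
    weights = softmax(scores, axis=-1)  # attention distribution per token
    return weights @ X                  # context-enriched representations

def stacked_granularities(X, layers=5):
    # Stacking self-attention yields one representation per granularity
    # level (5 levels in DAM); each layer attends over the previous one.
    reps = [X]
    for _ in range(layers - 1):
        reps.append(self_attention(reps[-1]))
    return reps
```

Each level in the returned list keeps the input shape, so representations of the response and of each utterance can then be matched level against level, which is the point-to-point structure the paper's improvements target.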

15 pages, 4411 KiB  
Article
Underwater Object Detection Method Based on Improved Faster RCNN
by Hao Wang and Nanfeng Xiao
Appl. Sci. 2023, 13(4), 2746; https://doi.org/10.3390/app13042746 - 20 Feb 2023
Cited by 9 | Viewed by 3092
Abstract
In order to better utilize and protect marine organisms, reliable underwater object detection methods need to be developed. Owing to the many influencing factors of complex and changeable underwater environments, underwater object detection is full of challenges. This paper therefore improves the two-stage Faster RCNN (Regions with Convolutional Neural Network Feature) algorithm to detect holothurians, echinus, scallops, starfish, and waterweeds, and the improved algorithm performs better in underwater object detection. Firstly, the backbone network of the Faster RCNN is improved by replacing the VGG16 (Visual Geometry Group Network 16) structure in the original feature extraction module with the Res2Net101 network, enhancing the expressive ability of the receptive field of each network layer. Secondly, the OHEM (Online Hard Example Mining) algorithm is introduced to address the imbalance between positive and negative bounding-box samples. Thirdly, GIOU (Generalized Intersection Over Union) and Soft-NMS (Soft Non-Maximum Suppression) are used to optimize the regression mechanism of the bounding box. Finally, the improved Faster RCNN model is trained with a multi-scale strategy to enhance its robustness. Ablation experiments, in which each improvement is added and evaluated in turn, show that the improved Faster RCNN model reaches an mAP@0.5 of 71.7%, which is 3.3% higher than the original Faster RCNN model, an average accuracy of 43%, and an F1-score of 55.3%, a 2.5% improvement over the original model, demonstrating that the proposed method is effective for underwater object detection.
(This article belongs to the Special Issue Autonomous Intelligent Robots and Systems)
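The GIOU metric mentioned in the abstract extends plain IoU by penalizing the empty area of the smallest box enclosing both boxes, which gives a useful regression signal even for non-overlapping boxes. A minimal sketch, assuming boxes in (x1, y1, x2, y2) corner format with positive area; the helper name is hypothetical and this is not the paper's implementation:

```python
def giou(box_a, box_b):
    # Boxes as (x1, y1, x2, y2) with x1 < x2 and y1 < y2.
    xa1, ya1, xa2, ya2 = box_a
    xb1, yb1, xb2, yb2 = box_b
    # Intersection rectangle (zero area if the boxes are disjoint).
    iw = max(0.0, min(xa2, xb2) - max(xa1, xb1))
    ih = max(0.0, min(ya2, yb2) - max(ya1, yb1))
    inter = iw * ih
    area_a = (xa2 - xa1) * (ya2 - ya1)
    area_b = (xb2 - xb1) * (yb2 - yb1)
    union = area_a + area_b - inter
    iou = inter / union
    # Smallest axis-aligned box enclosing both inputs.
    c_area = (max(xa2, xb2) - min(xa1, xb1)) * (max(ya2, yb2) - min(ya1, yb1))
    # GIOU subtracts the fraction of the enclosing box not covered
    # by the union, so the value lies in (-1, 1].
    return iou - (c_area - union) / c_area
```

For identical boxes GIOU equals 1; for disjoint boxes it goes negative, which is why it is preferred over IoU as a bounding-box regression target.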
