Human-Robot Interaction for Intelligent Education and Engineering Applications

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Intelligent Sensors".

Deadline for manuscript submissions: 31 December 2024 | Viewed by 28900

Special Issue Editors

Faculty of Artificial Intelligence in Education, Central China Normal University, Wuhan 430079, China
Interests: intelligent sensors; educational robot; human-robot interaction; head pose estimation; facial expression recognition; gaze estimation; human pose estimation; machine learning

Guest Editor
Department of Computer Science, University of Bristol, Bristol BS8 1UB, UK
Interests: human-computer interaction; intelligent sensors; soft robotics; material engineering

Guest Editor
College of Information Science and Engineering, Hunan Normal University, Changsha 410081, China
Interests: intelligent education; computer vision; pattern recognition; machine learning; robot sensing

Guest Editor
School of Education, Hubei University, No. 368 Youyi Road, Wuhan 430062, China
Interests: infrared sensor; intelligent education; deep learning; pattern recognition; learning behavior analysis; K-12 education

Special Issue Information

Dear Colleagues,

We are happy to invite you to submit a paper for a Special Issue titled “Human-Robot Interaction for Intelligent Education and Engineering Applications”. The details can be found below.

Recent years have seen significant improvements in robotics and sensors, such as humanoid robots, educational robots, and robot vision, which have led to the introduction of an advanced form of education using human–robot interaction (HRI). While intelligent education has been of interest to the research community for some time, the way the COVID-19 pandemic swept across the world, halting traditional education, led to increased attention being given to intelligent education, especially online education, educational agents, intelligent tutors, and educational robots.

New trends in education require novel sensors and technologies. HRI is a research hotspot in robotics and one of the research directions closest to industrialization, so its application to intelligent education is an important research domain for both robotics and education. Relevant novel sensors include RGB-D cameras, infrared cameras, millimeter-wave sensors, event cameras, 3D point cloud cameras, and laser imaging.

The application prospects of HRI in education are very promising; however, there are a number of challenges associated with this endeavor which require a great deal of time and energy to address. For example, the automatic understanding of students’ emotions requires facial expression recognition, and educational robots need to be adaptive in order to deliver precise interventions to students. Additionally, robot vision can be utilized to estimate the head pose angle and gaze line of students, from which students’ attention in the classroom can be inferred. Moreover, robot vision can be leveraged in educational technology, allowing students to more readily acquire knowledge and making abstract knowledge more intuitive.

Most educational robots have only motion and control systems but no sensing system, so the addition of sensors will undoubtedly improve the intelligence of educational robots. Indeed, more effective applications are needed to address the problems present in this area, and this Special Issue aims to provide an opportunity for researchers to publish their theoretical and technological studies on emerging theories in HRI-based intelligent education and their engineering applications within this domain.

We invite authors to submit original research, new developments, experimental works, and surveys in the fields of human–robot interaction, intelligent education, and vision-based engineering applications. The topics of interest of this Special Issue include, but are not limited to:

  • Deep-learning-based approaches to teaching/learning behavior analysis;
  • Face, gesture and body analysis in classrooms/online learning platforms;
  • Sensor-based information fusion in human–robot interaction applications;
  • Interactive learning behavior analysis, including human–computer interaction and human–robot interaction;
  • Robot-vision-based head pose estimation and facial expression recognition;
  • Robot-vision-based gaze estimation and human pose estimation;
  • Databases and open-source tools for teaching/learning behavior analysis;
  • Combining robot vision with other modalities (e.g., audio, biosignals) for student behavior analysis;
  • Image processing, communications and signal processing for HRI;
  • Design of remote sensors, 3D sensing, and millimeter-wave sensors for intelligent education;
  • Virtual reality and augmented reality for educational robotics;
  • HRI applications in education, engineering, and related fields.

Dr. Hai Liu
Dr. Anne Roudaut
Dr. Zhanpeng Shao
Dr. Tingting Liu
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • human–robot interaction
  • learning behavior analysis
  • robot vision
  • machine learning
  • robot limb grasping
  • computer vision
  • human–computer interaction
  • learning quality
  • online education
  • artificial intelligence

Published Papers (14 papers)


Research

19 pages, 11345 KiB  
Article
ST-TGR: Spatio-Temporal Representation Learning for Skeleton-Based Teaching Gesture Recognition
by Zengzhao Chen, Wenkai Huang, Hai Liu, Zhuo Wang, Yuqun Wen and Shengming Wang
Sensors 2024, 24(8), 2589; https://doi.org/10.3390/s24082589 - 18 Apr 2024
Viewed by 381
Abstract
Teaching gesture recognition is a technique used to recognize the hand movements of teachers in classroom teaching scenarios. This technology is widely used in education, including for classroom teaching evaluation, enhancing online teaching, and assisting special education. However, current research on gesture recognition in teaching mainly focuses on detecting the static gestures of individual students and analyzing their classroom behavior. To analyze the teacher’s gestures and mitigate the difficulty of single-target dynamic gesture recognition in multi-person teaching scenarios, this paper proposes skeleton-based teaching gesture recognition (ST-TGR), which learns through spatio-temporal representation. This method mainly uses the human pose estimation technique RTMPose to extract the coordinates of the keypoints of the teacher’s skeleton and then inputs the recognized sequence of the teacher’s skeleton into the MoGRU action recognition network for classifying gesture actions. The MoGRU action recognition module mainly learns the spatio-temporal representation of target actions by stacking a multi-scale bidirectional gated recurrent unit (BiGRU) and using improved attention mechanism modules. To validate the generalization of the action recognition network model, we conducted comparative experiments on datasets including NTU RGB+D 60, UT-Kinect Action3D, SBU Kinect Interaction, and Florence 3D. The results indicate that, compared with most existing baseline models, the model proposed in this article exhibits better performance in recognition accuracy and speed.
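Before a skeleton sequence reaches a recurrent recognizer of this kind, the keypoint coordinates are typically normalized per frame. The sketch below centers each frame on a root joint and scales by the distance between two reference joints; the joint indices and the centering/scaling scheme are illustrative assumptions, not details taken from the paper:

```python
def normalize_skeleton(frames, root=0, ref=1):
    # Center each frame's 2D keypoints on a root joint and scale by the
    # root-to-reference distance, a common preprocessing step before
    # feeding a skeleton sequence to an action recognition network.
    out = []
    for joints in frames:
        rx, ry = joints[root]
        dx = joints[ref][0] - rx
        dy = joints[ref][1] - ry
        scale = (dx * dx + dy * dy) ** 0.5 or 1.0  # guard degenerate frames
        out.append([((x - rx) / scale, (y - ry) / scale) for x, y in joints])
    return out
```

After this step, the recognizer sees pose sequences that are invariant to the teacher's position and distance from the camera.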

17 pages, 1966 KiB  
Article
The Effects of Different Motor Teaching Strategies on Learning a Complex Motor Task
by Tjasa Kunavar, Marko Jamšek, Edwin Johnatan Avila-Mireles, Elmar Rueckert, Luka Peternel and Jan Babič
Sensors 2024, 24(4), 1231; https://doi.org/10.3390/s24041231 - 15 Feb 2024
Viewed by 553
Abstract
During the learning of a new sensorimotor task, individuals are usually provided with instructional stimuli and relevant information about the target task. The inclusion of haptic devices in the study of this kind of learning has greatly helped in the understanding of how an individual can improve or acquire new skills. However, the way in which the information and stimuli are delivered has not been extensively explored. We have designed a challenging task with nonintuitive visuomotor perturbation that allows us to apply and compare different motor strategies to study the teaching process and to avoid the interference of previous knowledge present in the naïve subjects. Three subject groups participated in our experiment, where the learning by repetition without assistance, learning by repetition with assistance, and task Segmentation Learning techniques were performed with a haptic robot. Our results show that all the groups were able to successfully complete the task and that the subjects’ performance during training and evaluation was not affected by modifying the teaching strategy. Nevertheless, our results indicate that the presented task design is useful for the study of sensorimotor teaching and that the presented metrics are suitable for exploring the evolution of the accuracy and precision during learning.

19 pages, 3783 KiB  
Article
DSPose: Dual-Space-Driven Keypoint Topology Modeling for Human Pose Estimation
by Anran Zhao, Jingli Li, Hongtao Zeng, Hongren Cheng and Liangshan Dong
Sensors 2023, 23(17), 7626; https://doi.org/10.3390/s23177626 - 03 Sep 2023
Viewed by 1097
Abstract
Human pose estimation is the basis of many downstream tasks, such as motor intervention, behavior understanding, and human–computer interaction. The existing human pose estimation methods rely too much on the similarity of keypoints at the image feature level, which is vulnerable to three problems: object occlusion, keypoint ghosting, and neighbor pose interference. We propose a dual-space-driven topology model for the human pose estimation task. Firstly, the model extracts relatively accurate keypoint features through a Transformer-based feature extraction method. Then, the correlation of keypoints in physical space is introduced to alleviate the localization error problem caused by excessive dependence on the feature-level representation of the model. Finally, through a graph convolutional neural network, the spatial correlation of keypoints and the feature correlation are effectively fused to obtain more accurate human pose estimation results. The experimental results on real datasets further verify the effectiveness of our proposed model.

23 pages, 4508 KiB  
Article
Research on Safety Helmet Detection Algorithm Based on Improved YOLOv5s
by Qing An, Yingjian Xu, Jun Yu, Miao Tang, Tingting Liu and Feihong Xu
Sensors 2023, 23(13), 5824; https://doi.org/10.3390/s23135824 - 22 Jun 2023
Cited by 3 | Viewed by 3362
Abstract
Safety helmets are essential in various indoor and outdoor workplaces, such as metallurgical high-temperature operations and high-rise building construction, to avoid injuries and ensure safety in production. However, manual supervision is costly and prone to lack of enforcement and interference from other human factors. Moreover, small target object detection frequently lacks precision. Improving safety helmet detection based on a helmet detection algorithm can address these issues and is a promising approach. In this study, we proposed a modified version of the YOLOv5s network, a lightweight deep-learning-based object identification network model. The proposed model extends the YOLOv5s network model and enhances its performance by recalculating the prediction frames, utilizing the IoU metric for clustering, and modifying the anchor frames with the K-means++ method. The global attention mechanism (GAM) and the convolutional block attention module (CBAM) were added to the YOLOv5s network to improve its backbone and neck networks. By minimizing information feature loss and enhancing the representation of global interactions, these attention processes enhance a deep learning neural network’s capacity for feature extraction. Furthermore, the CBAM is integrated into the CSP module to improve target feature extraction while minimizing computation for model operation. To significantly increase the efficiency and precision of prediction box regression, the proposed model additionally makes use of the recent SIoU (SCYLLA-IoU) loss as the bounding box loss function. Based on the improved YOLOv5s model, knowledge distillation is leveraged to make the network model lightweight, thereby reducing the computational workload of the model and improving the detection speed to meet the needs of real-time monitoring. The experimental results demonstrate that the proposed model outperforms the original YOLOv5s network model in terms of precision, recall, and mean average precision (mAP). The proposed model can more effectively identify helmet use in low-light situations and at a variety of distances.
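The anchor recalculation step the abstract names can be pictured with a small stand-alone sketch: K-means++-style seeding over box shapes, using 1 − IoU as the distance, followed by a few refinement passes. This is a generic reconstruction of the technique, not the authors' code; the box values and iteration count are illustrative:

```python
import random

def iou_wh(a, b):
    # IoU of two boxes given as (width, height), aligned at a shared corner
    inter = min(a[0], b[0]) * min(a[1], b[1])
    union = a[0] * a[1] + b[0] * b[1] - inter
    return inter / union

def kmeanspp_anchors(boxes, k, iters=10, seed=0):
    # Seed k anchors with K-means++ weighting (distance = 1 - IoU),
    # then refine with a few Lloyd-style assignment/mean steps.
    rng = random.Random(seed)
    anchors = [list(rng.choice(boxes))]
    while len(anchors) < k:
        d = [min(1 - iou_wh(b, a) for a in anchors) ** 2 for b in boxes]
        total = sum(d)
        r, acc = rng.random() * total, 0.0
        for b, w in zip(boxes, d):
            acc += w
            if acc >= r:
                anchors.append(list(b))
                break
    for _ in range(iters):
        clusters = [[] for _ in anchors]
        for b in boxes:
            best = max(range(len(anchors)), key=lambda j: iou_wh(b, anchors[j]))
            clusters[best].append(b)
        for i, cluster in enumerate(clusters):
            if cluster:
                anchors[i] = [sum(v) / len(cluster) for v in zip(*cluster)]
    return anchors
```

On a toy set of small and large ground-truth boxes, the two recovered anchors land near the two size clusters, which is what makes IoU-driven clustering useful for anchor design.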

29 pages, 10303 KiB  
Article
Child–Robot Interactions Using Educational Robots: An Ethical and Inclusive Perspective
by Marta I. Tarrés-Puertas, Vicent Costa, Montserrat Pedreira Alvarez, Gabriel Lemkow-Tovias, Josep M. Rossell and Antonio D. Dorado
Sensors 2023, 23(3), 1675; https://doi.org/10.3390/s23031675 - 03 Feb 2023
Cited by 2 | Viewed by 2529
Abstract
The Qui-Bot H2O project involves developing four educational sustainable robots and their associated software. The robots are equipped with HRI features such as voice recognition and color sensing, and they possess a humanoid appearance. The project highlights the social and ethical aspects of robotics applied to chemistry and Industry 4.0 at an early age. Here, we report the results of an interactive study that involved 212 students aged 3 to 18. Our educational robots were used to measure the backgrounds, impact, and interest of students, as well as their satisfaction after interacting with the robots. Additionally, we provide an ethical study of the use of these robots in the classroom and a comparison of the interactions with humanoid versus non-humanoid educational robots observed in early childhood learning. Our findings demonstrate that these robots are useful in teaching technical and scientific concepts in a playful and intuitive manner, as well as in increasing the number of girls who are interested in science and engineering careers. In addition, the major impact measures generated by the project within a year of its implementation were analyzed. Several public administrations in the area of gender equality endorsed and participated in the Qui-Bot H2O project, in addition to educational and business entities.

18 pages, 7591 KiB  
Article
EPSDNet: Efficient Campus Parking Space Detection via Convolutional Neural Networks and Vehicle Image Recognition for Intelligent Human–Computer Interactions
by Qing An, Haojun Wang and Xijiang Chen
Sensors 2022, 22(24), 9835; https://doi.org/10.3390/s22249835 - 14 Dec 2022
Cited by 4 | Viewed by 2884
Abstract
The parking problem, which is caused by a low parking space utilization ratio, has always plagued drivers. In this work, we proposed an intelligent detection method based on deep learning technology. First, we constructed a TensorFlow deep learning platform for detecting vehicles. Second, the optimal time interval for extracting video stream images was determined in accordance with the judgment time for finding a parking space and the length of time taken by a vehicle from arrival to departure. Finally, the parking space order and number were obtained in accordance with the data layering method and the TimSort algorithm, and parking space vacancy was judged via the indirect Monte Carlo method. To improve the detection accuracy between vehicles and parking spaces, the distance between the vehicles in the training dataset was greater than that of the vehicles observed during detection. A case study verified the reliability of the parking space order and number and the judgment of parking space vacancies.
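The abstract's "indirect Monte Carlo method" for judging vacancy is not spelled out on this page. The sketch below shows one plausible reading, stated as an assumption: sample random points inside a parking-space rectangle and estimate what fraction is covered by detected vehicle boxes. All coordinates, the threshold, and the function names are illustrative:

```python
import random

def covered_fraction(space, vehicles, n=2000, seed=0):
    # Monte Carlo estimate of the fraction of a parking space (x1, y1, x2, y2)
    # covered by detected vehicle boxes. This is a hypothetical stand-in for
    # the paper's "indirect Monte Carlo" vacancy judgment, not its actual method.
    rng = random.Random(seed)
    x1, y1, x2, y2 = space
    hits = 0
    for _ in range(n):
        x = rng.uniform(x1, x2)
        y = rng.uniform(y1, y2)
        if any(vx1 <= x <= vx2 and vy1 <= y <= vy2
               for vx1, vy1, vx2, vy2 in vehicles):
            hits += 1
    return hits / n

def is_vacant(space, vehicles, threshold=0.3):
    # A space is judged vacant when less than `threshold` of it is covered.
    return covered_fraction(space, vehicles) < threshold
```

Sampling sidesteps the geometry of partially overlapping, rotated, or multiple vehicle boxes at the cost of a small, controllable estimation error.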

14 pages, 2061 KiB  
Article
Domain Adaptive Hand Pose Estimation Based on Self-Looping Adversarial Training Strategy
by Rui Jin and Jianyu Yang
Sensors 2022, 22(22), 8843; https://doi.org/10.3390/s22228843 - 15 Nov 2022
Cited by 1 | Viewed by 1143
Abstract
In recent years, with the development of deep learning methods, hand pose estimation based on monocular RGB images has made great progress. However, insufficient labeled training datasets remain an important bottleneck for hand pose estimation. Because synthetic datasets can provide a large number of images with precise annotations, existing methods address this problem by using data from easily accessible synthetic datasets. Domain adaptation is a method for transferring knowledge from a labeled source domain to an unlabeled target domain. However, many domain adaptation methods fail to achieve good results on realistic datasets due to the domain gap. In this paper, we design a self-looping adversarial training strategy to reduce the domain gap between the synthetic and realistic domains. Specifically, we use a multi-branch structure. Then, a new adversarial training strategy we designed for the regression task is introduced to reduce the size of the output space. As such, our model can reduce the domain gap and thus improve its prediction performance. Experiments using the H3D and STB datasets show that our method significantly outperforms state-of-the-art domain adaptive methods.

17 pages, 1140 KiB  
Article
Energy Saving Planner Model via Differential Evolutionary Algorithm for Bionic Palletizing Robot
by Yi Deng, Tao Zhou, Guojin Zhao, Kuihu Zhu, Zhaixin Xu and Hai Liu
Sensors 2022, 22(19), 7545; https://doi.org/10.3390/s22197545 - 05 Oct 2022
Cited by 4 | Viewed by 1521
Abstract
Energy saving in palletizing robots is a fundamental problem in the field of industrial robots. However, palletizing robots often suffer from high energy consumption and a lack of flexibility. In this work, we introduce a novel differential evolution algorithm to address the adverse effects caused by the instability of the initial trajectory parameters while reducing energy consumption. Specifically, a simplified analytical model of the palletizing robot is first developed. Then, the simplified analytical model and the differential evolutionary algorithm are combined to form a planner with the goal of reducing energy consumption. The energy-saving planner optimizes the initial parameters of the trajectories collected by the bionic demonstration system, which in turn enables a reduction in the operating power consumption of the palletizing robot. The major novelty of this article is the use of a differential evolutionary algorithm that saves energy while boosting flexibility. Compared with traditional algorithms, the proposed method achieves state-of-the-art performance. Simulated and actual experimental results illustrate that the optimized trajectory parameters can effectively reduce the energy consumption of the palletizing robot by 16%.
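The differential evolution loop at the heart of such a planner can be sketched generically. Here a simple quadratic stands in for the robot's analytical energy model, and all hyperparameters (population size, F, CR, generations) are illustrative defaults, not values from the paper:

```python
import random

def differential_evolution(cost, bounds, pop=20, gens=100, f=0.6, cr=0.9, seed=0):
    # Minimal DE/rand/1/bin: mutate with a scaled difference of two random
    # members, binomially cross over with the target, keep the better one.
    # `cost` plays the role of the planner's energy-consumption model.
    rng = random.Random(seed)
    dim = len(bounds)
    xs = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop)]
    fs = [cost(x) for x in xs]
    for _ in range(gens):
        for i in range(pop):
            a, b, c = rng.sample([j for j in range(pop) if j != i], 3)
            j_rand = rng.randrange(dim)  # guarantee one mutated coordinate
            trial = [
                min(max(xs[a][d] + f * (xs[b][d] - xs[c][d]),
                        bounds[d][0]), bounds[d][1])
                if (rng.random() < cr or d == j_rand) else xs[i][d]
                for d in range(dim)
            ]
            ft = cost(trial)
            if ft <= fs[i]:  # greedy selection
                xs[i], fs[i] = trial, ft
    best = min(range(pop), key=fs.__getitem__)
    return xs[best], fs[best]
```

In the paper's setting, the decision vector would hold the initial trajectory parameters collected from demonstration, and the cost would be the modeled energy consumption.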

15 pages, 4303 KiB  
Article
Intelligent Detection of Hazardous Goods Vehicles and Determination of Risk Grade Based on Deep Learning
by Qing An, Shisong Wu, Ruizhe Shi, Haojun Wang, Jun Yu and Zhifeng Li
Sensors 2022, 22(19), 7123; https://doi.org/10.3390/s22197123 - 20 Sep 2022
Cited by 3 | Viewed by 1388
Abstract
Currently, deep learning has been widely applied in the field of object detection, and some relevant scholars have applied it to vehicle detection. In this paper, the deep learning EfficientDet model is analyzed, and the advantages of the model in the detection of hazardous goods vehicles are determined. An adaptive training model is built based on the optimization of the training process, and the training model is used to detect hazardous goods vehicles. The detection results are compared with Cascade R-CNN and CenterNet, and the results show that the proposed method is superior to the other two methods in terms of both computational complexity and detection accuracy. The proposed method is also suitable for the detection of hazardous goods vehicles in different scenarios. We compiled statistics on the number of detected hazardous goods vehicles at different times and places, and the risk grade of different locations was determined according to the statistical results. Finally, the case study shows that the proposed method can be used to detect hazardous goods vehicles and determine the risk level of different places.

16 pages, 9099 KiB  
Article
Bimodal Learning Engagement Recognition from Videos in the Classroom
by Meijia Hu, Yantao Wei, Mengsiying Li, Huang Yao, Wei Deng, Mingwen Tong and Qingtang Liu
Sensors 2022, 22(16), 5932; https://doi.org/10.3390/s22165932 - 09 Aug 2022
Cited by 13 | Viewed by 2306
Abstract
Engagement plays an essential role in the learning process. Recognition of learning engagement in the classroom helps us understand the student’s learning state and optimize the teaching and study processes. Traditional recognition methods such as self-report and teacher observation are too time-consuming and obtrusive to meet the needs of large-scale classrooms. With the development of big data analysis and artificial intelligence, applying intelligent methods such as deep learning to recognize learning engagement has become a research hotspot in education. In this paper, based on non-invasive classroom videos, a multi-cue classroom learning engagement database was first constructed. Then, we introduced the power IoU loss function to You Only Look Once version 5 (YOLOv5) to detect the students and obtained a precision of 95.4%. Finally, we designed a bimodal learning engagement recognition method based on ResNet50 and CoAtNet. Our proposed bimodal learning engagement method obtained an accuracy of 93.94% using the KNN classifier. The experimental results confirm that the proposed method outperforms most state-of-the-art techniques.
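The final fusion-plus-KNN step can be illustrated generically: concatenate the two modality feature vectors and classify by majority vote among nearest neighbors. The tiny vectors, the concatenation choice, and k are illustrative assumptions; in the paper the two embeddings would come from ResNet50 and CoAtNet:

```python
from collections import Counter
import math

def fuse(appearance_vec, behavior_vec):
    # Late fusion by concatenation -- one simple choice among many.
    return list(appearance_vec) + list(behavior_vec)

def knn_predict(train, query, k=3):
    # Plain KNN over fused feature vectors: sort neighbors by Euclidean
    # distance and return the majority label among the k closest.
    dists = sorted((math.dist(x, query), label) for x, label in train)
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]
```

With real embeddings, the same two functions apply unchanged; only the vector dimensionality grows.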

24 pages, 8343 KiB  
Article
Design and Research of an Articulated Tracked Firefighting Robot
by Jianwei Zhao, Zhiwei Zhang, Shengyi Liu, Yuanhao Tao and Yushuo Liu
Sensors 2022, 22(14), 5086; https://doi.org/10.3390/s22145086 - 06 Jul 2022
Cited by 6 | Viewed by 2653
Abstract
To address the constraints that confined spaces and complex terrain impose on firefighting robots, a small four-track, four-drive articulated tracked fire-extinguishing robot is designed, which can flexibly perform fire detection and fire-extinguishing tasks in narrow spaces and complex terrain environments. Firstly, the overall structure of the robot is established. Secondly, the mathematical model of the robot’s motion is analyzed. On this basis, a kinematics simulation is carried out using ADAMS, and the motion of the robot is analyzed as it overcomes obstacles. Finally, a prototype was produced and tested experimentally. The robot has good obstacle-surmounting ability and excellent stability, is a reasonable size, and can perform various firefighting tasks well.

12 pages, 1104 KiB  
Article
Security Risk Intelligent Assessment of Power Distribution Internet of Things via Entropy-Weight Method and Cloud Model
by Siyuan Cai, Wei Wei, Deng Chen, Jianping Ju, Yanduo Zhang, Wei Liu and Zhaohui Zheng
Sensors 2022, 22(13), 4663; https://doi.org/10.3390/s22134663 - 21 Jun 2022
Cited by 6 | Viewed by 1525
Abstract
The current power distribution Internet of Things (PDIoT) lacks security protection terminals and techniques. Network security has a large exposure surface that can be attacked from multiple paths. In addition, PDIoT terminals have many network security vulnerabilities and weak security protection capabilities. Therefore, it is crucial to conduct a scientific assessment of the security of the PDIoT. However, traditional security assessment methods are relatively subjective and ambiguous. To address these problems, we propose using the entropy-weight method and cloud model theory to assess the security risk of the PDIoT. We first analyze the factors behind security risks in PDIoT systems and establish a three-layer PDIoT security evaluation index system comprising a perception layer, network layer, and application layer. The index system has three first-level indicators and sixteen second-level indicators. Then, the entropy-weight method is used to optimize the weight of each index. Additionally, cloud model theory is employed to calculate the affiliation degree and eigenvalue of each evaluation index. Based on a comprehensive analysis of all evaluation indexes, we can determine the security level of the PDIoT. Taking the PDIoT of the Meizhou Power Supply Bureau of Guangdong Power Grid as an example for empirical testing, the experimental results show that the evaluation results are consistent with the actual situation, which proves that the proposed method is effective and feasible.
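The entropy-weight step is a standard procedure and small enough to sketch in full: normalize each indicator column to proportions, compute its information entropy across samples, and weight indicators by their divergence (indicators that vary more across samples carry more information). The two-sample matrix in the test is a toy, not data from the paper:

```python
import math

def entropy_weights(matrix):
    # Entropy-weight method over an m-samples x n-indicators matrix of
    # positive scores: low-entropy (high-variation) columns get large weights.
    m, n = len(matrix), len(matrix[0])
    col_sums = [sum(row[j] for row in matrix) for j in range(n)]
    p = [[row[j] / col_sums[j] for j in range(n)] for row in matrix]
    k = 1.0 / math.log(m)  # scales entropy into [0, 1]
    entropy = [
        -k * sum(p[i][j] * math.log(p[i][j]) for i in range(m) if p[i][j] > 0)
        for j in range(n)
    ]
    d = [1 - e for e in entropy]  # degree of divergence per indicator
    total = sum(d)
    return [dj / total for dj in d]
```

A near-constant indicator column ends up with weight close to zero, which is the mechanism the paper uses to make index weighting objective rather than expert-assigned.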

19 pages, 7040 KiB  
Article
Intelligent Scheduling Methodology for UAV Swarm Remote Sensing in Distributed Photovoltaic Array Maintenance
by Qing An, Qiqi Hu, Ruoli Tang and Lang Rao
Sensors 2022, 22(12), 4467; https://doi.org/10.3390/s22124467 - 13 Jun 2022
Cited by 3 | Viewed by 1344
Abstract
In recent years, unmanned aerial vehicle (UAV) remote sensing technology has been widely used in the planning, design, and maintenance of urban distributed photovoltaic arrays (UDPA). However, existing studies rarely address the UAV swarm scheduling problem that arises when remote sensing is applied to UDPA maintenance. In this study, a novel scheduling model and algorithm for UAV swarm remote sensing in UDPA maintenance are developed. Firstly, the UAV swarm scheduling tasks in UDPA maintenance are formulated as a large-scale global optimization (LSGO) problem, in which the constraints are defined as penalty functions. Secondly, an adaptive multiple variable-grouping optimization strategy including adaptive random grouping, UAV grouping, and task grouping is developed. Finally, a novel evolutionary algorithm, namely cooperatively coevolving particle swarm optimization with adaptive multiple variable-grouping and context vector crossover/mutation strategies (CCPSO-mg-cvcm), is developed to effectively optimize the aforementioned UAV swarm scheduling model. The results of the case study show that the developed CCPSO-mg-cvcm significantly outperforms existing algorithms, and that UAV swarm remote sensing in large-scale UDPA maintenance can be optimally scheduled by the developed methodology.

20 pages, 11064 KiB  
Article
A Robust Fire Detection Model via Convolution Neural Networks for Intelligent Robot Vision Sensing
by Qing An, Xijiang Chen, Junqian Zhang, Ruizhe Shi, Yuanjun Yang and Wei Huang
Sensors 2022, 22(8), 2929; https://doi.org/10.3390/s22082929 - 11 Apr 2022
Cited by 19 | Viewed by 3683
Abstract
Accurate fire identification can help to control fires. Traditional fire detection methods are mainly based on temperature or smoke detectors. These detectors are susceptible to damage or interference from the outside environment. Meanwhile, most current deep learning methods are less discriminative with respect to dynamic fire and have lower detection precision when a fire changes. Therefore, we propose a dynamic convolution YOLOv5 fire detection method using video sequences. Our method first uses the K-means++ algorithm to optimize anchor box clustering, which significantly reduces the rate of classification error. Then, dynamic convolution is introduced into the convolution layer of YOLOv5. Finally, the network heads of YOLOv5’s neck and head are pruned to improve the detection speed. Experimental results verify that the proposed dynamic convolution YOLOv5 fire detection method outperforms the original YOLOv5 method in recall, precision, and F1-score. In particular, compared with three other deep learning methods, the precision of the proposed algorithm is improved by 13.7%, 10.8%, and 6.1%, respectively, while the F1-score is improved by 15.8%, 12%, and 3.8%, respectively. The method described in this paper is applicable not only to short-range indoor fire identification but also to long-range outdoor fire detection.