Topic Editors

Department of Mechanical Engineering (ME), University of California, Merced, CA 95343, USA
School of Engineering, Macquarie University, Sydney, NSW 2109, Australia
Department of Engineering, University of Campania Luigi Vanvitelli, Via Roma 29, 81031 Aversa, Italy
Department of Electrical & Computer Engineering, Faculty of Engineering, McMaster University, Hamilton, ON L8S 4L8, Canada
Department of Materials Science and Engineering, Gachon University, Seongnam-si 1342, Republic of Korea
REQUIMTE–LAQV, School of Engineering, Polytechnic Institute of Porto, 4249-015 Porto, Portugal

Artificial Intelligence in Sensors

Abstract submission deadline: closed (30 October 2022)
Manuscript submission deadline: closed (31 December 2022)
Viewed by 300333

Topic Information

Dear Colleagues,

This topic comprises several interdisciplinary research areas that cover the main aspects of sensor sciences.

There has been an increase in both the capabilities of, and the challenges faced by, sensors in numerous application fields, e.g., Robotics, Industry 4.0, Automotive, Smart Cities, Medicine, Diagnosis, Food, Telecommunication, Environmental and Civil Applications, Health, and Security.

The associated applications constantly require novel sensors to extend their capabilities and address their challenges. Thus, Sensor Sciences represents a paradigm characterized by the integration of modern nanotechnologies and nanomaterials into manufacturing and industrial practice to develop tools for several application fields. The primary underlying goal of Sensor Sciences is to facilitate the closer interconnection and control of complex systems, machines, devices, and people, thereby increasing the support provided to humans in several application fields.

Sensor Sciences comprises a set of significant research fields, including:

  • Chemical Sensors;
  • Biosensors;
  • Physical Sensors;
  • Optical Sensors;
  • Microfluidics;
  • Sensor Networks;
  • Electronics and Mechanics;
  • Mechatronics;
  • Internet of Things platforms and their applications;
  • Materials and Nanomaterials;
  • Data Security;
  • Artificial Intelligence;
  • Robotics;
  • UAVs and UGVs;
  • Remote Sensing;
  • Measurement Science and Technology;
  • Cognitive Computing Platforms and Applications, including technologies related to Artificial Intelligence, Machine Learning, as well as Big Data Processing and Analytics;
  • Advanced Interactive Technologies, including Augmented/Virtual Reality;
  • Advanced Data Visualization Techniques;
  • Instrumentation Science and Technology;
  • Nanotechnology;
  • Organic Electronics, Biophotonics, and Smart Materials;
  • Optoelectronics, Photonics, and Optical fibers;
  • MEMS, Microwaves, and Acoustic waves;
  • Physics and Biophysics;
  • Interdisciplinary Sciences.

This topic aims to collect the results of research in these fields and others. Therefore, submitting papers within those areas connected to sensors is strongly encouraged.

Prof. Dr. Yangquan Chen
Prof. Dr. Subhas Mukhopadhyay
Dr. Nunzio Cennamo
Prof. Dr. M. Jamal Deen
Dr. Junseop Lee
Prof. Dr. Simone Morais
Topic Editors

Keywords

  • artificial intelligence
  • sensors
  • machine learning
  • deep learning
  • computer vision
  • image processing
  • smart sensing
  • smart sensor
  • intelligent sensor
  • unmanned aerial vehicle
  • UAV
  • unmanned ground vehicle
  • UGV
  • robotics
  • machine vision

Participating Journals

Journal            Impact Factor   CiteScore   Launched   First Decision (median)   APC
Sensors            3.9             6.8         2001       16.4 days                 CHF 2600
Remote Sensing     5.0             7.9         2009       21.1 days                 CHF 2700
Applied Sciences   2.7             4.5         2011       15.8 days                 CHF 2300
Electronics        2.9             4.7         2012       15.8 days                 CHF 2200
Drones             4.8             6.1         2017       14.8 days                 CHF 2600

Preprints is a platform dedicated to making early versions of research outputs permanently available and citable. MDPI journals allow posting on preprint servers such as Preprints.org prior to publication. For more details about preprints, please visit https://www.preprints.org.

Published Papers (138 papers)

Article
Study of Channel-Type Dynamic Weighing System for Goat Herds
Electronics 2023, 12(7), 1715; https://doi.org/10.3390/electronics12071715 - 04 Apr 2023
Viewed by 759
Abstract
This paper proposes a design method for a channel-type sheep dynamic weighing system to address the current problems encountered by pastoralists at home and abroad, such as time-consuming sheep weighing, difficulties with data collection, and management of the stress response in sheep. The complete system includes a hardware structure, dynamic characteristics, and a Kalman-ensemble empirical mode decomposition (Kalman-EEMD) algorithm model for dynamic data processing. The noise suppression effects of the Kalman filter, the empirical mode decomposition (EMD), and the ensemble empirical mode decomposition (EEMD) algorithms are discussed for practical applications. Field tests showed that the Kalman-EEMD algorithm model has the advantages of high accuracy, efficiency, and reliability. The maximum error between the actual weight of the goats and the measured value in the experiments was 1.0%, with an average error as low as 0.40% and a maximum pass time of 2 s for a single goat. This meets the needs for weighing accuracy and goat flock weighing rates. Full article
(This article belongs to the Topic Artificial Intelligence in Sensors)
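
To illustrate the Kalman stage of such a pipeline, the minimal sketch below smooths a noisy load-cell trace with a one-dimensional Kalman filter; the noise variances and the synthetic 40 kg pass are hypothetical, and the EEMD stage of the published Kalman-EEMD model is omitted.

```python
import numpy as np

def kalman_smooth_weight(measurements, q=1e-4, r=0.5):
    """Estimate a quasi-static weight from noisy load-cell samples.
    q: process noise variance (how fast the true weight may change)
    r: measurement noise variance (load-cell noise while the animal moves)
    """
    x, p = measurements[0], 1.0          # initial state and covariance
    estimates = []
    for z in measurements:
        p = p + q                        # predict: weight assumed near-constant
        k = p / (p + r)                  # Kalman gain
        x = x + k * (z - x)              # update with the new sample
        p = (1.0 - k) * p
        estimates.append(x)
    return np.array(estimates)

# Synthetic pass of a 40 kg goat over the weighing channel (2 s at 100 Hz).
rng = np.random.default_rng(0)
t = np.linspace(0, 2, 200)
raw = 40.0 + 3.0 * np.sin(8 * np.pi * t) * np.exp(-1.5 * t) + rng.normal(0, 0.8, 200)
print(f"estimated weight: {kalman_smooth_weight(raw)[-1]:.2f} kg")
```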

Article
3D LiDAR Point Cloud Registration Based on IMU Preintegration in Coal Mine Roadways
Sensors 2023, 23(7), 3473; https://doi.org/10.3390/s23073473 - 26 Mar 2023
Cited by 1 | Viewed by 1151
Abstract
Point cloud registration is the basis of real-time environment perception for robots using 3D LiDAR and is also the key to robust simultaneous localization and mapping (SLAM) for robots. Because LiDAR point clouds are characterized by local sparseness and motion distortion, the point cloud features of coal mine roadway environments show weak texture and degradation. Therefore, in these environments, registering directly with traditional point cloud registration methods leads to problems such as a decline in registration accuracy, z-axis drift, and map ghosting. To solve the above problems, we propose a point cloud registration method based on IMU preintegration that exploits the sensor characteristics of LiDAR and IMU. The system framework of this method mainly consists of four modules: IMU preintegration, point cloud preprocessing, point cloud frame matching and point cloud registration. First, IMU sensor data are introduced, IMU linear interpolation is used to correct the motion distortion in LiDAR scanning, and the IMU preintegration error function is constructed. Second, point cloud segmentation is performed using the RANSAC ground segmentation method to provide additional ground constraints for the z-axis displacement and to remove unstable flawed points from the point cloud. On this basis, the LiDAR point cloud registration error function is constructed by extracting the feature corner points and feature plane points. Finally, the Gauss–Newton method is used to optimize the constraint relationship between the LiDAR odometry frames to minimize the error function, complete the LiDAR point cloud registration and better estimate the position and pose of the mobile robot. The experimental results show that compared with the traditional point cloud registration method, the proposed method has a higher point cloud registration accuracy, success rate and computational efficiency. The LiDAR odometry constructed using this method better reflects the authenticity of the robot trajectory and has higher trajectory accuracy and smaller absolute position and pose errors. Full article
(This article belongs to the Topic Artificial Intelligence in Sensors)
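
The motion-distortion correction step can be pictured with the sketch below, which re-expresses each point of one sweep in the sweep-start frame by interpolating the pose change across the sweep. The linear translation model and SLERP rotation interpolation are simplifications of the IMU-preintegration-driven correction described in the abstract, and all inputs are assumed, not taken from the paper.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R, Slerp

def deskew_scan(points, timestamps, rot_start, rot_end, trans_end):
    """Re-express every point of one LiDAR sweep in the sweep-start frame.

    points     : (N, 3) raw scan points
    timestamps : (N,) per-point times normalized to [0, 1] over the sweep
    rot_start, rot_end : (roll, pitch, yaw) of the sensor at sweep start/end
    trans_end  : translation accumulated over the sweep (e.g., from the IMU)
    """
    slerp = Slerp([0.0, 1.0], R.from_euler("xyz", [rot_start, rot_end]))
    corrected = np.empty_like(points, dtype=float)
    for i, (p, t) in enumerate(zip(points, timestamps)):
        rot = slerp([t])[0]                     # interpolated rotation at time t
        corrected[i] = rot.apply(p) + t * np.asarray(trans_end)
    return corrected

# A sweep that yawed 2 degrees and moved 0.3 m forward while scanning:
pts = np.random.rand(100, 3) * 10
ts = np.linspace(0.0, 1.0, 100)
fixed = deskew_scan(pts, ts, (0, 0, 0), (0, 0, np.deg2rad(2)), (0.3, 0, 0))
```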

Article
Temperature Drift Compensation of a MEMS Accelerometer Based on DLSTM and ISSA
Sensors 2023, 23(4), 1809; https://doi.org/10.3390/s23041809 - 06 Feb 2023
Cited by 2 | Viewed by 1324
Abstract
In order to improve the performance of a micro-electro-mechanical system (MEMS) accelerometer, three algorithms for compensating its temperature drift are proposed in this paper, including a deep long short-term memory recurrent neural network (DLSTM-RNN, short DLSTM), DLSTM based on the sparrow search algorithm (SSA), and DLSTM based on an improved SSA (ISSA). Moreover, the piecewise linear approximation (PLA) method is employed in this paper as a comparison to evaluate the impact of the proposed algorithm. First, a temperature experiment is performed to obtain the MEMS accelerometer's temperature drift output (TDO). Then, we propose a real-time compensation model and a linear approximation model for neural network compensation and PLA compensation, respectively. The real-time compensation model is a recursive method based on the TDO at the last moment. The linear approximation model considers the MEMS accelerometer's temperature and TDO as input and output, respectively. Next, the TDO is analyzed and optimized by the real-time compensation model and the three algorithms mentioned before. Moreover, the TDO is also compensated by the linear approximation model and the PLA method as a comparison. The compensation results show that the three neural network methods and the PLA method effectively compensate for the temperature drift of the MEMS accelerometer, and the DLSTM + ISSA method achieves the best compensation effect. After compensation by DLSTM + ISSA, the three Allan variance coefficients of the MEMS accelerometer, namely bias instability, rate random walk, and rate ramp, are improved from 5.43×10^-4 mg, 4.33×10^-5 mg/s^(1/2), and 1.18×10^-6 mg/s to 2.77×10^-5 mg, 1.14×10^-6 mg/s^(1/2), and 2.63×10^-8 mg/s, respectively, an average improvement of 96.68%. Full article
(This article belongs to the Topic Artificial Intelligence in Sensors)
(This article belongs to the Section Navigation and Positioning)
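
The PLA baseline used for comparison is easy to reproduce: interpolate a calibration table of drift versus temperature and subtract the predicted drift from the raw output. The calibration pairs below are hypothetical placeholders for a thermal-chamber run, not values from the paper.

```python
import numpy as np

def pla_compensate(temps, drift_temps, drift_outputs, raw_outputs):
    """Piecewise-linear temperature-drift compensation (the PLA baseline).

    drift_temps / drift_outputs : calibration pairs from a thermal chamber run
    temps / raw_outputs         : operating temperature and raw sensor output
    """
    predicted_drift = np.interp(temps, drift_temps, drift_outputs)
    return raw_outputs - predicted_drift

# Hypothetical calibration table: drift (mg) measured at several temperatures.
cal_t = np.array([-20.0, 0.0, 20.0, 40.0, 60.0])
cal_d = np.array([1.8, 0.9, 0.0, -0.7, -1.9])
temps = np.array([5.0, 25.0, 33.0])
raw = np.array([0.72, -0.15, -0.40])
print(pla_compensate(temps, cal_t, cal_d, raw))
```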

Article
Adaptive Multi-Scale Fusion Blind Deblurred Generative Adversarial Network Method for Sharpening Image Data
Drones 2023, 7(2), 96; https://doi.org/10.3390/drones7020096 - 30 Jan 2023
Cited by 2 | Viewed by 1439
Abstract
Drone and aerial remote sensing images are widely used, but their imaging environment is complex and prone to image blurring. Existing CNN deblurring algorithms usually use multi-scale fusion to extract features in order to make full use of aerial remote sensing blurred image information, but images with different degrees of blurring use the same weights, leading to increasing errors in the feature fusion process layer by layer. Based on the physical properties of image blurring, this paper proposes an adaptive multi-scale fusion blind deblurred generative adversarial network (AMD-GAN), which innovatively applies the degree of image blurring to guide the adjustment of the weights of multi-scale fusion, effectively suppressing the errors in the multi-scale fusion process and enhancing the interpretability of the feature layer. The research work in this paper reveals the necessity and effectiveness of a priori information on image blurring levels in image deblurring tasks. By studying and exploring the image blurring levels, the network model focuses more on the basic physical features of image blurring. Meanwhile, this paper proposes an image blurring degree description model, which can effectively represent the blurring degree of aerial remote sensing images. The comparison experiments show that the algorithm in this paper can effectively recover images with different degrees of blur, obtain high-quality images with clear texture details, outperform the comparison algorithm in both qualitative and quantitative evaluation, and can effectively improve the object detection performance of blurred aerial remote sensing images. Moreover, the average PSNR of this paper’s algorithm tested on the publicly available dataset RealBlur-R reached 41.02 dB, surpassing the latest SOTA algorithm. Full article
(This article belongs to the Topic Artificial Intelligence in Sensors)
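
A crude stand-in for a blur-degree descriptor is the variance of the Laplacian response, and the sketch below shows how such a scalar could gate multi-scale fusion weights. Both the score-to-weight mapping and the sharp_ref threshold are illustrative assumptions (this is not the paper's descriptor model), and OpenCV is assumed to be available.

```python
import cv2
import numpy as np

def blur_degree(image_path):
    """Crude blur-level score: variance of the Laplacian response.
    Lower values indicate stronger blur."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    return cv2.Laplacian(gray, cv2.CV_64F).var()

def fusion_weights(score, sharp_ref=500.0):
    """Map a blur score to per-scale fusion weights: the coarse scale gets
    more weight for blurrier inputs. Purely illustrative."""
    b = np.clip(1.0 - score / sharp_ref, 0.0, 1.0)   # 0 = sharp, 1 = very blurred
    return {"coarse": b, "mid": 0.5, "fine": 1.0 - b}
```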

Article
Real-Time Finger-Writing Character Recognition via ToF Sensors on Edge Deep Learning
Electronics 2023, 12(3), 685; https://doi.org/10.3390/electronics12030685 - 30 Jan 2023
Cited by 3 | Viewed by 1561
Abstract
Natural and convenient approaches are in demand for human–computer interaction, among which finger-writing recognition has attracted more and more attention. In this paper, a device-free finger-writing character recognition system based on an array of time-of-flight (ToF) distance sensors is presented. The ToF sensors acquire distance values from the sensors to a writing finger within a 9.5 × 15 cm area on a surface at specific time intervals and send the distance data to a low-power microcontroller (STM32F401) equipped with deep learning algorithms for real-time inference and recognition tasks. The proposed method can distinguish the 26 English lower-case letters written by users with a finger and does not require any wearable device. All data used in this work were collected from 21 subjects (12 males and 9 females) to evaluate the proposed system in a real scenario. In this work, the performance of different deep learning algorithms, such as long short-term memory (LSTM), convolutional neural networks (CNNs) and bidirectional LSTM (BiLSTM), was evaluated. These algorithms provide high accuracy, with the best result coming from the LSTM: 98.31% accuracy and 50 ms of maximum latency. Full article
(This article belongs to the Topic Artificial Intelligence in Sensors)
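
A minimal PyTorch sketch of an LSTM classifier over ToF distance sequences is shown below; the sensor count, hidden size, and sequence length are assumptions for illustration, not the published network.

```python
import torch
import torch.nn as nn

class FingerWritingLSTM(nn.Module):
    """Minimal LSTM classifier for finger-writing sequences.
    Assumed input: (batch, time, n_sensors) distance samples from a ToF array."""
    def __init__(self, n_sensors=8, hidden=64, n_classes=26):
        super().__init__()
        self.lstm = nn.LSTM(n_sensors, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):
        out, _ = self.lstm(x)           # out: (batch, time, hidden)
        return self.head(out[:, -1])    # classify from the last time step

model = FingerWritingLSTM()
dummy = torch.randn(4, 120, 8)          # 4 gestures, 120 samples, 8 sensors
print(model(dummy).shape)               # torch.Size([4, 26])
```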

Article
Deep Learning-Based Cost-Effective and Responsive Robot for Autism Treatment
Drones 2023, 7(2), 81; https://doi.org/10.3390/drones7020081 - 23 Jan 2023
Cited by 22 | Viewed by 3025
Abstract
Recent studies state that, for a person with autism spectrum disorder, learning and improvement is often seen in environments where technological tools are involved. A robot is an excellent tool to be used in therapy and teaching. It can transform teaching methods, not just in classrooms but also in in-house clinical practices. With the rapid advancement of deep learning techniques, robots have become more capable of handling human behaviour. In this paper, we present a cost-efficient, socially designed robot called 'Tinku', developed to assist in teaching special needs children. 'Tinku' is low-cost but full of features and has the ability to produce human-like expressions. Its design is inspired by the widely accepted animated character 'WALL-E'. Its capabilities include offline speech processing and computer vision—we used lightweight object detection models, such as YOLOv3-tiny and the single-shot detector (SSD)—for obstacle avoidance, non-verbal communication, expressing emotions in an anthropomorphic way, etc. It uses an onboard deep learning technique to localize the objects in the scene and uses the information for semantic perception. We have developed several lessons for training using these features. A sample lesson about brushing is discussed to show the robot's capabilities. Tinku is engaging, loaded with features, and manages all of its onboard processes efficiently. It was developed under the supervision of clinical experts, and the conditions for its application have been taken into account. A small survey on its appearance is also discussed. More importantly, it was tested with young children to assess acceptance of the technology and compatibility in terms of voice interaction. It helps autistic kids using state-of-the-art deep learning models. Autism spectrum disorders are being identified increasingly often in today's world, and studies show that children tend to interact more comfortably with technology than with a human instructor. To meet this demand, we present a cost-effective solution in the form of a robot with some common lessons for the training of an autism-affected child. Full article
(This article belongs to the Topic Artificial Intelligence in Sensors)

Article
Electrolyte-Gated Graphene Field Effect Transistor-Based Ca2+ Detection Aided by Machine Learning
Sensors 2023, 23(1), 353; https://doi.org/10.3390/s23010353 - 29 Dec 2022
Cited by 1 | Viewed by 1126
Abstract
Flexible electrolyte-gated graphene field effect transistors (Eg-GFETs) are widely developed as sensors because of their fast response, versatility and low cost. However, their sensitivities and response ranges are often altered by different gate voltages. These bias-voltage-induced uncertainties are an obstacle in the development of Eg-GFETs. To guard against this risk, a machine-learning-based method for analyzing Eg-GFET data is studied in this work, using Ca2+ detection as a proof-of-concept. For the as-prepared Eg-GFET-Ca2+ sensors, their transfer and output characteristics are first measured. Then, eight regression models are trained using different machine learning algorithms, including linear regression, support vector machine, decision tree and random forest, etc. The optimized model is then obtained from the random-forest-treated transfer curves. Finally, the proposed method is applied to determine Ca2+ concentration in a calibration-free way, and it is found that the relation between the estimated and real Ca2+ concentrations is close to y = x. Accordingly, we think the proposed method may not only provide an accurate result but also simplify the traditional calibration step in using Eg-GFET sensors. Full article
(This article belongs to the Topic Artificial Intelligence in Sensors)
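
The regression step can be sketched with scikit-learn's RandomForestRegressor, as below. The data here are synthetic placeholders for measured transfer curves; the feature layout and concentration range are assumptions for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Hypothetical dataset: each row is a sampled transfer curve (drain current
# over a gate-voltage sweep); the target is log10 of Ca2+ concentration.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 50))          # 200 measurements, 50-point sweep
y = rng.uniform(-6, -3, size=200)       # log10 molar concentration

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("R^2 on held-out curves:", model.score(X_te, y_te))
```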

Article
Training Artificial Intelligence Algorithms with Automatically Labelled UAV Data from Physics-Based Simulation Software
Appl. Sci. 2023, 13(1), 131; https://doi.org/10.3390/app13010131 - 22 Dec 2022
Cited by 1 | Viewed by 1296
Abstract
Machine learning (ML) requires human-labeled "truth" data for training and testing. Acquiring and labeling these data can often be the most time-consuming and expensive part of developing trained convolutional neural network (CNN) models. In this work, we show that an automated workflow using automatically labeled synthetic data can drastically reduce the time and effort required to train a machine learning algorithm for detecting buildings in aerial imagery acquired with low-flying unmanned aerial vehicles. The MSU Autonomous Vehicle Simulator (MAVS) was used in this work, and the process for integrating MAVS into an automated workflow is presented, along with results for building detection with real and simulated images. Full article
(This article belongs to the Topic Artificial Intelligence in Sensors)

Article
Non-Specific Responsive Nanogels and Plasmonics to Design MathMaterial Sensing Interfaces: The Case of a Solvent Sensor
Sensors 2022, 22(24), 10006; https://doi.org/10.3390/s222410006 - 19 Dec 2022
Cited by 2 | Viewed by 1305
Abstract
The combination of non-specific deformable nanogels and plasmonic optical probes provides an innovative solution for specific sensing using a generalistic recognition layer. Soft polyacrylamide nanogels that lack specific selectivity but are characterized by responsive behavior, i.e., shrinking and swelling dependent on the surrounding environment, were grafted to a gold plasmonic D-shaped plastic optical fiber (POF) probe. The nanogel–POF cyclically challenged with water or alcoholic solutions optically reported the reversible solvent-to-phase transitions of the nanomaterial, embodying a primary optical switch. Additionally, the non-specific nanogel–POF interface exhibited more degrees of freedom through which specific sensing was enabled. The real-time monitoring of the refractive index variations due to the time-related volume-to-phase transition effects of the nanogels enabled us to determine the environment’s characteristics and broadly classify solvents. Hence the nanogel–POF interface was a descriptor of mathematical functions for substance identification and classification processes. These results epitomize the concept of responsive non-specific nanomaterials to perform a multiparametric description of the environment, offering a specific set of features for the processing stage and particularly suitable for machine and deep learning. Thus, soft MathMaterial interfaces provide the ground to devise devices suitable for the next generation of smart intelligent sensing processes. Full article
(This article belongs to the Topic Artificial Intelligence in Sensors)

Article
Towards Building a Distributed Virtual Flow Meter via Compressed Continual Learning
Sensors 2022, 22(24), 9878; https://doi.org/10.3390/s22249878 - 15 Dec 2022
Viewed by 959
Abstract
A robust, accurate estimation of fluid flow is the main building block of a distributed virtual flow meter. Unfortunately, a big leap in algorithm development would be required for this objective to come to fruition, mainly due to the inability of current machine learning algorithms to make predictions outside the training data distribution. To improve predictions outside the training distribution, we explore the continual learning (CL) paradigm for accurately estimating the characteristics of fluid flow in pipelines. A significant challenge facing CL is the concept of catastrophic forgetting. In this paper, we provide a novel approach that addresses the forgetting problem by compressing the distributed sensor data to increase the capacity of the CL memory bank, using a compressive learning algorithm. Through extensive experiments, we show that our approach provides around 8% accuracy improvement compared to other CL algorithms when applied to a real-world distributed sensor dataset collected from an oilfield. Noticeable accuracy improvement is also achieved when using our proposed approach with the CL benchmark datasets, achieving state-of-the-art accuracies for the CIFAR-10 dataset on the blurry10 and blurry30 settings of 80.83% and 88.91%, respectively. Full article
(This article belongs to the Topic Artificial Intelligence in Sensors)

Article
Visual/Inertial/GNSS Integrated Navigation System under GNSS Spoofing Attack
Remote Sens. 2022, 14(23), 5975; https://doi.org/10.3390/rs14235975 - 25 Nov 2022
Cited by 2 | Viewed by 926
Abstract
Visual/Inertial/GNSS (VIG) integrated navigation and positioning systems are widely used in unmanned vehicles and other systems. The VIG system is vulnerable to GNSS spoofing attacks. Research on the harm that spoofing causes to the system and performance analyses of VIG systems under GNSS spoofing are not yet sufficient. In this paper, an open-source VIG algorithm based on nonlinear optimization, VINS-Fusion, is used to analyze the performance of the VIG system under a GNSS spoofing attack. The influence of the visual-inertial odometry (VIO) scale estimation error and the transformation matrix deviation in the transition period of spoofing detection is analyzed. Deviation correction methods based on GNSS-assisted scale compensation coefficient estimation and optimal pose transformation matrix selection are proposed for the VIG-integrated system in spoofing areas. For an area that the integrated system can revisit many times, a global pose map-matching method is proposed. An outfield experiment with a GNSS spoofing attack was carried out in this paper. The experimental results show that, even if the GNSS measurements are seriously affected by spoofing, the integrated system can still run independently, following the preset waypoints. The scale compensation coefficient estimation method, the optimal pose transformation matrix selection method and the global pose map-matching method can suppress the estimation error under a spoofing attack. Full article
(This article belongs to the Topic Artificial Intelligence in Sensors)
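
In its simplest form, the scale compensation idea reduces to a least-squares scale factor between the VIO track and GNSS positions, as in the sketch below. This is a stand-in for the paper's GNSS-assisted coefficient estimation, with synthetic trajectories.

```python
import numpy as np

def estimate_scale(vio_xyz, gnss_xyz):
    """Least-squares scale factor aligning a scale-drifting VIO trajectory
    to GNSS positions (both centred on their means)."""
    v = vio_xyz - vio_xyz.mean(axis=0)
    g = gnss_xyz - gnss_xyz.mean(axis=0)
    # s minimizing ||s*v - g||^2  =>  s = <v, g> / <v, v>
    return float(np.sum(v * g) / np.sum(v * v))

# A VIO track shrunk by 5% relative to truth is recovered as s ~ 1.0526:
truth = np.cumsum(np.random.randn(200, 3), axis=0)
print(estimate_scale(0.95 * truth, truth))
```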

Article
A Multi-Channel Descriptor for LiDAR-Based Loop Closure Detection and Its Application
Remote Sens. 2022, 14(22), 5877; https://doi.org/10.3390/rs14225877 - 19 Nov 2022
Cited by 1 | Viewed by 1144
Abstract
The simultaneous localization and mapping (SLAM) algorithm is a prerequisite for unmanned ground vehicle (UGV) localization, path planning, and navigation, and includes two essential components: frontend odometry and backend optimization. Frontend odometry tends to amplify the cumulative error continuously, leading to ghosting and drifting in the mapping results. However, loop closure detection (LCD) can address this technical issue by significantly eliminating the cumulative error. Existing LCD methods decide whether a loop exists by constructing local or global descriptors and calculating the similarity between descriptors, which attaches great importance to the design of discriminative descriptors and effective similarity measurement mechanisms. In this paper, we first propose a novel multi-channel descriptor (CMCD) to alleviate the insufficient discriminative power of scene descriptions built from a single type of point cloud information. The distance, height, and intensity information of the point cloud is encoded into three independent channels of the shadow-casting regions (bins) and then compressed into a two-dimensional global descriptor. Next, an ORB-based dynamic threshold feature extraction algorithm (DTORB) is designed using objective 2D descriptors to describe the distributions of global and local point clouds. Then, a DTORB-based similarity measurement method is designed using the rotation-invariance and visualization characteristics of the descriptor features to overcome the subjective tendency of the constant-threshold ORB algorithm in descriptor feature extraction. Finally, verification is performed on KITTI odometry sequences and on campus datasets of Jilin University collected by us. The experimental results demonstrate the superior performance of our method compared to state-of-the-art approaches. Full article
(This article belongs to the Topic Artificial Intelligence in Sensors)
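
A simplified descriptor along these lines can be built by binning points into polar (ring, sector) cells and filling height, intensity, and range channels, as sketched below. The bin counts and maximum range are illustrative assumptions, not the published CMCD parameters.

```python
import numpy as np

def multi_channel_descriptor(points, intensities, n_rings=20, n_sectors=60, r_max=80.0):
    """Encode a point cloud into a (n_rings, n_sectors, 3) descriptor whose
    channels hold, per polar bin, the max height, max intensity, and mean range."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.hypot(x, y)
    theta = np.arctan2(y, x) + np.pi                              # [0, 2*pi)
    ring = np.clip((r / r_max * n_rings).astype(int), 0, n_rings - 1)
    sector = np.clip((theta / (2 * np.pi) * n_sectors).astype(int), 0, n_sectors - 1)
    desc = np.zeros((n_rings, n_sectors, 3))
    counts = np.zeros((n_rings, n_sectors))
    for i in range(len(points)):
        a, b = ring[i], sector[i]
        desc[a, b, 0] = max(desc[a, b, 0], z[i])                  # height channel
        desc[a, b, 1] = max(desc[a, b, 1], intensities[i])        # intensity channel
        desc[a, b, 2] += r[i]                                     # accumulate range
        counts[a, b] += 1
    desc[..., 2] = np.divide(desc[..., 2], np.maximum(counts, 1))  # mean range
    return desc
```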

Article
Rolling Bearing Fault Diagnosis Using Hybrid Neural Network with Principal Component Analysis
Sensors 2022, 22(22), 8906; https://doi.org/10.3390/s22228906 - 17 Nov 2022
Cited by 13 | Viewed by 1264
Abstract
With the rapid development of fault prognostics and health management (PHM) technology, more and more deep learning algorithms have been applied to the intelligent fault diagnosis of rolling bearings. Although all of them can achieve over 90% diagnostic accuracy, the generality and robustness of the models cannot be truly verified under complex, extreme variable loading conditions. In this study, an end-to-end rolling bearing fault diagnosis model based on a hybrid deep neural network with principal component analysis is proposed. Firstly, in order to reduce the complexity of the deep learning computation, data pre-processing with feature dimensionality reduction is performed by principal component analysis (PCA). The preprocessed data are imported into the hybrid deep learning model. The first layer of the model uses a CNN algorithm for denoising and simple feature extraction, the second layer makes use of bi-directional long short-term memory (BiLSTM) for deeper extraction of the time-series features of the data, and the last layer uses an attention mechanism for optimal weight assignment, which can further improve the diagnostic precision. The test accuracy of this model is fully comparable to existing deep learning fault diagnosis models, especially under low load; the test accuracy is 100% at constant load and nearly 90% for variable load, and the test accuracy is 72.8% at extreme variable load (2.205 N·m/s–0.735 N·m/s and 0.735 N·m/s–2.205 N·m/s), which are the worst possible load conditions. The experimental results fully prove that the model has reliable robustness and generality. Full article
(This article belongs to the Topic Artificial Intelligence in Sensors)
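
The PCA preprocessing stage can be sketched with scikit-learn as follows; the window size and component count are assumptions, and the downstream CNN-BiLSTM-attention model is not reproduced here.

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical input: 1000 vibration windows, 1024 samples each (e.g., from a
# bearing accelerometer). PCA shrinks each window to 64 components before the
# CNN-BiLSTM-attention stages described in the abstract.
rng = np.random.default_rng(42)
windows = rng.normal(size=(1000, 1024))

pca = PCA(n_components=64).fit(windows)
reduced = pca.transform(windows)            # (1000, 64) inputs for the network
print("variance retained:", pca.explained_variance_ratio_.sum())
```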

Article
AIoT Precision Feeding Management System
Electronics 2022, 11(20), 3358; https://doi.org/10.3390/electronics11203358 - 18 Oct 2022
Cited by 1 | Viewed by 1248
Abstract
Different fish species and different growth stages require different amounts of fish pellets. Excessive fish pellets increase the cost of aquaculture, and the leftover fish pellets sink to the bottom of the fish farm, causing water pollution. Weather changes and providing too many or too few fish pellets also affect the growth of the fish. In light of the abovementioned factors, this article uses an artificial intelligence of things (AIoT) precision feeding management system to improve an existing fish feeder. The AIoT precision feeding management system is placed on the water surface of the breeding pond to measure the water surface fluctuations in the area where fish pellets are applied. The buoy, with a built-in three-axis accelerometer, senses the water surface fluctuations when the fish are foraging. Then, through the wireless transmission module, the data are sent back to the receiver and control device of the fish feeder. When the fish feeder receives the signal, it evaluates the returned value to adjust the feeding time. Through this system, the intelligent feeding of fish can be achieved by adjusting the amount of fish pellets in order to reduce the cost of aquaculture. Full article
(This article belongs to the Topic Artificial Intelligence in Sensors)

Article
Intrinsic Calibration of Multi-Beam LiDARs for Agricultural Robots
Remote Sens. 2022, 14(19), 4846; https://doi.org/10.3390/rs14194846 - 28 Sep 2022
Cited by 1 | Viewed by 1255
Abstract
With the advantages of high measurement accuracy and wide detection range, LiDARs have been widely used in information perception research to develop agricultural robots. However, the internal configuration of the laser transmitter layout changes with increasing sensor working duration, which makes it difficult to obtain accurate measurement with calibration files based on factory settings. To solve this problem, we investigate the intrinsic calibration of multi-beam laser sensors. Specifically, we calibrate the five intrinsic parameters of LiDAR with a nonlinear optimization strategy based on static planar models, which include measured distance, rotation angle, pitch angle, horizontal distance, and vertical distance. Firstly, we establish a mathematical model based on the physical structure of LiDAR. Secondly, we calibrate the internal parameters according to the mathematical model and evaluate the measurement accuracy after calibration. Here, we illustrate the parameter calibration with three steps: planar model estimation, objective function construction, and nonlinear optimization. We also introduce the ranging accuracy evaluation metrics, including the standard deviation of the distance from the laser scanning points to the planar models and the 3σ criterion. Finally, the experimental results show that the ranging error of calibrated sensors can be maintained within 3 cm, which verifies the effectiveness of the laser intrinsic calibration. Full article
(This article belongs to the Topic Artificial Intelligence in Sensors)
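
The core of such a calibration is a nonlinear least-squares fit of per-beam parameters against a planar target. The sketch below calibrates only a per-beam range offset with scipy's least_squares on synthetic data; the published method optimizes five intrinsics per beam with the same machinery, and all numbers here are assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

def point_to_plane_residuals(offsets, ranges, dirs, beam_ids, plane_n, plane_d):
    """Signed distance of offset-corrected points to a known plane."""
    corrected = (ranges + offsets[beam_ids])[:, None] * dirs
    return corrected @ plane_n - plane_d

# Hypothetical setup: 16 beams observing the plane z = 2 (n = [0,0,1], d = 2).
rng = np.random.default_rng(3)
n_pts, n_beams = 500, 16
dirs = rng.normal(size=(n_pts, 3))
dirs[:, 2] = np.abs(dirs[:, 2]) + 0.2                  # aim towards the plane
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
beam_ids = rng.integers(0, n_beams, n_pts)
true_off = rng.normal(0, 0.05, n_beams)                # per-beam range bias (m)
ranges = 2.0 / dirs[:, 2] - true_off[beam_ids]         # distorted measurements

sol = least_squares(point_to_plane_residuals, np.zeros(n_beams),
                    args=(ranges, dirs, beam_ids, np.array([0., 0., 1.]), 2.0))
print("max offset error:", np.abs(sol.x - true_off).max())
```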

Article
Optimal Compensation of MEMS Gyroscope Noise Kalman Filter Based on Conv-DAE and MultiTCN-Attention Model in Static Base Environment
Sensors 2022, 22(19), 7249; https://doi.org/10.3390/s22197249 - 24 Sep 2022
Cited by 2 | Viewed by 1279
Abstract
Errors in microelectromechanical system (MEMS) inertial measurement units (IMUs) are large, complex, nonlinear, and time-varying, so traditional model-based noise reduction and compensation methods are not applicable. This paper proposes a noise reduction method based on multi-layer combined deep learning for the MEMS gyroscope in the static base state. In this method, the combined model of the MEMS gyroscope is constructed from a Convolutional Denoising Auto-Encoder (Conv-DAE) and a Multi-layer Temporal Convolutional Network with Attention Mechanism (MultiTCN-Attention) model. Based on the robust data processing capability of deep learning, the noise features are obtained from past gyroscope data, and parameter optimization of the Kalman filter (KF) by the Particle Swarm Optimization (PSO) algorithm significantly improves the filtering and noise reduction accuracy. The experimental results show that, compared with the original data, the noise standard deviation of the filtering effect of the combined model proposed in this paper decreases by 77.81% and 76.44% on the x and y axes, respectively; compared with the existing MEMS gyroscope noise compensation method based on the Autoregressive Moving Average with Kalman filter (ARMA-KF) model, it decreases by 44.00% and 46.66% on the x and y axes, respectively, reducing the noise impact by nearly three times. Full article
(This article belongs to the Topic Artificial Intelligence in Sensors)

Article
A Visual Saliency-Based Neural Network Architecture for No-Reference Image Quality Assessment
Appl. Sci. 2022, 12(19), 9567; https://doi.org/10.3390/app12199567 - 23 Sep 2022
Cited by 5 | Viewed by 1162
Abstract
Deep learning has recently been used to study blind image quality assessment (BIQA) in great detail. Yet, the scarcity of high-quality algorithms prevents them from being developed further and used in real-time scenarios. Patch-based techniques have been used to forecast the quality of an image, but they typically assign the picture quality score to individual patches of the image. As a result, many misleading scores come from patches. Some regions of the image are important and can contribute greatly toward the correct prediction of its quality. To exclude outlier regions, we suggest a technique with a visual saliency module which allows only the important regions to pass to the neural network, so that the network learns only the information required to predict the quality. The neural network architecture used in this study is Inception-ResNet-v2. We assess the proposed strategy using a benchmark database (KADID-10k) to show its efficacy. The outcome demonstrates better performance compared with certain popular no-reference IQA (NR-IQA) and full-reference IQA (FR-IQA) approaches. This technique is intended to be used to estimate the quality of images acquired in real time from drone imagery. Full article
(This article belongs to the Topic Artificial Intelligence in Sensors)

Article
An Adaptive Group of Density Outlier Removal Filter: Snow Particle Removal from LiDAR Data
Electronics 2022, 11(19), 2993; https://doi.org/10.3390/electronics11192993 - 21 Sep 2022
Cited by 3 | Viewed by 1408
Abstract
Light Detection And Ranging (LiDAR) is an important technology integrated into self-driving cars to enhance the reliability of these systems. Even with some advantages over cameras, it is still limited under extreme weather conditions such as heavy rain, fog, or snow. Traditional methods such as Radius Outlier Removal (ROR) and Statistical Outlier Removal (SOR) are limited in their ability to detect snow points in LiDAR point clouds. This paper proposes an Adaptive Group of Density Outlier Removal (AGDOR) filter that can remove snow particles more effectively from raw LiDAR point clouds, with verification on the Winter Adverse Driving Dataset (WADS). In our proposed method, an intensity threshold combined with the proposed outlier removal filter is employed. Outstanding performance was obtained, with accuracy up to 96% and a processing speed of 0.51 s per frame. In particular, our filter outperforms the state-of-the-art filter by achieving 16.32% higher precision at the same accuracy, although it achieves lower recall than the state-of-the-art method. This indicates that AGDOR retains a significant amount of object points from the LiDAR data. The results suggest that our filter would be useful for snow removal under harsh weather for autonomous driving systems. Full article
(This article belongs to the Topic Artificial Intelligence in Sensors)
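
A simplified cousin of this idea combines an intensity threshold with a range-adaptive radius outlier test, as sketched below; all thresholds (i_thresh, k, the angular-spacing factor) are illustrative assumptions, not the published AGDOR parameters.

```python
import numpy as np
from scipy.spatial import cKDTree

def snow_filter(points, intensities, i_thresh=10.0, k=5, alpha=0.2, min_radius=0.1):
    """Keep a point if it is bright enough, or if it has at least k neighbours
    within a search radius that grows with range (distant LiDAR returns are
    naturally sparser, so a fixed radius over-rejects them)."""
    ranges = np.linalg.norm(points, axis=1)
    radius = np.maximum(min_radius, alpha * ranges * np.deg2rad(0.4))
    tree = cKDTree(points)
    keep = np.zeros(len(points), dtype=bool)
    for i, (p, r) in enumerate(zip(points, radius)):
        if intensities[i] >= i_thresh:
            keep[i] = True                      # high intensity: unlikely to be snow
        else:
            # count neighbours, excluding the point itself
            keep[i] = len(tree.query_ball_point(p, r)) - 1 >= k
    return points[keep], intensities[keep]
```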

Article
Line Structure Extraction from LiDAR Point Cloud Based on the Persistence of Tensor Feature
Appl. Sci. 2022, 12(18), 9190; https://doi.org/10.3390/app12189190 - 14 Sep 2022
Viewed by 1180
Abstract
The LiDAR point cloud has been widely used in scenarios such as automatic driving, object recognition, and structure reconstruction, yet line structure extraction remains a challenging problem, owing to noise and limited accuracy, especially in data acquired by consumer electronic devices. To address this issue, a line structure extraction method based on the persistence of the tensor feature is proposed and subsequently applied to data acquired by an iPhone-based LiDAR sensor. The tensor of each point is encoded, voted, and aggregated over its neighborhood, and further decomposed into different geometric features in each dimension. Then, the line feature in the point cloud is represented and computed using the persistence of the tensor feature. Finally, the line structure is extracted based on persistent homology according to discrete Morse theory. Experiments are conducted on LiDAR point clouds collected by the iPhone 12 Pro Max; line structures are extracted from two different datasets, and the results compare well with other related work. Full article
(This article belongs to the Topic Artificial Intelligence in Sensors)

Article
N-Step Pre-Training and Décalcomanie Data Augmentation for Micro-Expression Recognition
Sensors 2022, 22(17), 6671; https://doi.org/10.3390/s22176671 - 03 Sep 2022
Cited by 1 | Viewed by 1178
Abstract
Facial expressions are divided into micro- and macro-expressions. Micro-expressions are low-intensity emotions presented for a short moment of about 0.25 s, whereas macro-expressions last up to 4 s. To derive micro-expressions, participants are asked to suppress their emotions as much as possible while watching emotion-inducing videos. However, it is a challenging process, and the number of samples collected tends to be less than those of macro-expressions. Because training models with insufficient data may lead to decreased performance, this study proposes two ways to solve the problem of insufficient data for micro-expression training. The first method involves N-step pre-training, which performs multiple transfer learning from action recognition datasets to those in the facial domain. Second, we propose Décalcomanie data augmentation, which is based on facial symmetry, to create a composite image by cutting and pasting both faces around their center lines. The results show that the proposed methods can successfully overcome the data shortage problem and achieve high performance. Full article
(This article belongs to the Topic Artificial Intelligence in Sensors)
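
The Décalcomanie idea can be sketched in a few lines: mirror each half of a face image about the vertical centre line to form two symmetric composites. This minimal version assumes a pre-cropped, roughly centred face; the published pipeline operates on video frames.

```python
import numpy as np

def decalcomanie(face):
    """Build two symmetric composites from one (H, W) or (H, W, C) face image
    by mirroring each half about the vertical centre line.
    For odd widths, the centre column is dropped."""
    h, w = face.shape[:2]
    half = w // 2
    left, right = face[:, :half], face[:, w - half:]
    left_composite = np.concatenate([left, left[:, ::-1]], axis=1)
    right_composite = np.concatenate([right[:, ::-1], right], axis=1)
    return left_composite, right_composite
```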

Article
Deep-Learning-Based Method for Estimating Permittivity of Ground-Penetrating Radar Targets
Remote Sens. 2022, 14(17), 4293; https://doi.org/10.3390/rs14174293 - 31 Aug 2022
Cited by 2 | Viewed by 1527
Abstract
Correctly estimating the relative permittivity of buried targets is crucial for accurately determining the target type and geometric size and for reconstructing shallow surface geological structures. In order to effectively identify the dielectric properties of buried targets, on the basis of extracting the feature information of B-SCAN images, we propose an inversion method based on a deep neural network (DNN) to estimate the relative permittivity of targets. We first take the physical mechanism of ground-penetrating radar (GPR), working in reflection measurement mode, as the constraint condition, and then design a convolutional neural network (CNN) to extract the feature hyperbola of the underground target, which is used to calculate the buried depth of the target and the relative permittivity of the background medium. We further build a regression network and train the network model with the labeled sample set to estimate the relative permittivity of the target. Tests were carried out on a GPR simulation dataset and a field dataset of underground rainwater pipelines, respectively. The results show that the inversion method has high accuracy in estimating the relative permittivity of the target. Full article
(This article belongs to the Topic Artificial Intelligence in Sensors)
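
The physical relation underlying the depth and permittivity estimates is the textbook two-way travel-time formula, shown below; the paper's DNN inversion itself is not reproduced here.

```python
def relative_permittivity(two_way_time_s, depth_m, c=3e8):
    """Relative permittivity of the background medium from the two-way travel
    time to a target at a known depth, using v = c / sqrt(eps) and
    depth = v * t / 2, i.e. eps = (c * t / (2 * d))^2."""
    return (c * two_way_time_s / (2.0 * depth_m)) ** 2

# A reflection at 20 ns from a pipe buried 1 m deep implies eps_r = 9.
print(relative_permittivity(20e-9, 1.0))
```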

Article
SGSNet: A Lightweight Depth Completion Network Based on Secondary Guidance and Spatial Fusion
Sensors 2022, 22(17), 6414; https://doi.org/10.3390/s22176414 - 25 Aug 2022
Cited by 1 | Viewed by 1070
Abstract
The depth completion task aims to generate a dense depth map from a sparse depth map and the corresponding RGB image. As a data preprocessing task, the challenge is to obtain denser depth maps without affecting the real-time performance of downstream tasks. In this paper, we propose a lightweight depth completion network based on secondary guidance and spatial fusion, named SGSNet. We design the image feature extraction module to better extract features from different scales, between and within layers, in parallel and to generate guidance features. Then, SGSNet uses secondary guidance to complete the depth completion. The first guidance uses the lightweight guidance module to quickly guide LiDAR feature extraction with the texture features of RGB images. The second guidance uses the depth information completion module for sparse depth map feature completion and inputs the result into the DA-CSPN++ module to complete the dense depth map re-guidance. By using a lightweight bootstrap module, the overall network runs ten times faster than the baseline. The overall network is relatively lightweight, running at up to thirty frames per second, which is sufficient to meet the data-extraction speed needs of large-scale SLAM and three-dimensional reconstruction. At the time of submission, the accuracy of the SGSNet algorithm ranked first in the KITTI ranking of lightweight depth completion methods. It was 37.5% faster than the top published algorithms in that ranking and second in the full ranking. Full article
(This article belongs to the Topic Artificial Intelligence in Sensors)

Article
EVtracker: An Event-Driven Spatiotemporal Method for Dynamic Object Tracking
Sensors 2022, 22(16), 6090; https://doi.org/10.3390/s22166090 - 15 Aug 2022
Cited by 2 | Viewed by 1461
Abstract
An event camera is a novel bio-inspired sensor that effectively compensates for the shortcomings of current frame cameras, which include high latency, low dynamic range, motion blur, etc. Rather than capturing images at a fixed frame rate, an event camera produces an asynchronous signal by measuring the brightness change of each pixel. Consequently, an appropriate algorithm framework that can handle the unique data types of event-based vision is required. In this paper, we propose a dynamic object tracking framework using an event camera to achieve long-term stable tracking of event objects. One of the key novel features of our approach is to adopt an adaptive strategy that adjusts the spatiotemporal domain of event data. To achieve this, we reconstruct event images from high-speed asynchronous streaming data via online learning. Additionally, we apply the Siamese network to extract features from event data. In contrast to earlier models that only extract hand-crafted features, our method provides powerful feature description and a more flexible reconstruction strategy for event data. We assess our algorithm in three challenging scenarios: 6-DoF (six degrees of freedom), translation, and rotation. Unlike fixed cameras in traditional object tracking tasks, all three tracking scenarios involve the simultaneous violent rotation and shaking of both the camera and objects. Results from extensive experiments suggest that our proposed approach achieves superior accuracy and robustness compared to other state-of-the-art methods. Without reducing time efficiency, our novel method exhibits a 30% increase in accuracy over other recent models. Furthermore, results indicate that event cameras are capable of robust object tracking, which is a task that conventional cameras cannot adequately perform, especially for super-fast motion tracking and challenging lighting situations. Full article
(This article belongs to the Topic Artificial Intelligence in Sensors)

Article
Accurate Spatial Positioning of Target Based on the Fusion of Uncalibrated Image and GNSS
Remote Sens. 2022, 14(16), 3877; https://doi.org/10.3390/rs14163877 - 10 Aug 2022
Cited by 2 | Viewed by 1056
Abstract
The accurate spatial positioning of a target in a fixed camera image is a critical sensing technique. Conventional visual spatial positioning methods rely on tedious camera calibration and face great challenges in selecting representative feature points to compute the position of the target, especially when occlusion exists or in remote scenes. To avoid these deficiencies, this paper proposes a deep learning approach for accurate visual spatial positioning of targets with the assistance of the Global Navigation Satellite System (GNSS). It contains two stages: the first stage trains a hybrid supervised and unsupervised auto-encoder regression network offline to gain the capability of regressing geolocation (longitude and latitude) directly from the fusion of image and GNSS, and learns an error scale factor to evaluate the regression error. The second stage first predicts the regressed accurate geolocation online from the observed image and GNSS measurement, and then filters the predicted geolocation and the measured GNSS to output the optimal geolocation. The experimental results showed that the proposed approach increased the average positioning accuracy by 56.83%, 37.25%, and 41.62% in a simulated scenario and by 31.25%, 7.43%, and 38.28% in a real-world scenario, compared with GNSS, the Interacting Multiple Model–Unscented Kalman Filter (IMM-UKF) and the supervised deep learning approach, respectively. Improvements were also achieved in positioning stability, robustness, generalization, and performance in GNSS-denied environments. Full article
(This article belongs to the Topic Artificial Intelligence in Sensors)

Article
Evaluating the Forest Ecosystem through a Semi-Autonomous Quadruped Robot and a Hexacopter UAV
Sensors 2022, 22(15), 5497; https://doi.org/10.3390/s22155497 - 23 Jul 2022
Cited by 6 | Viewed by 1950
Abstract
Accurate and timely monitoring is imperative to the resilience of forests for economic growth and climate regulation. In the UK, forest management depends on citizen science to perform tedious and time-consuming data collection tasks. In this study, an unmanned aerial vehicle (UAV) equipped with a light sensor and positioning capabilities is deployed to perform aerial surveying and to observe a series of forest health indicators (FHIs) which are inaccessible from the ground. However, many FHIs such as burrows and deadwood can only be observed from under the tree canopy. Hence, we take the initiative of employing a quadruped robot with an integrated camera as well as an external sensing platform (ESP) equipped with light and infrared cameras, computing, communication and power modules to observe these FHIs from the ground. The forest-monitoring time can be extended by reducing computation and conserving energy. Therefore, we analysed different versions of the YOLO object-detection algorithm in terms of accuracy, deployment and usability by the ESP to accomplish extensive low-latency detection. In addition, we constructed a series of new datasets to train the YOLOv5x and YOLOv5s for recognising FHIs. Our results reveal that YOLOv5s is lightweight and easy to train for FHI detection while performing close to real-time, cost-effective and autonomous forest monitoring. Full article
(This article belongs to the Topic Artificial Intelligence in Sensors)

Article
An SAR Ship Object Detection Algorithm Based on Feature Information Efficient Representation Network
Remote Sens. 2022, 14(14), 3489; https://doi.org/10.3390/rs14143489 - 21 Jul 2022
Cited by 5 | Viewed by 1616
Abstract
In the synthetic aperture radar (SAR) ship image, the target size is small and dense, the background is complex and changeable, the ship target is difficult to distinguish from the surrounding background, and there are many ship-like targets in the image. This makes it difficult for deep-learning-based target detection algorithms to obtain effective feature information, resulting in missed and false detection. The effective expression of the feature information of the target to be detected is the key to the target detection algorithm. How to improve the clear expression of image feature information in the network has always been a difficult point. Aiming at the above problems, this paper proposes a new target detection algorithm, the feature information efficient representation network (FIERNet). The algorithm can extract better feature details, enhance network feature fusion and information expression, and improve model detection capabilities. First, the convolution transformer feature extraction (CTFE) module is proposed, and a convolution transformer feature extraction network (CTFENet) is built with this module as a feature extraction block. The network enables the model to obtain more accurate and comprehensive feature information, weakens the interference of invalid information, and improves the overall performance of the network. Second, a new effective feature information fusion (EFIF) module is proposed to enhance the transfer and fusion of the main information of feature maps. Finally, a new frame-decoding formula is proposed to further improve the coincidence between the predicted frame and the target frame and obtain more accurate picture information. Experiments show that the method achieves 94.14% and 92.01% mean precision (mAP) on SSDD and SAR-ship datasets, and it works well on large-scale SAR ship images. In addition, FIERNet greatly reduces the occurrence of missed detection and false detection in SAR ship detection. Compared to other state-of-the-art object detection algorithms, FIERNet outperforms them on various performance metrics on SAR images. Full article
(This article belongs to the Topic Artificial Intelligence in Sensors)

Article
Trigger-Based K-Band Microwave Ranging System Thermal Control with Model-Free Learning Process
Electronics 2022, 11(14), 2173; https://doi.org/10.3390/electronics11142173 - 11 Jul 2022
Cited by 1 | Viewed by 901
Abstract
Micron-level-accuracy K-band microwave ranging in space relies on the stability of the payload's on-board thermal control; however, the large numbers of thermal sensors and heating devices around the deployed instruments consume the precious internal communication resources of the central computer. A further problem is that the payload's thermal protection environment can deteriorate gradually over years of operation. In this paper, a new trigger-based thermal system controller design is proposed, with consideration of spaceborne communication burden reduction and actuator saturation, which guarantees stable temperature fluctuations of microwave payloads in space missions. The controller combines a nominal constant-sampling PID inner loop and a trigger-based outer loop structure under the constraints of heating device saturation. Moreover, an iterative model-free reinforcement learning process is adopted that can approximate the estimation of thermal dynamic modeling uncertainty online. Through extensive experiments in a laboratory environment, the performance of the proposed trigger-based thermal control is verified, with smaller temperature fluctuations compared to the nominal control and clear gains in communication efficiency. The online learning algorithm is also tested under deliberate thermal conditions that deviate from the original system; the results quickly converge back to normal when the thermal disturbance is removed. Finally, the ranging accuracy is tested for the whole system, and a 25% (RMS) performance improvement, about 2.2 µm, can be realized by using the trigger-based control strategy compared to the nominal control method.
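A toy sketch of the trigger-based structure follows: a fixed-rate PID inner loop regulates a simple first-order thermal plant under heater saturation, while an outer loop transmits to the central computer only when the temperature error crosses a threshold. The plant model, gains, and trigger threshold are illustrative assumptions.

```python
import random

setpoint, temp, integral, prev_err = 20.0, 18.0, 0.0, 0.0
kp, ki, kd, dt, trigger_threshold = 2.0, 0.1, 0.5, 1.0, 0.05
messages = 0

for step in range(200):
    err = setpoint - temp
    integral += err * dt
    u = kp * err + ki * integral + kd * (err - prev_err) / dt
    u = min(max(u, 0.0), 5.0)          # heater saturation
    prev_err = err
    # first-order thermal plant with a small random disturbance
    temp += dt * (0.1 * u - 0.02 * (temp - 15.0)) + random.gauss(0, 0.01)
    if abs(err) > trigger_threshold:   # event trigger: report only large errors
        messages += 1

print(f"final temp {temp:.2f} C, messages sent {messages}/200")
```

Compared with reporting every sample, the event trigger trades a bounded temperature error for far fewer bus messages, which is the communication saving the paper targets.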
(This article belongs to the Topic Artificial Intelligence in Sensors)

Technical Note
Optimal Sensor Placement Using Learning Models—A Mediterranean Case Study
Remote Sens. 2022, 14(13), 2989; https://doi.org/10.3390/rs14132989 - 22 Jun 2022
Cited by 1 | Viewed by 1275
Abstract
In this paper, we discuss different approaches to optimal sensor placement and propose that an optimal sensor location can be selected using unsupervised learning methods such as self-organising maps, neural gas, or the K-means algorithm. We show how each of the algorithms can be used for this purpose and that additional constraints, such as distance from shore, which is presumed to be related to deployment and maintenance costs, can be considered. The study uses wind data over the Mediterranean Sea and evaluates sensor location selection using the reconstruction error. The reconstruction error shows that results deteriorate when additional constraints are imposed. However, it is also shown that a small fraction of the data is sufficient to reconstruct wind data over a larger geographic area with an error comparable to that of a meteorological model. The results are confirmed by several experiments and are consistent with the results of previous studies.
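A minimal sketch of the K-means variant of this idea: cluster grid points by their wind time series, place one sensor per cluster at the point nearest the centroid, and reconstruct every point from its cluster's sensor. The data below are synthetic stand-ins for the Mediterranean wind fields.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 240))            # 500 grid points x 240 time steps

km = KMeans(n_clusters=10, n_init=10, random_state=0).fit(X)
sensors = np.array([
    np.argmin(np.linalg.norm(X - c, axis=1)) for c in km.cluster_centers_
])                                         # grid index chosen for each sensor

recon = X[sensors][km.labels_]             # each point copies its cluster's sensor
rmse = np.sqrt(np.mean((X - recon) ** 2))
print(f"reconstruction RMSE: {rmse:.3f}")
```

A shore-distance constraint could be added by penalizing or excluding far-offshore grid points when picking the representative of each cluster.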
(This article belongs to the Topic Artificial Intelligence in Sensors)

Article
DPSSD: Dual-Path Single-Shot Detector
Sensors 2022, 22(12), 4616; https://doi.org/10.3390/s22124616 - 18 Jun 2022
Cited by 1 | Viewed by 1273
Abstract
Object detection is one of the most important and challenging branches of computer vision and is widely used in everyday applications such as surveillance security and autonomous driving. We propose a novel dual-path multi-scale object detection paradigm in order to extract richer feature information for the object detection task and to optimize the multi-scale object detection problem, and based on this, we design a single-stage general object detection algorithm called the Dual-Path Single-Shot Detector (DPSSD). The dual path, consisting of a residual path and a concatenation path, ensures that shallow features can be more easily utilized to improve detection accuracy. Our improved dual-path network is more adaptable to multi-scale object detection tasks, and we combine it with a feature fusion module to generate a multi-scale feature learning paradigm called the “Dual-Path Feature Pyramid”. We trained the models on the PASCAL VOC and COCO datasets with 320-pixel and 512-pixel inputs, respectively, and performed inference experiments to validate the structures in the neural network. The experimental results show that our algorithm has an advantage over anchor-based single-stage object detection algorithms and achieves an advanced level of average precision. Researchers can replicate the reported results of this paper.
(This article belongs to the Topic Artificial Intelligence in Sensors)

Article
Encoder-Decoder Structure with Multiscale Receptive Field Block for Unsupervised Depth Estimation from Monocular Video
Remote Sens. 2022, 14(12), 2906; https://doi.org/10.3390/rs14122906 - 17 Jun 2022
Cited by 1 | Viewed by 1297
Abstract
Monocular depth estimation is a fundamental yet challenging task in computer vision, as depth information is lost when 3D scenes are mapped to 2D images. Although deep learning-based methods have led to considerable improvements for this task in a single image, most existing approaches still fail to overcome this limitation. Supervised learning methods model depth estimation as a regression problem and, as a result, require large amounts of ground-truth depth data for training in actual scenarios. Unsupervised learning methods treat depth estimation as the synthesis of a new disparity map, which means that rectified stereo image pairs need to be used as the training dataset. Aiming to solve this problem, we present an encoder-decoder based framework that infers depth maps from monocular video snippets in an unsupervised manner. First, we design an unsupervised learning scheme for the monocular depth estimation task based on the basic principles of structure from motion (SfM); it uses only adjacent video clips, rather than paired training data, as supervision. Second, our method predicts two confidence masks to improve the robustness of the depth estimation model and avoid the occlusion problem. Finally, we leverage the largest-scale and minimum depth loss instead of the multiscale and average loss to improve the accuracy of depth estimation. The experimental results on the benchmark KITTI dataset for depth estimation show that our method outperforms competing unsupervised methods.
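A sketch of the minimum-loss idea: rather than averaging photometric errors over the source views warped into the target frame, take the per-pixel minimum, which suppresses pixels that are occluded in some views. The tensor shapes and L1 photometric error are assumptions for illustration.

```python
import torch

def min_reprojection_loss(target, warped_sources):
    """target: (B, 3, H, W); warped_sources: (B, S, 3, H, W) warped source views."""
    errors = (warped_sources - target.unsqueeze(1)).abs().mean(dim=2)  # (B, S, H, W)
    per_pixel_min, _ = errors.min(dim=1)   # keep the best-matching source per pixel
    return per_pixel_min.mean()

loss = min_reprojection_loss(torch.rand(2, 3, 64, 64), torch.rand(2, 4, 3, 64, 64))
print(loss.item())
```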
(This article belongs to the Topic Artificial Intelligence in Sensors)

Communication
A Low-Power Analog Processor-in-Memory-Based Convolutional Neural Network for Biosensor Applications
Sensors 2022, 22(12), 4555; https://doi.org/10.3390/s22124555 - 16 Jun 2022
Cited by 4 | Viewed by 1560
Abstract
This paper presents an on-chip implementation of an analog processor-in-memory (PIM)-based convolutional neural network (CNN) in a biosensor. The operator was designed with low power to implement the CNN as an on-chip device on the biosensor, which consists of a 32 × 32 array of sensing plates. In this paper, a 10T SRAM-based analog PIM, which performs multiply-and-average (MAV) operations with multiplication and accumulation (MAC), is used as a filter to implement the CNN at low power. The PIM performs MAV operations for feature extraction as a filter, using an analog method. To prepare the input features, an input matrix is formed by scanning the 32 × 32 biosensor with a digital controller operating at a 32 MHz frequency. Memory reuse techniques were applied to the analog SRAM filter, which is the core of the low-power implementation, and in order to accurately assess the MAC operational efficiency and classification, we modeled and trained numerous input features based on biosignal data, confirming the classification. When the learned weight data were input, 19 mW of power was consumed during the analog-based MAC operation. The implementation showed an energy efficiency of 5.38 TOPS/W and is differentiated by its 8-bit high-resolution implementation in a 180 nm CMOS process.
(This article belongs to the Topic Artificial Intelligence in Sensors)

Article
A Vision-Based System for Stage Classification of Parkinsonian Gait Using Machine Learning and Synthetic Data
Sensors 2022, 22(12), 4463; https://doi.org/10.3390/s22124463 - 13 Jun 2022
Cited by 2 | Viewed by 1667
Abstract
Parkinson’s disease is characterized by abnormal gait, which worsens as the condition progresses. Although several methods have been able to classify this feature through pose-estimation algorithms and machine-learning classifiers, few studies have been able to analyze its progression to perform stage classification of the disease. Moreover, despite the increasing popularity of these systems for gait analysis, the amount of available gait-related data can often be limited, thereby hindering the progress of the implementation of this technology in the medical field. As such, creating a quantitative prognosis method that can identify the severity levels of a Parkinsonian gait with little data could help facilitate the study of the Parkinsonian gait for rehabilitation. In this contribution, we propose a vision-based system to analyze the Parkinsonian gait at various stages using linear interpolation of Parkinsonian gait models. We present a comparison between the performance of the k-nearest neighbors (KNN), support-vector machine (SVM), and gradient boosting (GB) algorithms in classifying well-established gait features. Our results show that the proposed system achieved 96–99% accuracy in evaluating the prognosis of Parkinsonian gaits.
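A minimal sketch of this kind of classifier comparison is shown below; the synthetic features and labels stand in for the paper's pose-derived gait features and severity stages.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Synthetic stand-in: 12 gait features, 4 severity stages.
X, y = make_classification(n_samples=600, n_features=12, n_classes=4,
                           n_informative=8, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

for name, clf in [("KNN", KNeighborsClassifier(n_neighbors=5)),
                  ("SVM", SVC(kernel="rbf")),
                  ("GB", GradientBoostingClassifier())]:
    clf.fit(Xtr, ytr)
    print(name, f"accuracy: {clf.score(Xte, yte):.3f}")
```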
(This article belongs to the Topic Artificial Intelligence in Sensors)

Article
Path-Planning System for Radioisotope Identification Devices Using 4π Gamma Imaging Based on Random Forest Analysis
Sensors 2022, 22(12), 4325; https://doi.org/10.3390/s22124325 - 7 Jun 2022
Cited by 2 | Viewed by 1420
Abstract
We developed a path-planning system for radiation source identification devices using 4π gamma imaging. The estimated source location and activity were calculated by an integrated simulation model using 4π gamma images at multiple measurement positions. From these calculated values, a prediction model to estimate the probability of identification at the next measurement position was created via random forest analysis. The path-planning system based on the prediction model was verified by integrated simulation and experiment for a 137Cs point source. The results showed that 137Cs point sources were identified using only the few measurement positions suggested by the path-planning system.
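A sketch of the prediction step: a random forest maps features of each candidate next measurement position to the probability that the source will be identified there, and the planner moves to the most promising one. The three features and the training data below are illustrative assumptions, not the paper's feature set.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
# Hypothetical training set from the integrated simulation:
# columns such as distance to estimated source, estimated activity, dose rate.
features = rng.uniform(size=(1000, 3))
identified = (features[:, 0] < 0.4).astype(int)   # 1 if the source was identified

rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(features, identified)

candidates = rng.uniform(size=(8, 3))             # candidate next positions
p_identify = rf.predict_proba(candidates)[:, 1]   # probability of identification
print("best next position index:", int(np.argmax(p_identify)))
```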
(This article belongs to the Topic Artificial Intelligence in Sensors)

Article
A Long Short-Term Memory Network for Plasma Diagnosis from Langmuir Probe Data
Sensors 2022, 22(11), 4281; https://doi.org/10.3390/s22114281 - 4 Jun 2022
Cited by 3 | Viewed by 1430
Abstract
Electrostatic probe diagnosis is the main method of plasma diagnosis. However, traditional diagnosis theory is affected by many factors, and it is difficult to obtain accurate diagnosis results. In this study, a long short-term memory (LSTM) approach is used for plasma probe diagnosis to derive the electron density (Ne) and temperature (Te) more accurately and quickly. The LSTM network uses the data collected by Langmuir probes as input to eliminate the influence of the discharge device on the diagnosis, so it can be applied to a variety of discharge environments and even space ionospheric diagnosis. In a high-vacuum gas discharge environment, the Langmuir probe is used to obtain current–voltage (I–V) characteristic curves under different Ne and Te. Part of the data is selected to train the network, the remaining data are used as a test set, and the parameters are adjusted so that the network obtains better prediction results. Two metrics, namely the mean squared error (MSE) and the mean absolute percentage error (MAPE), are evaluated to calculate the prediction accuracy. The results show that using an LSTM to diagnose plasma can reduce the impact of probe surface contamination affecting traditional diagnosis methods and can accurately diagnose underdense plasma. In addition, compared with Te, the Ne diagnosis result output by the LSTM is more accurate.
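A minimal sketch of such a network: an LSTM reads the I–V characteristic as a sequence of (V, I) pairs and regresses (Ne, Te) from its final hidden state. Layer sizes and the random training data are assumptions for illustration.

```python
import torch
import torch.nn as nn

class ProbeLSTM(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=2, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)           # outputs (Ne, Te)

    def forward(self, iv):                          # iv: (B, T, 2) = (V, I) pairs
        out, _ = self.lstm(iv)
        return self.head(out[:, -1])                # regress from last time step

model, loss_fn = ProbeLSTM(), nn.MSELoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
iv, target = torch.randn(16, 100, 2), torch.rand(16, 2)  # placeholder curves/labels
for _ in range(5):                                  # tiny training loop
    opt.zero_grad()
    loss = loss_fn(model(iv), target)
    loss.backward()
    opt.step()
print(loss.item())
```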
(This article belongs to the Topic Artificial Intelligence in Sensors)

Article
RadArnomaly: Protecting Radar Systems from Data Manipulation Attacks
Sensors 2022, 22(11), 4259; https://doi.org/10.3390/s22114259 - 2 Jun 2022
Cited by 3 | Viewed by 1777
Abstract
Radar systems are mainly used for tracking aircraft, missiles, satellites, and watercraft. In many cases, information regarding the objects detected by a radar system is sent to, and used by, a peripheral consuming system, such as a missile system or a graphical user interface used by an operator. Those systems process the data stream and make real-time operational decisions based on the data received. Given this, the reliability and availability of information provided by radar systems have grown in importance. Although the field of cyber security has been continuously evolving, no prior research has focused on anomaly detection in radar systems. In this paper, we present an unsupervised deep-learning-based method for detecting anomalies in radar system data streams; we take into consideration the fact that a data stream created by a radar system is heterogeneous, i.e., it contains both numerical and categorical features with non-linear and complex relationships. We propose a novel technique that learns the correlation between numerical features and an embedding representation of categorical features in an unsupervised manner. The proposed technique, which allows for the detection of malicious manipulation of critical fields in a data stream, is complemented by a timing-interval anomaly-detection mechanism proposed for the detection of message-dropping attempts. Real radar system data were used to evaluate the proposed method. Our experiments demonstrated the method’s high detection accuracy on a variety of data-stream manipulation attacks (an average detection rate of 88% with a false-alarm rate of 1.59%) and message-dropping attacks (an average detection rate of 92% with a false-alarm rate of 2.2%).
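A sketch of the core idea: embed the categorical fields, concatenate them with the numerical fields, and train an autoencoder whose reconstruction error on the numerical part serves as the anomaly score. Field counts and layer sizes are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class HeteroAE(nn.Module):
    def __init__(self, n_categories=10, emb_dim=4, n_numeric=6):
        super().__init__()
        self.emb = nn.Embedding(n_categories, emb_dim)   # categorical field
        self.enc = nn.Sequential(nn.Linear(n_numeric + emb_dim, 16), nn.ReLU(),
                                 nn.Linear(16, 4))
        self.dec = nn.Sequential(nn.Linear(4, 16), nn.ReLU(),
                                 nn.Linear(16, n_numeric))

    def forward(self, numeric, categorical):
        z = self.enc(torch.cat([numeric, self.emb(categorical)], dim=1))
        return self.dec(z)      # reconstruct the numerical fields

model = HeteroAE()
numeric = torch.randn(32, 6)                     # placeholder track messages
categorical = torch.randint(0, 10, (32,))        # placeholder message type
recon = model(numeric, categorical)
anomaly_score = ((recon - numeric) ** 2).mean(dim=1)   # higher = more anomalous
print(anomaly_score.shape)
```

After training on benign traffic only, manipulated fields break the learned numeric-categorical correlations and inflate the reconstruction error.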
(This article belongs to the Topic Artificial Intelligence in Sensors)

Article
Transfer Learning for Sentiment Analysis Using BERT Based Supervised Fine-Tuning
Sensors 2022, 22(11), 4157; https://doi.org/10.3390/s22114157 - 30 May 2022
Cited by 25 | Viewed by 9140
Abstract
The growth of the Internet has expanded the amount of data expressed by users across multiple platforms. The availability of these different worldviews and individuals’ emotions empowers sentiment analysis. However, sentiment analysis becomes even more challenging due to a scarcity of standardized labeled data in the Bangla NLP domain. The majority of existing Bangla research has relied on deep learning models that focus on context-independent word embeddings, such as Word2Vec, GloVe, and fastText, in which each word has a fixed representation irrespective of its context. Meanwhile, context-based pre-trained language models such as BERT have recently revolutionized the state of natural language processing. In this work, we utilized BERT’s transfer learning ability with a deep integrated CNN-BiLSTM model for enhanced decision-making performance in sentiment analysis. In addition, we applied transfer learning to classical machine learning algorithms for performance comparison with the CNN-BiLSTM. We also explore various word embedding techniques, such as Word2Vec, GloVe, and fastText, and compare their performance to the BERT transfer learning strategy. As a result, we have shown state-of-the-art binary classification performance for Bangla sentiment analysis that significantly outperforms all other embeddings and algorithms.
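A minimal sketch of the transfer-learning setup: a pre-trained BERT encoder produces contextual features that a downstream classifier consumes. The paper targets Bangla; bert-base-multilingual-cased is used here only as a stand-in checkpoint, and the linear head is a simplified substitute for the paper's CNN-BiLSTM.

```python
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
bert = AutoModel.from_pretrained("bert-base-multilingual-cased")
classifier = nn.Linear(bert.config.hidden_size, 2)    # binary sentiment head

batch = tokenizer(["a sample sentence"], return_tensors="pt",
                  padding=True, truncation=True)
with torch.no_grad():
    hidden = bert(**batch).last_hidden_state          # (B, T, hidden_size)
logits = classifier(hidden.mean(dim=1))               # mean-pooled BERT features
print(logits.softmax(dim=-1))
```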
(This article belongs to the Topic Artificial Intelligence in Sensors)

Article
Dual Projection Fusion for Reference-Based Image Super-Resolution
Sensors 2022, 22(11), 4119; https://doi.org/10.3390/s22114119 - 28 May 2022
Cited by 1 | Viewed by 1372
Abstract
Reference-based image super-resolution (RefSR) methods have achieved performance superior to that of single image super-resolution (SISR) methods by transferring texture details from an additional high-resolution (HR) reference image to the low-resolution (LR) image. However, existing RefSR methods simply add or concatenate the transferred texture feature with the LR features, which cannot effectively fuse the information of these two independently extracted features. Therefore, this paper proposes a dual projection fusion for reference-based image super-resolution (DPFSR), which enables the network to focus on the differences between feature sources through inter-residual projection operations, ensuring effective filling of detailed information in the LR feature. Moreover, this paper also proposes a novel backbone called the deep channel attention connection network (DCACN), which is capable of extracting valuable high-frequency components from the LR space to further facilitate the effectiveness of image reconstruction. Experimental results show that we achieve the best peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) performance compared with state-of-the-art (SOTA) SISR and RefSR methods. Visual results demonstrate that the proposed method recovers more natural and realistic texture details.
(This article belongs to the Topic Artificial Intelligence in Sensors)

Article
Deep Learning Regression Approaches Applied to Estimate Tillering in Tropical Forages Using Mobile Phone Images
Sensors 2022, 22(11), 4116; https://doi.org/10.3390/s22114116 - 28 May 2022
Viewed by 1808
Abstract
We assessed the performance of Convolutional Neural Network (CNN)-based approaches that use mobile phone images to estimate regrowth density in tropical forages. We generated a dataset composed of 1124 labeled images captured with two mobile phones 7 days after the harvest of the forage plants. Six architectures were evaluated, including AlexNet, ResNet (18, 34, and 50 layers), ResNeXt101, and DarkNet. The best regression model showed a mean absolute error of 7.70 and a correlation of 0.89. Our findings suggest that deep learning applied to mobile phone images can successfully be used to estimate regrowth density in forages.
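A minimal sketch of a comparable CNN regression setup: a ResNet18 backbone whose classification head is replaced by a single-output layer to regress density, trained with an L1 loss (matching the mean-absolute-error metric). The tensors below are placeholders, not the paper's data.

```python
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=None)            # or ImageNet-pretrained weights
model.fc = nn.Linear(model.fc.in_features, 1)    # single regression output

images = torch.randn(8, 3, 224, 224)             # placeholder phone images
targets = torch.rand(8, 1) * 100                 # placeholder tiller densities
loss = nn.L1Loss()(model(images), targets)       # L1 corresponds to MAE
print(loss.item())
```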
(This article belongs to the Topic Artificial Intelligence in Sensors)

Article
TimeREISE: Time Series Randomized Evolving Input Sample Explanation
Sensors 2022, 22(11), 4084; https://doi.org/10.3390/s22114084 - 27 May 2022
Cited by 1 | Viewed by 1217
Abstract
Deep neural networks are among the most successful classifiers across different domains. However, their use is limited in safety-critical areas due to their limited interpretability. The research field of explainable artificial intelligence addresses this problem. However, most interpretability methods are aligned to the imaging modality by design. This paper introduces TimeREISE, a model-agnostic attribution method that shows success in the context of time series classification. The method applies perturbations to the input and considers different attribution map characteristics such as the granularity and density of an attribution map. The approach demonstrates superior performance compared to existing methods on different well-established measurements: TimeREISE shows impressive results in the deletion and insertion test, Infidelity, and Sensitivity. Concerning the continuity of an explanation, it shows superior performance while preserving the correctness of the attribution map. Additional sanity checks prove the correctness of the approach and its dependency on the model parameters. TimeREISE scales well with an increasing number of channels and timesteps, applies to any time series classification network without relying on prior data knowledge, and suits any use case independent of dataset characteristics such as sequence length, channel number, and number of classes.
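A sketch of the perturbation idea behind such methods: draw random masks over timesteps, occlude the series, and weight each timestep by how well the masked inputs preserve the class score. The dummy model and masking scheme are illustrative assumptions, not the authors' exact procedure.

```python
import torch

def attribution(model, x, target, n_masks=500, keep_prob=0.5):
    """x: (1, C, T); returns a per-timestep importance map of shape (T,)."""
    T = x.shape[-1]
    masks = (torch.rand(n_masks, T) < keep_prob).float()   # (N, T) random masks
    masked = x * masks[:, None, :]                          # broadcast over channels
    with torch.no_grad():
        scores = model(masked).softmax(dim=1)[:, target]    # (N,) class scores
    # Timesteps that are kept whenever the score stays high get high importance.
    return (scores[:, None] * masks).sum(0) / masks.sum(0)

dummy = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 50, 4))
x = torch.randn(1, 3, 50)
print(attribution(dummy, x, target=0).shape)   # torch.Size([50])
```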
(This article belongs to the Topic Artificial Intelligence in Sensors)

Review
Recent Trends in AI-Based Intelligent Sensing
Electronics 2022, 11(10), 1661; https://doi.org/10.3390/electronics11101661 - 23 May 2022
Cited by 4 | Viewed by 5319
Abstract
In recent years, intelligent sensing has gained significant attention because of its autonomous decision-making ability to solve complex problems. Today, smart sensors complement and enhance the capabilities of human beings and have been widely embraced in numerous application areas. Artificial intelligence (AI) has made astounding progress in the domains of natural language processing, machine learning (ML), and computer vision. Methods based on AI enable a computer to learn and monitor activities by sensing the source of information in a real-time environment. The combination of these two technologies provides a promising solution for intelligent sensing. This survey provides a comprehensive summary of recent research on AI-based algorithms for intelligent sensing. It also presents a comparative analysis of algorithms, models, influential parameters, available datasets, applications, and projects in the area of intelligent sensing. Furthermore, we present a taxonomy of AI models along with cutting-edge approaches. Finally, we highlight challenges and open issues, followed by future research directions pertaining to this exciting and fast-moving field.
(This article belongs to the Topic Artificial Intelligence in Sensors)

Article
TCSPANet: Two-Staged Contrastive Learning and Sub-Patch Attention Based Network for PolSAR Image Classification
Remote Sens. 2022, 14(10), 2451; https://doi.org/10.3390/rs14102451 - 20 May 2022
Cited by 3 | Viewed by 1432
Abstract
Polarimetric synthetic aperture radar (PolSAR) image classification has achieved great progress, but some obstacles remain. On the one hand, a large amount of PolSAR data is captured, yet most of it is not labeled with land cover categories and therefore cannot be fully utilized. On the other hand, annotating PolSAR images relies heavily on domain knowledge and manpower, which makes pixel-level annotation hard. To alleviate these problems, by integrating contrastive learning and the transformer, we propose a novel patch-level PolSAR image classification method, the two-staged contrastive learning and sub-patch attention based network (TCSPANet). First, the two-staged contrastive learning based network (TCNet) is designed to learn representations of PolSAR images without supervision and to obtain discriminative, comparable features for actual land covers. Then, drawing on the transformer, we construct the sub-patch attention encoder (SPAE) for modelling the context within patch samples. To train the TCSPANet, two patch-level datasets are built up based on unsupervised and semi-supervised methods. For prediction, a classify-or-split algorithm is put forward to realise non-overlapping, coarse-to-fine patch-level classification. The classification results on multiple PolSAR images with one trained model suggest that our proposed model is superior to the compared methods.
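For context, a sketch of a standard contrastive (NT-Xent) objective of the kind used to pre-train representations without labels; the two views here would be two augmented versions of the same PolSAR patch. This is a generic loss, not TCNet's exact two-staged formulation.

```python
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, tau=0.5):
    """z1, z2: (B, D) embeddings of two views of the same B samples."""
    z = F.normalize(torch.cat([z1, z2]), dim=1)    # (2B, D) unit vectors
    sim = z @ z.t() / tau                          # temperature-scaled cosine sims
    sim.fill_diagonal_(float('-inf'))              # exclude self-similarity
    B = z1.shape[0]
    # Each sample's positive is its other view: i <-> i + B.
    targets = torch.cat([torch.arange(B, 2 * B), torch.arange(0, B)])
    return F.cross_entropy(sim, targets)

print(nt_xent(torch.randn(8, 32), torch.randn(8, 32)).item())
```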
(This article belongs to the Topic Artificial Intelligence in Sensors)

Review
Deep Learning-Based Object Detection Techniques for Remote Sensing Images: A Survey
Remote Sens. 2022, 14(10), 2385; https://doi.org/10.3390/rs14102385 - 16 May 2022
Cited by 25 | Viewed by 4816
Abstract
Object detection in remote sensing images (RSIs) requires locating and classifying objects of interest, and it is a hot topic in RSI analysis research. With the development of deep learning (DL) technology, which has accelerated in recent years, numerous intelligent and efficient detection algorithms have been proposed. Meanwhile, the performance of remote sensing imaging hardware has also evolved significantly. Detection technology for high-resolution RSIs has been pushed to unprecedented heights, making important contributions in practical applications such as urban detection, building planning, and disaster prediction. However, although some scholars have authored reviews on DL-based object detection systems, the leading DL-based object detection improvement strategies have never been summarized in detail. In this paper, we first briefly review the recent history of remote sensing object detection (RSOD) techniques, including traditional methods as well as DL-based methods. Then, we systematically summarize the procedures used in DL-based detection algorithms. Most importantly, starting from the problems faced by high-resolution RSI object detection, namely complex object features, complex background information, and tedious sample annotation, we introduce a taxonomy based on various detection methods, which focuses on summarizing and classifying the existing attention mechanisms, multi-scale feature fusion, super-resolution, and other major improvement strategies. We also introduce recognized open-source remote sensing detection benchmarks and evaluation metrics. Finally, based on the current state of the technology, we conclude by discussing the challenges and potential trends in the field of RSOD in order to provide a reference for researchers who have just entered the field.
(This article belongs to the Topic Artificial Intelligence in Sensors)

Article
Extraction of Micro-Doppler Feature Using LMD Algorithm Combined Supplement Feature for UAVs and Birds Classification
Remote Sens. 2022, 14(9), 2196; https://doi.org/10.3390/rs14092196 - 4 May 2022
Cited by 2 | Viewed by 1594
Abstract
In the past few decades, the demand for reliable and robust systems capable of monitoring unmanned aerial vehicles (UAVs) has increased significantly due to the security threats arising from their wide range of applications. During UAV surveillance, birds are a typical confuser target; therefore, discriminating UAVs from birds is critical for successful non-cooperative UAV surveillance. The micro-Doppler signature (m-DS) reflects the scattering characteristics of micro-motion targets and has been utilized for many radar automatic target recognition (RATR) tasks. In this paper, the authors deploy local mean decomposition (LMD) to separate the m-DS of the micro-motion parts from the body returns of the UAVs and birds. After the separation, the rotating parts are obtained without interference from the body components, and the m-DS features can be revealed more clearly, which aids feature extraction. Moreover, using m-DS alone for target classification presents several problems. First, extracting only m-DS features makes incomplete use of the information in the spectrogram. Second, m-DS can be observed only for metal-rotor UAVs, or for large UAVs when they are close to the radar. Last, m-DS cannot be observed when a bird is small or gliding. The authors thus propose an algorithm for RATR of UAVs and interfering targets under a new L-band staring radar system. In this algorithm, to make full use of the information in the spectrogram and to supplement the information in exceptional situations, m-DS, movement, and energy aggregation features of the target are extracted from the spectrogram. On the benchmark dataset, the proposed algorithm demonstrates better performance than the state-of-the-art algorithms. More specifically, the equal error rate (EER) of the proposed method is 2.56% lower than that of existing methods, which demonstrates the effectiveness of the proposed algorithm.
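A sketch of the first step shared by such pipelines: form the spectrogram of a radar return whose phase is sinusoidally modulated by a rotating part (a toy stand-in for a rotor echo), from which m-DS, movement, and energy features can then be extracted. The signal parameters are illustrative assumptions.

```python
import numpy as np
from scipy.signal import spectrogram

fs = 4000                                    # pulse repetition frequency (Hz)
t = np.arange(0, 1.0, 1 / fs)
body = np.exp(1j * 2 * np.pi * 50 * t)       # body return at 50 Hz Doppler
rotor = 0.3 * np.exp(1j * 6 * np.sin(2 * np.pi * 30 * t))  # micro-motion sidebands
echo = body + rotor

# Two-sided spectrogram, since the complex echo has signed Doppler.
f, tt, Sxx = spectrogram(echo, fs=fs, nperseg=256, noverlap=192,
                         return_onesided=False)
print(Sxx.shape)   # frequency bins x time frames
```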
(This article belongs to the Topic Artificial Intelligence in Sensors)

Article
Electrocardiogram Biometrics Using Transformer’s Self-Attention Mechanism for Sequence Pair Feature Extractor and Flexible Enrollment Scope Identification
Sensors 2022, 22(9), 3446; https://doi.org/10.3390/s22093446 - 30 Apr 2022
Cited by 2 | Viewed by 2013
Abstract
Existing electrocardiogram (ECG) biometrics do not perform well when the ECG changes after the enrollment phase, because the feature extraction is not able to relate the ECG collected during enrollment to the ECG collected during classification. In this research, we propose the sequence pair feature extractor, inspired by the sentence pair task of Bidirectional Encoder Representations from Transformers (BERT), to obtain a dynamic representation of a pair of ECGs. We also propose using the transformer’s self-attention mechanism to draw an inter-identity relationship when performing ECG identification tasks. The model was trained once with datasets built from 10 ECG databases and was then applied to six other ECG databases without retraining. We emphasize the significance of the time separation between enrollment and classification when presenting the results. Over a short time separation, the model scored 96.20%, 100.0%, 99.91%, 96.09%, 96.35%, and 98.10% identification accuracy on the MIT-BIH Atrial Fibrillation Database (AFDB), the Combined measurement of ECG, Breathing and Seismocardiograms database (CEBSDB), the MIT-BIH Normal Sinus Rhythm Database (NSRDB), the MIT-BIH ST Change Database (STDB), the ECG-ID Database (ECGIDDB), and the PTB Diagnostic ECG Database (PTBDB), respectively. Over a long time separation, the model scored 92.70% and 64.16% identification accuracy on ECGIDDB and PTBDB, respectively, a significant improvement compared to state-of-the-art methods.
(This article belongs to the Topic Artificial Intelligence in Sensors)