Editor’s Choice Articles

Editor’s Choice articles are based on recommendations by the scientific editors of MDPI journals from around the world. Editors select a small number of articles recently published in the journal that they believe will be particularly interesting to readers, or important in the respective research area. The aim is to provide a snapshot of some of the most exciting work published in the various research areas of the journal.

16 pages, 1404 KiB  
Article
Towards 6G IoT: Tracing Mobile Sensor Nodes with Deep Learning Clustering in UAV Networks
by Yannis Spyridis, Thomas Lagkas, Panagiotis Sarigiannidis, Vasileios Argyriou, Antonios Sarigiannidis, George Eleftherakis and Jie Zhang
Sensors 2021, 21(11), 3936; https://doi.org/10.3390/s21113936 - 07 Jun 2021
Cited by 24 | Viewed by 4256
Abstract
Unmanned aerial vehicles (UAVs) in the role of flying anchor nodes have been proposed to assist the localisation of terrestrial Internet of Things (IoT) sensors and provide relay services in the context of the upcoming 6G networks. This paper considered the objective of tracing a mobile IoT device of unknown location, using a group of UAVs that were equipped with received signal strength indicator (RSSI) sensors. The UAVs employed measurements of the target’s radio frequency (RF) signal power to approach the target as quickly as possible. A deep learning model performed clustering in the UAV network at regular intervals, based on a graph convolutional network (GCN) architecture, which utilised information about the RSSI and the UAV positions. The number of clusters was determined dynamically at each instant using a heuristic method, and the partitions were determined by optimising an RSSI loss function. The proposed algorithm retained the clusters that approached the RF source more effectively, removing the rest of the UAVs, which returned to the base. Simulation experiments demonstrated the improvement of this method compared to a previous deterministic approach, in terms of the time required to reach the target and the total distance covered by the UAVs. Full article
(This article belongs to the Special Issue 6G Wireless Communication Systems)
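
To make the clustering step above concrete, the sketch below shows a single graph-convolution update over a small UAV network whose node features are position and RSSI. This is only an illustrative Python/PyTorch sketch; the feature layout, adjacency rule and layer width are assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    """One graph-convolution layer: H' = ReLU(D^-1/2 (A + I) D^-1/2 H W)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, adj, feats):
        a_hat = adj + torch.eye(adj.size(0))            # add self-loops
        d_inv_sqrt = torch.diag(a_hat.sum(dim=1).pow(-0.5))
        norm_adj = d_inv_sqrt @ a_hat @ d_inv_sqrt      # symmetric normalisation
        return torch.relu(self.linear(norm_adj @ feats))

# Toy network of 5 UAVs; per-node features are [x, y, RSSI in dBm] (placeholder values)
feats = torch.tensor([[0.0, 0.0, -70.0],
                      [1.0, 0.5, -68.0],
                      [0.5, 1.0, -72.0],
                      [3.0, 3.0, -85.0],
                      [3.5, 2.5, -83.0]])
dist = torch.cdist(feats[:, :2], feats[:, :2])
adj = ((dist > 0) & (dist < 1.5)).float()               # link distinct UAVs within radio range
embeddings = GCNLayer(3, 8)(adj, feats)                 # per-UAV embeddings for a clustering head
print(embeddings.shape)                                 # torch.Size([5, 8])
```

In the paper's pipeline, embeddings of this kind would feed a clustering head whose partitions are scored with the RSSI loss; here the layer is shown in isolation.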

21 pages, 5890 KiB  
Article
Hemorrhage Detection Based on 3D CNN Deep Learning Framework and Feature Fusion for Evaluating Retinal Abnormality in Diabetic Patients
by Sarmad Maqsood, Robertas Damaševičius and Rytis Maskeliūnas
Sensors 2021, 21(11), 3865; https://doi.org/10.3390/s21113865 - 03 Jun 2021
Cited by 53 | Viewed by 4629
Abstract
Diabetic retinopathy (DR) is the main cause of blindness in diabetic patients. Early and accurate diagnosis can improve the analysis and prognosis of the disease. One of the earliest symptoms of DR is the appearance of hemorrhages in the retina. Therefore, we propose a new method for accurate hemorrhage detection from the retinal fundus images. First, the proposed method uses the modified contrast enhancement method to improve the edge details from the input retinal fundus images. In the second stage, a new convolutional neural network (CNN) architecture is proposed to detect hemorrhages. A modified pre-trained CNN model is used to extract features from the detected hemorrhages. In the third stage, all extracted feature vectors are fused using the convolutional sparse image decomposition method, and finally, the best features are selected by using the multi-logistic regression controlled entropy variance approach. The proposed method is evaluated on 1509 images from the HRF, DRIVE, STARE, MESSIDOR, DIARETDB0, and DIARETDB1 databases and achieves an average accuracy of 97.71%, which is superior to previous works. Moreover, the proposed hemorrhage detection system attains better performance, in terms of visual quality and quantitative analysis with high accuracy, in comparison with the state-of-the-art methods. Full article
(This article belongs to the Collection Medical Image Classification)
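
The paper's contrast enhancement stage is its own modified method; as a hedged illustration of that kind of pre-processing, the snippet below applies standard CLAHE to the green channel of a fundus image with OpenCV. The file name is hypothetical.

```python
import cv2
import numpy as np

def enhance_fundus(path: str) -> np.ndarray:
    """Illustrative pre-processing: CLAHE on the green channel, which usually
    carries the strongest vessel/hemorrhage contrast in fundus images."""
    bgr = cv2.imread(path)                               # hypothetical image path
    green = bgr[:, :, 1]                                 # OpenCV loads images as BGR
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return clahe.apply(green)

# enhanced = enhance_fundus("fundus_001.png")            # then passed to a CNN detector
```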

16 pages, 4312 KiB  
Article
Radar Transformer: An Object Classification Network Based on 4D MMW Imaging Radar
by Jie Bai, Lianqing Zheng, Sen Li, Bin Tan, Sihan Chen and Libo Huang
Sensors 2021, 21(11), 3854; https://doi.org/10.3390/s21113854 - 02 Jun 2021
Cited by 28 | Viewed by 8721
Abstract
Automotive millimeter-wave (MMW) radar is essential in autonomous vehicles due to its robustness in all weather conditions. Traditional commercial automotive radars are limited by their resolution, which makes the object classification task difficult. Thus, the concept of a new generation of four-dimensional (4D) imaging radar was proposed. It has high azimuth and elevation resolution and contains Doppler information to produce a high-quality point cloud. In this paper, we propose an object classification network named Radar Transformer. The algorithm takes the attention mechanism as the core and adopts the combination of vector attention and scalar attention to make full use of the spatial information, Doppler information, and reflection intensity information of the radar point cloud to realize the deep fusion of local attention features and global attention features. We generated an imaging radar classification dataset and completed manual annotation. The experimental results show that our proposed method achieved an overall classification accuracy of 94.9%, which is more suitable for processing radar point clouds than the popular deep learning frameworks and shows promising performance. Full article
(This article belongs to the Section Radar Sensors)
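
To illustrate the scalar-attention ingredient mentioned above, here is a minimal PyTorch sketch of plain dot-product self-attention over a radar point cloud with (x, y, z, Doppler, intensity) features. The dimensions are illustrative assumptions; the vector-attention branch and the full Radar Transformer are not reproduced.

```python
import torch
import torch.nn as nn

class ScalarSelfAttention(nn.Module):
    """Plain dot-product self-attention over a radar point cloud
    (one attention weight per point pair -- the 'scalar' variant)."""
    def __init__(self, in_dim=5, d_model=32):
        super().__init__()
        self.q = nn.Linear(in_dim, d_model)
        self.k = nn.Linear(in_dim, d_model)
        self.v = nn.Linear(in_dim, d_model)

    def forward(self, points):                    # points: (N, 5) = x, y, z, Doppler, intensity
        q, k, v = self.q(points), self.k(points), self.v(points)
        attn = torch.softmax(q @ k.T / k.size(-1) ** 0.5, dim=-1)   # (N, N) weights
        return attn @ v                            # (N, d_model) attended point features

cloud = torch.randn(128, 5)                        # 128 points from one detected object (random)
features = ScalarSelfAttention()(cloud)
print(features.shape)                              # torch.Size([128, 32])
```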

21 pages, 11063 KiB  
Article
Monitoring System for Railway Infrastructure Elements Based on Thermal Imaging Analysis
by Krzysztof Stypułkowski, Paweł Gołda, Konrad Lewczuk and Justyna Tomaszewska
Sensors 2021, 21(11), 3819; https://doi.org/10.3390/s21113819 - 31 May 2021
Cited by 21 | Viewed by 5066
Abstract
The safety and reliability of railway transport require new solutions for monitoring and quick identification of faults in the railway infrastructure. Electric heating devices (EORs) are the crucial element of turnouts. EORs ensure heating during low-temperature periods when ice or snow can lock the turnout device. Thermal imaging is a response to the need for an EOR inspection tool. After processing, a thermogram provides strong support for the manual inspection of an EOR, or it can serve as the input for a machine learning algorithm. In this article, the authors review the literature in terms of thermographic analysis and its applications for detecting railroad damage, analysing images through machine learning, and improving railway traffic safety. The EOR device, its components, and technical parameters are discussed, as well as inspection and maintenance requirements. On this basis, the authors present the concept of using thermographic imaging to detect EOR failures and malfunctions using a practical example, as well as the concept of using machine learning mechanisms to automatically analyse thermograms. The authors show that the proposed method of analysis can be an effective tool for examining EOR status and that it can be included in the official EOR inspection calendar. Full article
(This article belongs to the Section Fault Diagnosis & Sensors)

29 pages, 15926 KiB  
Article
Supervised Machine Learning Methods and Hyperspectral Imaging Techniques Jointly Applied for Brain Cancer Classification
by Gemma Urbanos, Alberto Martín, Guillermo Vázquez, Marta Villanueva, Manuel Villa, Luis Jimenez-Roldan, Miguel Chavarrías, Alfonso Lagares, Eduardo Juárez and César Sanz
Sensors 2021, 21(11), 3827; https://doi.org/10.3390/s21113827 - 31 May 2021
Cited by 42 | Viewed by 5677
Abstract
Hyperspectral imaging (HSI) techniques do not require contact with patients and are non-ionizing as well as non-invasive. As a consequence, they have been extensively applied in the medical field. HSI is being combined with machine learning (ML) processes to obtain models to assist in diagnosis. In particular, the combination of these techniques has proven to be a reliable aid in the differentiation of healthy and tumor tissue during brain tumor surgery. ML algorithms such as support vector machine (SVM), random forest (RF) and convolutional neural networks (CNN) are used to make predictions and provide in-vivo visualizations that may assist neurosurgeons in being more precise, hence reducing damage to healthy tissue. In this work, thirteen in-vivo hyperspectral images from twelve different patients with high-grade gliomas (grade III and IV) have been selected to train SVM, RF and CNN classifiers. Five different classes have been defined during the experiments: healthy tissue, tumor, venous blood vessel, arterial blood vessel and dura mater. Overall accuracy (OACC) results vary from 60% to 95% depending on the training conditions. Finally, as far as the contribution of each band to the OACC is concerned, the results obtained in this work are 3.81 times greater than those reported in the literature. Full article
(This article belongs to the Special Issue Trends and Prospects in Medical Hyperspectral Imagery)
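
As a rough illustration of the SVM/RF side of such a pipeline, the following scikit-learn sketch trains both classifiers on per-pixel spectra and reports overall accuracy. The data here are random placeholders rather than hyperspectral measurements, and the hyperparameters are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Random stand-in for labelled HSI pixels: 2000 spectra x 100 bands, 5 tissue classes
X = np.random.rand(2000, 100)
y = np.random.randint(0, 5, size=2000)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

svm = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10))
rf = RandomForestClassifier(n_estimators=200, random_state=0)

for name, clf in [("SVM", svm), ("RF", rf)]:
    clf.fit(X_tr, y_tr)
    print(name, "overall accuracy:", clf.score(X_te, y_te))
```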

29 pages, 1317 KiB  
Review
A Review of EEG Signal Features and Their Application in Driver Drowsiness Detection Systems
by Igor Stancin, Mario Cifrek and Alan Jovic
Sensors 2021, 21(11), 3786; https://doi.org/10.3390/s21113786 - 30 May 2021
Cited by 89 | Viewed by 11184
Abstract
Detecting drowsiness in drivers, especially multi-level drowsiness, is a difficult problem that is often approached using neurophysiological signals as the basis for building a reliable system. In this context, electroencephalogram (EEG) signals are the most important source of data to achieve successful detection. In this paper, we first review EEG signal features used in the literature for a variety of tasks, then we focus on reviewing the applications of EEG features and deep learning approaches in driver drowsiness detection, and finally we discuss the open challenges and opportunities in improving driver drowsiness detection based on EEG. We show that the number of studies on driver drowsiness detection systems has increased in recent years and that future systems need to consider the wide variety of EEG signal features and deep learning approaches to increase the accuracy of detection. Full article
(This article belongs to the Special Issue Intelligent Biosignal Analysis Methods)
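
Many of the reviewed features are band powers of the EEG spectrum. The sketch below computes such band-power features with Welch's method in SciPy; the sampling rate, band limits and the random signal are placeholders, not values from the review.

```python
import numpy as np
from scipy.signal import welch

def band_power(eeg, fs, band):
    """Average power of one EEG channel inside a frequency band (Hz)."""
    freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return np.trapz(psd[mask], freqs[mask])

fs = 256                                      # assumed sampling rate
eeg = np.random.randn(fs * 60)                # one minute of a single channel (placeholder)
bands = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}
features = {name: band_power(eeg, fs, b) for name, b in bands.items()}
print(features)                               # e.g. the theta/alpha ratio is a common drowsiness index
```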

13 pages, 1547 KiB  
Article
Magnetic Lateral Flow Immunoassay for Small Extracellular Vesicles Quantification: Application to Colorectal Cancer Biomarker Detection
by Amanda Moyano, Esther Serrano-Pertierra, José María Duque, Virginia Ramos, Estefanía Teruel-Barandiarán, María Teresa Fernández-Sánchez, María Salvador, José Carlos Martínez-García, Luis Sánchez, Luis García-Flórez, Montserrat Rivas and María del Carmen Blanco-López
Sensors 2021, 21(11), 3756; https://doi.org/10.3390/s21113756 - 28 May 2021
Cited by 13 | Viewed by 4543
Abstract
Colorectal cancer (CRC) is the third leading cause of cancer death and the fourth most common cancer in the world. Colonoscopy is the most sensitive test used for detection of CRC; however, its procedure is invasive and expensive for population mass screening. Currently, the fecal occult blood test is widely used as a screening tool for CRC but displays low specificity. The lack of rapid and simple methods for mass screening makes early diagnosis and therapy monitoring difficult. Extracellular vesicles (EVs) have emerged as a novel source of biomarkers due to their protein and miRNA contents. Their detection would not require invasive techniques and could be considered a liquid biopsy. Specifically, it has been demonstrated that the amount of CD147 expressed in circulating EVs is significantly higher for CRC cell lines than for normal colon fibroblast cell lines. Moreover, CD147-containing EVs have been used as a biomarker to monitor response to therapy in patients with CRC. Therefore, this antigen could be used as a non-invasive biomarker for the detection and monitoring of CRC in combination with a Point-of-Care platform such as Lateral Flow Immunoassays (LFIAs). Here, we propose the development of a quantitative lateral flow immunoassay test based on the use of magnetic nanoparticles as labels coupled to an inductive sensor for the non-invasive detection of CRC by CD147-positive EVs. The results obtained for quantification of the CD147 antigen embedded in EVs isolated from plasma samples have demonstrated that this device could be used as a Point-of-Care tool for CRC screening or therapy monitoring thanks to its rapid response and easy operation. Full article
(This article belongs to the Special Issue Electrochemical Sensors and (Bio)assays for Health Applications)

14 pages, 6066 KiB  
Article
A High-Resolution Reflective Microwave Planar Sensor for Sensing of Vanadium Electrolyte
by Nazli Kazemi, Kalvin Schofield and Petr Musilek
Sensors 2021, 21(11), 3759; https://doi.org/10.3390/s21113759 - 28 May 2021
Cited by 39 | Viewed by 3475
Abstract
Microwave planar sensors employ conventional passive complementary split ring resonators (CSRR) as their sensitive region. In this work, a novel planar reflective sensor is introduced that deploys CSRRs as the front-end sensing element at f_res = 6 GHz with an extra loss-compensating negative resistance that restores the power dissipated in the sensor, which is used in dielectric material characterization. It is shown that the S11 notch of −15 dB can be improved down to −40 dB without loss of sensitivity. An application of this design is shown in discriminating different states of vanadium redox solutions under the highly lossy conditions of fully charged V⁵⁺ and fully discharged V⁴⁺ electrolytes. Full article
(This article belongs to the Special Issue State-of-the-Art Technologies in Microwave Sensors)

21 pages, 4688 KiB  
Review
A Review of Deep Learning-Based Contactless Heart Rate Measurement Methods
by Aoxin Ni, Arian Azarang and Nasser Kehtarnavaz
Sensors 2021, 21(11), 3719; https://doi.org/10.3390/s21113719 - 27 May 2021
Cited by 55 | Viewed by 8766
Abstract
The interest in contactless or remote heart rate measurement has been steadily growing in healthcare and sports applications. Contactless methods involve the utilization of a video camera and image processing algorithms. Recently, deep learning methods have been used to improve the performance of conventional contactless methods for heart rate measurement. After providing a review of the related literature, a comparison of the deep learning methods whose codes are publicly available is conducted in this paper. The public domain UBFC dataset is used to compare the performance of these deep learning methods for heart rate measurement. The results obtained show that the deep learning method PhysNet generates the best heart rate measurement outcome among these methods, with a mean absolute error value of 2.57 beats per minute and a mean square error value of 7.56 beats per minute. Full article
(This article belongs to the Special Issue Wearable and Unobtrusive Technologies for Healthcare Monitoring)

22 pages, 7269 KiB  
Article
Diabetic Retinopathy Fundus Image Classification and Lesions Localization System Using Deep Learning
by Wejdan L. Alyoubi, Maysoon F. Abulkhair and Wafaa M. Shalash
Sensors 2021, 21(11), 3704; https://doi.org/10.3390/s21113704 - 26 May 2021
Cited by 128 | Viewed by 13889
Abstract
Diabetic retinopathy (DR) is a disease resulting from diabetes complications, causing non-reversible damage to retina blood vessels. DR is a leading cause of blindness if not detected early. The currently available DR treatments are limited to stopping or delaying the deterioration of sight, highlighting the importance of regular scanning using high-efficiency computer-based systems to diagnose cases early. The current work presented fully automatic diagnosis systems that exceed manual techniques to avoid misdiagnosis, reducing time, effort and cost. The proposed system classifies DR images into five stages—no-DR, mild, moderate, severe and proliferative DR—as well as localizing the affected lesions on the retina surface. The system comprises two deep learning-based models. The first model (CNN512) used the whole image as an input to the CNN model to classify it into one of the five DR stages. It achieved an accuracy of 88.6% and 84.1% on the DDR and the APTOS Kaggle 2019 public datasets, respectively, compared to the state-of-the-art results. Simultaneously, the second model used an adopted YOLOv3 model to detect and localize the DR lesions, achieving a 0.216 mAP in lesion localization on the DDR dataset, which improves the current state-of-the-art results. Finally, both of the proposed structures, CNN512 and YOLOv3, were fused to classify DR images and localize DR lesions, obtaining an accuracy of 89% with 89% sensitivity and 97.3% specificity, which exceeds the current state-of-the-art results. Full article
(This article belongs to the Special Issue Computer Aided Diagnosis Sensors)
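
As a hedged illustration of the first, whole-image classification branch, the sketch below builds a deliberately tiny five-class Keras CNN over 512 × 512 inputs. It is a stand-in for the idea only, not the authors' CNN512 architecture, and the dataset arrays in the commented call are hypothetical.

```python
from tensorflow.keras import layers, models

def tiny_dr_classifier(input_size=512, n_stages=5):
    """Deliberately small five-stage classifier over whole fundus images;
    a stand-in for the idea of CNN512, not the paper's architecture."""
    model = models.Sequential([
        layers.Input(shape=(input_size, input_size, 3)),
        layers.Conv2D(16, 3, activation="relu"),
        layers.MaxPooling2D(4),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(4),
        layers.GlobalAveragePooling2D(),
        layers.Dense(n_stages, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# model = tiny_dr_classifier()
# model.fit(train_images, train_stage_labels, epochs=10)   # hypothetical dataset arrays
```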

23 pages, 6226 KiB  
Article
Improved Mutual Understanding for Human-Robot Collaboration: Combining Human-Aware Motion Planning with Haptic Feedback Devices for Communicating Planned Trajectory
by Stefan Grushko, Aleš Vysocký, Petr Oščádal, Michal Vocetka, Petr Novák and Zdenko Bobovský
Sensors 2021, 21(11), 3673; https://doi.org/10.3390/s21113673 - 25 May 2021
Cited by 26 | Viewed by 4383
Abstract
In a collaborative scenario, the communication between humans and robots is a fundamental aspect to achieve good efficiency and ergonomics in the task execution. A lot of research has been conducted on enabling a robot system to understand and predict human behaviour, allowing the robot to adapt its motion to avoid collisions with human workers. Assuming the production task has a high degree of variability, the robot’s movements can be difficult to predict, leading to a feeling of anxiety in the worker when the robot changes its trajectory and approaches, since the worker has no information about the planned movement of the robot. Additionally, without information about the robot’s movement, the human worker cannot effectively plan their own activity without forcing the robot to constantly replan its movement. We propose a novel approach to communicating the robot’s intentions to a human worker. The improvement to the collaboration is presented by introducing haptic feedback devices, whose task is to notify the human worker about the currently planned robot trajectory and changes in its status. In order to verify the effectiveness of the developed human-machine interface in the conditions of a shared collaborative workspace, a user study was designed and conducted among 16 participants, whose objective was to accurately recognise the goal position of the robot during its movement. Data collected during the experiment included both objective and subjective parameters. Statistically significant results of the experiment indicated that all the participants could improve their task completion time by over 45% and generally were more subjectively satisfied when completing the task with the haptic feedback devices equipped. The results also suggest the usefulness of the developed notification system since it improved users’ awareness about the motion plan of the robot. Full article
(This article belongs to the Special Issue Human-Robot Collaborations in Industrial Automation)

27 pages, 8479 KiB  
Article
Evaluation of Misalignment Effect in Vehicle-to-Vehicle Visible Light Communications: Experimental Demonstration of a 75 Meters Link
by Sebastian-Andrei Avătămăniței, Cătălin Beguni, Alin-Mihai Căilean, Mihai Dimian and Valentin Popa
Sensors 2021, 21(11), 3577; https://doi.org/10.3390/s21113577 - 21 May 2021
Cited by 24 | Viewed by 3287
Abstract
The use of visible light communications technology in communication-based vehicle applications is gaining more and more interest as the research community is constantly overcoming challenge after challenge. In this context, this article addresses the issues associated with the use of Visible Light Communications (VLC) technology in Vehicle-to-Vehicle (V2V) communications, while focusing on two crucial issues. On the one hand, it aims to investigate the achievable communication distance in V2V applications while addressing the least favorable case, namely the one when a standard vehicle rear lighting system is used as a VLC emitter. On the other hand, this article investigates another highly unfavorable use case scenario, i.e., the case when two vehicles are located on adjacent lanes, rather than on the same lane. In order to evaluate the compatibility of the VLC technology with the usage in inter-vehicle communication, a VLC prototype is intensively evaluated in outdoor conditions. The experimental results show a record V2V VLC distance of 75 m, while providing a Bit Error Ratio (BER) of 10⁻⁷–10⁻⁶. The results also show that the VLC technology is able to provide V2V connectivity even in a situation where the vehicles are located on adjacent lanes, without a major impact on the link performance. Nevertheless, this situation generates an initial no-coverage zone, which is determined by the VLC receiver reception angle, whereas in some cases, vehicle misalignment can generate a BER increase of up to two orders of magnitude. Full article
(This article belongs to the Special Issue Automotive Visible Light Communications (AutoVLC))

20 pages, 14188 KiB  
Article
Digital Twin-Based Safety Risk Coupling of Prefabricated Building Hoisting
by Zhansheng Liu, Xintong Meng, Zezhong Xing and Antong Jiang
Sensors 2021, 21(11), 3583; https://doi.org/10.3390/s21113583 - 21 May 2021
Cited by 55 | Viewed by 4762
Abstract
Safety management in hoisting is the key issue to determine the development of prefabricated building construction. However, the security management in the hoisting stage lacks a truly effective method of information physical fusion, and the safety risk analysis of hoisting does not consider the interaction of risk factors. In this paper, a hoisting safety risk management framework based on digital twin (DT) is presented. The digital twin hoisting safety risk coupling model is built. The proposed model integrates the Internet of Things (IoT), Building Information Modeling (BIM), and a security risk analysis method combining the Apriori algorithm and complex network. The real-time perception and virtual–real interaction of multi-source information in the hoisting process are realized, the association rules and coupling relationship among hoisting safety risk factors are mined, and the time-varying data information is visualized. Demonstration in the construction of a large-scale prefabricated building shows that with the proposed framework, it is possible to complete the information fusion between the hoisting site and the virtual model and realize the visual management. The correlative relationship among hoisting construction safety risk factors is analyzed, and the key control factors are found. Moreover, the efficiency of information integration and sharing is improved, the gap of coupling analysis of security risk factors is filled, and effective security management and decision-making are achieved with the proposed approach. Full article
(This article belongs to the Special Issue Smart Sensing in Building and Construction)
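
To illustrate the association-rule part of the risk analysis, the sketch below mines frequent combinations of risk factors and extracts rules from a toy table of observations. It is a brute-force miniature of the Apriori idea (no candidate pruning), and the factor names and thresholds are invented for illustration, not taken from the paper.

```python
from itertools import combinations

# One row per recorded hoisting observation; each row lists the risk factors present (toy data)
records = [
    {"high_wind", "overload", "near_miss"},
    {"high_wind", "operator_fatigue", "near_miss"},
    {"operator_fatigue"},
    {"high_wind", "overload", "operator_fatigue", "near_miss"},
    {"high_wind", "overload", "near_miss"},
]

def support(itemset):
    return sum(itemset <= r for r in records) / len(records)

# Enumerate frequent itemsets up to size 3 (Apriori would additionally prune candidates)
items = sorted(set().union(*records))
frequent = [frozenset(c) for k in (1, 2, 3)
            for c in combinations(items, k) if support(set(c)) >= 0.6]

# Association rules A -> B with confidence = support(A ∪ B) / support(A)
for itemset in (s for s in frequent if len(s) > 1):
    for k in range(1, len(itemset)):
        for antecedent in map(frozenset, combinations(itemset, k)):
            confidence = support(itemset) / support(antecedent)
            if confidence >= 0.8:
                consequent = itemset - antecedent
                print(set(antecedent), "->", set(consequent),
                      f"support={support(itemset):.2f} confidence={confidence:.2f}")
```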

24 pages, 6663 KiB  
Article
Evaluating the Single-Shot MultiBox Detector and YOLO Deep Learning Models for the Detection of Tomatoes in a Greenhouse
by Sandro Augusto Magalhães, Luís Castro, Germano Moreira, Filipe Neves dos Santos, Mário Cunha, Jorge Dias and António Paulo Moreira
Sensors 2021, 21(10), 3569; https://doi.org/10.3390/s21103569 - 20 May 2021
Cited by 82 | Viewed by 10527
Abstract
The development of robotic solutions for agriculture requires advanced perception capabilities that can work reliably in any crop stage. For example, to automatise the tomato harvesting process in greenhouses, the visual perception system needs to detect the tomato in any life cycle stage (flower to the ripe tomato). The state-of-the-art for visual tomato detection focuses mainly on ripe tomato, which has a distinctive colour from the background. This paper contributes with an annotated visual dataset of green and reddish tomatoes. This kind of dataset is uncommon and not available for research purposes. This will enable further developments in edge artificial intelligence for in situ and in real-time visual tomato detection required for the development of harvesting robots. Considering this dataset, five deep learning models were selected, trained and benchmarked to detect green and reddish tomatoes grown in greenhouses. Considering our robotic platform specifications, only the Single-Shot MultiBox Detector (SSD) and YOLO architectures were considered. The results proved that the system can detect green and reddish tomatoes, even those occluded by leaves. SSD MobileNet v2 had the best performance when compared against SSD Inception v2, SSD ResNet 50, SSD ResNet 101 and YOLOv4 Tiny, reaching an F1-score of 66.15%, an mAP of 51.46% and an inference time of 16.44 ms on the NVIDIA Turing Architecture platform, an NVIDIA Tesla T4 with 12 GB. YOLOv4 Tiny also had impressive results, mainly concerning inference times of about 5 ms. Full article
(This article belongs to the Section Remote Sensors)
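
Detection scores like the F1-score above are derived from box overlaps: a prediction counts as a true positive when its intersection-over-union (IoU) with a ground-truth box passes a threshold. The snippet below shows that bookkeeping with illustrative numbers only; none of the counts come from the paper.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def f1_score(tp, fp, fn):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# A detection counts as a true positive if it overlaps a ground-truth tomato enough,
# typically IoU >= 0.5; the numbers below are illustrative only.
print(iou((10, 10, 60, 60), (30, 30, 80, 80)))   # ~0.22, below a typical 0.5 threshold
print(f1_score(tp=120, fp=40, fn=60))            # ~0.71
```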

31 pages, 3710 KiB  
Review
Recent Advances in Transducers for Intravascular Ultrasound (IVUS) Imaging
by Chang Peng, Huaiyu Wu, Seungsoo Kim, Xuming Dai and Xiaoning Jiang
Sensors 2021, 21(10), 3540; https://doi.org/10.3390/s21103540 - 19 May 2021
Cited by 55 | Viewed by 13923
Abstract
As a well-known medical imaging methodology, intravascular ultrasound (IVUS) imaging plays a critical role in diagnosis, treatment guidance and post-treatment assessment of coronary artery diseases. By cannulating a miniature ultrasound transducer mounted catheter into an artery, the vessel lumen opening, vessel wall morphology and other associated blood and vessel properties can be precisely assessed in IVUS imaging. Ultrasound transducer, as the key component of an IVUS system, is critical in determining the IVUS imaging performance. In recent years, a wide range of achievements in ultrasound transducers have been reported for IVUS imaging applications. Herein, a comprehensive review is given on recent advances in ultrasound transducers for IVUS imaging. Firstly, a fundamental understanding of IVUS imaging principle, evaluation parameters and IVUS catheter are summarized. Secondly, three different types of ultrasound transducers (piezoelectric ultrasound transducer, piezoelectric micromachined ultrasound transducer and capacitive micromachined ultrasound transducer) for IVUS imaging are presented. Particularly, the recent advances in piezoelectric ultrasound transducer for IVUS imaging are extensively examined according to their different working mechanisms, configurations and materials adopted. Thirdly, IVUS-based multimodality intravascular imaging of atherosclerotic plaque is discussed. Finally, summary and perspectives on the future studies are highlighted for IVUS imaging applications. Full article
(This article belongs to the Special Issue Feature Papers in Physical Sensors Section 2020)

23 pages, 8573 KiB  
Article
UAV Photogrammetry under Poor Lighting Conditions—Accuracy Considerations
by Pawel Burdziakowski and Katarzyna Bobkowska
Sensors 2021, 21(10), 3531; https://doi.org/10.3390/s21103531 - 19 May 2021
Cited by 28 | Viewed by 4387
Abstract
The use of low-level photogrammetry is very broad, and studies in this field are conducted in many aspects. Most research and applications are based on image data acquired during the day, which seems natural and obvious. However, the authors of this paper draw attention to the potential and possible use of UAV photogrammetry during the darker time of the day. The potential of night-time images has not been yet widely recognized, since correct scenery lighting or lack of scenery light sources is an obvious issue. The authors have developed typical day- and night-time photogrammetric models. They have also presented an extensive analysis of the geometry, indicated which process element had the greatest impact on degrading night-time photogrammetric product, as well as which measurable factor directly correlated with image accuracy. The reduction in geometry during night-time tests was greatly impacted by the non-uniform distribution of GCPs within the study area. The calibration of non-metric cameras is sensitive to poor lighting conditions, which leads to the generation of a higher determination error for each intrinsic orientation and distortion parameter. As evidenced, uniformly illuminated photos can be used to construct a model with lower reprojection error, and each tie point exhibits greater precision. Furthermore, they have evaluated whether commercial photogrammetric software enabled reaching acceptable image quality and whether the digital camera type impacted interpretative quality. The research paper is concluded with an extended discussion, conclusions, and recommendation on night-time studies. Full article
(This article belongs to the Special Issue Unmanned Aerial Systems and Remote Sensing)

21 pages, 1404 KiB  
Article
Predicting Exact Valence and Arousal Values from EEG
by Filipe Galvão, Soraia M. Alarcão and Manuel J. Fonseca
Sensors 2021, 21(10), 3414; https://doi.org/10.3390/s21103414 - 14 May 2021
Cited by 48 | Viewed by 5783
Abstract
Recognition of emotions from physiological signals, and in particular from electroencephalography (EEG), is a field within affective computing gaining increasing relevance. Although researchers have used these signals to recognize emotions, most of them only identify a limited set of emotional states (e.g., happiness, sadness, anger, etc.) and have not attempted to predict exact values for valence and arousal, which would provide a wider range of emotional states. This paper describes our proposed model for predicting the exact values of valence and arousal in a subject-independent scenario. To create it, we studied the best features, brain waves, and machine learning models that are currently in use for emotion classification. This systematic analysis revealed that the best prediction model uses a KNN regressor (K = 1) with Manhattan distance, features from the alpha, beta and gamma bands, and the differential asymmetry from the alpha band. Results, using the DEAP, AMIGOS and DREAMER datasets, show that our model can predict valence and arousal values with a low error (MAE < 0.06, RMSE < 0.16) and a strong correlation between predicted and expected values (PCC > 0.80), and can identify four emotional classes with an accuracy of 84.4%. The findings of this work show that the features, brain waves and machine learning models, typically used in emotion classification tasks, can be used in more challenging situations, such as the prediction of exact values for valence and arousal. Full article
(This article belongs to the Special Issue Biomedical Signal Acquisition and Processing Using Sensors)
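
The best-performing configuration reported above is easy to reproduce structurally: the scikit-learn sketch below uses a K = 1 nearest-neighbour regressor with Manhattan distance. The feature matrices are random placeholders standing in for EEG band-power and asymmetry features.

```python
import numpy as np
from sklearn.metrics import mean_absolute_error
from sklearn.neighbors import KNeighborsRegressor

# Random placeholders standing in for EEG features (band powers, alpha asymmetry) and valence targets
X_train, y_train = np.random.rand(500, 32), np.random.rand(500)
X_test, y_test = np.random.rand(100, 32), np.random.rand(100)

model = KNeighborsRegressor(n_neighbors=1, metric="manhattan")   # K = 1, Manhattan distance
model.fit(X_train, y_train)
pred = model.predict(X_test)
print("MAE:", mean_absolute_error(y_test, pred))
```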

25 pages, 7055 KiB  
Article
Testing the Contribution of Multi-Source Remote Sensing Features for Random Forest Classification of the Greater Amanzule Tropical Peatland
by Alex O. Amoakoh, Paul Aplin, Kwame T. Awuah, Irene Delgado-Fernandez, Cherith Moses, Carolina Peña Alonso, Stephen Kankam and Justice C. Mensah
Sensors 2021, 21(10), 3399; https://doi.org/10.3390/s21103399 - 13 May 2021
Cited by 18 | Viewed by 4706
Abstract
Tropical peatlands such as Ghana’s Greater Amanzule peatland are highly valuable ecosystems and under great pressure from anthropogenic land use activities. Accurate measurement of their occurrence and extent is required to facilitate sustainable management. A key challenge, however, is the high cloud cover in the tropics that limits optical remote sensing data acquisition. In this work we combine optical imagery with radar and elevation data to optimise land cover classification for the Greater Amanzule tropical peatland. Sentinel-2, Sentinel-1 and Shuttle Radar Topography Mission (SRTM) imagery were acquired and integrated to drive a machine learning land cover classification using a random forest classifier. Recursive feature elimination was used to optimize high-dimensional and correlated feature space and determine the optimal features for the classification. Six datasets were compared, comprising different combinations of optical, radar and elevation features. Results showed that the best overall accuracy (OA) was found for the integrated Sentinel-2, Sentinel-1 and SRTM dataset (S2+S1+DEM), significantly outperforming all the other classifications with an OA of 94%. Assessment of the sensitivity of land cover classes to image features indicated that elevation and the original Sentinel-1 bands contributed the most to separating tropical peatlands from other land cover types. The integration of more features and the removal of redundant features systematically increased classification accuracy. We estimate Ghana’s Greater Amanzule peatland covers 60,187 ha. Our proposed methodological framework contributes a robust workflow for accurate and detailed landscape-scale monitoring of tropical peatlands, while our findings provide timely information critical for the sustainable management of the Greater Amanzule peatland. Full article
(This article belongs to the Section Remote Sensors)
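
As a sketch of the feature-optimisation step, the snippet below wraps a random forest in scikit-learn's recursive feature elimination over a placeholder stack of per-pixel features. The numbers of features, classes and trees are assumptions for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE

# Placeholder pixel samples: stacked Sentinel-2, Sentinel-1 and elevation features
X = np.random.rand(3000, 25)
y = np.random.randint(0, 6, size=3000)        # illustrative land-cover classes

rf = RandomForestClassifier(n_estimators=300, random_state=0)
selector = RFE(rf, n_features_to_select=10, step=1).fit(X, y)
print("kept feature indices:", np.where(selector.support_)[0])
# The reduced feature set is then used to train the final random forest classifier.
```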

14 pages, 957 KiB  
Article
Calibration and Cross-Validation of Accelerometer Cut-Points to Classify Sedentary Time and Physical Activity from Hip and Non-Dominant and Dominant Wrists in Older Adults
by Jairo H. Migueles, Cristina Cadenas-Sanchez, Juan M. A. Alcantara, Javier Leal-Martín, Asier Mañas, Ignacio Ara, Nancy W. Glynn and Eric J. Shiroma
Sensors 2021, 21(10), 3326; https://doi.org/10.3390/s21103326 - 11 May 2021
Cited by 22 | Viewed by 4175
Abstract
Accelerometers’ accuracy for sedentary time (ST) and moderate-to-vigorous physical activity (MVPA) classification depends on accelerometer placement, data processing, activities, and sample characteristics. As intensities differ by age, this study sought to determine intensity cut-points at various wear locations in people more than 70 years old. Data from 59 older adults were used for calibration and from 21 independent participants for cross-validation purposes. Participants wore accelerometers on their hip and wrists while performing activities and having their energy expenditure measured with portable calorimetry. ST and MVPA were defined as ≤1.5 metabolic equivalents (METs) and ≥3 METs (1 MET = 2.8 mL/kg/min), respectively. Receiver operating characteristic (ROC) analyses showed fair-to-good accuracy (area under the curve [AUC] = 0.62–0.89). ST cut-points were 7 mg (cross-validation: sensitivity = 0.88, specificity = 0.80) and 1 count/5 s (cross-validation: sensitivity = 0.91, specificity = 0.96) for the hip; 18 mg (cross-validation: sensitivity = 0.86, specificity = 0.86) and 102 counts/5 s (cross-validation: sensitivity = 0.91, specificity = 0.92) for the non-dominant wrist; and 22 mg and 175 counts/5 s (not cross-validated) for the dominant wrist. MVPA cut-points were 14 mg (cross-validation: sensitivity = 0.70, specificity = 0.99) and 54 counts/5 s (cross-validation: sensitivity = 1.00, specificity = 0.96) for the hip; 60 mg (cross-validation: sensitivity = 0.83, specificity = 0.99) and 182 counts/5 s (cross-validation: sensitivity = 1.00, specificity = 0.89) for the non-dominant wrist; and 64 mg and 268 counts/5 s (not cross-validated) for the dominant wrist. These cut-points can classify ST and MVPA in older adults from hip- and wrist-worn accelerometers. Full article
(This article belongs to the Special Issue Wearable Devices: Applications in Older Adults)
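
Applying such cut-points is a simple thresholding of epoch-level acceleration. The sketch below labels 5 s epochs with the hip cut-points reported above (7 mg for sedentary time, 14 mg for MVPA); the epoch values are made up, and treating values between the two thresholds as light activity (and the boundary handling itself) is a simplifying assumption.

```python
import numpy as np

def classify_epochs(epoch_means_mg, st_cut=7.0, mvpa_cut=14.0):
    """Label 5 s epochs of hip acceleration (mg) with the study's hip cut-points;
    how the boundaries themselves are handled is an assumption here."""
    labels = np.full(epoch_means_mg.shape, "light", dtype=object)
    labels[epoch_means_mg <= st_cut] = "sedentary"
    labels[epoch_means_mg >= mvpa_cut] = "MVPA"
    return labels

epochs = np.array([3.2, 6.9, 9.5, 15.0, 40.2])   # made-up epoch magnitudes in mg
print(classify_epochs(epochs))                   # ['sedentary' 'sedentary' 'light' 'MVPA' 'MVPA']
```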

26 pages, 22394 KiB  
Review
3D Printing Techniques and Their Applications to Organ-on-a-Chip Platforms: A Systematic Review
by Violeta Carvalho, Inês Gonçalves, Teresa Lage, Raquel O. Rodrigues, Graça Minas, Senhorinha F. C. F. Teixeira, Ana S. Moita, Takeshi Hori, Hirokazu Kaji and Rui A. Lima
Sensors 2021, 21(9), 3304; https://doi.org/10.3390/s21093304 - 10 May 2021
Cited by 63 | Viewed by 9701
Abstract
Three-dimensional (3D) in vitro models, such as organ-on-a-chip platforms, are an emerging and effective technology that allows the replication of the function of tissues and organs, bridging the gap between the conventional models based on planar cell cultures or animals and the complex human system. Hence, they have been increasingly used for biomedical research, such as drug discovery and personalized healthcare. A promising strategy for their fabrication is 3D printing, a layer-by-layer fabrication process that allows the construction of complex 3D structures. In contrast, 3D bioprinting, an evolving biofabrication method, focuses on the accurate deposition of hydrogel bioinks loaded with cells to construct tissue-engineered structures. The purpose of the present work is to conduct a systematic review (SR) of the published literature, according to the guidelines of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses, providing a source of information on the evolution of organ-on-a-chip platforms obtained using 3D printing and bioprinting techniques. In the literature search, the PubMed, Scopus, and ScienceDirect databases were used, and two authors independently performed the search, study selection, and data extraction. The goal of this SR is to highlight the importance and advantages of using 3D printing techniques in obtaining organ-on-a-chip platforms, and also to identify potential gaps and future perspectives in this research field. Additionally, challenges in integrating sensors in organ-on-a-chip platforms are briefly investigated and discussed. Full article
(This article belongs to the Special Issue Organ-on-a-Chip and Biosensors)

16 pages, 7256 KiB  
Article
Deep Supervised Residual Dense Network for Underwater Image Enhancement
by Yanling Han, Lihua Huang, Zhonghua Hong, Shouqi Cao, Yun Zhang and Jing Wang
Sensors 2021, 21(9), 3289; https://doi.org/10.3390/s21093289 - 10 May 2021
Cited by 26 | Viewed by 3043
Abstract
Underwater images are important carriers and forms of underwater information, playing a vital role in exploring and utilizing marine resources. However, underwater images have characteristics of low contrast and blurred details because of the absorption and scattering of light. In recent years, deep learning has been widely used in underwater image enhancement and restoration because of its powerful feature learning capabilities, but there are still shortcomings in detail enhancement. To address the problem, this paper proposes a deep supervised residual dense network (DS_RD_Net), which is used to better learn the mapping relationship between clear in-air images and synthetic underwater degraded images. DS_RD_Net first uses residual dense blocks to extract features to enhance feature utilization; then, it adds residual path blocks between the encoder and decoder to reduce the semantic differences between the low-level features and high-level features; finally, it employs a deep supervision mechanism to guide network training to improve gradient propagation. Experimental results (PSNR was 36.2, SSIM was 96.5%, and UCIQE was 0.53) demonstrated that the proposed method can fully retain the local details of the image while performing color restoration and defogging compared with other image enhancement methods, achieving good qualitative and quantitative effects. Full article
(This article belongs to the Special Issue Image Sensing and Processing with Convolutional Neural Networks)
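
The reported PSNR and SSIM figures are standard full-reference metrics; the sketch below computes them with scikit-image (recent versions that accept the channel_axis argument) on placeholder images rather than real underwater frames.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

# Placeholder images standing in for a reference frame and its enhanced counterpart
reference = np.random.rand(256, 256, 3)
enhanced = np.clip(reference + np.random.normal(0, 0.02, reference.shape), 0, 1)

psnr = peak_signal_noise_ratio(reference, enhanced, data_range=1.0)
ssim = structural_similarity(reference, enhanced, channel_axis=-1, data_range=1.0)
print(f"PSNR = {psnr:.1f} dB, SSIM = {ssim:.3f}")
```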

21 pages, 7855 KiB  
Article
A Low-Cost IoT System for Real-Time Monitoring of Climatic Variables and Photovoltaic Generation for Smart Grid Application
by Gustavo Costa Gomes de Melo, Igor Cavalcante Torres, Ícaro Bezzera Queiroz de Araújo, Davi Bibiano Brito and Erick de Andrade Barboza
Sensors 2021, 21(9), 3293; https://doi.org/10.3390/s21093293 - 10 May 2021
Cited by 30 | Viewed by 4966
Abstract
Monitoring and data acquisition are essential to recognize the renewable resources available on-site, evaluate electrical conversion efficiency, detect failures, and optimize electrical production. Commercial monitoring systems for the photovoltaic system are generally expensive and closed for modifications. This work proposes a low-cost real-time internet of things system for micro and mini photovoltaic generation systems that can monitor continuous voltage, continuous current, alternating power, and seven meteorological variables. The proposed system measures all relevant meteorological variables and directly acquires photovoltaic generation data from the plant (not from the inverter). The system is implemented using open software, connects to the internet without cables, stores data locally and in the cloud, and uses the network time protocol to synchronize the devices’ clocks. To the best of our knowledge, no work reported in the literature presents these features altogether. Furthermore, experiments carried out with the proposed system showed good effectiveness and reliability. This system enables fog and cloud computing in a photovoltaic system, creating a time series measurements data set, enabling the future use of machine learning to create smart photovoltaic systems. Full article
(This article belongs to the Special Issue Smart IoT System for Renewable Energy Resource)

31 pages, 3659 KiB  
Review
High Temperature Ultrasonic Transducers: A Review
by Rymantas Kazys and Vaida Vaskeliene
Sensors 2021, 21(9), 3200; https://doi.org/10.3390/s21093200 - 05 May 2021
Cited by 50 | Viewed by 9166
Abstract
There are many fields, such as online monitoring of manufacturing processes, non-destructive testing in nuclear plants, or corrosion rate monitoring techniques of steel pipes, in which measurements must be performed at elevated temperatures. For that, high-temperature ultrasonic transducers are necessary. In the presented paper, a literature review on the main types of such transducers, piezoelectric materials, backings, and the bonding techniques of transducer elements suitable for high temperatures is presented. In this review, the main focus is on ultrasonic transducers with piezoelectric elements suitable for operation at temperatures higher than that of most commercially available transducers, i.e., 150 °C. The main types of the ultrasonic transducers that are discussed are the transducers with thin protectors, which may serve as matching layers, transducers with high temperature delay lines, wedges, and waveguide type transducers. The piezoelectric materials suitable for high temperature applications such as aluminum nitride, lithium niobate, gallium orthophosphate, bismuth titanate, oxyborate crystals, lead metaniobate, and other piezoceramics are analyzed. Bonding techniques used for joining the transducer elements, such as joining with glue, soldering, brazing, dry contact, and diffusion bonding, are discussed. Special attention is paid to efficient diffusion and thermo-sonic diffusion bonding techniques. Various types of backings necessary for improving the bandwidth and obtaining a short pulse response are described. Full article
(This article belongs to the Special Issue Ultrasonic Transducers for High Temperature Applications)

27 pages, 2497 KiB  
Review
Biosensing Applications Using Nanostructure-Based Localized Surface Plasmon Resonance Sensors
by Dong Min Kim, Jong Seong Park, Seung-Woon Jung, Jinho Yeom and Seung Min Yoo
Sensors 2021, 21(9), 3191; https://doi.org/10.3390/s21093191 - 04 May 2021
Cited by 49 | Viewed by 5518
Abstract
Localized surface plasmon resonance (LSPR)-based biosensors have recently garnered increasing attention due to their potential to allow label-free, portable, low-cost, and real-time monitoring of diverse analytes. Recent developments in this technology have focused on biochemical markers in clinical and environmental settings coupled with advances in nanostructure technology. Therefore, this review focuses on the recent advances in LSPR-based biosensor technology for the detection of diverse chemicals and biomolecules. Moreover, we also provide recent examples of sensing strategies based on diverse nanostructure platforms, in addition to their advantages and limitations. Finally, this review discusses potential strategies for the development of biosensors with enhanced sensing performance. Full article

26 pages, 4352 KiB  
Article
Non-Contact Monitoring and Classification of Breathing Pattern for the Supervision of People Infected by COVID-19
by Ariana Tulus Purnomo, Ding-Bing Lin, Tjahjo Adiprabowo and Willy Fitra Hendria
Sensors 2021, 21(9), 3172; https://doi.org/10.3390/s21093172 - 03 May 2021
Cited by 36 | Viewed by 5834
Abstract
During the pandemic of coronavirus disease-2019 (COVID-19), medical practitioners need non-contact devices to reduce the risk of spreading the virus. People with COVID-19 usually experience fever and have difficulty breathing. Unsupervised care of patients with respiratory problems will be the main reason for the rising death rate. A periodic linearly increasing frequency chirp, known as frequency-modulated continuous wave (FMCW), is one of the radar technologies with low-power operation and high-resolution detection that can detect any tiny movement. In this study, we use FMCW to develop a non-contact medical device that monitors and classifies the breathing pattern in real time. Patients with a breathing disorder have unusual breathing characteristics that cannot be represented using the breathing rate. Thus, we created an eXtreme Gradient Boosting (XGBoost) classification model and adopted Mel-frequency cepstral coefficient (MFCC) feature extraction to classify the breathing pattern behavior. XGBoost is an ensemble machine-learning technique with a fast execution time and good scalability for predictions. In this study, MFCC feature extraction assists machine learning in extracting the features of the breathing signal. Based on the results, the system obtained an acceptable accuracy. Thus, our proposed system could potentially be used to detect and monitor the presence of respiratory problems in patients with COVID-19, asthma, etc. Full article
(This article belongs to the Special Issue Intelligent Sensors for Human Motion Analysis)
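
As a hedged sketch of the MFCC-plus-XGBoost idea, the snippet below summarises synthetic breathing-like windows with mean MFCCs (via librosa) and fits an XGBoost classifier. The sampling rate, window length, labels and all parameters are assumptions, not the paper's settings.

```python
import librosa
import numpy as np
from xgboost import XGBClassifier

def breathing_features(signal, fs, n_mfcc=13):
    """Summarise one breathing-signal window by the mean of its MFCCs."""
    mfcc = librosa.feature.mfcc(y=signal, sr=fs, n_mfcc=n_mfcc,
                                n_fft=256, hop_length=128, n_mels=20)
    return mfcc.mean(axis=1)

fs = 100                                              # assumed waveform sampling rate (Hz)
t = np.arange(0, 30, 1 / fs)                          # 30 s windows
windows = [np.sin(2 * np.pi * 0.3 * t) + 0.05 * np.random.randn(t.size) for _ in range(40)]
X = np.stack([breathing_features(w, fs) for w in windows])
y = np.random.randint(0, 2, size=40)                  # placeholder labels: normal vs. abnormal pattern

clf = XGBClassifier(n_estimators=100, max_depth=3)
clf.fit(X, y)
print(clf.predict(X[:5]))
```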

34 pages, 12122 KiB  
Review
A Review of RFID Sensors, the New Frontier of Internet of Things
by Filippo Costa, Simone Genovesi, Michele Borgese, Andrea Michel, Francesco Alessio Dicandia and Giuliano Manara
Sensors 2021, 21(9), 3138; https://doi.org/10.3390/s21093138 - 30 Apr 2021
Cited by 112 | Viewed by 19389
Abstract
A review of technological solutions for RFID sensing and their current or envisioned applications is presented. The fundamentals of the wireless sensing technology are summarized in the first part of the work, and the benefits of adopting RFID sensors for replacing standard sensor-equipped Wi-Fi nodes are discussed. Emphasis is put on the absence of batteries and the lower cost of RFID sensors with respect to other sensor solutions available on the market. RFID sensors are critically compared by separating them into chipped and chipless configurations. Both categories are further analyzed with reference to their working mechanism (electronic, electromagnetic, and acoustic). RFID sensing through chip-equipped tags is now a mature technological solution, which is continuously increasing its presence on the market and in several application scenarios. On the other hand, chipless RFID sensing represents a relatively new concept, which could become a disruptive solution in the market, but further research in this field is necessary for customizing its employment in specific scenarios. The benefits and limitations of several tag configurations are shown and discussed. A summary of the most suitable application scenarios for RFID sensors is then presented. Finally, some sensing solutions available on the market are described and compared. Full article
(This article belongs to the Special Issue RFID and Zero-Power Backscatter Sensors)

23 pages, 1506 KiB  
Article
From the Laboratory to the Field: IMU-Based Shot and Pass Detection in Football Training and Game Scenarios Using Deep Learning
by Maike Stoeve, Dominik Schuldhaus, Axel Gamp, Constantin Zwick and Bjoern M. Eskofier
Sensors 2021, 21(9), 3071; https://doi.org/10.3390/s21093071 - 28 Apr 2021
Cited by 22 | Viewed by 4305
Abstract
The applicability of sensor-based human activity recognition in sports has been repeatedly shown for laboratory settings. However, transferability to real-world scenarios cannot be taken for granted, owing to limitations of the data and evaluation methods. Using football shot and pass detection against a null class as an example, we explore the influence of these factors on real-world event classification in field sports. For this purpose, we compare the performance of a Support Vector Machine (SVM) established in the literature for laboratory settings with its performance in three evaluation scenarios gradually evolving from laboratory settings to real-world conditions. In addition, three types of neural networks are compared: a convolutional neural network (CNN), a long short-term memory network (LSTM) and a convolutional LSTM (convLSTM). Results indicate that the SVM is not able to reliably solve the investigated three-class problem. In contrast, all deep learning models reach high classification scores, showing the general feasibility of event detection in real-world sports scenarios using deep learning. The best performance, a weighted F1-score of 0.93, was achieved by the CNN. The study provides valuable insights for sports assessment under practically relevant conditions. In particular, it shows that (1) the discriminative power of established features needs to be re-evaluated when real-world conditions are assessed, (2) the selection of an appropriate dataset and evaluation method are both required to evaluate real-world applicability, and (3) deep learning-based methods yield promising results for real-world HAR in sports despite high variation in the execution of activities. Full article
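For illustration, a window classifier of the kind compared in the abstract could look like the 1D-CNN sketch below (PyTorch). The window length, channel count and layer sizes are assumptions, not the architecture reported in the paper; the weighted F1-score quoted above would then be computed with, for example, scikit-learn's f1_score with average='weighted'.

```python
# Minimal sketch of a 1D-CNN window classifier for the three-class problem
# (shot / pass / null). Window length (100 samples), channel count (6-axis IMU)
# and layer sizes are illustrative assumptions, not the paper's architecture.
import torch
import torch.nn as nn

class ImuCnn(nn.Module):
    def __init__(self, n_channels=6, n_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),          # global average pooling over time
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):                     # x: (batch, channels, time)
        return self.classifier(self.features(x).squeeze(-1))

model = ImuCnn()
dummy = torch.randn(8, 6, 100)                # batch of 8 IMU windows
logits = model(dummy)                         # (8, 3) class scores
loss = nn.CrossEntropyLoss()(logits, torch.randint(0, 3, (8,)))
```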

16 pages, 4232 KiB  
Article
Deep-Emotion: Facial Expression Recognition Using Attentional Convolutional Network
by Shervin Minaee, Mehdi Minaei and Amirali Abdolrashidi
Sensors 2021, 21(9), 3046; https://doi.org/10.3390/s21093046 - 27 Apr 2021
Cited by 291 | Viewed by 16924
Abstract
Facial expression recognition has been an active area of research over the past few decades, and it remains challenging due to high intra-class variation. Traditional approaches to this problem rely on hand-crafted features such as SIFT, HOG, and LBP, followed by a classifier trained on a database of images or videos. Most of these works perform reasonably well on datasets of images captured under controlled conditions but fail to perform as well on more challenging datasets with greater image variation and partial faces. In recent years, several works have proposed end-to-end frameworks for facial expression recognition using deep learning models. Despite the better performance of these works, there is still much room for improvement. In this work, we propose a deep learning approach based on an attentional convolutional network that is able to focus on important parts of the face and achieves significant improvement over previous models on multiple datasets, including FER-2013, CK+, FERG, and JAFFE. We also use a visualization technique that finds the facial regions most important for detecting different emotions, based on the classifier's output. Through experimental results, we show that different emotions are sensitive to different parts of the face. Full article
(This article belongs to the Section Sensing and Imaging)
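As a rough illustration of the attention idea, the block below re-weights CNN feature maps with a learned saliency mask. It is not the Deep-Emotion architecture itself (which employs a spatial transformer network); it only sketches how a network can be made to focus on informative facial regions.

```python
# Generic spatial-attention block: not the authors' exact design, only an
# illustration of re-weighting features so the network attends to salient
# facial regions.
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    def __init__(self, in_ch):
        super().__init__()
        self.mask = nn.Sequential(
            nn.Conv2d(in_ch, in_ch // 2, kernel_size=1), nn.ReLU(),
            nn.Conv2d(in_ch // 2, 1, kernel_size=1), nn.Sigmoid(),
        )

    def forward(self, x):            # x: (batch, in_ch, H, W)
        return x * self.mask(x)      # re-weight features by a learned saliency map

feats = torch.randn(4, 64, 12, 12)   # feature maps from a small CNN backbone
attended = SpatialAttention(64)(feats)
```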

26 pages, 5142 KiB  
Technical Note
Localized Bioimpedance Measurements with the MAX3000x Integrated Circuit: Characterization and Demonstration
by Shelby Critcher and Todd J. Freeborn
Sensors 2021, 21(9), 3013; https://doi.org/10.3390/s21093013 - 25 Apr 2021
Cited by 14 | Viewed by 5808
Abstract
The commercial availability of integrated circuits with bioimpedance sensing functionality is advancing the opportunity for practical wearable systems that monitor the electrical impedance properties of tissues to identify physiological features in support of health-focused applications. This technical note characterizes the measurement performance of the MAX3000x (resistance/reactance accuracy, power modes, filtering, gains) and the on-board processing it makes available (electrode detection) for localized bioimpedance measurements. Measurements of discrete impedances representative of localized tissue bioimpedance show that this IC has a relative error of <10% for the resistance component of complex impedance measurements and can also resolve relative alterations in the 250 mΩ range. The application of the MAX3000x for monitoring localized bicep tissues during activity is presented to highlight its functionality, as well as its limitations, for multi-frequency measurements. This device is a very-small-form-factor single-chip solution for measuring multi-frequency bioimpedance with significant on-board processing, with potential for wearable applications. Full article
(This article belongs to the Special Issue Bioimpedance Sensors: Instrumentation, Models, and Applications)

30 pages, 6342 KiB  
Article
Adversarial Gaussian Denoiser for Multiple-Level Image Denoising
by Aamir Khan, Weidong Jin, Amir Haider, MuhibUr Rahman and Desheng Wang
Sensors 2021, 21(9), 2998; https://doi.org/10.3390/s21092998 - 24 Apr 2021
Cited by 13 | Viewed by 3633
Abstract
Image denoising is a challenging task that is essential in numerous computer vision and image processing problems. This study proposes and applies a generative adversarial network-based image denoising training architecture to multiple-level Gaussian image denoising tasks. Convolutional neural network-based denoising approaches suffer from a blurriness issue, producing denoised images with blurred texture details. To resolve this issue, we first performed a theoretical study of the cause of the problem. Subsequently, we proposed an adversarial Gaussian denoiser network, which uses a generative adversarial network-based adversarial learning process for image denoising tasks. This framework resolves the blurriness problem by encouraging the denoiser network to find the distribution of sharp noise-free images instead of blurry images. Experimental results demonstrate that the proposed framework effectively resolves the blurriness problem and achieves better denoising performance than state-of-the-art denoising methods. Full article
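The adversarial training idea can be sketched as a generator-update step in which a pixel-wise fidelity term is combined with a GAN term that pushes the denoiser output towards the distribution of sharp images. The network definitions, the L1 fidelity term and the loss weighting below are assumptions for illustration, not the paper's exact formulation, and the discriminator update is omitted.

```python
# Sketch of one generator (denoiser) update with an adversarial term.
# Networks, loss weighting and the L1 fidelity term are illustrative assumptions.
import torch
import torch.nn as nn

def denoiser_step(denoiser, discriminator, noisy, clean, opt_g, lambda_adv=1e-3):
    opt_g.zero_grad()
    denoised = denoiser(noisy)
    fidelity = nn.functional.l1_loss(denoised, clean)     # keeps image content
    d_out = discriminator(denoised)
    # non-saturating GAN loss: make the discriminator call the output "real"
    adv = nn.functional.binary_cross_entropy_with_logits(d_out, torch.ones_like(d_out))
    loss = fidelity + lambda_adv * adv
    loss.backward()
    opt_g.step()
    return loss.item()

# toy networks standing in for the actual denoiser/discriminator pair
denoiser = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(16, 1, 3, padding=1))
discriminator = nn.Sequential(nn.Conv2d(1, 8, 3, stride=2), nn.ReLU(),
                              nn.Flatten(), nn.LazyLinear(1))
noisy, clean = torch.randn(4, 1, 32, 32), torch.randn(4, 1, 32, 32)
opt_g = torch.optim.Adam(denoiser.parameters(), lr=1e-4)
print(denoiser_step(denoiser, discriminator, noisy, clean, opt_g))
```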

32 pages, 755 KiB  
Article
Quantization and Deployment of Deep Neural Networks on Microcontrollers
by Pierre-Emmanuel Novac, Ghouthi Boukli Hacene, Alain Pegatoquet, Benoît Miramond and Vincent Gripon
Sensors 2021, 21(9), 2984; https://doi.org/10.3390/s21092984 - 23 Apr 2021
Cited by 70 | Viewed by 20661
Abstract
Embedding Artificial Intelligence onto low-power devices is a challenging task that has been partly overcome with recent advances in machine learning and hardware design. Presently, deep neural networks can be deployed on embedded targets to perform different tasks such as speech recognition, object detection or Human Activity Recognition. However, there is still room for optimization of deep neural networks on embedded devices. These optimizations mainly address power consumption, memory and real-time constraints, but also easier deployment at the edge. Moreover, there is still a need for a better understanding of what can be achieved for different use cases. This work focuses on quantization and deployment of deep neural networks onto low-power 32-bit microcontrollers. The quantization methods relevant in the context of embedded execution on a microcontroller are first outlined. Then, a new framework for end-to-end deep neural network training, quantization and deployment is presented. This framework, called MicroAI, is designed as an alternative to existing inference engines (TensorFlow Lite for Microcontrollers and STM32Cube.AI). Our framework can indeed be easily adjusted and/or extended for specific use cases. Execution using single-precision 32-bit floating-point as well as fixed-point arithmetic on 8- and 16-bit integers is supported. The proposed quantization method is evaluated with three different datasets (UCI-HAR, Spoken MNIST and GTSRB). Finally, a comparison between MicroAI and both existing embedded inference engines is provided in terms of memory and power efficiency. On-device evaluation is done using ARM Cortex-M4F-based microcontrollers (Ambiq Apollo3 and STM32L452RE). Full article
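As a rough illustration of the fixed-point conversion such a framework performs, the sketch below quantizes a weight tensor to 8-bit integers with a power-of-two scale. It is a toy example under stated assumptions, not MicroAI's actual implementation.

```python
# Minimal sketch of symmetric fixed-point quantization of a weight tensor to
# 8- or 16-bit integers. The power-of-two scaling shown here is an assumption
# for illustration, not MicroAI's exact scheme.
import numpy as np

def quantize_fixed_point(w, bits=8):
    """Return integer weights and the number of fractional bits used."""
    qmax = 2 ** (bits - 1) - 1
    max_abs = np.max(np.abs(w)) + 1e-12
    # choose the largest number of fractional bits that still avoids overflow
    frac_bits = int(np.floor(np.log2(qmax / max_abs)))
    q = np.clip(np.round(w * 2 ** frac_bits), -qmax - 1, qmax).astype(np.int32)
    return q, frac_bits

def dequantize(q, frac_bits):
    return q.astype(np.float32) / 2 ** frac_bits

w = np.random.randn(64, 32).astype(np.float32) * 0.1
q, n = quantize_fixed_point(w, bits=8)
print("fractional bits:", n, "max abs error:", np.max(np.abs(w - dequantize(q, n))))
```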

17 pages, 2657 KiB  
Article
Study of Low Terahertz Radar Signal Backscattering for Surface Identification
by Shahrzad Minooee Sabery, Aleksandr Bystrov, Miguel Navarro-Cía, Peter Gardner and Marina Gashinova
Sensors 2021, 21(9), 2954; https://doi.org/10.3390/s21092954 - 23 Apr 2021
Cited by 20 | Viewed by 3553
Abstract
This study explores the scattering of signals within the mm and low Terahertz frequency range, represented by frequencies 79 GHz, 150 GHz, 300 GHz, and 670 GHz, from surfaces with different roughness, to demonstrate advantages of low THz radar for surface discrimination for automotive sensing. The responses of four test surfaces of different roughness were measured and their normalized radar cross sections were estimated as a function of grazing angle and polarization. The Fraunhofer criterion was used as a guideline for determining the type of backscattering (specular and diffuse). The proposed experimental technique provides high accuracy of backscattering coefficient measurement depending on the frequency of the signal, polarization, and grazing angle. An empirical scattering model was used to provide a reference. To compare theoretical and experimental results of the signal scattering on test surfaces, the permittivity of sandpaper has been measured using time-domain spectroscopy. It was shown that the empirical methods for diffuse radar signal scattering developed for lower radar frequencies can be extended for the low THz range with sufficient accuracy. The results obtained will provide reference information for creating remote surface identification systems for automotive use, which will be of particular advantage in surface classification, object classification, and path determination in autonomous automotive vehicle operation. Full article
(This article belongs to the Section Radar Sensors)
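For readers unfamiliar with the Fraunhofer criterion used as a guideline above, the snippet below applies its commonly quoted form, h < λ/(32 cos θi), to decide whether a surface of a given RMS roughness appears smooth at each of the measured frequencies; the roughness value and grazing angle are placeholders, not the paper's test surfaces.

```python
# Quick check of the Fraunhofer criterion for specular vs. diffuse scattering.
# Commonly quoted form: surface is "smooth" if its RMS height deviation h
# satisfies h < lambda / (32 * cos(theta_i)), with theta_i the incidence angle
# measured from the surface normal (90 deg minus the grazing angle).
# Roughness and grazing angle below are placeholders, not measured values.
import numpy as np

C = 3e8  # speed of light, m/s

def fraunhofer_smooth(roughness_rms_m, freq_hz, grazing_deg):
    lam = C / freq_hz
    theta_i = np.radians(90.0 - grazing_deg)       # incidence angle from normal
    return roughness_rms_m < lam / (32.0 * np.cos(theta_i))

for f in [79e9, 150e9, 300e9, 670e9]:
    print(f / 1e9, "GHz smooth?", fraunhofer_smooth(100e-6, f, grazing_deg=30))
```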

20 pages, 4542 KiB  
Review
Sensitivity of Field-Effect Transistor-Based Terahertz Detectors
by Elham Javadi, Dmytro B. But, Kęstutis Ikamas, Justinas Zdanevičius, Wojciech Knap and Alvydas Lisauskas
Sensors 2021, 21(9), 2909; https://doi.org/10.3390/s21092909 - 21 Apr 2021
Cited by 56 | Viewed by 5418
Abstract
This paper presents an overview of the different methods used for sensitivity (i.e., responsivity and noise-equivalent power) determination of state-of-the-art field-effect transistor-based THz detectors/sensors. We point out that the reported result may depend very much on the method used to determine the effective area of the sensor, often leading to discrepancies of orders of magnitude. The challenges that arise when selecting a proper characterisation method are demonstrated using the example of a 2×7 detector array. This array utilises field-effect transistors and monolithically integrated patch antennas at 620 GHz. The directivities of the individual antennas were simulated and determined from the measured angle dependence of the rectified voltage as a function of tilting in the E- and H-planes. Furthermore, the experimentally determined directivity and the simulations imply that part of the radiation might still propagate in the substrate, resulting in a modification of the sensor's effective area. Our work summarises the methods for determining sensitivity, paving the way towards a unified scientific metrology of FET-based THz sensors, which is important for researchers competing for records, potential users, and system designers alike. Full article
(This article belongs to the Special Issue Terahertz Imaging and Sensors)
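The two sensitivity figures discussed in the review can be sketched as follows, with the noise floor approximated by the thermal noise of the channel resistance, as is common for unbiased FET detectors. All numerical values are placeholders; the point of the sketch is that the responsivity, and hence the NEP, scales directly with the assumed effective area.

```python
# Sketch: voltage responsivity Rv = dU / P_eff and noise-equivalent power
# NEP = v_noise / Rv, with a Johnson-noise approximation of the noise floor.
# All numbers are placeholders, not values from the paper.
import numpy as np

K_B = 1.380649e-23   # Boltzmann constant, J/K

def responsivity(delta_u_volts, beam_power_watts, area_beam_m2, area_eff_m2):
    """Responsivity referred to the power actually coupled into the detector.

    The choice of effective area is exactly the step the review identifies as
    the main source of large discrepancies between publications.
    """
    p_eff = beam_power_watts * (area_eff_m2 / area_beam_m2)
    return delta_u_volts / p_eff                            # V/W

def nep_thermal(responsivity_v_per_w, r_channel_ohm, temp_k=300.0):
    v_noise = np.sqrt(4.0 * K_B * temp_k * r_channel_ohm)   # V/sqrt(Hz)
    return v_noise / responsivity_v_per_w                   # W/sqrt(Hz)

rv = responsivity(delta_u_volts=2e-3, beam_power_watts=1e-6,
                  area_beam_m2=1e-4, area_eff_m2=1e-7)
print("Rv =", rv, "V/W, NEP =", nep_thermal(rv, r_channel_ohm=5e3), "W/sqrt(Hz)")
```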

24 pages, 7713 KiB  
Article
Automatic Pixel-Level Pavement Crack Recognition Using a Deep Feature Aggregation Segmentation Network with a scSE Attention Mechanism Module
by Wenting Qiao, Qiangwei Liu, Xiaoguang Wu, Biao Ma and Gang Li
Sensors 2021, 21(9), 2902; https://doi.org/10.3390/s21092902 - 21 Apr 2021
Cited by 28 | Viewed by 3026
Abstract
Pavement crack detection is essential for safe driving. The traditional manual crack detection method is highly subjective and time-consuming. Hence, an automatic pavement crack detection system is needed to facilitate this process. However, this is still a challenging task due to the complex topology and large noise interference of crack images. Recently, although deep learning-based technologies have achieved breakthrough progress in crack detection, there are still some challenges, such as large parameter counts and low detection efficiency. Besides, most deep learning-based crack detection algorithms find it difficult to strike a good balance between detection accuracy and detection speed. Inspired by the latest deep learning technology in the field of image processing, this paper proposes a novel crack detection algorithm based on a deep feature aggregation network with a spatial-channel squeeze & excitation (scSE) attention mechanism module, called CrackDFANet. Firstly, we cut the collected crack images into 512 × 512 pixel image blocks to establish a crack dataset. Then, through iterative optimization on the training and validation sets, we obtained a crack detection model with good robustness. Finally, the CrackDFANet model was verified on a total of 3516 images in five datasets with different sizes and containing different noise interferences. Experimental results show that the trained CrackDFANet has strong anti-interference ability, and better robustness and generalization ability under light interference, parking lines, water stains, plant disturbance, oil stains, and shadow conditions. Furthermore, CrackDFANet is found to outperform other state-of-the-art algorithms, with more accurate detection and faster detection speed, while our model's parameter count and error rates are significantly reduced. Full article
(This article belongs to the Section Fault Diagnosis & Sensors)
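The scSE attention module named in the abstract is a published, generic block; a minimal PyTorch sketch is given below, with the channel and spatial excitation branches combined here by an element-wise maximum (addition is an equally common choice). Layer sizes are illustrative and not taken from CrackDFANet.

```python
# Sketch of a concurrent spatial & channel squeeze-and-excitation (scSE) block:
# channel attention (cSE) from a global average pool and spatial attention
# (sSE) from a 1x1 convolution. Layer sizes are illustrative only.
import torch
import torch.nn as nn

class SCSE(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.cse = nn.Sequential(                       # channel squeeze & excite
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1), nn.ReLU(),
            nn.Conv2d(channels // reduction, channels, 1), nn.Sigmoid(),
        )
        self.sse = nn.Sequential(                       # spatial squeeze & excite
            nn.Conv2d(channels, 1, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        return torch.max(x * self.cse(x), x * self.sse(x))

feats = torch.randn(2, 64, 128, 128)    # feature maps from a segmentation encoder
out = SCSE(64)(feats)                   # same shape, attention-reweighted
```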

31 pages, 4662 KiB  
Review
A Review of Corrosion in Aircraft Structures and Graphene-Based Sensors for Advanced Corrosion Monitoring
by Lucy Li, Mounia Chakik and Ravi Prakash
Sensors 2021, 21(9), 2908; https://doi.org/10.3390/s21092908 - 21 Apr 2021
Cited by 28 | Viewed by 7068
Abstract
Corrosion is an ever-present phenomenon of material deterioration that affects all metal structures. Timely and accurate detection of corrosion is required for structural maintenance and effective management of structural components during their life cycle. The choice of aircraft materials has been primarily driven by the need for lighter, stronger, and more robust metal alloys rather than by mitigation of corrosion. As such, the overall cost of corrosion management and aircraft downtime remains high. To illustrate, $5.67 billion, or 23.6% of total sustainment costs, was spent on aircraft corrosion management, along with 14.1% of total NAD, for US Air Force aviation and missiles in fiscal year 2018. The ability to detect and monitor corrosion will allow for a more efficient and cost-effective corrosion management strategy and will therefore minimize maintenance costs and downtime and help avoid unexpected failures associated with corrosion. Conventional and commercial efforts in corrosion detection on aircraft have focused on visual and other field detection approaches, which are time- and usage-based rather than condition-based; they are also less effective in cases where the corroded area is inaccessible (e.g., fuel tank) or hidden (rivets). The ability to target and detect specific corrosion by-products associated with the metals/metal alloys (chloride ions, fluoride ions, iron oxides, aluminum chlorides, etc.) and the corrosion environment (pH, wetness, temperature), along with conventional approaches for physical detection of corrosion, can provide early corrosion detection as well as enhanced reliability of corrosion detection. The paper summarizes the state of the art of corrosion sensing and measurement technologies for schedule-based inspection or continuous monitoring of the physical, environmental and chemical presence associated with corrosion. The challenges are reviewed with regard to current gaps in corrosion detection and the complex task of corrosion management of an aircraft, with a focused overview of the corrosion factors and corrosion forms that are pertinent to the aviation industry. A comprehensive overview of thin-film sensing techniques for corrosion detection and monitoring on aircraft is provided. Particular attention is paid to innovative new materials, especially graphene-derived thin-film sensors, which rely on their ability to be configured as a conductor, semiconductor, or functionally sensitive layer that responds to corrosion factors. Several thin-film sensors are detailed in this review as highly suited candidates for detecting corrosion through direct sensing of corrosion by-products in conjunction with the aforementioned physical and environmental corrosion parameters. The ability to print/pattern these thin-film materials directly onto specific aircraft components, or to deposit them onto rigid and flexible sensor surfaces and interfaces (fibre optics, microelectrode structures), makes them highly suited for corrosion monitoring applications. Full article
(This article belongs to the Section Chemical Sensors)

31 pages, 11468 KiB  
Article
Application of MEMS Sensors for Evaluation of the Dynamics for Cargo Securing on Road Vehicles
by Jozef Gnap, Juraj Jagelčák, Peter Marienka, Marcel Frančák and Mariusz Kostrzewski
Sensors 2021, 21(8), 2881; https://doi.org/10.3390/s21082881 - 20 Apr 2021
Cited by 25 | Viewed by 3590
Abstract
Safety is one of the key aspects of the successful transport of cargo. In the case of road transport, the dynamics of a vehicle during normal events such as braking, steering, and evasive maneuvers vary between different places in the vehicle. Several manufacturers provide different dataloggers with acceleration sensors, but the results are not comparable due to different sensor parameters, measurement ranges, sampling frequencies, data filtration, and evaluation of different acceleration periods. The position of the sensor in the loading area is also important, because the accelerations are not the same at all points in the vehicle. The article deals with the measurement of these dynamic events with MEMS sensors at selected points of a vehicle loaded with cargo, and with changes in dynamics after certain events that could occur during regular road transport, in order to analyze the possibilities for monitoring accelerations and the related forces acting on the cargo during transport. The article uses evaluation times of 80, 300, and 1000 ms for the accelerations. With the measured values, it is possible to determine the places with a higher risk of cargo damage and not only adjust the packaging and securing of the cargo, but also modify the transport routes. Concerning cargo securing in relation to EN 12195-1 and the minimum securing forces, we focused primarily on the places where an acceleration of 0.5 g was exceeded when analyzing the monitored route. There were 32 such points in total, all of which were measured by a sensor located at the rear of the semi-trailer. In 31 cases, the limit of 0.5 g was exceeded for an 80 ms evaluation time, and in one case, a value of 0.51 g was reached in the transverse direction for a 300 ms evaluation time. Full article
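One way to picture the evaluation-time processing described above is a moving-average filter over 80, 300 and 1000 ms windows followed by a 0.5 g threshold, as in the sketch below; the sampling rate and synthetic signal are assumptions, not the measured data.

```python
# Sketch: evaluate acceleration over fixed windows and flag samples where the
# windowed mean exceeds 0.5 g. Sampling rate and signal are placeholders.
import numpy as np

FS = 400                       # assumed MEMS sampling rate, Hz
G = 9.81

def exceedances(acc_ms2, window_ms, fs=FS, limit_g=0.5):
    n = max(1, int(round(window_ms * 1e-3 * fs)))
    kernel = np.ones(n) / n
    mean_g = np.convolve(np.abs(acc_ms2) / G, kernel, mode="valid")
    return np.flatnonzero(mean_g > limit_g)   # sample indices over the limit

t = np.arange(0, 10, 1 / FS)
acc = 0.2 * G * np.random.randn(t.size)
acc[2000:2100] += 0.8 * G                     # simulated braking event
for w in (80, 300, 1000):
    print(f"{w} ms window: {exceedances(acc, w).size} samples above 0.5 g")
```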

18 pages, 3225 KiB  
Article
HRV Features as Viable Physiological Markers for Stress Detection Using Wearable Devices
by Kayisan M. Dalmeida and Giovanni L. Masala
Sensors 2021, 21(8), 2873; https://doi.org/10.3390/s21082873 - 19 Apr 2021
Cited by 67 | Viewed by 7156
Abstract
Stress has been identified as one of the major causes of automobile crashes, which in turn lead to high rates of fatalities and injuries each year. Stress can be measured via physiological measurements, and this study focuses on features that can be extracted by common wearable devices; hence, it mainly concerns heart rate variability (HRV). This study is aimed at investigating the role of HRV-derived features as stress markers. This is achieved by developing a predictive model that can accurately classify stress levels from ECG-derived HRV features obtained from automobile drivers, testing different machine learning methodologies such as K-Nearest Neighbor (KNN), Support Vector Machines (SVM), Multilayer Perceptron (MLP), Random Forest (RF) and Gradient Boosting (GB). Moreover, the models with the highest predictive power were used as a reference for the development of a machine learning model to classify stress from HRV features derived from heart rate measurements obtained from wearable devices. We demonstrate that HRV features constitute good markers for stress detection, as the best machine learning model developed achieved a recall of 80%. Furthermore, this study indicates that HRV metrics such as the average of normal-to-normal (NN) intervals (AVNN), the standard deviation of NN intervals (SDNN) and the root mean square of successive NN interval differences (RMSSD) were important features for stress detection. The proposed method can also be used in all applications in which it is important to monitor stress levels in a non-invasive manner, e.g., physical rehabilitation, anxiety relief or mental wellbeing. Full article
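The three HRV metrics highlighted above are straightforward to compute from NN intervals, as in the sketch below, which then feeds them to one of the classifier families tested in the paper (Gradient Boosting here); the windowing and hyper-parameters are illustrative assumptions.

```python
# Sketch: HRV time-domain features (AVNN, SDNN, RMSSD) from NN intervals,
# fed to a Gradient Boosting classifier. Hyper-parameters are assumptions.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

def hrv_features(nn_intervals_ms):
    nn = np.asarray(nn_intervals_ms, dtype=float)
    avnn = nn.mean()                                   # average NN interval
    sdnn = nn.std(ddof=1)                              # overall variability
    rmssd = np.sqrt(np.mean(np.diff(nn) ** 2))         # beat-to-beat variability
    return np.array([avnn, sdnn, rmssd])

# segments: list of NN-interval arrays (one per analysis window)
# labels:   1 = stressed, 0 = relaxed
def train_stress_model(segments, labels):
    X = np.vstack([hrv_features(s) for s in segments])
    clf = GradientBoostingClassifier(n_estimators=200, max_depth=3)
    clf.fit(X, labels)
    return clf
```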

27 pages, 6248 KiB  
Article
Classification of Skin Disease Using Deep Learning Neural Networks with MobileNet V2 and LSTM
by Parvathaneni Naga Srinivasu, Jalluri Gnana SivaSai, Muhammad Fazal Ijaz, Akash Kumar Bhoi, Wonjoon Kim and James Jin Kang
Sensors 2021, 21(8), 2852; https://doi.org/10.3390/s21082852 - 18 Apr 2021
Cited by 376 | Viewed by 23685
Abstract
Deep learning models are efficient in learning the features that assist in understanding complex patterns precisely. This study proposes a computerized process for classifying skin disease through deep learning based on MobileNet V2 and Long Short Term Memory (LSTM). The MobileNet V2 model proved to be efficient, with better accuracy, and can run on lightweight computational devices. The proposed model is efficient in maintaining stateful information for precise predictions. A grey-level co-occurrence matrix is used for assessing the progress of diseased growth. The performance has been compared against other state-of-the-art models such as Fine-Tuned Neural Networks (FTNN), Convolutional Neural Networks (CNN), the Very Deep Convolutional Networks for Large-Scale Image Recognition developed by the Visual Geometry Group (VGG), and a convolutional neural network architecture expanded with a few changes. The HAM10000 dataset is used, and the proposed method outperforms the other methods with more than 85% accuracy. Its robustness in recognizing the affected region much faster, with almost 2× fewer computations than the conventional MobileNet model, results in minimal computational effort. Furthermore, a mobile application is designed for instant and proper action. It helps patients and dermatologists identify the type of disease from an image of the affected region at the initial stage of the skin disease. These findings suggest that the proposed system can help general practitioners efficiently and effectively diagnose skin conditions, thereby reducing further complications and morbidity. Full article

16 pages, 6888 KiB  
Article
The Utilization of Artificial Neural Network Equalizer in Optical Camera Communications
by Othman Isam Younus, Navid Bani Hassan, Zabih Ghassemlooy, Stanislav Zvanovec, Luis Nero Alves and Hoa Le-Minh
Sensors 2021, 21(8), 2826; https://doi.org/10.3390/s21082826 - 16 Apr 2021
Cited by 16 | Viewed by 3061
Abstract
In this paper, we propose and validate an artificial neural network-based equalizer for the constant power 4-level pulse amplitude modulation in an optical camera communications system. We introduce new terminology to measure the quality of the communications link in terms of the number of row pixels per symbol Npps, which allows a fair comparison considering the progress made in the development of the current image sensors in terms of the frame rates and the resolutions of each frame. Using the proposed equalizer, we experimentally demonstrate a non-flickering system using a single light-emitting diode (LED) with Npps of 20 and 30 pixels/symbol for the unequalized and equalized systems, respectively. Potential transmission rates of up to 18.6 and 24.4 kbps are achieved with and without the equalization, respectively. The quality of the received signal is assessed using the eye-diagram opening and its linearity and the bit error rate performance. An acceptable bit error rate (below the forward error correction limit) and an improvement of ~66% in the eye linearity are achieved using a single LED and a typical commercial camera with equalization. Full article
(This article belongs to the Special Issue Visible Light Communication, Networking, and Sensing)
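Conceptually, the ANN equalizer maps a short window of received (intersymbol-interference-distorted) samples to the transmitted 4-PAM symbol. The sketch below uses a small scikit-learn MLP and a synthetic channel as stand-ins; the tap count, layer sizes and channel model are assumptions, not the experimental setup of the paper.

```python
# Sketch of an ANN equalizer for 4-level PAM: an MLP classifies each symbol
# from a sliding window of received samples. Tap count, layer sizes, channel
# model and PAM levels are illustrative assumptions.
import numpy as np
from sklearn.neural_network import MLPClassifier

N_TAPS = 7                      # received samples per decision (assumed)
LEVELS = np.array([-3, -1, 1, 3])

def to_windows(rx, n_taps=N_TAPS):
    """Stack overlapping windows of the received waveform, one per symbol."""
    half = n_taps // 2
    padded = np.pad(rx, half, mode="edge")
    return np.stack([padded[i:i + n_taps] for i in range(rx.size)])

# rx: distorted received samples (one per symbol); tx_symbols: indices 0..3
def train_equalizer(rx, tx_symbols):
    eq = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000)
    eq.fit(to_windows(rx), tx_symbols)
    return eq

tx = np.random.randint(0, 4, 5000)
rx = np.convolve(LEVELS[tx], [0.2, 1.0, 0.3], mode="same") + 0.1 * np.random.randn(5000)
eq = train_equalizer(rx, tx)
print("symbol accuracy:", eq.score(to_windows(rx), tx))
```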

25 pages, 5502 KiB  
Article
An Enhanced Indoor Positioning Algorithm Based on Fingerprint Using Fine-Grained CSI and RSSI Measurements of IEEE 802.11n WLAN
by Jingjing Wang and Joongoo Park
Sensors 2021, 21(8), 2769; https://doi.org/10.3390/s21082769 - 14 Apr 2021
Cited by 41 | Viewed by 4793
Abstract
Received signal strength indication (RSSI), obtained at the Medium Access Control (MAC) layer, is widely used in range-based and fingerprint location systems due to its low cost and low complexity. However, RSSI is affected by noise and multi-path propagation, and its positioning performance is not stable. In recent years, many commercial WiFi devices have supported the acquisition of physical-layer channel state information (CSI). CSI is an index that can characterize the signal with finer granularity than RSSI. Compared with RSSI, CSI can avoid the effects of multi-path and noise by analyzing the characteristics of the multi-channel sub-carriers. To improve indoor location accuracy and algorithm efficiency, this paper proposes a hybrid fingerprint location technique based on RSSI and CSI. In the off-line phase, to overcome the problems of low positioning accuracy and fingerprint drift caused by signal instability, a methodology based on a Kalman filter and a Gaussian function is proposed to preprocess the RSSI values and CSI amplitude values, and the improved CSI phase is incorporated after a linear transformation. The mutated and noisy data are then effectively eliminated, and accurate and smoother outputs of the RSSI and CSI values can be achieved. Then, an accurate hybrid fingerprint database is established after dimensionality reduction of the obtained high-dimensional data values. The weighted k-nearest neighbor (WKNN) algorithm is applied to reduce the complexity of the algorithm during the online positioning stage, and an accurate indoor positioning algorithm is accomplished. Experimental results show that the proposed algorithm exhibits good performance in terms of anti-noise ability, fusion positioning accuracy, and real-time filtering. Compared with CSI-MIMO, FIFS, and RSSI-based methods, the proposed fusion correction method has higher positioning accuracy and smaller positioning error. Full article
(This article belongs to the Special Issue Indoor Positioning and Navigation)
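The WKNN step of the online stage admits a compact sketch: the estimated position is the inverse-distance-weighted average of the k nearest fingerprints in the hybrid feature space. The value of k and the synthetic database below are assumptions.

```python
# Sketch of weighted k-nearest-neighbour (WKNN) positioning: the user position
# is the inverse-distance-weighted mean of the k closest fingerprint locations
# in (RSSI + CSI) feature space. k and the example data are assumptions.
import numpy as np

def wknn_locate(query_feat, fp_features, fp_positions, k=4, eps=1e-6):
    d = np.linalg.norm(fp_features - query_feat, axis=1)   # feature-space distances
    idx = np.argsort(d)[:k]
    w = 1.0 / (d[idx] + eps)                                # closer fingerprints weigh more
    return (w[:, None] * fp_positions[idx]).sum(axis=0) / w.sum()

fp_features = np.random.randn(200, 30)        # offline hybrid fingerprint database
fp_positions = np.random.rand(200, 2) * 20.0  # reference-point coordinates (m)
print(wknn_locate(fp_features[0] + 0.05, fp_features, fp_positions))
```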

27 pages, 2821 KiB  
Review
Soft Grippers for Automatic Crop Harvesting: A Review
by Eduardo Navas, Roemi Fernández, Delia Sepúlveda, Manuel Armada and Pablo Gonzalez-de-Santos
Sensors 2021, 21(8), 2689; https://doi.org/10.3390/s21082689 - 11 Apr 2021
Cited by 84 | Viewed by 10635
Abstract
Agriculture 4.0 is transforming farming livelihoods thanks to the development and adoption of technologies such as artificial intelligence, the Internet of Things and robotics, traditionally used in other productive sectors. Soft robotics and soft grippers in particular are promising approaches to lead to new solutions in this field due to the need to meet hygiene and manipulation requirements in unstructured environments and in operation with delicate products. This review aims to provide an in-depth look at soft end-effectors for agricultural applications, with a special emphasis on robotic harvesting. To that end, the current state of automatic picking tasks for several crops is analysed, identifying which of them lack automatic solutions, and which methods are commonly used based on the botanical characteristics of the fruits. The latest advances in the design and implementation of soft grippers are also presented and discussed, studying the properties of their materials, their manufacturing processes, the gripping technologies and the proposed control methods. Finally, the challenges that have to be overcome to boost its definitive implementation in the real world are highlighted. Therefore, this review intends to serve as a guide for those researchers working in the field of soft robotics for Agriculture 4.0, and more specifically, in the design of soft grippers for fruit harvesting robots. Full article
(This article belongs to the Collection Sensors and Robotics for Digital Agriculture)

17 pages, 1189 KiB  
Article
Automated IoT Device Identification Based on Full Packet Information Using Real-Time Network Traffic
by Narges Yousefnezhad, Avleen Malhi and Kary Främling
Sensors 2021, 21(8), 2660; https://doi.org/10.3390/s21082660 - 10 Apr 2021
Cited by 22 | Viewed by 6234
Abstract
In an Internet of Things (IoT) environment, a large volume of potentially confidential data might be leaked from sensors installed everywhere. To ensure the authenticity of such sensitive data, it is important to initially verify the source of the data and its identity. Practically, IoT device identification is the primary step toward a secure IoT system. An appropriate device identification approach can counteract malicious activities such as sending false data that trigger irreparable security issues in vital or emergency situations. Recent research indicates that primary identity metrics such as Internet Protocol (IP) or Media Access Control (MAC) addresses are insufficient due to their instability or easy accessibility. Thus, to identify an IoT device, analysis of the header information of the packets sent by the sensors is imperative. This paper proposes a combination of sensor-measurement and statistical feature sets, in addition to a header feature set, in a classification-based device identification framework. Various machine learning algorithms have been adopted to identify different combinations of these feature sets to provide enhanced security for IoT devices. The proposed method has been evaluated under normal and under-attack circumstances by collecting real-time data from IoT devices connected in a lab setting to show the system's robustness. Full article
(This article belongs to the Special Issue Selected Papers from the Global IoT Summit GIoTS 2020)

23 pages, 4617 KiB  
Article
Multi-Sensor Fault Detection, Identification, Isolation and Health Forecasting for Autonomous Vehicles
by Saeid Safavi, Mohammad Amin Safavi, Hossein Hamid and Saber Fallah
Sensors 2021, 21(7), 2547; https://doi.org/10.3390/s21072547 - 05 Apr 2021
Cited by 41 | Viewed by 8435
Abstract
The primary focus of autonomous driving research is to improve driving accuracy and reliability. While great progress has been made, state-of-the-art algorithms still fail at times, and some of these failures are due to faults in sensors. Such failures may have fatal consequences. It is therefore important that automated cars foresee problems ahead as early as possible. By using real-world data and artificial injection of different types of sensor faults into the healthy signals, data models can be trained using machine learning techniques. This paper proposes a novel fault detection, isolation, identification and prediction (based on detection) architecture for multiple faults in multi-sensor systems, such as autonomous vehicles. Our detection, identification and isolation platform uses two distinct and efficient deep neural network architectures and obtained very impressive performance. Utilizing the sensor fault detection system's output, we then introduce our health index measure and use it to train the health index forecasting network. Full article
(This article belongs to the Special Issue Artificial Intelligence and Internet of Things in Autonomous Vehicles)

25 pages, 2801 KiB  
Review
Smart Wearables for Cardiac Monitoring—Real-World Use beyond Atrial Fibrillation
by David Duncker, Wern Yew Ding, Susan Etheridge, Peter A. Noseworthy, Christian Veltmann, Xiaoxi Yao, T. Jared Bunch and Dhiraj Gupta
Sensors 2021, 21(7), 2539; https://doi.org/10.3390/s21072539 - 05 Apr 2021
Cited by 58 | Viewed by 13493
Abstract
The possibilities and implementation of wearable cardiac monitoring beyond atrial fibrillation are increasing continuously. This review focuses on the real-world use and evolution of these devices for other arrhythmias, cardiovascular diseases and some of their risk factors beyond atrial fibrillation. The management of non-atrial fibrillation arrhythmias represents a broad field for wearable technologies in cardiology, using Holter monitors, event recorders, electrocardiogram (ECG) patches, wristbands and textiles. Implementation in other patient cohorts, such as ST-elevation myocardial infarction (STEMI), heart failure or sleep apnea, is feasible and expanding. In addition to appropriate accuracy, clinical studies must address the validation of clinical pathways, including the appropriate device and the clinical decisions resulting from the surrogate assessed. Full article
(This article belongs to the Special Issue Smart Wearables for Cardiac Monitoring)

14 pages, 4283 KiB  
Article
Ultrasensitive Strain Sensor Based on Pre-Generated Crack Networks Using Ag Nanoparticles/Single-Walled Carbon Nanotube (SWCNT) Hybrid Fillers and a Polyester Woven Elastic Band
by Yelin Ko, Ji-seon Kim, Chi Cuong Vu and Jooyong Kim
Sensors 2021, 21(7), 2531; https://doi.org/10.3390/s21072531 - 04 Apr 2021
Cited by 24 | Viewed by 4595
Abstract
Flexible strain sensors are receiving a great deal of interest owing to their prospective applications in monitoring various human activities. Among various efforts to enhance the sensitivity of strain sensors, pre-crack generation has been well explored for elastic polymers but rarely on textile substrates. Herein, a highly sensitive textile-based strain sensor was fabricated via a dip-coat-stretch approach: a polyester woven elastic band was dipped into ink containing single-walled carbon nanotubes coated with silver paste and pre-stretched to generate prebuilt cracks on the surface. Our sensor demonstrated outstanding sensitivity (a gauge factor of up to 3550 within a strain range of 1.5–5%), high stability and durability, and low hysteresis. The high performance of this sensor is attributable to the excellent elasticity and woven structure of the fabric substrate, effectively generating and propagating the prebuilt cracks. The strain sensor integrated into firefighting gloves detected detailed finger angles and cyclic finger motions, demonstrating its capability for subtle human motion monitoring. It is also noteworthy that this novel strategy is a very quick, straightforward, and scalable method of fabricating strain sensors, which is extremely beneficial for practical applications. Full article
(This article belongs to the Special Issue Textile-Based Sensors: E-textiles, Devices, and Integrated Systems)
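The headline gauge factor quoted above can be reproduced from resistance-strain data as GF = (ΔR/R0)/ε, estimated here by a linear fit; the synthetic data below merely illustrate the calculation and are not the paper's measurements.

```python
# Sketch: gauge factor GF = (dR/R0) / strain from resistance-strain data,
# estimated via a linear fit. The data below are synthetic placeholders.
import numpy as np

def gauge_factor(strain, resistance):
    r0 = resistance[0]
    rel_dr = (resistance - r0) / r0
    gf, _ = np.polyfit(strain, rel_dr, 1)       # slope of dR/R0 vs strain
    return gf

strain = np.linspace(0.015, 0.05, 20)           # 1.5-5 % strain range
resistance = 100.0 * (1.0 + 3000.0 * (strain - strain[0]))  # synthetic crack-driven response
print("GF ≈", gauge_factor(strain, resistance))
```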

23 pages, 1905 KiB  
Review
Molecularly Imprinted Polymer-Based Sensors for Priority Pollutants
by Mashaalah Zarejousheghani, Parvaneh Rahimi, Helko Borsdorf, Stefan Zimmermann and Yvonne Joseph
Sensors 2021, 21(7), 2406; https://doi.org/10.3390/s21072406 - 31 Mar 2021
Cited by 28 | Viewed by 4411
Abstract
Globally, there is growing concern about the health risks of water and air pollution. The U.S. Environmental Protection Agency (EPA) has developed a list of priority pollutants containing 129 different chemical compounds. All of these chemicals are of significant interest due to their serious health and safety issues. Permanent exposure to some concentrations of these chemicals can cause severe and irrecoverable health effects, which can be easily prevented by their early identification. Molecularly imprinted polymers (MIPs) offer great potential for selective adsorption of chemicals from water and air samples. These selective artificial bio(mimetic) receptors are promising candidates for modification of sensors, especially disposable sensors, due to their low-cost, long-term stability, ease of engineering, simplicity of production and their applicability for a wide range of targets. Herein, innovative strategies used to develop MIP-based sensors for EPA priority pollutants will be reviewed. Full article
(This article belongs to the Special Issue Molecularly Imprinted Polymer Sensing Platforms)

20 pages, 8500 KiB  
Article
Assessment of Vineyard Canopy Characteristics from Vigour Maps Obtained Using UAV and Satellite Imagery
by Javier Campos, Francisco García-Ruíz and Emilio Gil
Sensors 2021, 21(7), 2363; https://doi.org/10.3390/s21072363 - 29 Mar 2021
Cited by 28 | Viewed by 4054
Abstract
Canopy characterisation is a key factor for the success and efficiency of the pesticide application process in vineyards. Canopy measurements to determine the optimal volume rate are currently conducted manually, which is time-consuming and limits the adoption of precise methods for volume rate selection. Therefore, automated methods for canopy characterisation must be established using a rapid and reliable technology capable of providing precise information about crop structure. This research provided regression models for obtaining canopy characteristics of vineyards from unmanned aerial vehicle (UAV) and satellite images collected at three significant growth stages. Between 2018 and 2019, a total of 1400 vines were characterised manually and remotely using a UAV and a satellite-based technology. The information collected from the sampled vines was analysed by two different procedures. First, a linear relationship between the manual and remote sensing data was investigated, considering every single vine as a data point. Second, the vines were clustered based on three vigour levels in the parcel, and regression models were fitted to the average values of the ground-based and remote sensing-estimated canopy parameters. Remote sensing could detect the changes in canopy characteristics associated with vegetation growth. The combination of the normalised difference vegetation index (NDVI) and the projected area extracted from the UAV images was correlated with the tree row volume (TRV) when raw point data were used. This relationship was improved and extended to canopy height, width, leaf wall area, and TRV when the data were clustered. Similarly, satellite-based NDVI yielded moderate coefficients of determination for canopy width with raw point data, and for canopy width, height, and TRV when the vines were clustered according to vigour. The proposed approach should facilitate the estimation of canopy characteristics in each area of a field using a cost-effective, simple, and reliable technology, allowing variable rate application in vineyards. Full article
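The NDVI computation and the vigour-level regression described above can be sketched as follows; all arrays are synthetic placeholders standing in for the UAV-derived bands, projected area and ground-measured TRV.

```python
# Sketch: compute NDVI from red/NIR bands and fit a linear regression between
# a remote-sensing predictor (NDVI x projected area) and ground-measured TRV.
# All arrays are synthetic placeholders, not the paper's data.
import numpy as np
from sklearn.linear_model import LinearRegression

def ndvi(nir, red, eps=1e-9):
    return (nir - red) / (nir + red + eps)

rng = np.random.default_rng(0)
nir, red = rng.uniform(0.3, 0.6, 300), rng.uniform(0.05, 0.2, 300)
proj_area = rng.uniform(0.5, 2.0, 300)                   # m^2 per vine (assumed)
predictor = (ndvi(nir, red) * proj_area).reshape(-1, 1)
trv = 2.5 * predictor.ravel() + rng.normal(0, 0.1, 300)  # synthetic ground truth

model = LinearRegression().fit(predictor, trv)
print("R^2 =", model.score(predictor, trv))
```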

15 pages, 3214 KiB  
Article
A Deep-Learning Framework for the Detection of Oil Spills from SAR Data
by Mohamed Shaban, Reem Salim, Hadil Abu Khalifeh, Adel Khelifi, Ahmed Shalaby, Shady El-Mashad, Ali Mahmoud, Mohammed Ghazal and Ayman El-Baz
Sensors 2021, 21(7), 2351; https://doi.org/10.3390/s21072351 - 28 Mar 2021
Cited by 46 | Viewed by 5553
Abstract
Oil leaks onto water surfaces from big tankers, ships, and pipeline cracks cause considerable damage and harm to the marine environment. Synthetic Aperture Radar (SAR) images provide an approximate representation of target scenes, including sea and land surfaces, ships, oil spills, and look-alikes. Detection and segmentation of oil spills from SAR images are crucial to aid in leak cleanups and protecting the environment. This paper introduces a two-stage deep-learning framework for the identification of oil spill occurrences based on a highly unbalanced dataset. The first stage classifies patches based on the percentage of oil spill pixels using a novel 23-layer Convolutional Neural Network, while the second stage performs semantic segmentation using a five-stage U-Net structure. The generalized Dice loss is minimized to account for the reduced oil spill representation in the patches. The results of this study are very promising, providing improved precision and Dice score compared to related work. Full article
(This article belongs to the Section Remote Sensors)
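The generalized Dice loss mentioned above down-weights the dominant background class by the inverse square of each class's volume; a minimal PyTorch sketch is shown below, with shapes and the smoothing constant chosen for illustration.

```python
# Sketch of the generalized Dice loss: each class is re-weighted by the inverse
# square of its volume, so the small oil-spill class is not swamped by the
# background pixels. Shapes and the smoothing constant are illustrative.
import torch

def generalized_dice_loss(probs, target_onehot, eps=1e-6):
    """probs, target_onehot: (batch, n_classes, H, W); probs sum to 1 over classes."""
    dims = (0, 2, 3)                                   # sum over batch and pixels
    w = 1.0 / (target_onehot.sum(dim=dims) ** 2 + eps) # inverse-volume class weights
    intersect = (probs * target_onehot).sum(dim=dims)
    union = (probs + target_onehot).sum(dim=dims)
    return 1.0 - 2.0 * (w * intersect).sum() / ((w * union).sum() + eps)

logits = torch.randn(2, 2, 64, 64)                     # U-Net output (background, oil)
probs = torch.softmax(logits, dim=1)
target = torch.randint(0, 2, (2, 64, 64))
onehot = torch.nn.functional.one_hot(target, 2).permute(0, 3, 1, 2).float()
print(generalized_dice_loss(probs, onehot).item())
```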

18 pages, 4119 KiB  
Article
Towards Continuous Camera-Based Respiration Monitoring in Infants
by Ilde Lorato, Sander Stuijk, Mohammed Meftah, Deedee Kommers, Peter Andriessen, Carola van Pul and Gerard de Haan
Sensors 2021, 21(7), 2268; https://doi.org/10.3390/s21072268 - 24 Mar 2021
Cited by 24 | Viewed by 3013
Abstract
Aiming at continuous unobtrusive respiration monitoring, motion robustness is paramount. However, some types of motion can completely hide the respiration information, and detection of these events is required to avoid incorrect rate estimations. Therefore, this work proposes a motion detector optimized to specifically detect severe motion of infants, combined with a respiration rate detection strategy based on automatic pixel selection, which proved to be robust to motion of the infants involving the head and limbs. A dataset including both thermal and RGB (Red Green Blue) videos was used, amounting to a total of 43 h acquired from 17 infants. The method was successfully applied to both RGB and thermal videos and compared to the chest impedance signal. The Mean Absolute Error (MAE) in segments where some motion is present was 1.16 and 1.97 breaths/min higher than the MAE in the ideal moments where the infants were still, for the testing and validation sets, respectively. Overall, the average MAEs on the testing and validation sets are 3.31 breaths/min and 5.36 breaths/min, using 64.00% and 69.65% of the included video segments (segments containing events such as interventions were excluded based on a manual annotation), respectively. Moreover, we highlight challenges that need to be overcome for continuous camera-based respiration monitoring. The method can be applied to different camera modalities, does not require skin visibility, and is robust to some motion of the infants. Full article
(This article belongs to the Special Issue Contactless Sensors for Healthcare)

26 pages, 4991 KiB  
Review
An Outlook of Recent Advances in Chemiresistive Sensor-Based Electronic Nose Systems for Food Quality and Environmental Monitoring
by Alishba T. John, Krishnan Murugappan, David R. Nisbet and Antonio Tricoli
Sensors 2021, 21(7), 2271; https://doi.org/10.3390/s21072271 - 24 Mar 2021
Cited by 51 | Viewed by 6787
Abstract
An electronic nose (Enose) relies on an array of partially selective chemical gas sensors to identify various chemical compounds, including volatile organic compounds in gas mixtures. Enoses have been proposed as a portable, low-cost technology to analyse complex odours in the food industry and for environmental monitoring. Recent advances in nanofabrication, sensor and microcircuitry design, neural networks, and system integration have considerably improved the efficacy of Enose devices. Here, we highlight different types of semiconducting metal oxides, their sensing mechanisms and their integration into Enose systems, including the different pattern recognition techniques employed for data analysis. We offer a critical perspective on state-of-the-art commercial and custom-made Enoses, identifying current challenges for the broader uptake and use of Enose systems in a variety of applications. Full article
(This article belongs to the Special Issue Metal Oxides Sensors: Innovation and Quality of Life)
