Editor’s Choice Articles

Editor’s Choice articles are based on recommendations by the scientific editors of MDPI journals from around the world. Editors select a small number of articles recently published in the journal that they believe will be particularly interesting to readers, or important in the respective research area. The aim is to provide a snapshot of some of the most exciting work published in the various research areas of the journal.

21 pages, 6148 KiB  
Article
A Clinically Interpretable Computer-Vision Based Method for Quantifying Gait in Parkinson’s Disease
by Samuel Rupprechter, Gareth Morinan, Yuwei Peng, Thomas Foltynie, Krista Sibley, Rimona S. Weil, Louise-Ann Leyland, Fahd Baig, Francesca Morgante, Ro’ee Gilron, Robert Wilt, Philip Starr, Robert A. Hauser and Jonathan O’Keeffe
Sensors 2021, 21(16), 5437; https://doi.org/10.3390/s21165437 - 12 Aug 2021
Cited by 28 | Viewed by 5673
Abstract
Gait is a core motor function and is impaired in numerous neurological diseases, including Parkinson’s disease (PD). Treatment changes in PD are frequently driven by gait assessments in the clinic, commonly rated as part of the Movement Disorder Society (MDS) Unified PD Rating Scale (UPDRS) assessment (item 3.10). We proposed and evaluated a novel approach for estimating severity of gait impairment in Parkinson’s disease using a computer vision-based methodology. The system we developed can be used to obtain an estimate for a rating to catch potential errors, or to gain an initial rating in the absence of a trained clinician—for example, during remote home assessments. Videos (n=729) were collected as part of routine MDS-UPDRS gait assessments of Parkinson’s patients, and a deep learning library was used to extract body key-point coordinates for each frame. Data were recorded at five clinical sites using commercially available mobile phones or tablets, and had an associated severity rating from a trained clinician. Six features were calculated from time-series signals of the extracted key-points. These features characterized key aspects of the movement including speed (step frequency, estimated using a novel Gamma-Poisson Bayesian model), arm swing, postural control and smoothness (or roughness) of movement. An ordinal random forest classification model (with one class for each of the possible ratings) was trained and evaluated using 10-fold cross validation. Step frequency point estimates from the Bayesian model were highly correlated with manually labelled step frequencies of 606 video clips showing patients walking towards or away from the camera (Pearson’s r=0.80, p<0.001). Our classifier achieved a balanced accuracy of 50% (chance = 25%). Estimated UPDRS ratings were within one of the clinicians’ ratings in 95% of cases. There was a significant correlation between clinician labels and model estimates (Spearman’s ρ=0.52, p<0.001). We show how the interpretability of the feature values could be used by clinicians to support their decision-making and provide insight into the model’s objective UPDRS rating estimation. The severity of gait impairment in Parkinson’s disease can be estimated using a single patient video, recorded using a consumer mobile device and within standard clinical settings; i.e., videos were recorded in various hospital hallways and offices rather than gait laboratories. This approach can support clinicians during routine assessments by providing an objective rating (or second opinion), and has the potential to be used for remote home assessments, which would allow for more frequent monitoring.
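The abstract does not spell out the Gamma-Poisson step-frequency model, so the following is only a minimal illustrative sketch of a conjugate Gamma-Poisson update for a step rate, assuming steps detected in fixed-length windows are Poisson-distributed; the prior parameters and window length are hypothetical, not the authors' values.

```python
import numpy as np

def step_frequency_posterior(step_counts, window_s, alpha0=2.0, beta0=1.0):
    """Conjugate Gamma-Poisson update for a step rate (steps per second).

    step_counts : step counts observed in consecutive windows
    window_s    : duration of each window in seconds
    alpha0/beta0: Gamma prior (shape, rate) -- illustrative values only
    Returns the posterior Gamma parameters and the posterior mean rate.
    """
    counts = np.asarray(step_counts, dtype=float)
    alpha_post = alpha0 + counts.sum()           # shape grows with total observed steps
    beta_post = beta0 + window_s * len(counts)   # rate grows with total observation time
    posterior_mean_hz = alpha_post / beta_post   # point estimate of step frequency (Hz)
    return alpha_post, beta_post, posterior_mean_hz

# Example: steps detected from key-point peaks in three 2-second windows
alpha, beta, f_hat = step_frequency_posterior([4, 3, 4], window_s=2.0)
print(f"posterior mean step frequency ~ {f_hat:.2f} Hz")
```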

17 pages, 2786 KiB  
Article
Non-Invasive Detection and Staging of Colorectal Cancer Using a Portable Electronic Nose
by Heena Tyagi, Emma Daulton, Ayman S. Bannaga, Ramesh P. Arasaradnam and James A. Covington
Sensors 2021, 21(16), 5440; https://doi.org/10.3390/s21165440 - 12 Aug 2021
Cited by 23 | Viewed by 4440
Abstract
Electronic noses (e-noses) offer potential for the detection of cancer in its early stages. The ability to analyse samples in real time, at a low cost, using easy-to-use and portable equipment gives e-noses advantages over other technologies, such as Gas Chromatography-Mass Spectrometry (GC-MS). For diseases such as cancer, with its high mortality, a technology that can provide fast results for use in routine clinical applications is important. Colorectal cancer (CRC) is among the most frequently occurring cancers and has high mortality rates if diagnosed late. In our study, we investigated the use of a portable electronic nose (PEN3), with further analysis using GC-TOF-MS, for the analysis of gases and volatile organic compounds (VOCs) to profile the urinary metabolome of colorectal cancer. We also compared the different cancer stages with non-cancers using the PEN3 and GC-TOF-MS. Results obtained from the PEN3 and GC-TOF-MS demonstrated high accuracy for the separation of CRC and non-cancer. The PEN3 separated the CRC group from the non-cancer group with an AUC (Area Under the Curve) of 0.81. We used data from GC-TOF-MS to obtain a VOC profile for CRC, which identified 23 potential biomarker VOCs. Thus, the PEN3 and GC-TOF-MS were found to successfully separate the cancer group from the non-cancer group.

16 pages, 3517 KiB  
Article
The Algorithm of Determining an Anti-Collision Manoeuvre Trajectory Based on the Interpolation of Ship’s State Vector
by Piotr Borkowski, Zbigniew Pietrzykowski and Janusz Magaj
Sensors 2021, 21(16), 5332; https://doi.org/10.3390/s21165332 - 6 Aug 2021
Cited by 16 | Viewed by 2988
Abstract
The determination of a ship’s safe trajectory in collision situations at sea is one of the basic functions in the autonomous navigation of ships. While planning a collision-avoiding manoeuvre in open waters, the navigator has to take into account the ship’s manoeuvrability and hydrometeorological conditions. To this end, the ship’s state vector is predicted—position coordinates, speed, heading, and other movement parameters—at fixed time intervals for different steering scenarios. One possible way to solve this problem is a method using the interpolation of the ship’s state vector based on data from measurements conducted during the ship’s sea trials. This article presents an interpolating function defined within any convex quadrilateral, with the interpolation nodes being its vertices. The proposed function interpolates the parameters of the ship’s state vector at a specified point of the plane, where the values in the interpolation nodes are data obtained from measurements performed during a series of turning circle tests conducted for different starting conditions and various rudder settings. The proposed method of interpolation was used in the process of determining the anti-collision manoeuvre trajectory. The mechanism is based on the principles of a modified Dijkstra algorithm, in which the graph takes the form of a regular network of points. The transition between the graph vertices depends on the level of safe passing of other objects and the degree of departure from the planned route. The determined shortest path between the starting vertex and the target vertex is the optimal solution for the discrete space of solutions. The algorithm for determining the trajectory of the anti-collision manoeuvre was implemented in autonomous sea-going vessel technology. This article presents the results of laboratory tests and tests conducted under quasi-real conditions using physical ship models. The experiments confirmed the effective operation of the developed algorithm for determining the anti-collision manoeuvre trajectory in the technological framework of autonomous ship navigation.
(This article belongs to the Special Issue Sensors and Sensor's Fusion in Autonomous Vehicles)
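As an illustration of the graph-search step described above (not the authors' implementation), the following sketch runs a Dijkstra-style search over a regular grid of waypoints, with a transition cost that penalises passing close to other objects and departing from the planned route; the grid, weights, and safe-distance threshold are hypothetical.

```python
import heapq
import math

def plan_route(grid_size, start, goal, obstacles, planned_route,
               safe_dist=2.0, collision_w=10.0, deviation_w=0.5):
    """Dijkstra-style search on a regular grid of waypoints.

    The transition cost combines travelled distance, a penalty for passing
    close to other objects, and a penalty for departing from the originally
    planned route -- the weights here are illustrative only."""
    def node_penalty(node):
        c = 0.0
        for obs in obstacles:                          # penalise unsafe passing distance
            if math.dist(node, obs) < safe_dist:
                c += collision_w
        c += deviation_w * min(math.dist(node, p) for p in planned_route)
        return c

    dist, prev = {start: 0.0}, {}
    pq = [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            break
        if d > dist.get(u, math.inf):
            continue
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                if dx == dy == 0:
                    continue
                v = (u[0] + dx, u[1] + dy)
                if not (0 <= v[0] < grid_size and 0 <= v[1] < grid_size):
                    continue
                nd = d + math.dist(u, v) + node_penalty(v)
                if nd < dist.get(v, math.inf):
                    dist[v], prev[v] = nd, u
                    heapq.heappush(pq, (nd, v))
    path, node = [], goal                              # walk back from goal to start
    while node != start:
        path.append(node)
        node = prev[node]
    return [start] + path[::-1]

route = plan_route(20, (0, 0), (19, 19), obstacles=[(10, 10)],
                   planned_route=[(i, i) for i in range(20)])
```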

30 pages, 2696 KiB  
Review
Surface Plasmonic Sensors: Sensing Mechanism and Recent Applications
by Qilin Duan, Yineng Liu, Shanshan Chang, Huanyang Chen and Jin-hui Chen
Sensors 2021, 21(16), 5262; https://doi.org/10.3390/s21165262 - 4 Aug 2021
Cited by 68 | Viewed by 13177
Abstract
Surface plasmonic sensors have been widely used in biology, chemistry, and environmental monitoring. These sensors exhibit extraordinary sensitivity based on surface plasmon resonance (SPR) or localized surface plasmon resonance (LSPR) effects, and they have found commercial applications. In this review, we present recent progress in the field of surface plasmonic sensors, mainly in the configurations of planar metastructures and optical-fiber waveguides. On the metastructure platform, optical sensors based on LSPR, hyperbolic dispersion, Fano resonance, and two-dimensional (2D) material integration are introduced. Optical-fiber sensors integrated with LSPR/SPR structures and 2D materials are summarized. We also introduce recent advances in quantum plasmonic sensing beyond the classical shot-noise limit. The challenges and opportunities in this field are discussed.
(This article belongs to the Special Issue Surface Plasmon Sensors)

27 pages, 6343 KiB  
Article
Attention-Based Multi-Scale Convolutional Neural Network (A+MCNN) for Multi-Class Classification in Road Images
by Elham Eslami and Hae-Bum Yun
Sensors 2021, 21(15), 5137; https://doi.org/10.3390/s21155137 - 29 Jul 2021
Cited by 31 | Viewed by 6691
Abstract
Automated pavement distress recognition is a key step in smart infrastructure assessment. Advances in deep learning and computer vision have improved the automated recognition of pavement distresses in road surface images. This task remains challenging due to the high variation of defects in shape and size, demanding better incorporation of contextual information into deep networks. In this paper, we show that an attention-based multi-scale convolutional neural network (A+MCNN) improves the automated classification of common distress and non-distress objects in pavement images by (i) encoding contextual information through multi-scale input tiles and (ii) employing a mid-fusion approach with an attention module for heterogeneous image contexts from different input scales. A+MCNN is trained and tested with four distress classes (crack, crack seal, patch, pothole), five non-distress classes (joint, marker, manhole cover, curbing, shoulder), and two pavement classes (asphalt, concrete). A+MCNN is compared with four deep classifiers that are widely used in transportation applications and a generic CNN classifier (as the control model). The results show that A+MCNN consistently outperforms the baselines by 1–26% on average in terms of the F-score. A comprehensive discussion is also presented regarding how these classifiers perform differently on different road objects, which has rarely been addressed in the existing literature.
(This article belongs to the Special Issue Artificial Intelligence and Their Applications in Smart Cities)

16 pages, 5957 KiB  
Article
Non-Contact Respiratory Monitoring Using an RGB Camera for Real-World Applications
by Chiara Romano, Emiliano Schena, Sergio Silvestri and Carlo Massaroni
Sensors 2021, 21(15), 5126; https://doi.org/10.3390/s21155126 - 29 Jul 2021
Cited by 28 | Viewed by 3705
Abstract
Respiratory monitoring is receiving growing interest in different fields of use, ranging from healthcare to occupational settings. Only recently have non-contact measuring systems been developed to measure the respiratory rate (fR) over time, even in unconstrained environments. Promising methods rely on the analysis of video-frame features recorded by cameras. In this work, a low-cost and unobtrusive measuring system for respiratory pattern monitoring, based on the analysis of RGB images recorded by a consumer-grade camera, is proposed. The system allows (i) the automatized tracking of the chest movements caused by breathing, (ii) the extraction of the breathing signal from images with methods based on optical flow (FO) and RGB analysis, (iii) the elimination of breathing-unrelated events from the signal, (iv) the identification of possible apneas, and (v) the calculation of the fR value every second. Unlike most of the work in the literature, the performance of the system was tested in an unstructured environment, considering user-camera distance and user posture as influencing factors. A total of 24 healthy volunteers were enrolled for the validation tests. Better performance was obtained when the users were in a sitting position. The FO method performed best in all conditions. In the fR range of 6 to 60 breaths/min (bpm), the FO method allows measuring fR values with a bias of −0.03 ± 1.38 bpm and −0.02 ± 1.92 bpm when compared to a reference wearable system, with the user at 2 and 0.5 m from the camera, respectively.
(This article belongs to the Special Issue Wearable and Unobtrusive Technologies for Healthcare Monitoring)
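The per-second fR computation is not detailed in the abstract; the sketch below shows one common way to obtain a respiratory rate from a breathing signal extracted from video, by locating the dominant spectral peak in the 6–60 bpm band. The frame rate and the synthetic signal are illustrative assumptions.

```python
import numpy as np

def respiratory_rate_bpm(breathing_signal, fs, f_lo=0.1, f_hi=1.0):
    """Estimate respiratory rate as the dominant spectral peak of a breathing
    signal (e.g., a chest-motion trace extracted from video frames),
    restricted to the 6-60 breaths/min band. fs is the frame rate in Hz."""
    x = np.asarray(breathing_signal, dtype=float)
    x = x - x.mean()                                   # remove the DC offset
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    band = (freqs >= f_lo) & (freqs <= f_hi)           # 0.1-1.0 Hz ~ 6-60 bpm
    f_peak = freqs[band][np.argmax(spectrum[band])]
    return 60.0 * f_peak                               # breaths per minute

# Example: a synthetic 0.25 Hz (15 bpm) breathing trace sampled at 30 fps
t = np.arange(0, 60, 1 / 30)
print(respiratory_rate_bpm(np.sin(2 * np.pi * 0.25 * t), fs=30))
```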

28 pages, 9839 KiB  
Article
Monitoring Soil and Ambient Parameters in the IoT Precision Agriculture Scenario: An Original Modeling Approach Dedicated to Low-Cost Soil Water Content Sensors
by Pisana Placidi, Renato Morbidelli, Diego Fortunati, Nicola Papini, Francesco Gobbi and Andrea Scorzoni
Sensors 2021, 21(15), 5110; https://doi.org/10.3390/s21155110 - 28 Jul 2021
Cited by 70 | Viewed by 7715
Abstract
A low-power wireless sensor network based on the LoRaWAN protocol was designed with a focus on low-cost IoT Precision Agriculture applications, such as greenhouse sensing and actuation. All subsystems used in this research were designed using commercial components and free or open-source software libraries. The whole system was implemented to demonstrate the feasibility of a modular system built with cheap off-the-shelf components, including sensors. The experimental outputs were collected and stored in a database managed by a virtual machine running in a cloud service. The collected data can be visualized in real time by the user with a graphical interface. The reliability of the whole system was proven during a continuous experiment with two natural soils, Loamy Sand and Silty Loam. Regarding soil parameters, the system performance was compared with that of a reference sensor from Sentek. Measurements highlighted good agreement for temperature, within the supposed accuracy of the adopted sensors, and a non-constant sensitivity for the low-cost volumetric water content (VWC) sensor. Finally, for the low-cost VWC sensor, we implemented a novel procedure to optimize the parameters of the non-linear fitting equation correlating its analog voltage output with the reference VWC.
(This article belongs to the Special Issue Wireless Sensing and Networking for the Internet of Things)
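The fitting equation for the VWC sensor is not given in the abstract, so the following sketch only illustrates the general idea of optimizing the parameters of a non-linear calibration curve relating analog voltage to a reference VWC; the power-law form, starting values, and calibration pairs are hypothetical, not the paper's equation or data.

```python
import numpy as np
from scipy.optimize import curve_fit

def vwc_model(voltage, a, b, c):
    """Illustrative non-linear calibration: VWC as a power law of the sensor's
    analog voltage output. The actual equation used in the paper may differ."""
    return a * np.power(voltage, b) + c

# Hypothetical calibration pairs: (sensor voltage [V], reference VWC [m^3/m^3])
v = np.array([0.9, 1.2, 1.5, 1.9, 2.3, 2.6])
vwc_ref = np.array([0.05, 0.10, 0.17, 0.25, 0.33, 0.38])

params, _ = curve_fit(vwc_model, v, vwc_ref, p0=(0.1, 1.5, 0.0))
print("fitted parameters:", params)
print("predicted VWC at 2.0 V:", vwc_model(2.0, *params))
```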

13 pages, 3051 KiB  
Article
Printed Circuit Board Defect Detection Using Deep Learning via A Skip-Connected Convolutional Autoencoder
by Jungsuk Kim, Jungbeom Ko, Hojong Choi and Hyunchul Kim
Sensors 2021, 21(15), 4968; https://doi.org/10.3390/s21154968 - 21 Jul 2021
Cited by 84 | Viewed by 13573
Abstract
As technology evolves, more components are integrated into printed circuit boards (PCBs) and PCB layouts become more complex. Because small defects on a signal trace can cause significant damage to the system, PCB surface inspection is one of the most important quality control processes. Owing to the limitations of manual inspection, significant efforts have been made to automate the inspection by utilizing high-resolution CCD or CMOS sensors. Despite the advanced sensor technology, setting pass/fail criteria based on small numbers of failure samples has always been challenging in traditional machine vision approaches. To overcome these problems, we propose an advanced PCB inspection system based on a skip-connected convolutional autoencoder. The deep autoencoder model was trained to decode the original non-defect images from the defect images. The decoded images were then compared with the input images to identify the defect locations. To overcome the small and imbalanced dataset in the early manufacturing stage, we applied appropriate image augmentation to improve the model training performance. The experimental results reveal that a simple unsupervised autoencoder model delivers promising performance, with a detection rate of up to 98% and a false pass rate below 1.7% on the test data, which contained 3900 defect and non-defect images.
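A minimal sketch of the reconstruct-and-compare idea described above, with an illustrative skip-connected convolutional autoencoder in PyTorch; the layer sizes, threshold, and input shape are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class SkipAutoencoder(nn.Module):
    """Minimal skip-connected convolutional autoencoder: the decoder receives
    an encoder feature map via a skip connection. Layer sizes are illustrative."""
    def __init__(self):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU())
        self.enc2 = nn.Sequential(nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU())
        self.dec1 = nn.Sequential(nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU())
        self.dec2 = nn.ConvTranspose2d(32, 1, 2, stride=2)  # 32 = 16 decoded + 16 skipped

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(e1)
        d1 = self.dec1(e2)
        return torch.sigmoid(self.dec2(torch.cat([d1, e1], dim=1)))  # skip connection

def defect_map(model, image, threshold=0.1):
    """Reconstruct the (possibly defective) image and flag pixels whose
    reconstruction error exceeds a threshold as candidate defect locations."""
    with torch.no_grad():
        recon = model(image)
    return (image - recon).abs() > threshold

model = SkipAutoencoder()                    # untrained, for shape illustration only
mask = defect_map(model, torch.rand(1, 1, 64, 64))
```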

19 pages, 1712 KiB  
Article
Regulatory, Legal, and Market Aspects of Smart Wearables for Cardiac Monitoring
by Jan Benedikt Brönneke, Jennifer Müller, Konstantinos Mouratis, Julia Hagen and Ariel Dora Stern
Sensors 2021, 21(14), 4937; https://doi.org/10.3390/s21144937 - 20 Jul 2021
Cited by 18 | Viewed by 8008
Abstract
In the area of cardiac monitoring, the use of digitally driven technologies is on the rise. While the development of medical products is advancing rapidly, allowing for new use-cases in cardiac monitoring and other areas, regulatory and legal requirements that govern market access are often evolving slowly, sometimes creating market barriers. This article gives a brief overview of the existing clinical studies regarding the use of smart wearables in cardiac monitoring and provides insight into the main regulatory and legal aspects that need to be considered when such products are intended to be used in a health care setting. Based on this brief overview, the article elaborates on the specific requirements in the main areas of authorization/certification and reimbursement/compensation, as well as data protection and data security. Three case studies are presented as examples of specific market access procedures: the USA, Germany, and Belgium. This article concludes that, despite the differences in specific requirements, market access pathways in most countries are characterized by a number of similarities, which should be considered early on in product development. The article also elaborates on how regulatory and legal requirements are currently being adapted for digitally driven wearables and proposes an ongoing evolution of these requirements to facilitate market access for beneficial medical technology in the future.
(This article belongs to the Special Issue Smart Wearables for Cardiac Monitoring)

32 pages, 19990 KiB  
Article
Real Time Pear Fruit Detection and Counting Using YOLOv4 Models and Deep SORT
by Addie Ira Borja Parico and Tofael Ahamed
Sensors 2021, 21(14), 4803; https://doi.org/10.3390/s21144803 - 14 Jul 2021
Cited by 98 | Viewed by 14131
Abstract
This study aimed to produce a robust real-time pear fruit counter for mobile applications using only RGB data, variants of the state-of-the-art object detection model YOLOv4, and the multiple object-tracking algorithm Deep SORT. This study also provided a systematic and pragmatic methodology for choosing the most suitable model for a desired application in the agricultural sciences. In terms of accuracy, YOLOv4-CSP was observed to be the optimal model, with an AP@0.50 of 98%. In terms of speed and computational cost, YOLOv4-tiny was found to be the ideal model, with a speed of more than 50 FPS and FLOPS of 6.8–14.5. Considering the balance of accuracy, speed, and computational cost, YOLOv4 was found to be the most suitable, with the highest accuracy metrics while satisfying a real-time speed of at least 24 FPS. Between the two methods of counting with Deep SORT, the unique ID method was found to be more reliable, with an F1 count score of 87.85%. This was because YOLOv4 had a very low false negative rate in detecting pear fruits. The ROI line method is more restrictive in nature, but due to flickering in detection it was not able to count some pears despite their being detected.
(This article belongs to the Section Remote Sensors)
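A minimal sketch of the unique-ID counting rule mentioned above: pears are counted as the number of distinct Deep SORT track IDs observed over the video. The per-frame data format used here is an assumption for illustration only.

```python
def count_unique_ids(tracked_frames):
    """Count fruit as the number of unique track IDs seen across a video.
    `tracked_frames` is a list of per-frame lists of (track_id, bbox) tuples,
    as might be produced by a detector followed by Deep SORT."""
    seen = set()
    for detections in tracked_frames:
        for track_id, _bbox in detections:
            seen.add(track_id)
    return len(seen)

# Example: three frames in which track 1 and track 2 each appear twice
frames = [
    [(1, (10, 10, 40, 40))],
    [(1, (12, 11, 42, 41)), (2, (80, 60, 110, 95))],
    [(2, (83, 62, 112, 97)), (5, (200, 40, 230, 70))],
]
print(count_unique_ids(frames))   # -> 3
```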

42 pages, 2541 KiB  
Review
Wearable Devices for Environmental Monitoring in the Built Environment: A Systematic Review
by Francesco Salamone, Massimiliano Masullo and Sergio Sibilio
Sensors 2021, 21(14), 4727; https://doi.org/10.3390/s21144727 - 10 Jul 2021
Cited by 35 | Viewed by 6570
Abstract
The so-called Internet of Things (IoT), which is rapidly increasing the number of network-connected and interconnected objects, could have a far-reaching impact in identifying the link between human health, well-being, and environmental concerns. In line with the IoT concept, many commercial wearables have been introduced in recent years, which differ from the usual devices in that they use the term “smart” alongside the terms “watches”, “glasses”, and “jewellery”. Commercially available wearables aim to enhance smartphone functionality by enabling payment for commercial items or monitoring physical activity. However, what is the trend of scientific production on the concept of wearables regarding environmental monitoring issues? What are the main areas of interest covered by scientific production? What are the main findings and limitations of the solutions developed in this field? The methodology used to answer the above questions is based on a systematic review. The data were acquired following a reproducible methodology. The main result is that, among the thermal, visual, acoustic, and air quality environmental factors, air quality is the one most often considered when using wearables, although usually in combination with some of the others. Another relevant finding is that, of the acquired studies, only one shared its wearable as an open-source device; it will probably be necessary to encourage researchers to consider open source as a means to promote the scalability and proliferation of new wearables customized to cover different domains.
(This article belongs to the Section Wearables)

23 pages, 3522 KiB  
Article
TIP4.0: Industrial Internet of Things Platform for Predictive Maintenance
by Carlos Resende, Duarte Folgado, João Oliveira, Bernardo Franco, Waldir Moreira, Antonio Oliveira-Jr, Armando Cavaleiro and Ricardo Carvalho
Sensors 2021, 21(14), 4676; https://doi.org/10.3390/s21144676 - 8 Jul 2021
Cited by 33 | Viewed by 6119
Abstract
Industry 4.0, allied with the growth and democratization of Artificial Intelligence (AI) and the advent of the IoT, is paving the way for the complete digitization and automation of industrial processes. Maintenance is one of these processes, where the introduction of a predictive approach, as opposed to traditional techniques, is expected to considerably improve industry maintenance strategies, with gains such as reduced downtime, improved equipment effectiveness, lower maintenance costs, increased return on assets, risk mitigation, and, ultimately, profitable growth. With predictive maintenance, dedicated sensors monitor the critical points of assets. The sensor data then feed into machine learning algorithms that can infer the asset health status and inform operators and decision-makers. With this in mind, in this paper, we present TIP4.0, a platform for predictive maintenance based on a modular software solution for edge computing gateways. TIP4.0 is built around Yocto, which makes it readily available and compliant with Commercial Off-the-Shelf (COTS) or proprietary hardware. TIP4.0 was conceived with an industry mindset, with communication interfaces that allow it to serve sensor networks on the shop floor and a modular software architecture that allows it to be easily adjusted to new deployment scenarios. To showcase its potential, the TIP4.0 platform was validated on COTS hardware, and we considered a public dataset for the simulation of predictive maintenance scenarios. We used a Convolutional Neural Network (CNN) architecture, which provided competitive performance with respect to state-of-the-art approaches, while being approximately four times and two times faster than uncompressed model inference on the Central Processing Unit (CPU) and the Graphics Processing Unit, respectively. These results highlight the capabilities of distributed large-scale edge computing in industrial scenarios.
(This article belongs to the Section Internet of Things)

26 pages, 2381 KiB  
Review
Recent Advances in Enzymatic and Non-Enzymatic Electrochemical Glucose Sensing
by Mohamed H. Hassan, Cian Vyas, Bruce Grieve and Paulo Bartolo
Sensors 2021, 21(14), 4672; https://doi.org/10.3390/s21144672 - 8 Jul 2021
Cited by 149 | Viewed by 11631
Abstract
The detection of glucose is crucial in the management of diabetes and other medical conditions, and it is also crucial in a wide range of industries such as food and beverages. The development of glucose sensors in the past century has allowed diabetic patients to effectively manage their disease and has saved lives. First-generation glucose sensors have considerable limitations in sensitivity and selectivity, which has spurred the development of more advanced approaches for both the medical and industrial sectors. The wide range of application areas has resulted in a range of materials and fabrication techniques to produce novel glucose sensors that have higher sensitivity and selectivity, lower cost, and are simpler to use. A major focus has been on the development of enzymatic electrochemical sensors, typically using glucose oxidase. However, non-enzymatic approaches using the direct electrochemistry of glucose on noble metals are now a viable approach in glucose biosensor design. This review discusses the mechanisms of electrochemical glucose sensing, with a focus on the different generations of enzymatic-based sensors and their recent advances, and provides an overview of the next generation of non-enzymatic sensors. Advancements in manufacturing techniques and materials are key in propelling the field of glucose sensing; however, significant limitations remain, which are highlighted in this review and require addressing to obtain a more stable, sensitive, selective, cost-efficient, real-time glucose sensor.
(This article belongs to the Special Issue Electrochemical (Bio)sensors for Biomedical Applications)

15 pages, 2029 KiB  
Article
A Vision-Based Social Distancing and Critical Density Detection System for COVID-19
by Dongfang Yang, Ekim Yurtsever, Vishnu Renganathan, Keith A. Redmill and Ümit Özgüner
Sensors 2021, 21(13), 4608; https://doi.org/10.3390/s21134608 - 5 Jul 2021
Cited by 70 | Viewed by 9511
Abstract
Social distancing (SD) is an effective measure to prevent the spread of the infectious Coronavirus Disease 2019 (COVID-19). However, a lack of spatial awareness may cause unintentional violations of this new measure. Against this backdrop, we propose an active surveillance system to slow the spread of COVID-19 by warning individuals in a region of interest. Our contribution is twofold. First, we introduce a vision-based real-time system that can detect SD violations and send non-intrusive audio-visual cues using state-of-the-art deep-learning models. Second, we define a novel critical social density value and show that the chance of SD violation occurrence can be held near zero if the pedestrian density is kept under this value. The proposed system is also ethically fair: it neither records data nor targets individuals, and no human supervisor is present during the operation. The proposed system was evaluated on real-world datasets.
(This article belongs to the Special Issue Machine Learning in Wireless Sensor Networks and Internet of Things)
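A minimal sketch of the two quantities the system reasons about, assuming pedestrian positions have already been projected onto the ground plane (e.g., via a camera homography); the distance threshold and region-of-interest area are illustrative, not the paper's critical density value.

```python
import numpy as np
from itertools import combinations

def count_sd_violations(positions, min_dist=2.0):
    """Count pairs of pedestrians closer than the social-distancing threshold.
    `positions` are ground-plane coordinates in metres."""
    pts = np.asarray(positions, dtype=float)
    return sum(1 for p, q in combinations(pts, 2)
               if np.linalg.norm(p - q) < min_dist)

def pedestrian_density(positions, roi_area_m2):
    """People per square metre in the region of interest; keeping this below
    a critical value should keep the violation count near zero."""
    return len(positions) / roi_area_m2

people = [(0.0, 0.0), (1.2, 0.5), (6.0, 4.0)]
print(count_sd_violations(people), pedestrian_density(people, roi_area_m2=50.0))
```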

17 pages, 8294 KiB  
Article
Filtering Biomechanical Signals in Movement Analysis
by Francesco Crenna, Giovanni Battista Rossi and Marta Berardengo
Sensors 2021, 21(13), 4580; https://doi.org/10.3390/s21134580 - 4 Jul 2021
Cited by 28 | Viewed by 5014
Abstract
Biomechanical analysis of human movement is based on dynamic measurements of reference points on the subject’s body and orientation measurements of body segments. The collected data include position measurements in three-dimensional space. Signal enhancement by proper filtering is often recommended. Velocity and acceleration signals must be obtained from the position/angular measurement records, which requires numerical processing effort. In this paper, we propose a procedure for the comparative study of filtering methods, based on a set of measurement-uncertainty-related parameters and on simulated and experimental signals. The final aim is to propose guidelines to optimize dynamic biomechanical measurements, considering the measurement uncertainty contribution due to the processing method. The performance of the considered methods is examined and compared with an analytical signal, considering both stationary and transient conditions. Finally, four experimental test cases are evaluated at the best filtering conditions in terms of measurement uncertainty contributions.
(This article belongs to the Special Issue Sensors and Methods for Dynamic Measurement)
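As an illustration of the processing chain discussed above (filtering a position record and then differentiating it numerically to obtain velocity and acceleration), the sketch below uses a zero-phase Butterworth filter; the filter type, order, and cut-off are illustrative choices, not the paper's recommendation.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def filter_and_differentiate(position, fs, cutoff_hz=6.0, order=4):
    """Zero-phase low-pass filtering of a position record followed by numerical
    differentiation to obtain velocity and acceleration."""
    b, a = butter(order, cutoff_hz / (fs / 2.0))   # normalised cut-off frequency
    pos_f = filtfilt(b, a, position)               # forward-backward (zero-phase) filtering
    vel = np.gradient(pos_f, 1.0 / fs)             # first derivative
    acc = np.gradient(vel, 1.0 / fs)               # second derivative
    return pos_f, vel, acc

fs = 200.0                                         # assumed marker sampling rate (Hz)
t = np.arange(0, 2, 1 / fs)
noisy_position = np.sin(2 * np.pi * 1.0 * t) + 0.01 * np.random.randn(t.size)
pos_f, vel, acc = filter_and_differentiate(noisy_position, fs)
```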

25 pages, 93653 KiB  
Article
Optimization of X-ray Investigations in Dentistry Using Optical Coherence Tomography
by Ralph-Alexandru Erdelyi, Virgil-Florin Duma, Cosmin Sinescu, George Mihai Dobre, Adrian Bradu and Adrian Podoleanu
Sensors 2021, 21(13), 4554; https://doi.org/10.3390/s21134554 - 2 Jul 2021
Cited by 23 | Viewed by 6283
Abstract
The most common imaging technique for dental diagnoses and treatment monitoring is X-ray imaging, which evolved from the first intraoral radiographs to high-quality three-dimensional (3D) Cone Beam Computed Tomography (CBCT). Other imaging techniques have shown potential, such as Optical Coherence Tomography (OCT). We have recently reported on the boundaries of these two types of techniques, regarding the dental fields where each one is more appropriate or where both should be used. The aim of the present study is to explore the unique capabilities of the OCT technique to optimize the imaging of X-ray units (i.e., in terms of image resolution, radiation dose, or contrast). Two types of commercially available and widely used X-ray units are considered. To adjust their parameters, a protocol is developed to employ OCT images of dental conditions that are documented on high (i.e., less than 10 μm) resolution OCT images (both B-scans/cross sections and 3D reconstructions) but are hardly identified on the 200 to 75 μm resolution panoramic or CBCT radiographs. The optimized calibration of the X-ray unit includes choosing appropriate values for the anode voltage and current intensity of the X-ray tube, as well as the patient’s positioning, in order to reach the highest possible X-ray resolution at a radiation dose that is safe for the patient. The optimization protocol is developed in vitro on OCT images of extracted teeth and is further applied in vivo for each type of dental investigation. Optimized radiographic results are compared with previously performed, un-optimized radiographs. Also, we show that OCT can permit a rigorous comparison between two (types of) X-ray units. In conclusion, high-quality dental images are possible using low radiation doses if an optimized protocol, developed using OCT, is applied for each type of dental investigation. Also, there are situations where X-ray technology has drawbacks for dental diagnosis or treatment assessment. In such situations, OCT proves capable of providing qualitative images.
(This article belongs to the Special Issue Feature Papers in the Sensing and Imaging Section 2021)

17 pages, 4184 KiB  
Review
Recent Advances in ZnO-Based Carbon Monoxide Sensors: Role of Doping
by Ana María Pineda-Reyes, María R. Herrera-Rivera, Hugo Rojas-Chávez, Heriberto Cruz-Martínez and Dora I. Medina
Sensors 2021, 21(13), 4425; https://doi.org/10.3390/s21134425 - 28 Jun 2021
Cited by 38 | Viewed by 4993
Abstract
Monitoring and detecting carbon monoxide (CO) are critical because this gas is toxic and harmful to the ecosystem. In this respect, designing high-performance gas sensors for CO detection is necessary. Zinc oxide-based materials are promising for use as CO sensors, owing to their good sensing response, electrical performance, cost-effectiveness, long-term stability, low power consumption, ease of manufacturing, chemical stability, and non-toxicity. Nevertheless, further progress in gas sensing requires improving the selectivity and sensitivity, and lowering the operating temperature. Recently, different strategies have been implemented to improve the sensitivity and selectivity of ZnO to CO, highlighting the doping of ZnO. Many studies have concluded that doped ZnO demonstrates better sensing properties than undoped ZnO in detecting CO. Therefore, in this review, we analyze and discuss, in detail, the recent advances in doped ZnO for CO sensing applications. First, experimental studies on ZnO doped with transition metals, boron group elements, and alkaline earth metals as CO sensors are comprehensively reviewed. We then focus on analyzing theoretical and combined experimental–theoretical studies. Finally, we present the conclusions and some perspectives for future investigations in the context of advancements in CO sensing using doped ZnO, which include room-temperature gas sensing.
(This article belongs to the Special Issue Semiconductor Materials and Nanostructures for Sensors and Devices)

22 pages, 44079 KiB  
Review
Chiroptical Metasurfaces: Principles, Classification, and Applications
by Joohoon Kim, Ahsan Sarwar Rana, Yeseul Kim, Inki Kim, Trevon Badloe, Muhammad Zubair, Muhammad Qasim Mehmood and Junsuk Rho
Sensors 2021, 21(13), 4381; https://doi.org/10.3390/s21134381 - 26 Jun 2021
Cited by 45 | Viewed by 7202
Abstract
Chiral materials, which show different optical behaviors when illuminated by left or right circularly polarized light due to broken mirror symmetry, have greatly impacted the field of optical sensing over the past decade. To improve the sensitivity of chiral sensing platforms, enhancing the chiroptical response is necessary. Metasurfaces, which are two-dimensional metamaterials consisting of periodic subwavelength artificial structures, have recently attracted significant attention because of their ability to enhance the chiroptical response by manipulating the amplitude, phase, and polarization of electromagnetic fields. Here, we review the fundamentals of chiroptical metasurfaces and categorize types of chiroptical metasurfaces by their intrinsic or extrinsic chirality. Finally, we introduce applications of chiral metasurfaces, such as multiplexing metaholograms, metalenses, and sensors.
(This article belongs to the Special Issue Metasurfaces in Depth Sensing and 3D Display)

22 pages, 2679 KiB  
Article
Anomaly Detection and Automatic Labeling for Solar Cell Quality Inspection Based on Generative Adversarial Network
by Julen Balzategui, Luka Eciolaza and Daniel Maestro-Watson
Sensors 2021, 21(13), 4361; https://doi.org/10.3390/s21134361 - 25 Jun 2021
Cited by 17 | Viewed by 2960
Abstract
Quality inspection applications in industry are required to move towards a zero-defect manufacturing scenario, with non-destructive inspection and traceability of 100% of produced parts. Developing robust fault detection and classification models from the start-up of the lines is challenging due to the difficulty in getting enough representative samples of the faulty patterns and the need to manually label them. This work presents a methodology to develop a robust inspection system, targeting these peculiarities, in the context of solar cell manufacturing. The methodology is divided into two phases: in the first phase, an anomaly detection model based on a Generative Adversarial Network (GAN) is employed. This model enables the detection and localization of anomalous patterns within the solar cells from the beginning, using only non-defective samples for training and without any manual labeling involved. In the second phase, as defective samples arise, the detected anomalies are used as automatically generated annotations for the supervised training of a Fully Convolutional Network that is capable of detecting multiple types of faults. The experimental results using 1873 Electroluminescence (EL) images of monocrystalline cells show that (a) the anomaly detection scheme can be used to start detecting features with very little available data, (b) the anomaly detection may serve as automatic labeling in order to train a supervised model, and (c) the segmentation and classification results of supervised models trained with automatic labels are comparable to those obtained from models trained with manual labels.

14 pages, 2596 KiB  
Article
An Online Data-Driven Fault Diagnosis Method for Air Handling Units by Rule and Convolutional Neural Networks
by Huanyue Liao, Wenjian Cai, Fanyong Cheng, Swapnil Dubey and Pudupadi Balachander Rajesh
Sensors 2021, 21(13), 4358; https://doi.org/10.3390/s21134358 - 25 Jun 2021
Cited by 21 | Viewed by 5449
Abstract
The stable operation of air handling units (AHUs) is critical to ensure high efficiency and to extend the lifetime of the heating, ventilation, and air conditioning (HVAC) systems of buildings. In this paper, an online data-driven diagnosis method for AHUs in an HVAC system is proposed and elaborated. A rule-based method can roughly detect the sensor condition by setting threshold values according to prior experience. Then, an efficient feature selection method using 1D convolutional neural networks (CNNs) is proposed for the fault diagnosis of AHUs in HVAC systems according to the system’s historical data obtained from the building management system. The new framework combines the rule-based method and the CNN-based method (RACNN) to cover both sensor faults and complicated faults. The fault type of the AHU can be accurately identified, with an offline test accuracy of 99.15% and fast online detection within 2 min. In the lab, the proposed RACNN method was validated on a real AHU system. The experimental results show that the proposed RACNN improves the performance of fault diagnosis.
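A minimal sketch of the first, rule-based stage described above, which screens sensor readings against experience-based threshold values before the 1D-CNN stage (not shown); the variable names and limits are hypothetical.

```python
def rule_based_sensor_check(readings, limits):
    """First-stage rule-based screening: flag any sensor whose reading falls
    outside empirically set bounds. Readings that pass would be forwarded to
    the CNN stage for complicated-fault diagnosis."""
    flags = {}
    for name, value in readings.items():
        lo, hi = limits[name]
        flags[name] = "ok" if lo <= value <= hi else "suspected sensor fault"
    return flags

# Hypothetical thresholds set from prior operating experience
limits = {"supply_air_temp_C": (10.0, 30.0), "fan_speed_pct": (0.0, 100.0)}
print(rule_based_sensor_check({"supply_air_temp_C": 45.2, "fan_speed_pct": 80.0}, limits))
```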

21 pages, 5227 KiB  
Article
The Promise of Sleep: A Multi-Sensor Approach for Accurate Sleep Stage Detection Using the Oura Ring
by Marco Altini and Hannu Kinnunen
Sensors 2021, 21(13), 4302; https://doi.org/10.3390/s21134302 - 23 Jun 2021
Cited by 69 | Viewed by 43631
Abstract
Consumer-grade sleep trackers represent a promising tool for large-scale studies and health management. However, the potential and limitations of these devices remain less well quantified. Addressing this issue, we aim to provide a comprehensive analysis of the impact of accelerometer data, autonomic nervous system (ANS)-mediated peripheral signals, and circadian features for sleep stage detection on a large dataset. Four hundred and forty nights from 106 individuals, for a total of 3444 h of combined polysomnography (PSG) and physiological data from a wearable ring, were acquired. Features were extracted to investigate the relative impact of different data streams on 2-stage (sleep and wake) and 4-stage classification accuracy (light NREM sleep, deep NREM sleep, REM sleep, and wake). Machine learning models were evaluated using 5-fold cross-validation and a standardized framework for sleep stage classification assessment. Accuracy for 2-stage detection (sleep, wake) was 94% for a simple accelerometer-based model and 96% for a full model that included ANS-derived and circadian features. Accuracy for 4-stage detection was 57% for the accelerometer-based model and 79% when including ANS-derived and circadian features. By combining the compact form factor of a finger ring, multidimensional biometric sensory streams, and machine learning, high-accuracy wake-sleep detection and sleep staging can be accomplished.

24 pages, 24258 KiB  
Review
Towards Supply Chain Visibility Using Internet of Things: A Dyadic Analysis Review
by Shehzad Ahmed, Tahera Kalsoom, Naeem Ramzan, Zeeshan Pervez, Muhammad Azmat, Bassam Zeb and Masood Ur Rehman
Sensors 2021, 21(12), 4158; https://doi.org/10.3390/s21124158 - 17 Jun 2021
Cited by 48 | Viewed by 9364
Abstract
The Internet of Things (IoT) and its benefits and challenges are among the most emergent research topics among academics and practitioners. With supply chains (SCs) rapidly gaining complexity, high supply chain visibility (SCV) would help companies ease their processes and reduce complexity by reducing inaccuracies. The extant literature has given attention to an organisation’s capability to collect and evaluate information to balance strategy and goals. The majority of studies focus on investigating the IoT’s impact on different areas such as sustainability, organisational structure, lean manufacturing, product development, and strategic management. However, research investigating the relationships and impact of the IoT on SCV is minimal. This study closes this gap by using a structured literature review to critically analyse the existing literature and synthesise the use of IoT applications in SCs to gain visibility. We found key IoT technologies that help SCs gain visibility, as well as seven benefits and three key challenges of these technologies. We also found the concept of Supply 4.0, which grasps the elements of Industry 4.0 within the SC context. This paper contributes by combining IoT application synthesis, enablers, and challenges in SCV and by highlighting key IoT technologies used in SCs to gain visibility. Finally, the authors propose an empirical research agenda to address the identified gaps.
(This article belongs to the Special Issue Industry 4.0 and Smart Manufacturing)

50 pages, 3716 KiB  
Review
A Review of Nanocomposite-Modified Electrochemical Sensors for Water Quality Monitoring
by Olfa Kanoun, Tamara Lazarević-Pašti, Igor Pašti, Salem Nasraoui, Malak Talbi, Amina Brahem, Anurag Adiraju, Evgeniya Sheremet, Raul D. Rodriguez, Mounir Ben Ali and Ammar Al-Hamry
Sensors 2021, 21(12), 4131; https://doi.org/10.3390/s21124131 - 16 Jun 2021
Cited by 66 | Viewed by 12003
Abstract
Electrochemical sensors play a significant role in detecting chemical ions, molecules, and pathogens in water and other applications. These sensors are sensitive, portable, fast, inexpensive, and suitable for online and in-situ measurements compared to other methods. They can provide detection for any compound that can undergo certain transformations within a potential window, which enables applications in multiple-ion detection, mainly since these sensors are primarily non-specific. In this paper, we provide a survey of electrochemical sensors for the detection of water contaminants, i.e., pesticides, nitrate, nitrite, phosphorus, water hardeners, disinfectants, and other emergent contaminants (phenol, estrogen, gallic acid, etc.). We focus on the influence of surface modification of the working electrodes by carbon nanomaterials, metallic nanostructures, and imprinted polymers, and we evaluate the corresponding sensing performance. Especially for pesticides, which are challenging and need special care, we highlight biosensors, such as enzymatic sensors, immunobiosensors, aptasensors, and biomimetic sensors. We discuss the sensors’ overall performance, especially concerning real-sample performance and the capability for actual field application.
(This article belongs to the Special Issue Sensors for Environmental and Life Science Applications)

51 pages, 13680 KiB  
Review
Roadmap of Terahertz Imaging 2021
by Gintaras Valušis, Alvydas Lisauskas, Hui Yuan, Wojciech Knap and Hartmut G. Roskos
Sensors 2021, 21(12), 4092; https://doi.org/10.3390/s21124092 - 14 Jun 2021
Cited by 157 | Viewed by 13603
Abstract
In this roadmap article, we have focused on the most recent advances in terahertz (THz) imaging, with particular attention paid to the optimization and miniaturization of THz imaging systems. Such systems entail enhanced functionality, reduced power consumption, and increased convenience, thus being geared toward the implementation of THz imaging systems in real operational conditions. The article will touch upon advanced solid-state-based THz imaging systems, including room-temperature THz sensors and arrays, as well as their on-chip integration with diffractive THz optical components. We will cover the current state of compact room-temperature THz emission sources, both optoelectronic and electrically driven; particular emphasis is attributed to the beam-forming role in THz imaging, THz holography and spatial filtering, THz nano-imaging, and computational imaging. A number of advanced THz techniques, such as light-field THz imaging, homodyne spectroscopy and phase-sensitive spectrometry, THz modulated continuous-wave imaging, room-temperature THz frequency combs, and passive THz imaging, as well as the use of artificial intelligence in THz data processing and optics development, will be reviewed. This roadmap presents a structured snapshot of current advances in THz imaging as of 2021 and provides an opinion on contemporary scientific and technological challenges in this field, as well as extrapolations of possible further evolution in THz imaging.
(This article belongs to the Special Issue Terahertz Imaging and Sensors)

33 pages, 2356 KiB  
Review
Data-Driven Fault Diagnosis for Electric Drives: A Review
by David Gonzalez-Jimenez, Jon del-Olmo, Javier Poza, Fernando Garramiola and Patxi Madina
Sensors 2021, 21(12), 4024; https://doi.org/10.3390/s21124024 - 10 Jun 2021
Cited by 53 | Viewed by 6518
Abstract
The need to manufacture more competitive equipment, together with the emergence of the digital technologies of so-called Industry 4.0, has changed many paradigms of the industrial sector. Presently, the trend has shifted to massively acquiring operational data, which can be processed to extract truly valuable information with the help of Machine Learning or Deep Learning techniques. As a result, classical Condition Monitoring methodologies, such as model- and signal-based ones, are being overtaken by data-driven approaches. Therefore, the current paper provides a review of the data-driven active supervision strategies implemented in electric drives for fault detection and diagnosis (FDD). First, an overview of the main FDD methods is presented. Then, some basic guidelines for implementing the Machine Learning workflow on which most data-driven strategies are based are explained. Finally, a review of scientific articles related to the topic is provided, together with a discussion that seeks to identify the main research gaps and opportunities.

21 pages, 9233 KiB  
Article
Multi-Sensor and Decision-Level Fusion-Based Structural Damage Detection Using a One-Dimensional Convolutional Neural Network
by Shuai Teng, Gongfa Chen, Zongchao Liu, Li Cheng and Xiaoli Sun
Sensors 2021, 21(12), 3950; https://doi.org/10.3390/s21123950 - 8 Jun 2021
Cited by 42 | Viewed by 3609
Abstract
This paper presents a novel approach to substantially improve the detection accuracy of structural damage via a one-dimensional convolutional neural network (1-D CNN) and a decision-level fusion strategy. As structural damage usually induces changes in the dynamic responses of a structure, a CNN can effectively extract structural damage information from the vibration signals and classify them into the corresponding damage categories. However, it is difficult to build a large-scale sensor system in practical engineering; the collected vibration signals are usually non-synchronous and contain incomplete structural information, resulting in some evident errors in the decision stage of the CNN. In this study, the acceleration signals of multiple acquisition points were obtained, the signals of each acquisition point were used to train a 1-D CNN, and their performances were evaluated using the corresponding testing samples. Subsequently, the prediction results of all CNNs were fused (decision-level fusion) to obtain the integrated detection results. This method was validated using both numerical and experimental models and compared with a control experiment (data-level fusion) in which all the acceleration signals were used to train a single CNN. The results confirmed that, by fusing the prediction results of multiple CNN models, the detection accuracy was significantly improved: for the numerical and experimental models, the detection accuracy was 10% and 16–30% higher, respectively, than that of the control experiment. It was demonstrated that training a CNN on the acceleration signals of each acquisition point, letting each CNN make its own decision (the CNN output), and then fusing these decisions can effectively improve the accuracy of damage detection.
(This article belongs to the Special Issue Sensors for Structural Damage Identification)
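The decision-level fusion idea described above can be sketched briefly: one small 1-D CNN per acquisition point, with the per-model class probabilities averaged at test time. The architecture, the random training data and the averaging rule below are illustrative assumptions, not the authors' exact configuration.

```python
# Sketch of decision-level fusion: one 1-D CNN per acquisition point,
# fused by averaging the per-model class probabilities.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

N_POINTS, N_SAMPLES, N_CLASSES = 3, 1024, 4  # acquisition points, window length, damage classes

def make_1d_cnn():
    return tf.keras.Sequential([
        layers.Input(shape=(N_SAMPLES, 1)),
        layers.Conv1D(16, 64, strides=8, activation="relu"),
        layers.MaxPooling1D(4),
        layers.Conv1D(32, 3, activation="relu"),
        layers.GlobalAveragePooling1D(),
        layers.Dense(N_CLASSES, activation="softmax"),
    ])

rng = np.random.default_rng(0)
x_train = rng.normal(size=(N_POINTS, 200, N_SAMPLES, 1)).astype("float32")  # synthetic signals
y_train = rng.integers(0, N_CLASSES, size=200)
x_test = rng.normal(size=(N_POINTS, 50, N_SAMPLES, 1)).astype("float32")

models = []
for p in range(N_POINTS):                       # one CNN per acquisition point
    m = make_1d_cnn()
    m.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
    m.fit(x_train[p], y_train, epochs=2, verbose=0)
    models.append(m)

# Decision-level fusion: average the per-sensor class probabilities.
probs = np.mean([m.predict(x_test[p], verbose=0) for p, m in enumerate(models)], axis=0)
print(probs.argmax(axis=1)[:10])                # fused damage-class decisions
```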

16 pages, 1404 KiB  
Article
Towards 6G IoT: Tracing Mobile Sensor Nodes with Deep Learning Clustering in UAV Networks
by Yannis Spyridis, Thomas Lagkas, Panagiotis Sarigiannidis, Vasileios Argyriou, Antonios Sarigiannidis, George Eleftherakis and Jie Zhang
Sensors 2021, 21(11), 3936; https://doi.org/10.3390/s21113936 - 7 Jun 2021
Cited by 24 | Viewed by 4359
Abstract
Unmanned aerial vehicles (UAVs) in the role of flying anchor nodes have been proposed to assist the localisation of terrestrial Internet of Things (IoT) sensors and provide relay services in the context of the upcoming 6G networks. This paper considered the objective of tracing a mobile IoT device of unknown location, using a group of UAVs that were equipped with received signal strength indicator (RSSI) sensors. The UAVs employed measurements of the target’s radio frequency (RF) signal power to approach the target as quickly as possible. A deep learning model performed clustering in the UAV network at regular intervals, based on a graph convolutional network (GCN) architecture, which utilised information about the RSSI and the UAV positions. The number of clusters was determined dynamically at each instant using a heuristic method, and the partitions were determined by optimising an RSSI loss function. The proposed algorithm retained the clusters that approached the RF source more effectively, removing the rest of the UAVs, which returned to the base. Simulation experiments demonstrated the improvement of this method compared to a previous deterministic approach, in terms of the time required to reach the target and the total distance covered by the UAVs. Full article
(This article belongs to the Special Issue 6G Wireless Communication Systems)
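The clustering model above is built on graph convolutions over the UAV network. The snippet below is a minimal NumPy sketch of a single graph-convolution propagation step on hypothetical UAV features (position and RSSI); it does not reproduce the paper's trained GCN or its heuristic for choosing the number of clusters.

```python
# Minimal NumPy sketch of one graph-convolution step over a UAV graph:
# H' = ReLU(D^-1/2 (A + I) D^-1/2 · H · W).  Features and weights are
# hypothetical stand-ins; the paper's trained model is not reproduced.
import numpy as np

rng = np.random.default_rng(1)
n_uav = 6
A = rng.integers(0, 2, size=(n_uav, n_uav))
A = np.triu(A, 1); A = A + A.T                 # symmetric adjacency, no self-loops yet
H = rng.normal(size=(n_uav, 3))                # node features, e.g. (x, y, RSSI)
W = rng.normal(size=(3, 8))                    # weight matrix (random here, learned in practice)

A_hat = A + np.eye(n_uav)                      # add self-loops
D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
H_next = np.maximum(0.0, D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W)   # ReLU activation
print(H_next.shape)                            # (6, 8) node embeddings, later grouped into clusters
```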

21 pages, 5890 KiB  
Article
Hemorrhage Detection Based on 3D CNN Deep Learning Framework and Feature Fusion for Evaluating Retinal Abnormality in Diabetic Patients
by Sarmad Maqsood, Robertas Damaševičius and Rytis Maskeliūnas
Sensors 2021, 21(11), 3865; https://doi.org/10.3390/s21113865 - 3 Jun 2021
Cited by 53 | Viewed by 4813
Abstract
Diabetic retinopathy (DR) is the main cause of blindness in diabetic patients. Early and accurate diagnosis can improve the analysis and prognosis of the disease. One of the earliest symptoms of DR is hemorrhages in the retina. Therefore, we propose a new method for accurate hemorrhage detection from retinal fundus images. First, the proposed method uses a modified contrast enhancement method to improve the edge details of the input retinal fundus images. In the second stage, a new convolutional neural network (CNN) architecture is proposed to detect hemorrhages. A modified pre-trained CNN model is used to extract features from the detected hemorrhages. In the third stage, all extracted feature vectors are fused using the convolutional sparse image decomposition method, and finally, the best features are selected by using the multi-logistic regression controlled entropy variance approach. The proposed method is evaluated on 1509 images from the HRF, DRIVE, STARE, MESSIDOR, DIARETDB0, and DIARETDB1 databases and achieves an average accuracy of 97.71%, which is superior to previous works. Moreover, the proposed hemorrhage detection system attains better performance, in terms of visual quality and quantitative analysis with high accuracy, in comparison with state-of-the-art methods. Full article
(This article belongs to the Collection Medical Image Classification)

16 pages, 4312 KiB  
Article
Radar Transformer: An Object Classification Network Based on 4D MMW Imaging Radar
by Jie Bai, Lianqing Zheng, Sen Li, Bin Tan, Sihan Chen and Libo Huang
Sensors 2021, 21(11), 3854; https://doi.org/10.3390/s21113854 - 2 Jun 2021
Cited by 30 | Viewed by 8925
Abstract
Automotive millimeter-wave (MMW) radar is essential in autonomous vehicles due to its robustness in all weather conditions. Traditional commercial automotive radars are limited by their resolution, which makes the object classification task difficult. Thus, the concept of a new generation of four-dimensional (4D) imaging radar was proposed. It has high azimuth and elevation resolution and contains Doppler information to produce a high-quality point cloud. In this paper, we propose an object classification network named Radar Transformer. The algorithm takes the attention mechanism as the core and adopts the combination of vector attention and scalar attention to make full use of the spatial information, Doppler information, and reflection intensity information of the radar point cloud to realize the deep fusion of local attention features and global attention features. We generated an imaging radar classification dataset and completed manual annotation. The experimental results show that our proposed method achieved an overall classification accuracy of 94.9%, which is more suitable for processing radar point clouds than the popular deep learning frameworks and shows promising performance. Full article
(This article belongs to the Section Radar Sensors)
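Scalar (dot-product) attention, one of the two attention forms the network combines, can be written in a few lines of NumPy. The sketch below applies it to a toy radar point cloud whose per-point features (position, Doppler, reflection intensity) and projection matrices are hypothetical.

```python
# Minimal NumPy sketch of scalar (dot-product) self-attention over a toy
# radar point cloud; projections and features are hypothetical stand-ins.
import numpy as np

rng = np.random.default_rng(2)
n_points, d_in, d_k = 32, 5, 16                # points with (x, y, z, doppler, intensity)
X = rng.normal(size=(n_points, d_in))

Wq, Wk, Wv = (rng.normal(size=(d_in, d_k)) for _ in range(3))
Q, K, V = X @ Wq, X @ Wk, X @ Wv

scores = Q @ K.T / np.sqrt(d_k)                # pairwise similarity between points
weights = np.exp(scores - scores.max(axis=1, keepdims=True))
weights /= weights.sum(axis=1, keepdims=True)  # softmax over the point set
attended = weights @ V                         # each point aggregates information from all others
print(attended.shape)                          # (32, 16)
```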

21 pages, 11063 KiB  
Article
Monitoring System for Railway Infrastructure Elements Based on Thermal Imaging Analysis
by Krzysztof Stypułkowski, Paweł Gołda, Konrad Lewczuk and Justyna Tomaszewska
Sensors 2021, 21(11), 3819; https://doi.org/10.3390/s21113819 - 31 May 2021
Cited by 22 | Viewed by 5276
Abstract
The safety and reliability of railway transport require new solutions for monitoring and quick identification of faults in the railway infrastructure. Electric heating devices (EORs) are a crucial element of turnouts: they provide heating during low-temperature periods when ice or snow can lock the turnout device. Thermal imaging is a response to the need for an EOR inspection tool. After processing, a thermogram provides strong support for the manual inspection of an EOR, or it can serve as the input to a machine learning algorithm. In this article, the authors review the literature on thermographic analysis and its applications for detecting railroad damage, analysing images through machine learning, and improving railway traffic safety. The EOR device, its components, and technical parameters are discussed, as well as inspection and maintenance requirements. On this basis, the authors present the concept of using thermographic imaging to detect EOR failures and malfunctions using a practical example, as well as the concept of using machine learning mechanisms to automatically analyse thermograms. The authors show that the proposed method of analysis can be an effective tool for examining EOR status and that it can be included in the official EOR inspection calendar. Full article
(This article belongs to the Section Fault Diagnosis & Sensors)

29 pages, 15926 KiB  
Article
Supervised Machine Learning Methods and Hyperspectral Imaging Techniques Jointly Applied for Brain Cancer Classification
by Gemma Urbanos, Alberto Martín, Guillermo Vázquez, Marta Villanueva, Manuel Villa, Luis Jimenez-Roldan, Miguel Chavarrías, Alfonso Lagares, Eduardo Juárez and César Sanz
Sensors 2021, 21(11), 3827; https://doi.org/10.3390/s21113827 - 31 May 2021
Cited by 44 | Viewed by 5876
Abstract
Hyperspectral imaging techniques (HSI) do not require contact with patients and are non-ionizing as well as non-invasive. As a consequence, they have been extensively applied in the medical field. HSI is being combined with machine learning (ML) processes to obtain models to assist in diagnosis. In particular, the combination of these techniques has proven to be a reliable aid in the differentiation of healthy and tumor tissue during brain tumor surgery. ML algorithms such as support vector machine (SVM), random forest (RF) and convolutional neural networks (CNN) are used to make predictions and provide in-vivo visualizations that may assist neurosurgeons in being more precise, hence reducing damage to healthy tissue. In this work, thirteen in-vivo hyperspectral images from twelve different patients with high-grade gliomas (grade III and IV) have been selected to train SVM, RF and CNN classifiers. Five different classes have been defined during the experiments: healthy tissue, tumor, venous blood vessel, arterial blood vessel and dura mater. Overall accuracy (OACC) results vary from 60% to 95% depending on the training conditions. Finally, as far as the contribution of each band to the OACC is concerned, the results obtained in this work are 3.81 times greater than those reported in the literature. Full article
(This article belongs to the Special Issue Trends and Prospects in Medical Hyperspectral Imagery)

29 pages, 1317 KiB  
Review
A Review of EEG Signal Features and Their Application in Driver Drowsiness Detection Systems
by Igor Stancin, Mario Cifrek and Alan Jovic
Sensors 2021, 21(11), 3786; https://doi.org/10.3390/s21113786 - 30 May 2021
Cited by 92 | Viewed by 11600
Abstract
Detecting drowsiness in drivers, especially multi-level drowsiness, is a difficult problem that is often approached using neurophysiological signals as the basis for building a reliable system. In this context, electroencephalogram (EEG) signals are the most important source of data to achieve successful detection. In this paper, we first review EEG signal features used in the literature for a variety of tasks, then we focus on reviewing the applications of EEG features and deep learning approaches in driver drowsiness detection, and finally we discuss the open challenges and opportunities in improving driver drowsiness detection based on EEG. We show that the number of studies on driver drowsiness detection systems has increased in recent years and that future systems need to consider the wide variety of EEG signal features and deep learning approaches to increase the accuracy of detection. Full article
(This article belongs to the Special Issue Intelligent Biosignal Analysis Methods)
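Many of the EEG features reviewed here are band powers derived from the power spectral density. The snippet below is a minimal SciPy sketch that computes relative band power for one synthetic channel; the signal and the band limits are used only for illustration.

```python
# Minimal sketch: relative band power of one EEG channel via Welch's PSD.
# The signal is synthetic and the band limits are the usual conventions,
# used here only for illustration.
import numpy as np
from scipy.signal import welch

fs = 256                                        # sampling rate in Hz
t = np.arange(0, 30, 1 / fs)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)  # 10 Hz alpha-like tone + noise

freqs, psd = welch(eeg, fs=fs, nperseg=4 * fs)
bands = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

total_power = np.trapz(psd, freqs)
for name, (lo, hi) in bands.items():
    mask = (freqs >= lo) & (freqs < hi)
    print(name, np.trapz(psd[mask], freqs[mask]) / total_power)   # relative band power
```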

13 pages, 1547 KiB  
Article
Magnetic Lateral Flow Immunoassay for Small Extracellular Vesicles Quantification: Application to Colorectal Cancer Biomarker Detection
by Amanda Moyano, Esther Serrano-Pertierra, José María Duque, Virginia Ramos, Estefanía Teruel-Barandiarán, María Teresa Fernández-Sánchez, María Salvador, José Carlos Martínez-García, Luis Sánchez, Luis García-Flórez, Montserrat Rivas and María del Carmen Blanco-López
Sensors 2021, 21(11), 3756; https://doi.org/10.3390/s21113756 - 28 May 2021
Cited by 13 | Viewed by 4687
Abstract
Colorectal cancer (CRC) is the third leading cause of cancer death and the fourth most common cancer in the world. Colonoscopy is the most sensitive test used for detection of CRC; however, its procedure is invasive and expensive for population mass screening. Currently, the fecal occult blood test is widely used as a screening tool for CRC but displays low specificity. The lack of rapid and simple methods for mass screening makes early diagnosis and therapy monitoring difficult. Extracellular vesicles (EVs) have emerged as a novel source of biomarkers due to their content in proteins and miRNAs. Their detection would not require invasive techniques and could be considered a liquid biopsy. Specifically, it has been demonstrated that the amount of CD147 expressed in circulating EVs is significantly higher for CRC cell lines than for normal colon fibroblast cell lines. Moreover, CD147-containing EVs have been used as a biomarker to monitor response to therapy in patients with CRC. Therefore, this antigen could be used as a non-invasive biomarker for the detection and monitoring of CRC in combination with a Point-of-Care platform such as, for example, Lateral Flow Immunoassays (LFIAs). Here, we propose the development of a quantitative lateral flow immunoassay test based on the use of magnetic nanoparticles as labels coupled to an inductive sensor for the non-invasive detection of CRC through CD147-positive EVs. The results obtained for the quantification of the CD147 antigen embedded in EVs isolated from plasma samples have demonstrated that this device could be used as a Point-of-Care tool for CRC screening or therapy monitoring thanks to its rapid response and easy operation. Full article
(This article belongs to the Special Issue Electrochemical Sensors and (Bio)assays for Health Applications)

14 pages, 6066 KiB  
Article
A High-Resolution Reflective Microwave Planar Sensor for Sensing of Vanadium Electrolyte
by Nazli Kazemi, Kalvin Schofield and Petr Musilek
Sensors 2021, 21(11), 3759; https://doi.org/10.3390/s21113759 - 28 May 2021
Cited by 40 | Viewed by 3546
Abstract
Microwave planar sensors employ conventional passive complementary split ring resonators (CSRRs) as their sensitive region. In this work, a novel planar reflective sensor is introduced that deploys CSRRs as the front-end sensing element at f_res = 6 GHz, with an extra loss-compensating negative resistance that restores the power dissipated in the sensor, which is used for dielectric material characterization. It is shown that the S11 notch of −15 dB can be improved down to −40 dB without loss of sensitivity. An application of this design is shown in discriminating different states of vanadium redox solutions under the highly lossy conditions of fully charged V⁵⁺ and fully discharged V⁴⁺ electrolytes. Full article
(This article belongs to the Special Issue State-of-the-Art Technologies in Microwave Sensors)

21 pages, 4688 KiB  
Review
A Review of Deep Learning-Based Contactless Heart Rate Measurement Methods
by Aoxin Ni, Arian Azarang and Nasser Kehtarnavaz
Sensors 2021, 21(11), 3719; https://doi.org/10.3390/s21113719 - 27 May 2021
Cited by 59 | Viewed by 9036
Abstract
The interest in contactless or remote heart rate measurement has been steadily growing in healthcare and sports applications. Contactless methods involve the utilization of a video camera and image processing algorithms. Recently, deep learning methods have been used to improve the performance of conventional contactless methods for heart rate measurement. After providing a review of the related literature, a comparison of the deep learning methods whose codes are publicly available is conducted in this paper. The public domain UBFC dataset is used to compare the performance of these deep learning methods for heart rate measurement. The results obtained show that the deep learning method PhysNet generates the best heart rate measurement outcome among these methods, with a mean absolute error value of 2.57 beats per minute and a mean square error value of 7.56 beats per minute. Full article
(This article belongs to the Special Issue Wearable and Unobtrusive Technologies for Healthcare Monitoring)

22 pages, 7269 KiB  
Article
Diabetic Retinopathy Fundus Image Classification and Lesions Localization System Using Deep Learning
by Wejdan L. Alyoubi, Maysoon F. Abulkhair and Wafaa M. Shalash
Sensors 2021, 21(11), 3704; https://doi.org/10.3390/s21113704 - 26 May 2021
Cited by 132 | Viewed by 14447
Abstract
Diabetic retinopathy (DR) is a disease resulting from diabetes complications, causing non-reversible damage to retinal blood vessels. DR is a leading cause of blindness if not detected early. The currently available DR treatments are limited to stopping or delaying the deterioration of sight, highlighting the importance of regular scanning using high-efficiency computer-based systems to diagnose cases early. The current work presents fully automatic diagnosis systems that exceed manual techniques to avoid misdiagnosis, reducing time, effort and cost. The proposed system classifies DR images into five stages (no-DR, mild, moderate, severe and proliferative DR), as well as localizing the affected lesions on the retina surface. The system comprises two deep learning-based models. The first model (CNN512) used the whole image as an input to the CNN model to classify it into one of the five DR stages. It achieved an accuracy of 88.6% and 84.1% on the DDR and the APTOS Kaggle 2019 public datasets, respectively, compared to the state-of-the-art results. Simultaneously, the second model adopted YOLOv3 to detect and localize the DR lesions, achieving a 0.216 mAP in lesion localization on the DDR dataset, which improves on the current state-of-the-art results. Finally, both of the proposed structures, CNN512 and YOLOv3, were fused to classify DR images and localize DR lesions, obtaining an accuracy of 89% with 89% sensitivity and 97.3% specificity, which exceeds the current state-of-the-art results. Full article
(This article belongs to the Special Issue Computer Aided Diagnosis Sensors)

23 pages, 6226 KiB  
Article
Improved Mutual Understanding for Human-Robot Collaboration: Combining Human-Aware Motion Planning with Haptic Feedback Devices for Communicating Planned Trajectory
by Stefan Grushko, Aleš Vysocký, Petr Oščádal, Michal Vocetka, Petr Novák and Zdenko Bobovský
Sensors 2021, 21(11), 3673; https://doi.org/10.3390/s21113673 - 25 May 2021
Cited by 26 | Viewed by 4561
Abstract
In a collaborative scenario, the communication between humans and robots is a fundamental aspect to achieve good efficiency and ergonomics in the task execution. A lot of research has been done on enabling a robot system to understand and predict human behaviour, allowing the robot to adapt its motion to avoid collisions with human workers. Assuming the production task has a high degree of variability, the robot's movements can be difficult to predict, leading to a feeling of anxiety in the worker when the robot changes its trajectory and approaches, since the worker has no information about the planned movement of the robot. Additionally, without information about the robot's movement, the human worker cannot effectively plan their own activity without forcing the robot to constantly replan its movement. We propose a novel approach to communicating the robot's intentions to a human worker. The improvement to the collaboration is achieved by introducing haptic feedback devices, whose task is to notify the human worker about the currently planned robot trajectory and changes in its status. In order to verify the effectiveness of the developed human-machine interface in the conditions of a shared collaborative workspace, a user study was designed and conducted among 16 participants, whose objective was to accurately recognise the goal position of the robot during its movement. Data collected during the experiment included both objective and subjective parameters. Statistically significant results of the experiment indicated that all the participants could improve their task completion time by over 45% and were generally more subjectively satisfied when completing the task equipped with the haptic feedback devices. The results also suggest the usefulness of the developed notification system, since it improved users' awareness of the motion plan of the robot. Full article
(This article belongs to the Special Issue Human-Robot Collaborations in Industrial Automation)

27 pages, 8479 KiB  
Article
Evaluation of Misalignment Effect in Vehicle-to-Vehicle Visible Light Communications: Experimental Demonstration of a 75 Meters Link
by Sebastian-Andrei Avătămăniței, Cătălin Beguni, Alin-Mihai Căilean, Mihai Dimian and Valentin Popa
Sensors 2021, 21(11), 3577; https://doi.org/10.3390/s21113577 - 21 May 2021
Cited by 25 | Viewed by 3385
Abstract
The use of visible light communications technology in communication-based vehicle applications is gaining more and more interest as the research community constantly overcomes challenge after challenge. In this context, this article addresses the issues associated with the use of Visible Light Communications (VLC) technology in Vehicle-to-Vehicle (V2V) communications, while focusing on two crucial issues. On the one hand, it aims to investigate the achievable communication distance in V2V applications while addressing the least favorable case, namely the one where a standard vehicle rear lighting system is used as the VLC emitter. On the other hand, this article investigates another highly unfavorable use case scenario, i.e., the case when the two vehicles are located on adjacent lanes rather than on the same lane. In order to evaluate the compatibility of VLC technology with inter-vehicle communication, a VLC prototype was intensively evaluated in outdoor conditions. The experimental results show a record V2V VLC distance of 75 m, while providing a Bit Error Ratio (BER) of 10⁻⁷–10⁻⁶. The results also show that VLC technology is able to provide V2V connectivity even when the vehicles are located on adjacent lanes, without a major impact on the link performance. Nevertheless, this situation generates an initial no-coverage zone, which is determined by the VLC receiver reception angle, whereas in some cases, vehicle misalignment can generate a BER increase of up to two orders of magnitude. Full article
(This article belongs to the Special Issue Automotive Visible Light Communications (AutoVLC))

20 pages, 14188 KiB  
Article
Digital Twin-Based Safety Risk Coupling of Prefabricated Building Hoisting
by Zhansheng Liu, Xintong Meng, Zezhong Xing and Antong Jiang
Sensors 2021, 21(11), 3583; https://doi.org/10.3390/s21113583 - 21 May 2021
Cited by 56 | Viewed by 4900
Abstract
Safety management in hoisting is the key issue determining the development of prefabricated building construction. However, safety management in the hoisting stage lacks a truly effective method of information-physical fusion, and the safety risk analysis of hoisting does not consider the interaction of risk factors. In this paper, a hoisting safety risk management framework based on a digital twin (DT) is presented, and a digital twin hoisting safety risk coupling model is built. The proposed model integrates the Internet of Things (IoT), Building Information Modeling (BIM), and a safety risk analysis method combining the Apriori algorithm and complex networks. Real-time perception and virtual–real interaction of multi-source information in the hoisting process are realized, the association rules and coupling relationships among hoisting safety risk factors are mined, and the time-varying data information is visualized. A demonstration in the construction of a large-scale prefabricated building shows that, with the proposed framework, it is possible to complete the information fusion between the hoisting site and the virtual model and to realize visual management. The correlative relationships among hoisting construction safety risk factors are analyzed, and the key control factors are identified. Moreover, the efficiency of information integration and sharing is improved, the gap in coupling analysis of safety risk factors is filled, and effective safety management and decision-making are achieved with the proposed approach. Full article
(This article belongs to the Special Issue Smart Sensing in Building and Construction)
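The association-rule mining step can be prototyped with the Apriori implementation in mlxtend, assuming a one-hot table of risk-factor observations. The factor names, data and thresholds below are made up for illustration and are not the authors' dataset.

```python
# Sketch of mining association rules among hoisting risk factors with Apriori.
# The one-hot observations and thresholds are hypothetical illustrations.
import pandas as pd
from mlxtend.frequent_patterns import apriori, association_rules

# Each row is one observation period; True means the risk factor was present.
df = pd.DataFrame(
    {
        "high_wind": [True, True, False, True, False, True],
        "load_sway": [True, True, False, True, False, False],
        "operator_fatigue": [False, True, False, True, True, False],
        "near_miss": [True, True, False, True, False, False],
    }
)

frequent = apriori(df, min_support=0.4, use_colnames=True)
rules = association_rules(frequent, metric="confidence", min_threshold=0.8)
print(rules[["antecedents", "consequents", "support", "confidence"]])
```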

24 pages, 6663 KiB  
Article
Evaluating the Single-Shot MultiBox Detector and YOLO Deep Learning Models for the Detection of Tomatoes in a Greenhouse
by Sandro Augusto Magalhães, Luís Castro, Germano Moreira, Filipe Neves dos Santos, Mário Cunha, Jorge Dias and António Paulo Moreira
Sensors 2021, 21(10), 3569; https://doi.org/10.3390/s21103569 - 20 May 2021
Cited by 84 | Viewed by 10862
Abstract
The development of robotic solutions for agriculture requires advanced perception capabilities that can work reliably in any crop stage. For example, to automatise the tomato harvesting process in greenhouses, the visual perception system needs to detect the tomato in any life cycle stage (from flower to ripe tomato). The state of the art for visual tomato detection focuses mainly on ripe tomatoes, which have a distinctive colour from the background. This paper contributes an annotated visual dataset of green and reddish tomatoes. This kind of dataset is uncommon and not available for research purposes. It will enable further developments in edge artificial intelligence for in situ and real-time visual tomato detection, required for the development of harvesting robots. Using this dataset, five deep learning models were selected, trained and benchmarked to detect green and reddish tomatoes grown in greenhouses. Considering our robotic platform specifications, only the Single-Shot MultiBox Detector (SSD) and YOLO architectures were considered. The results proved that the system can detect green and reddish tomatoes, even those occluded by leaves. SSD MobileNet v2 had the best performance compared with SSD Inception v2, SSD ResNet 50, SSD ResNet 101 and YOLOv4 Tiny, reaching an F1-score of 66.15%, an mAP of 51.46% and an inference time of 16.44 ms on the NVIDIA Turing architecture platform, an NVIDIA Tesla T4 with 12 GB. YOLOv4 Tiny also had impressive results, mainly concerning inference times of about 5 ms. Full article
(This article belongs to the Section Remote Sensors)
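Detector metrics such as the reported F1-score and mAP rest on matching predicted and ground-truth boxes by intersection over union (IoU). The snippet below shows that basic computation on a pair of hypothetical boxes.

```python
# Minimal sketch: intersection over union between two boxes (x1, y1, x2, y2),
# the matching criterion behind F1 and mAP for detectors such as SSD and YOLO.
def iou(box_a, box_b):
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)    # intersection corners
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

# Hypothetical prediction vs. ground truth for one tomato.
pred, truth = (10, 10, 60, 60), (20, 15, 70, 65)
print(iou(pred, truth))          # a detection is usually counted as correct when IoU >= 0.5
```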

31 pages, 3710 KiB  
Review
Recent Advances in Transducers for Intravascular Ultrasound (IVUS) Imaging
by Chang Peng, Huaiyu Wu, Seungsoo Kim, Xuming Dai and Xiaoning Jiang
Sensors 2021, 21(10), 3540; https://doi.org/10.3390/s21103540 - 19 May 2021
Cited by 57 | Viewed by 14415
Abstract
As a well-known medical imaging methodology, intravascular ultrasound (IVUS) imaging plays a critical role in diagnosis, treatment guidance and post-treatment assessment of coronary artery diseases. By cannulating a miniature ultrasound transducer mounted catheter into an artery, the vessel lumen opening, vessel wall morphology and other associated blood and vessel properties can be precisely assessed in IVUS imaging. Ultrasound transducer, as the key component of an IVUS system, is critical in determining the IVUS imaging performance. In recent years, a wide range of achievements in ultrasound transducers have been reported for IVUS imaging applications. Herein, a comprehensive review is given on recent advances in ultrasound transducers for IVUS imaging. Firstly, a fundamental understanding of IVUS imaging principle, evaluation parameters and IVUS catheter are summarized. Secondly, three different types of ultrasound transducers (piezoelectric ultrasound transducer, piezoelectric micromachined ultrasound transducer and capacitive micromachined ultrasound transducer) for IVUS imaging are presented. Particularly, the recent advances in piezoelectric ultrasound transducer for IVUS imaging are extensively examined according to their different working mechanisms, configurations and materials adopted. Thirdly, IVUS-based multimodality intravascular imaging of atherosclerotic plaque is discussed. Finally, summary and perspectives on the future studies are highlighted for IVUS imaging applications. Full article
(This article belongs to the Special Issue Feature Papers in Physical Sensors Section 2020)

23 pages, 8573 KiB  
Article
UAV Photogrammetry under Poor Lighting Conditions—Accuracy Considerations
by Pawel Burdziakowski and Katarzyna Bobkowska
Sensors 2021, 21(10), 3531; https://doi.org/10.3390/s21103531 - 19 May 2021
Cited by 30 | Viewed by 4509
Abstract
The use of low-level photogrammetry is very broad, and studies in this field are conducted in many aspects. Most research and applications are based on image data acquired during the day, which seems natural and obvious. However, the authors of this paper draw attention to the potential and possible use of UAV photogrammetry during the darker time of the day. The potential of night-time images has not yet been widely recognized, since correct scenery lighting, or the lack of scenery light sources, is an obvious issue. The authors have developed typical day- and night-time photogrammetric models. They also present an extensive analysis of the geometry, indicating which process element had the greatest impact on degrading the night-time photogrammetric product, as well as which measurable factor directly correlated with image accuracy. The reduction in geometric quality during the night-time tests was greatly impacted by the non-uniform distribution of GCPs within the study area. The calibration of non-metric cameras is sensitive to poor lighting conditions, which leads to a higher determination error for each intrinsic orientation and distortion parameter. As evidenced, uniformly illuminated photos can be used to construct a model with lower reprojection error, and each tie point exhibits greater precision. Furthermore, the authors evaluated whether commercial photogrammetric software enabled reaching acceptable image quality and whether the digital camera type impacted interpretative quality. The research paper concludes with an extended discussion, conclusions, and recommendations on night-time studies. Full article
(This article belongs to the Special Issue Unmanned Aerial Systems and Remote Sensing)

21 pages, 1404 KiB  
Article
Predicting Exact Valence and Arousal Values from EEG
by Filipe Galvão, Soraia M. Alarcão and Manuel J. Fonseca
Sensors 2021, 21(10), 3414; https://doi.org/10.3390/s21103414 - 14 May 2021
Cited by 48 | Viewed by 5964
Abstract
Recognition of emotions from physiological signals, and in particular from electroencephalography (EEG), is a field within affective computing gaining increasing relevance. Although researchers have used these signals to recognize emotions, most of them only identify a limited set of emotional states (e.g., happiness, sadness, anger, etc.) and have not attempted to predict exact values for valence and arousal, which would provide a wider range of emotional states. This paper describes our proposed model for predicting the exact values of valence and arousal in a subject-independent scenario. To create it, we studied the best features, brain waves, and machine learning models that are currently in use for emotion classification. This systematic analysis revealed that the best prediction model uses a KNN regressor (K = 1) with Manhattan distance, features from the alpha, beta and gamma bands, and the differential asymmetry from the alpha band. Results, using the DEAP, AMIGOS and DREAMER datasets, show that our model can predict valence and arousal values with a low error (MAE < 0.06, RMSE < 0.16) and a strong correlation between predicted and expected values (PCC > 0.80), and can identify four emotional classes with an accuracy of 84.4%. The findings of this work show that the features, brain waves and machine learning models, typically used in emotion classification tasks, can be used in more challenging situations, such as the prediction of exact values for valence and arousal. Full article
(This article belongs to the Special Issue Biomedical Signal Acquisition and Processing Using Sensors)
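The final model described (a K-nearest-neighbours regressor with K = 1 and Manhattan distance) maps directly onto scikit-learn, as in the sketch below; the stand-in feature matrix is random rather than the EEG band-power features used in the paper.

```python
# Sketch of the reported regression setup: KNN with K = 1 and Manhattan
# distance, predicting valence from EEG-derived features.  The feature
# matrix is random stand-in data, not DEAP/AMIGOS/DREAMER features.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 20))                 # e.g. alpha/beta/gamma band powers + asymmetry
y = rng.uniform(0, 1, size=300)                # valence scaled to [0, 1]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = KNeighborsRegressor(n_neighbors=1, metric="manhattan").fit(X_tr, y_tr)
print("MAE:", mean_absolute_error(y_te, model.predict(X_te)))
```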

25 pages, 7055 KiB  
Article
Testing the Contribution of Multi-Source Remote Sensing Features for Random Forest Classification of the Greater Amanzule Tropical Peatland
by Alex O. Amoakoh, Paul Aplin, Kwame T. Awuah, Irene Delgado-Fernandez, Cherith Moses, Carolina Peña Alonso, Stephen Kankam and Justice C. Mensah
Sensors 2021, 21(10), 3399; https://doi.org/10.3390/s21103399 - 13 May 2021
Cited by 19 | Viewed by 4816
Abstract
Tropical peatlands such as Ghana’s Greater Amanzule peatland are highly valuable ecosystems and under great pressure from anthropogenic land use activities. Accurate measurement of their occurrence and extent is required to facilitate sustainable management. A key challenge, however, is the high cloud cover in the tropics that limits optical remote sensing data acquisition. In this work we combine optical imagery with radar and elevation data to optimise land cover classification for the Greater Amanzule tropical peatland. Sentinel-2, Sentinel-1 and Shuttle Radar Topography Mission (SRTM) imagery were acquired and integrated to drive a machine learning land cover classification using a random forest classifier. Recursive feature elimination was used to optimize high-dimensional and correlated feature space and determine the optimal features for the classification. Six datasets were compared, comprising different combinations of optical, radar and elevation features. Results showed that the best overall accuracy (OA) was found for the integrated Sentinel-2, Sentinel-1 and SRTM dataset (S2+S1+DEM), significantly outperforming all the other classifications with an OA of 94%. Assessment of the sensitivity of land cover classes to image features indicated that elevation and the original Sentinel-1 bands contributed the most to separating tropical peatlands from other land cover types. The integration of more features and the removal of redundant features systematically increased classification accuracy. We estimate Ghana’s Greater Amanzule peatland covers 60,187 ha. Our proposed methodological framework contributes a robust workflow for accurate and detailed landscape-scale monitoring of tropical peatlands, while our findings provide timely information critical for the sustainable management of the Greater Amanzule peatland. Full article
(This article belongs to the Section Remote Sensors)
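The classifier-plus-feature-selection combination (random forest with recursive feature elimination) can be sketched with scikit-learn as follows; the per-pixel feature stack here is random placeholder data rather than the actual Sentinel-2, Sentinel-1 and SRTM bands.

```python
# Sketch of random-forest land-cover classification with recursive feature
# elimination; the feature stack is random placeholder data, not the real
# Sentinel-2 / Sentinel-1 / SRTM bands.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFECV
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 15))                # per-pixel features (optical, radar, elevation)
y = rng.integers(0, 5, size=1000)              # land-cover classes, incl. peatland

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
rf = RandomForestClassifier(n_estimators=300, random_state=0)
selector = RFECV(rf, step=1, cv=3).fit(X_tr, y_tr)   # drop redundant, correlated features
print("Selected features:", selector.n_features_)
print("Test accuracy:", selector.score(X_te, y_te))
```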

14 pages, 957 KiB  
Article
Calibration and Cross-Validation of Accelerometer Cut-Points to Classify Sedentary Time and Physical Activity from Hip and Non-Dominant and Dominant Wrists in Older Adults
by Jairo H. Migueles, Cristina Cadenas-Sanchez, Juan M. A. Alcantara, Javier Leal-Martín, Asier Mañas, Ignacio Ara, Nancy W. Glynn and Eric J. Shiroma
Sensors 2021, 21(10), 3326; https://doi.org/10.3390/s21103326 - 11 May 2021
Cited by 23 | Viewed by 4308
Abstract
Accelerometers’ accuracy for sedentary time (ST) and moderate-to-vigorous physical activity (MVPA) classification depends on accelerometer placement, data processing, activities, and sample characteristics. As intensities differ by age, this study sought to determine intensity cut-points at various wear locations in people more than 70 years old. Data from 59 older adults were used for calibration and from 21 independent participants for cross-validation purposes. Participants wore accelerometers on their hip and wrists while performing activities and having their energy expenditure measured with portable calorimetry. ST and MVPA were defined as ≤1.5 metabolic equivalents (METs) and ≥3 METs (1 MET = 2.8 mL/kg/min), respectively. Receiver operating characteristic (ROC) analyses showed fair-to-good accuracy (area under the curve [AUC] = 0.62–0.89). ST cut-points were 7 mg (cross-validation: sensitivity = 0.88, specificity = 0.80) and 1 count/5 s (cross-validation: sensitivity = 0.91, specificity = 0.96) for the hip; 18 mg (cross-validation: sensitivity = 0.86, specificity = 0.86) and 102 counts/5 s (cross-validation: sensitivity = 0.91, specificity = 0.92) for the non-dominant wrist; and 22 mg and 175 counts/5 s (not cross-validated) for the dominant wrist. MVPA cut-points were 14 mg (cross-validation: sensitivity = 0.70, specificity = 0.99) and 54 counts/5 s (cross-validation: sensitivity = 1.00, specificity = 0.96) for the hip; 60 mg (cross-validation: sensitivity = 0.83, specificity = 0.99) and 182 counts/5 s (cross-validation: sensitivity = 1.00, specificity = 0.89) for the non-dominant wrist; and 64 mg and 268 counts/5 s (not cross-validated) for the dominant wrist. These cut-points can classify ST and MVPA in older adults from hip- and wrist-worn accelerometers. Full article
(This article belongs to the Special Issue Wearable Devices: Applications in Older Adults)
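Cut-points of this kind are typically read off a ROC curve, for instance by maximising Youden's J. The sketch below does this with scikit-learn on synthetic acceleration values; it is not the study's calibration data or its exact procedure.

```python
# Sketch: derive an intensity cut-point from a ROC curve by maximising
# Youden's J = sensitivity + specificity - 1.  Data are synthetic.
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(0)
# Mean acceleration (mg) per bout and whether it was truly MVPA (>= 3 METs).
acc = np.concatenate([rng.normal(8, 3, 200), rng.normal(60, 20, 200)])
is_mvpa = np.concatenate([np.zeros(200), np.ones(200)])

fpr, tpr, thresholds = roc_curve(is_mvpa, acc)
best = np.argmax(tpr - fpr)                    # index maximising Youden's J
print(f"Cut-point: {thresholds[best]:.1f} mg, "
      f"sensitivity = {tpr[best]:.2f}, specificity = {1 - fpr[best]:.2f}")
```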

26 pages, 22394 KiB  
Review
3D Printing Techniques and Their Applications to Organ-on-a-Chip Platforms: A Systematic Review
by Violeta Carvalho, Inês Gonçalves, Teresa Lage, Raquel O. Rodrigues, Graça Minas, Senhorinha F. C. F. Teixeira, Ana S. Moita, Takeshi Hori, Hirokazu Kaji and Rui A. Lima
Sensors 2021, 21(9), 3304; https://doi.org/10.3390/s21093304 - 10 May 2021
Cited by 64 | Viewed by 10047
Abstract
Three-dimensional (3D) in vitro models, such as organ-on-a-chip platforms, are an emerging and effective technology that allows the replication of the function of tissues and organs, bridging the gap between the conventional models based on planar cell cultures or animals and the complex human system. Hence, they have been increasingly used for biomedical research, such as drug discovery and personalized healthcare. A promising strategy for their fabrication is 3D printing, a layer-by-layer fabrication process that allows the construction of complex 3D structures. In turn, 3D bioprinting, an evolving biofabrication method, focuses on the accurate deposition of hydrogel bioinks loaded with cells to construct tissue-engineered structures. The purpose of the present work is to conduct a systematic review (SR) of the published literature, according to the guidelines of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses, providing a source of information on the evolution of organ-on-a-chip platforms obtained using 3D printing and bioprinting techniques. In the literature search, the PubMed, Scopus, and ScienceDirect databases were used, and two authors independently performed the search, study selection, and data extraction. The goal of this SR is to highlight the importance and advantages of using 3D printing techniques in obtaining organ-on-a-chip platforms, and also to identify potential gaps and future perspectives in this research field. Additionally, challenges in integrating sensors into organ-on-a-chip platforms are briefly investigated and discussed. Full article
(This article belongs to the Special Issue Organ-on-a-Chip and Biosensors)

16 pages, 7256 KiB  
Article
Deep Supervised Residual Dense Network for Underwater Image Enhancement
by Yanling Han, Lihua Huang, Zhonghua Hong, Shouqi Cao, Yun Zhang and Jing Wang
Sensors 2021, 21(9), 3289; https://doi.org/10.3390/s21093289 - 10 May 2021
Cited by 26 | Viewed by 3137
Abstract
Underwater images are important carriers and forms of underwater information, playing a vital role in exploring and utilizing marine resources. However, underwater images have characteristics of low contrast and blurred details because of the absorption and scattering of light. In recent years, deep learning has been widely used in underwater image enhancement and restoration because of its powerful feature learning capabilities, but there are still shortcomings in detail enhancement. To address this problem, this paper proposes a deep supervised residual dense network (DS_RD_Net), which is used to better learn the mapping relationship between clear in-air images and synthetic underwater degraded images. DS_RD_Net first uses residual dense blocks to extract features to enhance feature utilization; then, it adds residual path blocks between the encoder and decoder to reduce the semantic differences between the low-level features and high-level features; finally, it employs a deep supervision mechanism to guide network training to improve gradient propagation. Experimental results (PSNR of 36.2, SSIM of 96.5%, and UCIQE of 0.53) demonstrated that the proposed method can fully retain the local details of the image while performing color restoration and defogging, achieving good qualitative and quantitative results compared with other image enhancement methods. Full article
(This article belongs to the Special Issue Image Sensing and Processing with Convolutional Neural Networks)
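The reported image-quality metrics (PSNR and SSIM) can be computed for any enhanced/reference image pair with scikit-image, as in the snippet below; the images here are synthetic stand-ins rather than the paper's test set.

```python
# Sketch: computing PSNR and SSIM between a reference image and its
# degraded/enhanced version with scikit-image; images here are synthetic.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

rng = np.random.default_rng(0)
reference = rng.random((128, 128))                       # stand-in clear image
enhanced = np.clip(reference + rng.normal(0, 0.05, reference.shape), 0, 1)

print("PSNR:", peak_signal_noise_ratio(reference, enhanced, data_range=1.0))
print("SSIM:", structural_similarity(reference, enhanced, data_range=1.0))
```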

21 pages, 7855 KiB  
Article
A Low-Cost IoT System for Real-Time Monitoring of Climatic Variables and Photovoltaic Generation for Smart Grid Application
by Gustavo Costa Gomes de Melo, Igor Cavalcante Torres, Ícaro Bezzera Queiroz de Araújo, Davi Bibiano Brito and Erick de Andrade Barboza
Sensors 2021, 21(9), 3293; https://doi.org/10.3390/s21093293 - 10 May 2021
Cited by 31 | Viewed by 5167
Abstract
Monitoring and data acquisition are essential to recognize the renewable resources available on-site, evaluate electrical conversion efficiency, detect failures, and optimize electrical production. Commercial monitoring systems for photovoltaic systems are generally expensive and closed to modifications. This work proposes a low-cost real-time Internet of Things system for micro and mini photovoltaic generation systems that can monitor DC voltage, DC current, AC power, and seven meteorological variables. The proposed system measures all relevant meteorological variables and directly acquires photovoltaic generation data from the plant (not from the inverter). The system is implemented using open software, connects to the internet without cables, stores data locally and in the cloud, and uses the Network Time Protocol to synchronize the devices’ clocks. To the best of our knowledge, no work reported in the literature presents all of these features together. Furthermore, experiments carried out with the proposed system showed good effectiveness and reliability. This system enables fog and cloud computing in a photovoltaic system, creating a time-series measurement data set and enabling the future use of machine learning to create smart photovoltaic systems. Full article
(This article belongs to the Special Issue Smart IoT System for Renewable Energy Resource)

31 pages, 3659 KiB  
Review
High Temperature Ultrasonic Transducers: A Review
by Rymantas Kazys and Vaida Vaskeliene
Sensors 2021, 21(9), 3200; https://doi.org/10.3390/s21093200 - 5 May 2021
Cited by 54 | Viewed by 9575
Abstract
There are many fields, such as online monitoring of manufacturing processes, non-destructive testing in nuclear plants, or corrosion rate monitoring of steel pipes, in which measurements must be performed at elevated temperatures. For that, high-temperature ultrasonic transducers are necessary. In the present paper, a literature review is given on the main types of such transducers, piezoelectric materials, backings, and the bonding techniques for transducer elements suitable for high temperatures. In this review, the main focus is on ultrasonic transducers with piezoelectric elements suitable for operation at temperatures higher than that of most commercially available transducers, i.e., 150 °C. The main types of ultrasonic transducers discussed are transducers with thin protectors, which may serve as matching layers, transducers with high-temperature delay lines, wedges, and waveguide-type transducers. The piezoelectric materials suitable for high-temperature applications, such as aluminum nitride, lithium niobate, gallium orthophosphate, bismuth titanate, oxyborate crystals, lead metaniobate, and other piezoceramics, are analyzed. Bonding techniques used for joining the transducer elements, such as gluing, soldering, brazing, dry contact, and diffusion bonding, are discussed. Special attention is paid to efficient diffusion and thermo-sonic diffusion bonding techniques. Various types of backings necessary for improving the bandwidth and obtaining a short pulse response are described. Full article
(This article belongs to the Special Issue Ultrasonic Transducers for High Temperature Applications)

27 pages, 2497 KiB  
Review
Biosensing Applications Using Nanostructure-Based Localized Surface Plasmon Resonance Sensors
by Dong Min Kim, Jong Seong Park, Seung-Woon Jung, Jinho Yeom and Seung Min Yoo
Sensors 2021, 21(9), 3191; https://doi.org/10.3390/s21093191 - 4 May 2021
Cited by 52 | Viewed by 5750
Abstract
Localized surface plasmon resonance (LSPR)-based biosensors have recently garnered increasing attention due to their potential to allow label-free, portable, low-cost, and real-time monitoring of diverse analytes. Recent developments in this technology have focused on biochemical markers in clinical and environmental settings coupled with advances in nanostructure technology. Therefore, this review focuses on the recent advances in LSPR-based biosensor technology for the detection of diverse chemicals and biomolecules. Moreover, we also provide recent examples of sensing strategies based on diverse nanostructure platforms, in addition to their advantages and limitations. Finally, this review discusses potential strategies for the development of biosensors with enhanced sensing performance. Full article