
Data, Signal and Image Processing and Applications in Sensors II

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Physical Sensors".

Deadline for manuscript submissions: closed (20 October 2022) | Viewed by 68980

Special Issue Editor


Prof. Dr. Manuel José Cabral dos Santos Reis
Guest Editor
Department of Engineering/IEETA, University of Trás-os-Montes e Alto Douro, 5000-801 Vila Real, Portugal
Interests: signal & image processing and applications; study and development of devices & systems for friendly smart environments; development of multimedia-based teaching/learning methods and tools, with particular emphasis on the use of the internet

Special Issue Information

Dear Colleagues,

With the rapid advances in sensor technology, a vast and ever-growing amount of data in various domains and modalities is readily available. However, raw signal data collected directly from sensors are often unsuitable for direct use, owing to, for example, noise or distortion. In order to obtain relevant and insightful metrics from sensor data, the acquired signals must first be enhanced, for example through noise reduction in one-dimensional electroencephalographic (EEG) signals or color correction in endoscopic images, and then analysed, for instance by computer-based medical systems. The processing of the data itself and the consequent extraction of useful information are also vital and are included in the topics of this Special Issue.

This Special Issue of Sensors aims to highlight advances in the development, testing, and application of data, signal, and image processing algorithms and techniques to all types of sensors and sensing methodologies. Experimental and theoretical results, presented in as much detail as possible, are welcome, as are review papers. There is no restriction on the length of the papers.

Topics include but are not limited to (listed in alphabetical order):

  • Advanced sensor characterization techniques
  • Ambient assisted living
  • Biomedical signal and image analysis
  • Internet of things (IoT)
  • Low-level programming of sensors
  • Machine learning (e.g., deep learning) in signal and image processing
  • Multimodal information processing for healthcare, monitoring, and surveillance
  • Multi-objective signal processing optimization
  • Other emerging applications of signal and information processing
  • Radar signal processing
  • Real-time signal and image processing algorithms and architectures (e.g., FPGA, DSP, GPU)
  • Remote sensing processing
  • Sensor data fusion and integration
  • Sensor error modelling and online calibration
  • Sensors and smart sensors for IoT devices
  • Signal and image processing (e.g., deblurring, denoising, super-resolution)
  • Signal and image understanding (e.g., object detection and recognition, action recognition, semantic segmentation, novel feature extraction)
  • Smart environments, smart cities and smart grid, load forecasting and energy management
  • Smart sensors development and applications
  • Wearable sensor signal processing and its applications

The follow-up Special Issue, "Data, Signal and Image Processing and Applications in Sensors III", has been announced, and we look forward to receiving your submissions for it.
https://www.mdpi.com/journal/sensors/special_issues/CGM87D9OA4
Deadline for manuscript submissions: 15 May 2023.

Prof. Dr. Manuel José Cabral dos Santos Reis
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.


Published Papers (28 papers)


Editorial

Jump to: Research, Other

9 pages, 170 KiB  
Editorial
Data, Signal and Image Processing and Applications in Sensors II
by Manuel J. C. S. Reis
Sensors 2024, 24(8), 2555; https://doi.org/10.3390/s24082555 - 16 Apr 2024
Viewed by 303
Abstract
A vast and ever-growing amount of data in various domains and modalities is readily available, with the rapid advance of sensor technology being one of its main contributors [...] Full article
(This article belongs to the Special Issue Data, Signal and Image Processing and Applications in Sensors II)

Research

Jump to: Editorial, Other

14 pages, 5242 KiB  
Article
Sensorimotor Time Delay Estimation by EMG Signal Processing in People Living with Spinal Cord Injury
by Seyed Mohammadreza Shokouhyan, Mathias Blandeau, Laura Wallard, Thierry Marie Guerra, Philippe Pudlo, Dany H. Gagnon and Franck Barbier
Sensors 2023, 23(3), 1132; https://doi.org/10.3390/s23031132 - 18 Jan 2023
Cited by 2 | Viewed by 1493
Abstract
Neuromechanical time delay is inevitable in the sensorimotor control of the body due to sensory, transmission, signal processing and muscle activation delays. In essence, time delay reduces stabilization efficiency, leading to system instability (e.g., falls). For this reason, estimating the time delay in patients such as people living with spinal cord injury (SCI) can help therapists and biomechanists design more appropriate exercises or assistive technologies in the rehabilitation procedure. In this study, we aim to estimate muscle activation onset in people with SCI using four strategies applied to EMG data. Seven individuals with complete SCI participated in this study; they maintained their stability during seated balance after a mechanical perturbation exerted at the level of the third thoracic vertebra, between the scapulae. The EMG activity of eight upper limb muscles was recorded during the stabilization. Two strategies based on simple filtering (first strategy) and the TKEO technique (second strategy) in the time domain, and two other approaches, cepstral analysis (third strategy) and power spectrum analysis (fourth strategy) in the time–frequency domain, were used to estimate the muscle onset. The results demonstrated that the TKEO technique could efficiently remove the electrocardiogram (ECG) and motion artifacts compared with the simple classical filtering approach. However, the first and second strategies failed to find the muscle onset in several trials, which shows the weakness of these two strategies. The time–frequency techniques (cepstral analysis and power spectrum) estimated longer activation onsets compared with the two strategies in the time domain, which we associate with lower-frequency movement in the maintenance of sitting stability. In addition, no correlation was found for the muscle activation sequence or for the estimated delay value, which is most likely caused by motion redundancy and different stabilization strategies in each participant. The estimated time delay can be used in developing a sensorimotor control model of the body. It can not only help therapists and biomechanists understand the underlying mechanisms of the body, but can also be useful in developing assistive technologies based on their stability mechanism. Full article
(This article belongs to the Special Issue Data, Signal and Image Processing and Applications in Sensors II)
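The second strategy in this abstract relies on the Teager–Kaiser energy operator (TKEO) to sharpen EMG bursts before onset detection. The sketch below is a minimal illustration of that idea, not the authors' implementation; the baseline window, threshold rule and synthetic signal are assumptions for demonstration only.

```python
import numpy as np

def tkeo(x):
    """Teager-Kaiser energy operator: psi[n] = x[n]^2 - x[n-1]*x[n+1]."""
    psi = np.zeros_like(x, dtype=float)
    psi[1:-1] = x[1:-1] ** 2 - x[:-2] * x[2:]
    return psi

def onset_from_tkeo(emg, fs, baseline_s=0.5, k=3.0):
    """Return the first time (s) at which the TKEO energy exceeds a baseline-derived threshold."""
    psi = np.abs(tkeo(emg))
    n0 = int(baseline_s * fs)                     # assumed pre-perturbation baseline window
    thr = psi[:n0].mean() + k * psi[:n0].std()    # illustrative threshold rule
    above = np.flatnonzero(psi > thr)
    return above[0] / fs if above.size else None

# Synthetic example: quiet baseline followed by an 80 Hz burst starting at 1.2 s
fs = 1000.0
t = np.arange(0, 2.0, 1 / fs)
emg = 0.01 * np.random.randn(t.size)
emg[1200:] += 0.2 * np.sin(2 * np.pi * 80 * t[1200:])
print(onset_from_tkeo(emg, fs))
```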

13 pages, 2500 KiB  
Article
A Denoising Method for Mining Cable PD Signal Based on Genetic Algorithm Optimization of VMD and Wavelet Threshold
by Yanwen Wang, Peng Chen, Yongmei Zhao and Yanying Sun
Sensors 2022, 22(23), 9386; https://doi.org/10.3390/s22239386 - 01 Dec 2022
Cited by 12 | Viewed by 1199
Abstract
When the pulse current method is used for partial discharge (PD) monitoring of mining cables, the detected PD signals are seriously disturbed by field noise, in which they are easily submerged and from which they cannot be extracted. In order to realize the effective separation of the PD signal from the interference signal of the mining cable and improve the signal-to-noise ratio of the PD signal, a denoising method for the PD signal of mining cables based on genetic algorithm optimization of variational mode decomposition (VMD) and wavelet thresholding is proposed in this paper. Firstly, the genetic algorithm is used to optimize the VMD, and the optimal values of the number of modal components K and the quadratic penalty factor α are determined; secondly, the PD signal is decomposed by the VMD algorithm to obtain K intrinsic mode functions (IMFs). Then, wavelet threshold denoising is applied to each IMF, and the denoised IMFs are reconstructed. Finally, the feasibility of the denoising method proposed in this paper is verified by simulation and experiment. Full article
(This article belongs to the Special Issue Data, Signal and Image Processing and Applications in Sensors II)
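For readers unfamiliar with the wavelet-threshold stage applied to each IMF, the following is a minimal sketch using PyWavelets. The wavelet family, decomposition level and universal-threshold rule are illustrative assumptions, not the GA-tuned settings of the paper, and the VMD step itself is assumed to have been run elsewhere.

```python
import numpy as np
import pywt

def wavelet_denoise(imf, wavelet="db4", level=4):
    """Soft-threshold the detail coefficients of one IMF and reconstruct it."""
    coeffs = pywt.wavedec(imf, wavelet, level=level)
    # Universal threshold estimated from the finest detail level (illustrative choice)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thr = sigma * np.sqrt(2 * np.log(len(imf)))
    coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(imf)]

# After VMD has produced K modes (imfs), each mode is denoised and the results summed:
# denoised_pd = sum(wavelet_denoise(m) for m in imfs)
```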

18 pages, 1724 KiB  
Article
Analysis of Physiological Responses during Pain Induction
by Raquel Sebastião, Ana Bento and Susana Brás
Sensors 2022, 22(23), 9276; https://doi.org/10.3390/s22239276 - 29 Nov 2022
Cited by 2 | Viewed by 1754
Abstract
Pain is a complex phenomenon that arises from the interaction of multiple neuroanatomic and neurochemical systems with several cognitive and affective processes. Nowadays, the assessment of pain intensity still relies on self-reports. However, recent research has shown a connection between the perception of pain and an exacerbated stress response in the Autonomic Nervous System. As a result, there has been increasing interest in using autonomic reactivity to assess pain. In the present study, the methods include pre-processing, feature extraction, and feature analysis. For the purpose of understanding and characterizing the physiological responses to pain, different physiological signals were recorded simultaneously while a pain-inducing protocol was performed. The results obtained for the electrocardiogram (ECG) showed a statistically significant increase in heart rate during the painful period compared to non-painful periods. Additionally, heart rate variability features demonstrated a decrease in Parasympathetic Nervous System influence. The features from the electromyogram (EMG) showed an increase in power and contraction force of the muscle during the pain induction task. Lastly, the electrodermal activity (EDA) showed an adjustment of the sudomotor activity, implying an increase in Sympathetic Nervous System activity during the experience of pain. Full article
(This article belongs to the Special Issue Data, Signal and Image Processing and Applications in Sensors II)
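The ECG analysis above is based on heart rate and heart rate variability (HRV) features computed per period. As a rough sketch of what such time-domain features look like (standard definitions; the exact feature set and windowing used by the authors are not reproduced here):

```python
import numpy as np

def hrv_time_domain(rr_ms):
    """Basic time-domain HRV features from RR intervals given in milliseconds."""
    rr = np.asarray(rr_ms, dtype=float)
    diff = np.diff(rr)
    return {
        "mean_hr_bpm": 60000.0 / rr.mean(),       # mean heart rate
        "sdnn_ms": rr.std(ddof=1),                # overall variability
        "rmssd_ms": np.sqrt(np.mean(diff ** 2)),  # short-term (parasympathetic) variability
        "pnn50": np.mean(np.abs(diff) > 50.0),    # fraction of successive differences > 50 ms
    }

# Comparing painful vs. non-painful periods amounts to computing these features
# on the RR series of each period and contrasting them.
print(hrv_time_domain([812, 798, 845, 820, 801, 830]))
```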

13 pages, 344 KiB  
Article
Evaluating the Window Size’s Role in Automatic EEG Epilepsy Detection
by Vasileios Christou, Andreas Miltiadous, Ioannis Tsoulos, Evaggelos Karvounis, Katerina D. Tzimourta, Markos G. Tsipouras, Nikolaos Anastasopoulos, Alexandros T. Tzallas and Nikolaos Giannakeas
Sensors 2022, 22(23), 9233; https://doi.org/10.3390/s22239233 - 27 Nov 2022
Cited by 10 | Viewed by 1729
Abstract
Electroencephalography is one of the most commonly used methods for extracting information about the brain’s condition and can be used for diagnosing epilepsy. The EEG signal’s wave shape contains vital information about the brain’s state, which can be challenging for a human observer to analyse and interpret. Moreover, the characteristic waveforms of epilepsy (sharp waves, spikes) can occur randomly over time. Considering all the above reasons, automatic EEG signal extraction and analysis using computers can significantly impact the successful diagnosis of epilepsy. This research explores the impact of different window sizes on the classification accuracy of EEG signals using four machine learning classifiers. The machine learning methods included a neural network with ten hidden nodes trained using three different training algorithms and the k-nearest neighbours classifier. The neural network training methods included the Broyden–Fletcher–Goldfarb–Shanno algorithm, the multistart method for global optimization problems, and a genetic algorithm. The current research utilized the University of Bonn dataset containing EEG data, divided into epochs with 50% overlap and window lengths ranging from 1 to 24 s. Then, statistical and spectral features were extracted and used to train the above four classifiers. The outcome of the above experiments showed that large window sizes, with a length of about 21 s, could positively impact the classification accuracy of the compared methods. Full article
(This article belongs to the Special Issue Data, Signal and Image Processing and Applications in Sensors II)
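The epoching step described above (windows of 1–24 s with 50% overlap) can be sketched as follows; the placeholder signal and the choice of a 21 s window are illustrative, while 173.61 Hz is the sampling rate of the Bonn EEG recordings.

```python
import numpy as np

def segment(eeg, fs, win_s, overlap=0.5):
    """Split a 1-D EEG signal into windows of win_s seconds with the given fractional overlap."""
    win = int(win_s * fs)
    step = int(win * (1.0 - overlap))          # 50% overlap -> step of half a window
    starts = range(0, len(eeg) - win + 1, step)
    return np.stack([eeg[s:s + win] for s in starts])

fs = 173.61                                    # sampling rate of the University of Bonn dataset
eeg = np.random.randn(int(60 * fs))            # placeholder 60 s signal
epochs = segment(eeg, fs, win_s=21)            # the window length found to perform well
print(epochs.shape)                            # (n_epochs, samples_per_epoch)
```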

12 pages, 5793 KiB  
Article
Learning-Based Image Damage Area Detection for Old Photo Recovery
by Tien-Ying Kuo, Yu-Jen Wei, Po-Chyi Su and Tzu-Hao Lin
Sensors 2022, 22(21), 8580; https://doi.org/10.3390/s22218580 - 07 Nov 2022
Cited by 2 | Viewed by 2023
Abstract
Most methods for repairing damaged old photos are manual or semi-automatic. With these methods, the damaged region must first be manually marked so that it can be repaired later either by hand or by an algorithm. However, damage marking is a time-consuming and labor-intensive process. Although there are a few fully automatic repair methods, they are in the style of end-to-end repairing, which means they provide no control over damaged area detection, potentially destroying, or failing to fully preserve, valuable historical photos. Therefore, this paper proposes a deep learning-based architecture for automatically detecting damaged areas of old photos. We designed a damage detection model to automatically and correctly mark damaged areas in photos, and this damage can subsequently be repaired using any existing inpainting method. Our experimental results show that our proposed damage detection model can detect complex damaged areas in old photos automatically and effectively. The damage marking time is substantially reduced to less than 0.01 s per photo to speed up old photo recovery processing. Full article
(This article belongs to the Special Issue Data, Signal and Image Processing and Applications in Sensors II)
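Since the detected damage mask can be handed to "any existing inpainting method", a minimal sketch of that hand-off is shown below using OpenCV's classical inpainting; the detection network itself is not shown, and the file names and 0.5 threshold are placeholders.

```python
import cv2
import numpy as np

# `photo` is the scanned old photo; `damage_prob` is a per-pixel damage probability
# in [0, 1] produced by some detection model (assumed, not shown here).
photo = cv2.imread("old_photo.png")
damage_prob = np.load("damage_prob.npy")

mask = (damage_prob > 0.5).astype(np.uint8) * 255       # binarise the predicted damage map
restored = cv2.inpaint(photo, mask, 3, cv2.INPAINT_TELEA)  # radius 3 px, Telea inpainting
cv2.imwrite("restored.png", restored)
```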

15 pages, 1190 KiB  
Article
Non-Local Temporal Difference Network for Temporal Action Detection
by Yilong He, Xiao Han, Yong Zhong and Lishun Wang
Sensors 2022, 22(21), 8396; https://doi.org/10.3390/s22218396 - 01 Nov 2022
Cited by 2 | Viewed by 1312
Abstract
As an important part of video understanding, temporal action detection (TAD) has wide application scenarios. It aims to simultaneously predict the boundary position and class label of every action instance in an untrimmed video. Most of the existing temporal action detection methods adopt a stacked convolutional block strategy to model long temporal structures. However, most of the information between adjacent frames is redundant, and distant information is weakened after multiple convolution operations. In addition, the durations of action instances vary widely, making it difficult for single-scale modeling to fit complex video structures. To address these issues, we propose a non-local temporal difference network (NTD), including a chunk convolution (CC) module, a multiple temporal coordination (MTC) module, and a temporal difference (TD) module. The TD module adaptively enhances the motion information and boundary features with temporal attention weights. The CC module evenly divides the input sequence into N chunks, using multiple independent convolution blocks to simultaneously extract features from neighboring chunks. It thereby captures information delivered from distant frames while avoiding being trapped in local convolution. The MTC module designs a cascade residual architecture, which realizes multiscale temporal feature aggregation without introducing additional parameters. The NTD achieves state-of-the-art performance on two large-scale datasets, 36.2% mAP@avg and 71.6% mAP@0.5 on ActivityNet-v1.3 and THUMOS-14, respectively. Full article
(This article belongs to the Special Issue Data, Signal and Image Processing and Applications in Sensors II)

14 pages, 1880 KiB  
Article
A New Method for Image Protection Using Periodic Haar Piecewise-Linear Transform and Watermarking Technique
by Andrzej Dziech, Piotr Bogacki and Jan Derkacz
Sensors 2022, 22(21), 8106; https://doi.org/10.3390/s22218106 - 22 Oct 2022
Viewed by 998
Abstract
The paper presents a novel data-embedding method based on the Periodic Haar Piecewise-Linear (PHL) transform. The theoretical background behind the PHL transform concept is introduced. The proposed watermarking method assumes embedding hidden information in the PHL transform domain using the luminance channel of the original image. The watermark is embedded by modifying the coefficients with relatively low values. The proposed method was verified based on the measurement of the visual quality of an image with a watermark with respect to the length of the embedded information. In addition, the bit error rate (BER) is also considered for different sizes of a watermark. Furthermore, a method for the detection of image manipulation is presented. The elaborated technique seems to be suitable for applications in digital signal and image processing where high imperceptibility and low BER are required, and information security is of high importance. In particular, this method can be applied in systems where the sensitive data is transmitted or stored and needs to be protected appropriately (e.g., in medical image processing). Full article
(This article belongs to the Special Issue Data, Signal and Image Processing and Applications in Sensors II)

17 pages, 10018 KiB  
Article
Material Translation Based on Neural Style Transfer with Ideal Style Image Retrieval
by Gibran Benitez-Garcia, Hiroki Takahashi and Keiji Yanai
Sensors 2022, 22(19), 7317; https://doi.org/10.3390/s22197317 - 27 Sep 2022
Cited by 1 | Viewed by 1776
Abstract
The field of Neural Style Transfer (NST) has led to interesting applications that enable us to transform reality as human beings perceive it. Particularly, NST for material translation aims to transform the material of an object into that of a target material from a reference image. Since the target material (style) usually comes from a different object, the quality of the synthesized result totally depends on the reference image. In this paper, we propose a material translation method based on NST with automatic style image retrieval. The proposed CNN-feature-based image retrieval aims to find the ideal reference image that best translates the material of an object. An ideal reference image must share semantic information with the original object while containing distinctive characteristics of the desired material (style). Thus, we refine the search by selecting the most-discriminative images from the target material, while focusing on object semantics by removing its style information. To translate materials to object regions, we combine a real-time material segmentation method with NST. In this way, the material of the retrieved style image is transferred to the segmented areas only. We evaluate our proposal with different state-of-the-art NST methods, including conventional and recently proposed approaches. Furthermore, with a human perceptual study applied to 100 participants, we demonstrate that synthesized images of stone, wood, and metal can be perceived as real and even chosen over legitimate photographs of such materials. Full article
(This article belongs to the Special Issue Data, Signal and Image Processing and Applications in Sensors II)

12 pages, 1692 KiB  
Article
Estimation of Respiratory Frequency in Women and Men by Kubios HRV Software Using the Polar H10 or Movesense Medical ECG Sensor during an Exercise Ramp
by Bruce Rogers, Marcelle Schaffarczyk and Thomas Gronwald
Sensors 2022, 22(19), 7156; https://doi.org/10.3390/s22197156 - 21 Sep 2022
Cited by 9 | Viewed by 4148
Abstract
Monitoring of the physiologic metric respiratory frequency (RF) has been shown to be of value in health, disease, and exercise science. Both heart rate (HR) and heart rate variability (HRV), as represented by variation in RR interval timing, as well as analysis of ECG waveform variability, have shown potential for its measurement. Validation of RF accuracy using newer consumer hardware and software applications has been sparse. The intent of this report is to assess the precision of the RF derived using Kubios HRV Premium software version 3.5 with the Movesense Medical sensor single-channel ECG (MS ECG) and the Polar H10 (H10) HR monitor. Gas exchange data (GE), RR intervals (H10), and continuous ECG (MS ECG) were recorded from 21 participants performing an incremental cycling ramp to failure. Results showed high correlations between the reference GE and both the H10 (r = 0.85, SEE = 4.2) and the MS ECG (r = 0.95, SEE = 2.6). Although median values were statistically different via Wilcoxon testing, adjusted median differences were clinically small for the H10 (RF about 1 breath/min) and trivial for the MS ECG (RF about 0.1 breaths/min). ECG-based measurement with the MS ECG showed reduced bias and limits of agreement (maximal bias, −2.0 breaths/min; maximal LoA, 6.1 to −10.0 breaths/min) compared to the H10 (maximal bias, −3.9 breaths/min; maximal LoA, 8.2 to −16.0 breaths/min). In conclusion, the RF derived from the combination of the MS ECG sensor with Kubios HRV Premium software tracked closely with the reference device through an exercise ramp, illustrating the potential for this system to be of practical use during endurance exercise. Full article
(This article belongs to the Special Issue Data, Signal and Image Processing and Applications in Sensors II)
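One common way respiratory frequency is inferred from RR interval data is to resample the beat-to-beat series evenly and locate the spectral peak in the respiratory band. The sketch below illustrates that generic approach only; it is not the Kubios algorithm, and the resampling rate and frequency band are assumptions.

```python
import numpy as np
from scipy.signal import welch
from scipy.interpolate import interp1d

def resp_freq_from_rr(rr_s, fs_resample=4.0):
    """Estimate respiratory frequency (breaths/min) from RR intervals given in seconds."""
    rr = np.asarray(rr_s, dtype=float)
    t = np.cumsum(rr)                                  # beat occurrence times
    grid = np.arange(t[0], t[-1], 1.0 / fs_resample)   # even time grid
    rr_even = interp1d(t, rr, kind="cubic")(grid)      # resampled RR series
    f, pxx = welch(rr_even - rr_even.mean(), fs=fs_resample,
                   nperseg=min(256, len(rr_even)))
    band = (f >= 0.15) & (f <= 0.8)                    # plausible breathing band during exercise (assumption)
    return 60.0 * f[band][np.argmax(pxx[band])]
```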

19 pages, 9130 KiB  
Article
Automatic Stones Classification through a CNN-Based Approach
by Mauro Tropea, Giuseppe Fedele, Raffaella De Luca, Domenico Miriello and Floriano De Rango
Sensors 2022, 22(16), 6292; https://doi.org/10.3390/s22166292 - 21 Aug 2022
Cited by 7 | Viewed by 2519
Abstract
This paper presents an automatic recognition system for classifying stones belonging to different Calabrian quarries (Southern Italy). The tool for stone recognition has been developed in the SILPI project (acronym of “Sistema per l’Identificazione di Lapidei Per Immagini”), financed by POR Calabria FESR-FSE 2014-2020. Our study is based on Convolutional Neural Networks (CNNs), which are used in the literature for many different tasks such as speech recognition, natural language processing, bioinformatics, image classification and much more. In particular, we propose a two-stage hybrid approach based on the use of a Deep Learning (DL) model, in our case a CNN, in the first stage and a Machine Learning (ML) model in the second one. In this work, we discuss a possible solution to stone classification which uses a CNN for the feature extraction phase and the Softmax or Multinomial Logistic Regression (MLR), Support Vector Machine (SVM), k-Nearest Neighbors (kNN), Random Forest (RF) and Gaussian Naive Bayes (GNB) ML techniques to perform the classification phase, basing our study on the approach called Transfer Learning (TL). We show the image acquisition process used to collect adequate information for creating an appropriate database of the stone typologies present in the Calabrian quarries, also performing the identification of quarries in the considered region. Finally, we show a comparison of different DL and ML combinations in our two-stage hybrid model solution. Full article
(This article belongs to the Special Issue Data, Signal and Image Processing and Applications in Sensors II)
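The two-stage hybrid idea (a pretrained CNN as feature extractor, a classical ML model as classifier) can be sketched as follows; the feature file, label file, and SVM hyperparameters are assumptions, and any of the other listed classifiers could be swapped in for the second stage.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Stage 1 (assumed done elsewhere): a pretrained CNN maps each stone image to a
# feature vector, e.g. X of shape (n_images, n_features); y holds the quarry labels.
X = np.load("cnn_features.npy")
y = np.load("quarry_labels.npy")

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)

# Stage 2: any classical classifier (SVM shown; kNN, RF, GNB, MLR are drop-in swaps)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
clf.fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```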

21 pages, 4188 KiB  
Article
A Multi-Sensor Data-Fusion Method Based on Cloud Model and Improved Evidence Theory
by Xinjian Xiang, Kehan Li, Bingqiang Huang and Ying Cao
Sensors 2022, 22(15), 5902; https://doi.org/10.3390/s22155902 - 07 Aug 2022
Cited by 6 | Viewed by 2208
Abstract
Heterogeneous multi-sensor devices are essential components of information-aware systems. Because of the ambiguity and contradictory nature of multi-sensor data, a data-fusion method based on the cloud model and improved evidence theory is proposed. To complete the conversion from quantitative to qualitative data, the cloud model is employed to construct the basic probability assignment (BPA) function of the evidence corresponding to each data source. To address the issue that traditional evidence theory produces results that do not correspond to the facts when fusing conflicting evidence, three measures, the Jousselme distance, cosine similarity, and the Jaccard coefficient, are combined to measure the similarity of the evidence. The Hellinger distance of the interval is used to calculate the credibility of the evidence. The similarity and credibility are combined to improve the evidence, and the fusion is performed according to Dempster’s rule to finally obtain the results. The numerical example results show that the proposed improved evidence theory method has better convergence and focus, and the confidence in the correct proposition is up to 100%. Applying the proposed multi-sensor data-fusion method to early indoor fire detection, the method improves the accuracy by 0.9–6.4% and reduces the false alarm rate by 0.7–10.2% compared with traditional and other improved evidence theories, proving its validity and feasibility and providing a reference for multi-sensor information fusion. Full article
(This article belongs to the Special Issue Data, Signal and Image Processing and Applications in Sensors II)
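The final fusion step above is Dempster's rule of combination. For readers unfamiliar with it, here is a minimal, self-contained sketch of the plain rule on two BPAs (the fire-detection hypotheses and mass values are invented for illustration; the paper's similarity/credibility corrections are applied before this step and are not shown).

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two BPAs given as dicts mapping frozenset focal elements to masses."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb                       # mass falling on the empty set
    k = 1.0 - conflict                                # normalisation factor
    return {focal: mass / k for focal, mass in combined.items()}

F, N = frozenset({"fire"}), frozenset({"no_fire"})
m_temp = {F: 0.7, N: 0.2, F | N: 0.1}                 # illustrative BPA from a temperature sensor
m_smoke = {F: 0.6, N: 0.3, F | N: 0.1}                # illustrative BPA from a smoke sensor
print(dempster_combine(m_temp, m_smoke))
```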

26 pages, 68706 KiB  
Article
Motion Blur Kernel Rendering Using an Inertial Sensor: Interpreting the Mechanism of a Thermal Detector
by Kangil Lee, Yuseok Ban and Changick Kim
Sensors 2022, 22(5), 1893; https://doi.org/10.3390/s22051893 - 28 Feb 2022
Cited by 5 | Viewed by 4001
Abstract
Various types of motion blur are frequently observed in the images captured by sensors based on thermal and photon detectors. The difference in mechanisms between thermal and photon detectors directly results in different patterns of motion blur. Motivated by this observation, we propose a novel method to synthesize blurry images from sharp images by analyzing the mechanisms of the thermal detector. Further, we propose a novel blur kernel rendering method, which combines our proposed motion blur model with the inertial sensor in the thermal image domain. The accuracy of the blur kernel rendering method is evaluated by the task of thermal image deblurring. We construct a synthetic blurry image dataset based on acquired thermal images using an infrared camera for evaluation. This dataset is the first blurry thermal image dataset with ground-truth images in the thermal image domain. Qualitative and quantitative experiments are extensively carried out on our dataset, which show that our proposed method outperforms state-of-the-art methods. Full article
(This article belongs to the Special Issue Data, Signal and Image Processing and Applications in Sensors II)

22 pages, 10054 KiB  
Article
Hyperspectral Image Labeling and Classification Using an Ensemble Semi-Supervised Machine Learning Approach
by Vidya Manian, Estefanía Alfaro-Mejía and Roger P. Tokars
Sensors 2022, 22(4), 1623; https://doi.org/10.3390/s22041623 - 18 Feb 2022
Cited by 12 | Viewed by 3411
Abstract
Hyperspectral remote sensing has tremendous potential for monitoring land cover and water bodies from the rich spatial and spectral information contained in the images. It is a time and resource consuming task to obtain groundtruth data for these images by field sampling. A semi-supervised method for labeling and classification of hyperspectral images is presented. The unsupervised stage consists of image enhancement by feature extraction, followed by clustering for labeling and generating the groundtruth image. The supervised stage for classification consists of a preprocessing stage involving normalization, computation of principal components, and feature extraction. An ensemble of machine learning models takes the extracted features and groundtruth data from the unsupervised stage as input and a decision block then combines the output of the machines to label the image based on majority voting. The ensemble of machine learning methods includes support vector machines, gradient boosting, Gaussian classifier, and linear perceptron. Overall, the gradient boosting method gives the best performance for supervised classification of hyperspectral images. The presented ensemble method is useful for generating labeled data for hyperspectral images that do not have groundtruth information. It gives an overall accuracy of 93.74% for the Jasper hyperspectral image, 100% accuracy for the HSI2 Lake Erie images, and 99.92% for the classification of cyanobacteria or harmful algal blooms and surface scum. The method distinguishes well between blue green algae and surface scum. The full pipeline ensemble method for classifying Lake Erie images in a cloud server runs 24 times faster than a workstation. Full article
(This article belongs to the Special Issue Data, Signal and Image Processing and Applications in Sensors II)
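The supervised stage described above is a majority-vote ensemble over several classifiers trained on extracted features. The sketch below shows one way such an ensemble could be wired up in scikit-learn; the member models only approximate those named in the abstract (e.g., GaussianNB stands in for the "Gaussian classifier"), and the data files and PCA dimensionality are assumptions.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, VotingClassifier
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import Perceptron
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline

# X: per-pixel spectra (n_pixels, n_bands); y: labels produced by the unsupervised
# clustering stage (both assumed to be available already).
X, y = np.load("spectra.npy"), np.load("cluster_labels.npy")

ensemble = make_pipeline(
    PCA(n_components=20),                   # principal components, as in the preprocessing stage
    VotingClassifier(
        estimators=[
            ("svm", SVC(kernel="rbf")),
            ("gb", GradientBoostingClassifier()),
            ("gnb", GaussianNB()),
            ("perc", Perceptron()),
        ],
        voting="hard",                      # majority voting over the members
    ),
)
ensemble.fit(X, y)
```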

13 pages, 624 KiB  
Article
A Convolutional Neural Network-Based Method for Discriminating Shadowed Targets in Frequency-Modulated Continuous-Wave Radar Systems
by Ammar Mohanna, Christian Gianoglio, Ali Rizik and Maurizio Valle
Sensors 2022, 22(3), 1048; https://doi.org/10.3390/s22031048 - 28 Jan 2022
Cited by 4 | Viewed by 2540
Abstract
The radar shadow effect prevents reliable target discrimination when a target lies in the shadow region of another target. In this paper, we address this issue in the case of Frequency-Modulated Continuous-Wave (FMCW) radars, which are low-cost and small-sized devices with an increasing number of applications. We propose a novel method based on Convolutional Neural Networks that take as input the spectrograms obtained after a Short-Time Fourier Transform (STFT) analysis of the radar-received signal. The method discerns whether a target is or is not in the shadow region of another target. The proposed method achieves test accuracy of 92% with a standard deviation of 2.86%. Full article
(This article belongs to the Special Issue Data, Signal and Image Processing and Applications in Sensors II)
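The CNN in this work takes STFT spectrograms of the radar-received signal as input. A minimal sketch of producing such an input with SciPy follows; the sampling rate, window parameters and file name are illustrative assumptions rather than the authors' settings.

```python
import numpy as np
from scipy.signal import stft

fs = 1.0e6                                    # illustrative ADC sampling rate of the FMCW beat signal
x = np.load("beat_signal.npy")                # one received chirp sequence (assumed available)

f, t, Z = stft(x, fs=fs, nperseg=256, noverlap=192)
spectrogram = 20 * np.log10(np.abs(Z) + 1e-12)   # log-magnitude time-frequency image

# `spectrogram` (freq bins x time frames) would then be normalised and fed to the
# convolutional network as a single-channel image.
```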

18 pages, 3414 KiB  
Article
Towards Semantic Photogrammetry: Generating Semantically Rich Point Clouds from Architectural Close-Range Photogrammetry
by Arnadi Murtiyoso, Eugenio Pellis, Pierre Grussenmeyer, Tania Landes and Andrea Masiero
Sensors 2022, 22(3), 966; https://doi.org/10.3390/s22030966 - 26 Jan 2022
Cited by 21 | Viewed by 3538
Abstract
Developments in the field of artificial intelligence have made great strides in automatic semantic segmentation, both in the 2D (image) and 3D spaces. Within the context of 3D recording technology, it has also seen application in several areas, most notably in creating semantically rich point clouds, which is usually performed manually. In this paper, we propose the introduction of deep learning-based semantic image segmentation into the photogrammetric 3D reconstruction and classification workflow. The main objective is to be able to introduce semantic classification at the beginning of the classical photogrammetric workflow in order to automatically create classified dense point clouds by the end of the said workflow. In this regard, automatic image masking depending on pre-determined classes was performed using a previously trained neural network. The image masks were then employed during dense image matching in order to constrain the process to the respective classes, thus automatically creating semantically classified point clouds as the final output. Results show that the developed method is promising, with automation of the whole process feasible from input (images) to output (labelled point clouds). Quantitative assessment gave good results for specific classes, e.g., building facades and windows, with IoU scores of 0.79 and 0.77, respectively. Full article
(This article belongs to the Special Issue Data, Signal and Image Processing and Applications in Sensors II)

17 pages, 1682 KiB  
Article
SSA with CWT and k-Means for Eye-Blink Artifact Removal from Single-Channel EEG Signals
by Ajay Kumar Maddirala and Kalyana C. Veluvolu
Sensors 2022, 22(3), 931; https://doi.org/10.3390/s22030931 - 25 Jan 2022
Cited by 16 | Viewed by 3295
Abstract
Recently, the use of portable electroencephalogram (EEG) devices to record brain signals, both in health care monitoring and in other applications such as fatigue detection in drivers, has increased due to their low cost and ease of use. However, the measured EEG signals are always mixed with the electrooculogram (EOG), which results from eyelid blinking or eye movements. Eye blinking/movement is an uncontrollable activity that results in a high-amplitude, slowly time-varying component mixed into the measured EEG signal. The presence of these artifacts misleads our understanding of the underlying brain state. As portable EEG devices comprise few EEG channels, or sometimes a single EEG channel, classical artifact removal techniques such as blind source separation methods cannot be used to remove these artifacts from a single-channel EEG signal. Hence, there is a demand for the development of new single-channel-based artifact removal techniques. Singular spectrum analysis (SSA) has been widely used as a single-channel-based eye-blink artifact removal technique. However, while removing the artifact, the low-frequency components from the non-artifact region of the EEG signal are also removed by SSA. To preserve these low-frequency components, in this paper, we propose a new methodology that integrates SSA with the continuous wavelet transform (CWT) and the k-means clustering algorithm and removes the eye-blink artifact from single-channel EEG signals without altering the low frequencies of the EEG signal. The proposed method is evaluated on both synthetic and real EEG signals. The results also show the superiority of the proposed method over the existing methods. Full article
(This article belongs to the Special Issue Data, Signal and Image Processing and Applications in Sensors II)
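For context, the SSA step referred to above embeds the single-channel signal into a Hankel trajectory matrix and decomposes it by SVD; the sketch below shows only that generic decomposition (window length and grouping are left open), while the paper's actual contribution, grouping components via CWT features and k-means, is indicated only in the closing comment.

```python
import numpy as np

def ssa_components(x, window):
    """Decompose a 1-D signal into SSA components via the trajectory-matrix SVD."""
    n = len(x)
    k = n - window + 1
    traj = np.column_stack([x[i:i + window] for i in range(k)])    # Hankel trajectory matrix
    u, s, vt = np.linalg.svd(traj, full_matrices=False)
    comps = []
    for i in range(len(s)):
        elem = s[i] * np.outer(u[:, i], vt[i])                     # rank-1 elementary matrix
        # Diagonal averaging (Hankelisation) back to a 1-D series
        comp = np.array([np.mean(elem[::-1].diagonal(j)) for j in range(-window + 1, k)])
        comps.append(comp)
    return np.array(comps)                                         # comps.sum(axis=0) reconstructs x

# The paper then groups these components (via CWT features and k-means) into "artifact"
# and "EEG" subsets; subtracting the artifact group yields the cleaned signal.
```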

17 pages, 8936 KiB  
Article
Vital Signal Detection Using Multi-Radar for Reductions in Body Movement Effects
by Ah-Jung Jang, In-Seong Lee and Jong-Ryul Yang
Sensors 2021, 21(21), 7398; https://doi.org/10.3390/s21217398 - 07 Nov 2021
Cited by 5 | Viewed by 2900
Abstract
Vital signal detection using multiple radars is proposed to reduce the signal degradation from a subject’s body movement. The phase variation in the transceiving signals of continuous-wave radar due to respiration and heartbeat is generated by the body surface movement of the organs monitored in the line-of-sight (LOS) of the radar. The body movement signals obtained by two adjacent radars can be assumed to be the same over a certain distance. However, the vital signals are different in each radar, and each radar has a different LOS because of the asymmetric movement of lungs and heart. The proposed method uses two adjacent radars with different LOS to obtain correlated signals that reinforce the difference in the asymmetrical movement of the organs. The correlated signals can improve the signal-to-noise ratio in vital signal detection because of a reduction in the body movement effect. Two radars at different frequencies in the 5.8 GHz band are implemented to reduce direct signal coupling. Measurement results using the radars arranged at angles of 30°, 45°, and 60° showed that the proposed method can detect the vital signals with a mean accuracy of 97.8% for the subject moving at a maximum velocity of 53.4 mm/s. Full article
(This article belongs to the Special Issue Data, Signal and Image Processing and Applications in Sensors II)

23 pages, 4116 KiB  
Article
Data Enhancement via Low-Rank Matrix Reconstruction in Pulsed Thermography for Carbon-Fibre-Reinforced Polymers
by Samira Ebrahimi, Julien R. Fleuret, Matthieu Klein, Louis-Daniel Théroux, Clemente Ibarra-Castanedo and Xavier P. V. Maldague
Sensors 2021, 21(21), 7185; https://doi.org/10.3390/s21217185 - 29 Oct 2021
Cited by 2 | Viewed by 1548
Abstract
Pulsed thermography is a commonly used non-destructive testing method and is increasingly studied for the assessment of advanced materials such as carbon fibre-reinforced polymer (CFRP). Different processing approaches have been proposed to detect and characterize anomalies that may be generated in structures during the manufacturing cycle or service period. In this study, matrix decomposition using Robust PCA via Inexact-ALM is investigated as a pre- and post-processing approach in combination with state-of-the-art approaches (i.e., PCT, PPT and PLST) on pulsed thermography thermal data. An academic sample with several artificial defects of different types, i.e., flat-bottom holes (FBH), pull-outs (PO) and Teflon inserts (TEF), was employed to assess and compare the defect detection and segmentation capabilities of the different processing approaches. For this purpose, the contrast-to-noise ratio (CNR) and similarity coefficient were used as quantitative metrics. The results show a clear improvement in CNR when Robust PCA is applied as a pre-processing technique; CNR values for FBH, PO and TEF improve by up to 164%, 237% and 80%, respectively, when compared to principal component thermography (PCT), whilst the CNR improvement with respect to pulsed phase thermography (PPT) was 77%, 101% and 289%, respectively. In the case of partial least squares thermography (PLST), Robust PCA improved the results not only when used as a pre-processing technique but also when used as a post-processing technique; however, the improvement is higher for FBHs and POs after pre-processing. Pre-processing increases CNR scores for FBHs and POs by ratios from 0.43% to 115.88% and from 13.48% to 216.63%, respectively. Similarly, post-processing enhances the FBH and PO results by ratios between 9.62% and 296.9% and between 16.98% and 92.6%, respectively. A low-rank matrix computed from Robust PCA as a pre-processing technique on raw data before using PCT and PPT can enhance the results for 67% of the defects. Using low-rank matrix decomposition from Robust PCA as a pre- and post-processing technique outperforms PLST results for 69% and 67% of the defects, respectively. These results clearly indicate that pre-processing pulsed thermography data with Robust PCA can elevate the defect detectability of advanced processing techniques, such as PCT, PPT and PLST, while post-processing using the same methods can, in some cases, deteriorate the results. Full article
(This article belongs to the Special Issue Data, Signal and Image Processing and Applications in Sensors II)
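As a quick reference for the CNR metric quoted throughout the abstract, one common formulation is sketched below (the paper's exact definition of the defect and sound-area regions may differ).

```python
import numpy as np

def cnr(image, defect_mask, background_mask):
    """Contrast-to-noise ratio of a defect region against a sound (background) region."""
    d = image[defect_mask]          # pixels inside the defect area
    b = image[background_mask]      # pixels in a nearby defect-free reference area
    return abs(d.mean() - b.mean()) / b.std()

# `image` would be one processed thermogram (e.g., a PCT component), and the two
# boolean masks select the defect and reference regions used for the comparison.
```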

13 pages, 2218 KiB  
Article
Corona Discharge Characteristics under Variable Frequency and Pressure Environments
by Pau Bas-Calopa, Jordi-Roger Riba and Manuel Moreno-Eguilaz
Sensors 2021, 21(19), 6676; https://doi.org/10.3390/s21196676 - 08 Oct 2021
Cited by 14 | Viewed by 2420
Abstract
More electric aircraft (MEAs) are paving the path to all-electric aircraft (AEAs), which make much more intensive use of electrical power than conventional aircraft. Due to strict weight requirements, both MEA and AEA systems require the distribution voltage to be increased in order to limit the required electrical current. Under this paradigm, new issues arise, in part due to the voltage rise and in part because of the harsh environments found in aircraft systems, especially those related to low-pressure and high-frequency operation. Increased voltage levels, high operating frequencies, low-pressure environments and reduced distances between wires put insulation systems at risk, so partial discharges (PDs) and electrical breakdown are more likely to occur. This paper presents an experimental analysis of the effect of low-pressure environments and high operating frequencies on the visual corona voltage, since the occurrence of corona discharges is directly related to arc tracking and insulation degradation in wiring systems. To this end, a rod-to-plane electrode configuration is tested in the 20–100 kPa and 50–1000 Hz ranges, which cover most aircraft applications, so that the corona extinction voltage is experimentally determined using a low-cost, high-resolution CMOS imaging sensor that is sensitive to the visible and near-ultraviolet (UV) spectra. The imaging sensor locates the discharge points and the intensity of the discharge, offering simplicity and low-cost measurements with high sensitivity. Moreover, to assess the performance of this sensor, the discharges are also acquired by analyzing the leakage current using an inexpensive resistor and a fast oscilloscope. The experimental data presented in this paper can be useful in designing insulation systems for MEA and AEA applications. Full article
(This article belongs to the Special Issue Data, Signal and Image Processing and Applications in Sensors II)

22 pages, 2015 KiB  
Article
Effective Connectivity for Decoding Electroencephalographic Motor Imagery Using a Probabilistic Neural Network
by Muhammad Ahsan Awais, Mohd Zuki Yusoff, Danish M. Khan, Norashikin Yahya, Nidal Kamel and Mansoor Ebrahim
Sensors 2021, 21(19), 6570; https://doi.org/10.3390/s21196570 - 30 Sep 2021
Cited by 5 | Viewed by 2565
Abstract
Motor imagery (MI)-based brain–computer interfaces have gained much attention in the last few years. They provide the ability to control external devices, such as prosthetic arms and wheelchairs, by using brain activities. Several researchers have reported the inter-communication of multiple brain regions during motor tasks, thus making it difficult to isolate one or two brain regions in which motor activities take place. Therefore, a deeper understanding of the brain’s neural patterns is important for BCI in order to provide more useful and insightful features. Thus, brain connectivity provides a promising approach to solving the stated shortcomings by considering inter-channel/region relationships during motor imagination. This study used effective connectivity in the brain in terms of the partial directed coherence (PDC) and directed transfer function (DTF) as intensively unconventional feature sets for motor imagery (MI) classification. MANOVA-based analysis was performed to identify statistically significant connectivity pairs. Furthermore, the study sought to predict MI patterns by using four classification algorithms—an SVM, KNN, decision tree, and probabilistic neural network. The study provides a comparative analysis of all of the classification methods using two-class MI data extracted from the PhysioNet EEG database. The proposed techniques based on a probabilistic neural network (PNN) as a classifier and PDC as a feature set outperformed the other classification and feature extraction techniques with a superior classification accuracy and a lower error rate. The research findings indicate that when the PDC was used as a feature set, the PNN attained the greatest overall average accuracy of 98.65%, whereas the same classifier was used to attain the greatest accuracy of 82.81% with the DTF. This study validates the activation of multiple brain regions during a motor task by achieving better classification outcomes through brain connectivity as compared to conventional features. Since the PDC outperformed the DTF as a feature set with its superior classification accuracy and low error rate, it has great potential for application in MI-based brain–computer interfaces. Full article
(This article belongs to the Special Issue Data, Signal and Image Processing and Applications in Sensors II)

15 pages, 2609 KiB  
Article
Comparing Methods of Feature Extraction of Brain Activities for Octave Illusion Classification Using Machine Learning
by Nina Pilyugina, Akihiko Tsukahara and Keita Tanaka
Sensors 2021, 21(19), 6407; https://doi.org/10.3390/s21196407 - 25 Sep 2021
Cited by 2 | Viewed by 2307
Abstract
The aim of this study was to find an efficient method to determine features that characterize octave illusion data. Specifically, this study compared the efficiency of several automatic feature selection methods for automatic feature extraction from auditory steady-state response (ASSR) data in brain activities, to distinguish auditory octave illusion and non-illusion groups by the difference in ASSR amplitudes using machine learning. We compared univariate selection, recursive feature elimination, principal component analysis, and feature importance, verifying the results of the feature selection methods by using several machine learning algorithms: linear regression, random forest, and support vector machine. Univariate selection with the SVM as the classification method showed the highest accuracy, 75%, compared to 66.6% without feature selection. The results obtained will be used in future work on explaining the mechanism behind the octave illusion phenomenon and creating an algorithm for automatic octave illusion classification. Full article
(This article belongs to the Special Issue Data, Signal and Image Processing and Applications in Sensors II)
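The best-performing pipeline reported above, univariate selection followed by an SVM, can be sketched in scikit-learn as below; the feature files, the number of retained features and the kernel choice are assumptions, not the study's exact configuration.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# X: ASSR amplitude features per trial (n_trials, n_features); y: illusion vs. non-illusion labels.
X, y = np.load("assr_features.npy"), np.load("illusion_labels.npy")

pipe = make_pipeline(
    StandardScaler(),
    SelectKBest(score_func=f_classif, k=10),   # univariate selection (k is an assumption)
    SVC(kernel="rbf"),
)
print("CV accuracy:", cross_val_score(pipe, X, y, cv=5).mean())
```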

21 pages, 11218 KiB  
Article
IndoorCare: Low-Cost Elderly Activity Monitoring System through Image Processing
by Daniel Fuentes, Luís Correia, Nuno Costa, Arsénio Reis, José Ribeiro, Carlos Rabadão, João Barroso and António Pereira
Sensors 2021, 21(18), 6051; https://doi.org/10.3390/s21186051 - 09 Sep 2021
Cited by 6 | Viewed by 1977
Abstract
The Portuguese population is aging at an increasing rate, which introduces new problems, particularly in rural areas, where the population is small and widely spread throughout the territory. These people, mostly elderly, have low income and are often isolated and socially excluded. This work researches and proposes an affordable Ambient Assisted Living (AAL)-based solution to monitor the activities of elderly individuals, inside their homes, in a pervasive and non-intrusive way, while preserving their privacy. The solution uses a set of low-cost IoT sensor devices, computer vision algorithms and reasoning rules, to acquire data and recognize the activities performed by a subject inside a home. A conceptual architecture and a functional prototype were developed, the prototype being successfully tested in an environment similar to a real case scenario. The system and the underlying concept can be used as a building block for remote and distributed elderly care services, in which the elderly live autonomously in their homes, but have the attention of a caregiver when needed. Full article
(This article belongs to the Special Issue Data, Signal and Image Processing and Applications in Sensors II)

21 pages, 8971 KiB  
Article
SAR.IoT: Secured Augmented Reality for IoT Devices Management
by Daniel Fuentes, Luís Correia, Nuno Costa, Arsénio Reis, João Barroso and António Pereira
Sensors 2021, 21(18), 6001; https://doi.org/10.3390/s21186001 - 07 Sep 2021
Cited by 3 | Viewed by 2481
Abstract
Currently, solutions based on the Internet of Things (IoT) concept are increasingly being adopted in several fields, namely, industry, agriculture, and home automation. The costs associated with this type of equipment are reasonably small, as IoT devices usually do not have output peripherals to display information about their status (e.g., a screen or a printer), although they may have informative LEDs, which are sometimes insufficient. For most IoT devices, a minimalist display to output the device’s running status (i.e., what the device is doing) might cost much more than the actual IoT device. Occasionally, it might become necessary to visualize the IoT device output, making it necessary to find solutions that show the hardware output information in real time, without requiring extra equipment beyond what the administrator usually carries. To solve the above, a technological solution that allows for the visualization of IoT device information in real time, using augmented reality and a simple smartphone, was developed and analyzed. In addition, the system created integrates a security layer, at the level of AR, to protect the displayed data from unwanted eyes. The results of the tests carried out allowed us to validate the operation of the solution when accessing the information of the IoT devices, verify the operation of the security layer in AR, analyze the interaction between smartphones, the platform, and the devices, and check which AR markers are most suitable for this use case. This work results in a secure augmented reality solution, which can be used with a simple smartphone, to monitor/manage IoT devices in industrial, laboratory or research environments. Full article
(This article belongs to the Special Issue Data, Signal and Image Processing and Applications in Sensors II)
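The sketch below is a minimal, assumption-laden illustration of the idea of gating AR-overlaid device information behind an authorization check; the marker identifiers, token handling, and device records are hypothetical and do not reproduce the paper's protocol.

```python
# Minimal sketch, not the SAR.IoT protocol: an AR client resolves a detected
# marker to an IoT device and only receives status text to overlay if its
# access token is authorized. All identifiers below are hypothetical.
DEVICES = {
    "marker_42": {"name": "lab-node-3", "status": {"temp_C": 21.4, "uptime_s": 86400}},
}
AUTHORIZED_TOKENS = {"example-admin-token"}

def status_for_overlay(marker_id: str, token: str):
    """Return the text an AR overlay would render next to the marker, or None."""
    if token not in AUTHORIZED_TOKENS:
        return None  # security layer: unauthorized viewers see nothing
    device = DEVICES.get(marker_id)
    if device is None:
        return None
    return "\n".join([device["name"]] + [f"{k}: {v}" for k, v in device["status"].items()])

print(status_for_overlay("marker_42", "example-admin-token"))
```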

17 pages, 1769 KiB  
Article
SFPD: Simultaneous Face and Person Detection in Real-Time for Human–Robot Interaction
by Marc-André Fiedler, Philipp Werner, Aly Khalifa and Ayoub Al-Hamadi
Sensors 2021, 21(17), 5918; https://doi.org/10.3390/s21175918 - 02 Sep 2021
Cited by 5 | Viewed by 2146
Abstract
Face and person detection are important tasks in computer vision, as they represent the first component in many recognition systems, such as face recognition, facial expression analysis, body pose estimation, face attribute detection, or human action recognition. Their detection rate and runtime are therefore crucial for the performance of the overall system. In this paper, we combine face and person detection in one framework with the goal of reaching a detection performance that is competitive with the state of the art of lightweight object-specific networks while maintaining real-time processing speed for both detection tasks together. To combine face and person detection in one network, we applied multi-task learning. The difficulty lies in the fact that no datasets are available that contain both face and person annotations. Since we did not have the resources to annotate the datasets manually, which is very time-consuming, and the automatic generation of ground truth results in annotations of poor quality, we solve this issue algorithmically, by applying a special training procedure and network architecture, without the need to create new labels. Our newly developed method, called Simultaneous Face and Person Detection (SFPD), is able to detect persons and faces at 40 frames per second. Because of this good trade-off between detection performance and inference time, SFPD represents a useful and valuable real-time framework, especially for real-world applications such as human–robot interaction. Full article
(This article belongs to the Special Issue Data, Signal and Image Processing and Applications in Sensors II)
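As a hedged illustration of the multi-task training idea described above (masking the loss of whichever task a sample has no labels for, so that face-only and person-only datasets can be mixed), consider the following sketch; the loss values and masking scheme are assumptions, not the SFPD training procedure.

```python
# Hedged sketch (not the SFPD implementation): multi-task training when each
# dataset labels only one task. The loss of the unlabeled task is masked, so
# no new annotations are required. Values and shapes are illustrative.
import numpy as np

def masked_multitask_loss(face_loss, person_loss, has_face_labels, has_person_labels):
    """Average each task's per-sample loss only over samples carrying its labels."""
    face_mask = np.asarray(has_face_labels, dtype=float)
    person_mask = np.asarray(has_person_labels, dtype=float)
    face_term = float((face_loss * face_mask).sum() / max(face_mask.sum(), 1.0))
    person_term = float((person_loss * person_mask).sum() / max(person_mask.sum(), 1.0))
    return face_term + person_term

# Batch of 4 samples: two from a face-only dataset, two from a person-only dataset.
face_loss = np.array([0.8, 0.5, 0.0, 0.0])
person_loss = np.array([0.0, 0.0, 1.2, 0.9])
print(masked_multitask_loss(face_loss, person_loss, [1, 1, 0, 0], [0, 0, 1, 1]))  # -> 1.7
```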

12 pages, 1904 KiB  
Communication
Stochastic Memristive Interface for Neural Signal Processing
by Svetlana A. Gerasimova, Alexey I. Belov, Dmitry S. Korolev, Davud V. Guseinov, Albina V. Lebedeva, Maria N. Koryazhkina, Alexey N. Mikhaylov, Victor B. Kazantsev and Alexander N. Pisarchik
Sensors 2021, 21(16), 5587; https://doi.org/10.3390/s21165587 - 19 Aug 2021
Cited by 17 | Viewed by 2792
Abstract
We propose a memristive interface consisting of two FitzHugh–Nagumo electronic neurons connected via a metal–oxide (Au/Zr/ZrO2(Y)/TiN/Ti) memristive synaptic device. We create a hardware–software complex based on a commercial data acquisition system, which records a signal generated by a presynaptic electronic neuron and transmits it to a postsynaptic neuron through the memristive device. We demonstrate, numerically and experimentally, complex dynamics, including chaos and different types of neural synchronization. The main advantages of our system over similar devices are its simplicity and real-time performance. A change in the amplitude of the presynaptic neurogenerator leads to the potentiation of the memristive device due to the self-tuning of its parameters. This provides an adaptive modulation of the postsynaptic neuron output. The developed memristive interface, due to its stochastic nature, simulates a real synaptic connection, which is very promising for neuroprosthetic applications. Full article
(This article belongs to the Special Issue Data, Signal and Image Processing and Applications in Sensors II)
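For readers unfamiliar with the underlying model, the toy sketch below numerically couples two FitzHugh–Nagumo neurons through a state-dependent conductance that slowly adapts to presynaptic amplitude; the parameter values and adaptation rule are illustrative assumptions and do not model the actual metal-oxide device.

```python
# Toy numerical sketch, not a model of the Au/Zr/ZrO2(Y)/TiN/Ti device: two
# FitzHugh-Nagumo neurons coupled through a "memristive" conductance g(x)
# whose internal state x adapts to presynaptic activity (illustrative values).
def fhn_step(v, w, i_ext, a=0.7, b=0.8, eps=0.08, dt=0.05):
    """One explicit Euler step of the FitzHugh-Nagumo equations."""
    dv = v - v**3 / 3.0 - w + i_ext
    dw = eps * (v + a - b * w)
    return v + dt * dv, w + dt * dw

v1, w1 = -1.0, 1.0      # presynaptic neuron (externally driven)
v2, w2 = -1.2, 0.8      # postsynaptic neuron (driven only through the coupling)
x = 0.1                 # memristive state, kept in [0, 1]

for _ in range(4000):
    v1, w1 = fhn_step(v1, w1, i_ext=0.5)
    x = min(max(x + 0.001 * (abs(v1) - x), 0.0), 1.0)   # state potentiates with |v1|
    g = 0.05 + 0.45 * x                                  # conductance grows with state
    v2, w2 = fhn_step(v2, w2, i_ext=g * (v1 - v2))       # synaptic coupling current

print(f"final memristive state x = {x:.3f}, conductance g = {0.05 + 0.45 * x:.3f}")
```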

18 pages, 6235 KiB  
Article
A Spectrum Correction Algorithm Based on Beat Signal of FMCW Laser Ranging System
by Yi Hao, Ping Song, Xuanquan Wang and Zhikang Pan
Sensors 2021, 21(15), 5057; https://doi.org/10.3390/s21155057 - 26 Jul 2021
Cited by 5 | Viewed by 2263
Abstract
The accuracy of the target distance obtained by a frequency-modulated continuous wave (FMCW) laser ranging system is often affected by factors such as white Gaussian noise (WGN), spectrum leakage, and the picket fence effect. Traditional spectrum correction algorithms attempt to solve these problems, but their results are unsatisfactory. In this article, a decomposition filtering-based dual-window correction (DFBDWC) algorithm is proposed to alleviate the problems caused by these factors. The algorithm reduces their influence by utilizing decomposition filtering, a dual window in the time domain, and two phase values of the spectral peak in the frequency domain. A comparison of DFBDWC with the traditional algorithms, in both simulations and experiments on a purpose-built platform, shows the superior performance of DFBDWC. The maximum absolute error of the target distance calculated by this algorithm is reduced from 0.7937 m with the discrete Fourier transform (DFT) algorithm to 0.0407 m, the best among all of the spectrum correction algorithms mentioned. A high-performance FMCW laser ranging system can be realized with the proposed algorithm, which has attractive potential in a wide range of applications. Full article
(This article belongs to the Special Issue Data, Signal and Image Processing and Applications in Sensors II)
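The DFBDWC algorithm itself is not reproduced here, but the short sketch below illustrates the picket fence problem the paper targets: a plain FFT peak quantizes the estimated beat frequency (and hence the range), whereas even a simple parabolic interpolation of the peak recovers part of the lost accuracy; the signal parameters are arbitrary illustrative choices.

```python
# Hedged sketch: not the DFBDWC algorithm. It only shows the picket fence
# effect on a noisy beat signal and a simple parabolic peak interpolation.
import numpy as np

fs, n = 100e3, 1024                       # sampling rate (Hz) and record length
f_beat_true = 12_345.6                    # true beat frequency (Hz), deliberately off-bin
t = np.arange(n) / fs
sig = np.cos(2 * np.pi * f_beat_true * t) + 0.05 * np.random.randn(n)

spec = np.abs(np.fft.rfft(sig * np.hanning(n)))
k = int(np.argmax(spec))
f_fft = k * fs / n                        # plain DFT estimate: quantized to the bin grid

# Parabolic (quadratic) interpolation of the spectral peak around bin k.
alpha, beta, gamma = spec[k - 1], spec[k], spec[k + 1]
delta = 0.5 * (alpha - gamma) / (alpha - 2 * beta + gamma)
f_interp = (k + delta) * fs / n

print(f"true {f_beat_true:.1f} Hz | FFT peak {f_fft:.1f} Hz | interpolated {f_interp:.1f} Hz")
```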

Other

16 pages, 459 KiB  
Study Protocol
Mobile 5P-Medicine Approach for Cardiovascular Patients
by Ivan Miguel Pires, Hanna Vitaliyivna Denysyuk, María Vanessa Villasana, Juliana Sá, Petre Lameski, Ivan Chorbev, Eftim Zdravevski, Vladimir Trajkovik, José Francisco Morgado and Nuno M. Garcia
Sensors 2021, 21(21), 6986; https://doi.org/10.3390/s21216986 - 21 Oct 2021
Cited by 13 | Viewed by 3419
Abstract
Medicine is heading towards personalized care based on individual situations and conditions. With smartphones and increasingly miniaturized wearable devices, the sensors available on these devices can perform long-term continuous monitoring of several user health-related parameters, making them a powerful tool for a new medicine approach for these patients. Our proposed system, described in this article, aims to develop innovative solutions based on artificial intelligence techniques to empower patients with cardiovascular disease. These solutions will realize a novel 5P (Predictive, Preventive, Participatory, Personalized, and Precision) medicine approach by providing patients with personalized treatment plans and increasing their ability for self-monitoring. Such capabilities will be derived by learning algorithms from physiological data and behavioral information collected using wearables and smart devices worn by patients with health conditions. Furthermore, the development of an innovative system of smart algorithms will focus on providing monitoring techniques, predicting extreme events, generating alarms when health parameters vary, and keeping patients actively engaged in the healthcare process by promoting the adoption of healthy behaviors and well-being outcomes. The multiple features of this future system will increase the quality of life of patients with cardiovascular diseases and provide seamless contact with a healthcare professional. Full article
(This article belongs to the Special Issue Data, Signal and Image Processing and Applications in Sensors II)
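Purely as an illustration of the kind of alarm generation mentioned above, the sketch below raises an alert when the rolling mean of a wearable-derived parameter leaves a personalized range; the thresholds, window, and signal are invented examples, not part of the study protocol.

```python
# Invented example, not part of the study protocol: alarm when the rolling
# mean of wearable heart-rate samples leaves a personalized range.
def heart_rate_alarms(samples_bpm, personal_low=45, personal_high=110, window=5):
    """Return (sample_index, rolling_mean) pairs where the mean leaves the range."""
    alarms = []
    for i in range(window, len(samples_bpm) + 1):
        mean_bpm = sum(samples_bpm[i - window:i]) / window
        if not personal_low <= mean_bpm <= personal_high:
            alarms.append((i - 1, round(mean_bpm, 1)))
    return alarms

stream = [72, 75, 70, 74, 73, 118, 121, 119, 122, 125]
print(heart_rate_alarms(stream))  # alarms once the rolling mean exceeds 110 bpm
```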
