Intelligent Sensor Signal in Machine Learning

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Intelligent Sensors".

Deadline for manuscript submissions: closed (31 August 2019) | Viewed by 77981

Special Issue Editors


Prof. Dr. ByoungChul Ko
Guest Editor
Department of Computer Engineering, Keimyung University, Shindang-Dong, Dalseo-Gu, Daegu 704-701, Republic of Korea
Interests: computer vision; pattern recognition; object detection and tracking; deep learning

Dr. Deokwoo Lee
Guest Editor
Department of Computer Engineering, Keimyung University, Daegu 704-701, Republic of Korea
Interests: camera calibration; computer vision; image processing; signal processing

Special Issue Information

Dear Colleagues,

With the advancement of sensor technology, research has been actively carried out to fuse sensor signals and to extract useful information for various recognition problems based on machine learning. Signals can now be obtained from a wide range of sensors, such as wearable sensors, mobile sensors, cameras, heart rate monitoring devices, EEG head-caps and headbands, ECG sensors, breathing monitors, EMG sensors, and temperature sensors. However, because a raw sensor signal carries no meaning by itself, machine learning algorithms must be applied to process these signals and support the resulting decisions. The use of machine learning, including deep learning, is therefore well suited to these challenging tasks.

The purpose of this Special Issue is to present current developments in intelligent sensor applications and innovative sensor fusion techniques combined with machine learning, including computer vision, pattern recognition, expert systems, and deep learning. You are invited to submit contributions of original research, advancements, developments, and experiments pertaining to machine learning combined with sensors. This Special Issue therefore welcomes newly developed methods and ideas that combine data obtained from various sensors in the following fields (but not limited to them):

  • Sensor fusion techniques based on machine learning
  • Sensors and big data analysis with machine learning
  • Autonomous vehicle technologies combining sensors and machine learning
  • Wireless sensor networks and communication based on machine learning
  • Deep network structure/learning algorithm for intelligent sensing
  • Autonomous robotics with intelligent sensors and machine learning 
  • Multi-modal/task learning for decision-making and control
  • Decision algorithms for autonomous driving
  • Machine learning and artificial intelligence for traffic/quality of experience management in IoT
  • Fuzzy fusion of sensors, data, and information
  • Machine learning for IoT and sensor research challenges
  • Advanced driver assistant systems (ADAS) based on machine learning
  • State-of-practice, research overviews, experience reports, industrial experiments, and case studies in intelligent sensors or the IoT

Prof. Dr. ByoungChul Ko
Dr. Deokwoo Lee
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (18 papers)


Research


15 pages, 36176 KiB  
Article
Panchromatic Image Super-Resolution Via Self Attention-Augmented Wasserstein Generative Adversarial Network
by Juan Du, Kuanhong Cheng, Yue Yu, Dabao Wang and Huixin Zhou
Sensors 2021, 21(6), 2158; https://doi.org/10.3390/s21062158 - 19 Mar 2021
Cited by 6 | Viewed by 2337
Abstract
Panchromatic (PAN) images contain abundant spatial information that is useful for earth observation, but always suffer from low resolution (LR) due to sensor limitations and the large-scale view field. Current super-resolution (SR) methods based on the traditional attention mechanism have shown remarkable advantages but remain imperfect at reconstructing the edge details of SR images. To address this problem, an improved SR model that involves a self-attention augmented Wasserstein generative adversarial network (SAA-WGAN) is designed to dig out the reference information among multiple features for detail enhancement. We use an encoder-decoder network followed by a fully convolutional network (FCN) as the backbone to extract multi-scale features and reconstruct the high-resolution (HR) results. To exploit the relevance between multi-layer feature maps, we first integrate a convolutional block attention module (CBAM) into each skip-connection of the encoder-decoder subnet, generating weighted maps to enhance both channel-wise and spatial-wise feature representation automatically. Moreover, considering that the HR results and LR inputs are highly similar in structure, yet this cannot be fully reflected in the traditional attention mechanism, we therefore designed a self augmented attention (SAA) module, where the attention weights are produced dynamically via a similarity function between hidden features; this design allows the network to flexibly adjust the relative relevance among multi-layer features and retain long-range information, which is helpful for preserving details. In addition, the pixel-wise loss is combined with perceptual and gradient losses to achieve comprehensive supervision. Experiments on benchmark datasets demonstrate that the proposed method outperforms other SR methods in terms of both objective evaluation and visual effect. Full article
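
The core of the SAA idea, attention weights computed dynamically from a similarity function between hidden features, can be pictured with a minimal NumPy sketch; the dot-product similarity, the softmax normalization, and all shapes below are generic assumptions, not the paper's exact design.

```python
import numpy as np

def self_attention(features: np.ndarray) -> np.ndarray:
    """features: (N, C) array of N spatial positions with C channels each."""
    # pairwise similarity between hidden features, scaled for stability
    scores = features @ features.T / np.sqrt(features.shape[1])
    scores -= scores.max(axis=1, keepdims=True)
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)   # row-wise softmax
    return weights @ features                       # attention-weighted mix

feat = np.random.rand(64, 32)      # e.g., an 8x8 feature map, flattened
out = self_attention(feat)         # same shape; each position attends to all
```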

23 pages, 1106 KiB  
Article
Machine Learning Methodology in a System Applying the Adaptive Strategy for Teaching Human Motions
by Krzysztof Wójcik and Marcin Piekarczyk
Sensors 2020, 20(1), 314; https://doi.org/10.3390/s20010314 - 06 Jan 2020
Cited by 10 | Viewed by 3384
Abstract
The teaching of motion activities in rehabilitation, sports, and professional work has great social significance. However, the automatic teaching of these activities, particularly those involving fast motions, requires the use of an adaptive system that can adequately react to the changing stages and conditions of the teaching process. This paper describes a prototype of an automatic system that utilizes the online classification of motion signals to select the proper teaching algorithm. The knowledge necessary to perform the classification process is acquired from experts through machine learning methodology. The system utilizes multidimensional motion signals that are captured using MEMS (Micro-Electro-Mechanical Systems) sensors. Moreover, an array of vibrotactile actuators is used to provide feedback to the learner. The main goal of the presented article is to prove that the effectiveness of the described teaching system is higher than that of a system that controls the learning process without the use of signal classification. Statistical tests carried out using a prototype system confirmed this thesis, which is the main outcome of the presented study. An important contribution is also a proposal to standardize the system structure. The standardization facilitates the system configuration and the implementation of individual, specialized teaching algorithms. Full article

26 pages, 11889 KiB  
Article
Robust Stride Detector from Ankle-Mounted Inertial Sensors for Pedestrian Navigation and Activity Recognition with Machine Learning Approaches
by Bertrand Beaufils, Frédéric Chazal, Marc Grelet and Bertrand Michel
Sensors 2019, 19(20), 4491; https://doi.org/10.3390/s19204491 - 16 Oct 2019
Cited by 7 | Viewed by 3148
Abstract
In this paper, a stride detector algorithm combined with a technique inspired by zero velocity update (ZUPT) is proposed to reconstruct the trajectory of a pedestrian from an ankle-mounted inertial device. This innovative approach is based on sensor alignment and machine learning. It is able to detect 100% of normal walking strides and more than 97% of atypical strides such as small steps, side steps, and backward walking that existing methods can hardly detect. This approach is also more robust in critical situations, for example when the wearer is sitting and moving the ankle or when the wearer is bicycling (fewer than two falsely detected strides per hour on average). As a consequence, the algorithm proposed for trajectory reconstruction achieves much better performance than existing methods in daily-life contexts, in particular in narrow areas such as inside a house. The computed stride trajectory contains essential information for recognizing the activity (atypical stride, walking, running, and stairs). For this task, we adopt a machine learning approach based on descriptors of these trajectories, which is shown to be robust to a large variety of gaits. We tested our algorithm on recordings of healthy adults and children, achieving more than 99% success. The algorithm also achieved more than 97% success in challenging situations recorded by children suffering from movement disorders. Compared to most algorithms in the literature, this original method does not use a fixed-size sliding window but instead infers the window size adaptively. Full article
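
For context, ZUPT-style approaches typically flag stance phases when inertial energy stays low over a short window; below is a toy NumPy stance detector under assumed thresholds and sampling rate (none of the constants are taken from the paper).

```python
import numpy as np

G = 9.81  # gravity (m/s^2)

def zupt_mask(acc: np.ndarray, gyr: np.ndarray,
              acc_tol=0.5, gyr_tol=0.5, win=10) -> np.ndarray:
    """acc, gyr: (T, 3) arrays; returns a boolean stance mask of length T."""
    still = (np.abs(np.linalg.norm(acc, axis=1) - G) < acc_tol) & \
            (np.linalg.norm(gyr, axis=1) < gyr_tol)
    # require an entire window of still samples to suppress spurious detections
    counts = np.convolve(still.astype(float), np.ones(win), mode="same")
    return counts >= win

# synthetic "standing still" data: gravity plus small noise
acc = np.tile([0.0, 0.0, G], (200, 1)) + 0.01 * np.random.randn(200, 3)
gyr = 0.01 * np.random.randn(200, 3)
print(zupt_mask(acc, gyr).mean())   # fraction of samples flagged as stance
```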

17 pages, 1812 KiB  
Article
Fast Depth Estimation in a Single Image Using Lightweight Efficient Neural Network
by Sangwon Kim, Jaeyeal Nam and Byoungchul Ko
Sensors 2019, 19(20), 4434; https://doi.org/10.3390/s19204434 - 13 Oct 2019
Cited by 4 | Viewed by 6028
Abstract
Depth estimation is a crucial and fundamental problem in the computer vision field. Conventional methods reconstruct scenes using feature points extracted from multiple images; however, these approaches require multiple images and thus are not easily implemented in various real-time applications. Moreover, the special equipment required by hardware-based approaches using 3D sensors is expensive. Therefore, software-based methods for estimating depth from a single image using machine learning or deep learning are emerging as new alternatives. In this paper, we propose an algorithm that generates a depth map in real time using a single image and an optimized lightweight efficient neural network (L-ENet) algorithm instead of physical equipment, such as an infrared sensor or multi-view camera. Because depth values have a continuous nature and can produce locally ambiguous results, pixel-wise prediction with ordinal depth range classification was applied in this study. In addition, in our method various convolution techniques are applied to extract a dense feature map, and the number of parameters is greatly reduced by reducing the network layers. By using the proposed L-ENet algorithm, an accurate depth map can be generated from a single image quickly, and the resulting depth values are close to those of the ground truth with small errors. Experiments confirmed that the proposed L-ENet achieves significantly improved estimation performance over state-of-the-art algorithms in depth estimation based on a single image. Full article
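
Ordinal depth range classification can be pictured as discretizing depth into ordered bins and predicting, per pixel, how many bins the true depth exceeds; here is a minimal sketch with hypothetical log-spaced bins (the paper's exact discretization and decoding may differ).

```python
import numpy as np

K, d_min, d_max = 80, 0.5, 80.0
edges = np.exp(np.linspace(np.log(d_min), np.log(d_max), K + 1))  # bin edges (m)

def encode(depth: float) -> np.ndarray:
    """Ordinal target: 1 for every bin whose lower edge the depth exceeds."""
    return (depth > edges[:-1]).astype(float)

def decode(probs: np.ndarray) -> float:
    """Count bins with P > 0.5 and return the centre of the resulting bin."""
    k = int(np.clip((probs > 0.5).sum(), 1, K))
    return float(np.sqrt(edges[k - 1] * edges[k]))  # geometric bin centre

target = encode(10.0)       # what a network would be trained to predict
print(decode(target))       # ~10 m, up to the bin resolution
```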

27 pages, 3738 KiB  
Article
PEnBayes: A Multi-Layered Ensemble Approach for Learning Bayesian Network Structure from Big Data
by Yan Tang, Jianwu Wang, Mai Nguyen and Ilkay Altintas
Sensors 2019, 19(20), 4400; https://doi.org/10.3390/s19204400 - 11 Oct 2019
Cited by 5 | Viewed by 3820
Abstract
Discovering the Bayesian network (BN) structure from big datasets containing rich causal relationships is becoming increasingly valuable for modeling and reasoning under uncertainties in many areas with big data gathered from sensors, due to their high volume and velocity. Most current BN structure learning algorithms have shortcomings when facing big data. First, learning a BN structure from the entire big dataset is an expensive task that often ends in failure due to memory constraints. Second, it is quite difficult to select a learner from the numerous BN structure learning algorithms that consistently achieves good learning accuracy. Lastly, there is a lack of an intelligent method that merges separately learned BN structures into a well-structured BN. To address these shortcomings, we introduce a novel parallel learning approach called PEnBayes (Parallel Ensemble-based Bayesian network learning). PEnBayes starts with an adaptive data preprocessing phase that calculates the Appropriate Learning Size and intelligently divides a big dataset for fast distributed local structure learning. Then, PEnBayes learns a collection of local BN structures in parallel using a two-layered weighted adjacency-matrix-based structure ensemble method. Lastly, PEnBayes merges the local BN structures into a global network structure using the structure ensemble method at the global layer. For the experiments, we generated big datasets by simulating sensor data from the patient monitoring, transportation, and disease diagnosis domains. The experimental results show that PEnBayes achieves significantly improved execution performance with more consistent and stable results compared with three baseline learning algorithms. Full article
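
The structure-ensemble step can be illustrated as a weighted vote over adjacency matrices; the following sketch is a simplification under assumed weights and threshold (cycle repair and the paper's two-layer scheme are omitted).

```python
import numpy as np

def ensemble_structures(adjs, weights, threshold=0.5):
    """adjs: list of (n, n) 0/1 adjacency matrices; weights: list of floats."""
    w = np.asarray(weights, dtype=float)
    w /= w.sum()                                   # normalize learner weights
    vote = sum(wi * a for wi, a in zip(w, adjs))   # weighted edge votes
    return (vote >= threshold).astype(int)         # consensus structure

# three locally learned structures with different credibility weights
a1 = np.array([[0, 1, 0], [0, 0, 1], [0, 0, 0]])
a2 = np.array([[0, 1, 1], [0, 0, 1], [0, 0, 0]])
a3 = np.array([[0, 0, 0], [0, 0, 1], [0, 0, 0]])
print(ensemble_structures([a1, a2, a3], [0.5, 0.3, 0.2]))
```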

19 pages, 1571 KiB  
Article
Machine Learning for LTE Energy Detection Performance Improvement
by Małgorzata Wasilewska and Hanna Bogucka
Sensors 2019, 19(19), 4348; https://doi.org/10.3390/s19194348 - 08 Oct 2019
Cited by 15 | Viewed by 3453
Abstract
The growing number of radio communication devices and limited spectrum resources are drivers for the development of new techniques of dynamic spectrum access and spectrum sharing. In order to make use of the spectrum opportunistically, the concept of cognitive radio was proposed, where intelligent decisions on transmission opportunities are based on spectrum sensing. In this paper, two Machine Learning (ML) algorithms, namely k-Nearest Neighbours and Random Forest, have been proposed to increase spectrum sensing performance. These algorithms have been applied to Energy Detection (ED) and Energy Vector-based data (EV) to detect the presence of a Fourth Generation (4G) Long-Term Evolution (LTE) signal for the purpose of utilizing the available resource blocks by a 5G new radio system. The algorithms capitalize on time, frequency and spatial dependencies in daily communication traffic. Research results show that the ML methods used can significantly improve the spectrum sensing performance if the input training data set is carefully chosen. The input data sets with ED decisions and energy values have been examined, and advantages and disadvantages of their real-life application have been analyzed. Full article
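
As a rough illustration of feeding energy-based features to one of the proposed classifiers, here is a self-contained sketch in which synthetic I/Q bursts stand in for LTE captures; the block count, SNR, and sizes are all assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def energy_vector(iq: np.ndarray, n_blocks=16) -> np.ndarray:
    """Split a burst of complex samples into blocks; return block energies."""
    blocks = np.array_split(np.abs(iq) ** 2, n_blocks)
    return np.array([b.mean() for b in blocks])

def burst(occupied: bool, n=1024, snr=1.0) -> np.ndarray:
    noise = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
    signal = snr * np.exp(2j * np.pi * 0.1 * np.arange(n)) if occupied else 0.0
    return signal + noise

X = np.array([energy_vector(burst(i % 2 == 0)) for i in range(400)])
y = np.array([i % 2 == 0 for i in range(400)])          # channel occupied?
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X[:300], y[:300])
print("held-out accuracy:", clf.score(X[300:], y[300:]))
```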

23 pages, 1419 KiB  
Article
An FPGA-Based Neuro-Fuzzy Sensor for Personalized Driving Assistance
by Óscar Mata-Carballeira, Jon Gutiérrez-Zaballa, Inés del Campo and Victoria Martínez
Sensors 2019, 19(18), 4011; https://doi.org/10.3390/s19184011 - 17 Sep 2019
Cited by 15 | Viewed by 4123
Abstract
Advanced driving-assistance systems (ADAS) are intended to automate driver tasks, as well as improve driving and vehicle safety. This work proposes an intelligent neuro-fuzzy sensor for driving style (DS) recognition, suitable for ADAS enhancement. The development of the driving style intelligent sensor uses naturalistic driving data from the SHRP2 study, which includes data from a CAN bus, an inertial measurement unit, and a front radar. The system has been successfully implemented using a field-programmable gate array (FPGA) device of the Xilinx Zynq programmable system-on-chip (PSoC). It can mimic the typical timing parameters of a group of drivers as well as tune these typical parameters to model individual DSs. The neuro-fuzzy intelligent sensor provides high-speed real-time active ADAS implementation and is able to personalize its behavior within safe margins without driver intervention. In particular, the personalization procedure of the time headway (THW) parameter for an adaptive cruise control (ACC) in steady car following was developed, achieving a processing time of 0.53 microseconds. This performance fulfills the requirements of cutting-edge active ADAS specifications. Full article

21 pages, 1306 KiB  
Article
Pedestrian Positioning Using a Double-Stacked Particle Filter in Indoor Wireless Networks
by Kwangjae Sung, Hyung Kyu Lee and Hwangnam Kim
Sensors 2019, 19(18), 3907; https://doi.org/10.3390/s19183907 - 10 Sep 2019
Cited by 6 | Viewed by 2539
Abstract
Indoor pedestrian positioning methods are affected by substantial bias and errors because of the use of cheap microelectromechanical systems (MEMS) devices (e.g., gyroscopes and accelerometers) and the users' movements. Moreover, because radio-frequency (RF) signal values change drastically due to multipath fading and obstruction, the performance of RF-based localization systems may deteriorate in practice. To deal with this problem, various indoor localization methods that integrate the positional information gained from a received signal strength (RSS) fingerprinting scheme and the motion of the user inferred by a dead reckoning (DR) approach via Bayes filters have been suggested to accomplish more accurate localization results indoors. Among the Bayes filters, while the particle filter (PF) can offer the most accurate positioning performance, it may require substantial computation time due to the use of many samples (particles) for high positioning accuracy. This paper introduces a pedestrian localization scheme performed on a mobile phone that leverages the RSS fingerprint-based method, dead reckoning (DR), and an improved PF called a double-stacked particle filter (DSPF) in indoor environments. As a key element of our system, the DSPF algorithm is employed to correct the position of the user by fusing noisy location data gained by the RSS fingerprinting and DR schemes. By estimating the position of the user through the proposal distribution and target distribution obtained from multiple measurements, the DSPF method can offer better localization results than Kalman filtering-based methods, and it can achieve competitive localization accuracy compared with the PF while offering higher computational efficiency. Experimental results demonstrate that the DSPF algorithm can achieve accurate and reliable localization with higher computational efficiency than the PF in indoor environments. Full article
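
To make the filtering step concrete, a bare-bones 2D particle filter that propagates particles with the DR step and re-weights them around the RSS fix is sketched below; this is a plain PF, not the paper's double-stacked variant, and every constant is an assumption.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 500
particles = rng.normal([0.0, 0.0], 1.0, size=(N, 2))   # initial spread (m)

def pf_step(particles, dr_step, rss_fix, step_std=0.15, rss_std=2.0):
    # predict: apply the dead-reckoning displacement plus process noise
    particles = particles + dr_step + rng.normal(0, step_std, particles.shape)
    # update: Gaussian likelihood around the RSS-fingerprint position fix
    d2 = ((particles - rss_fix) ** 2).sum(axis=1)
    w = np.exp(-0.5 * d2 / rss_std**2)
    w /= w.sum()
    idx = rng.choice(len(particles), size=len(particles), p=w)  # resample
    return particles[idx]

for t in range(1, 21):                       # pedestrian walks east, 0.7 m/step
    fix = np.array([0.7 * t, 0.0]) + rng.normal(0, 1.0, 2)   # noisy RSS fix
    particles = pf_step(particles, np.array([0.7, 0.0]), fix)
print(particles.mean(axis=0))                # position estimate, near (14, 0)
```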

17 pages, 1239 KiB  
Article
Time Coherent Full-Body Poses Estimated Using Only Five Inertial Sensors: Deep versus Shallow Learning
by Frank J. Wouda, Matteo Giuberti, Nina Rudigkeit, Bert-Jan F. van Beijnum, Mannes Poel and Peter H. Veltink
Sensors 2019, 19(17), 3716; https://doi.org/10.3390/s19173716 - 27 Aug 2019
Cited by 11 | Viewed by 4263
Abstract
Full-body motion capture typically requires sensors/markers to be placed on each rigid body segment, which results in long setup times and is obtrusive. The number of sensors/markers can be reduced using deep learning or offline methods. However, this requires large training datasets and/or sufficient computational resources. Therefore, we investigate the following research question: “What is the performance of a shallow approach, compared to a deep learning one, for estimating time coherent full-body poses using only five inertial sensors?”. We propose to incorporate past/future inertial sensor information into a stacked input vector, which is fed to a shallow neural network for estimating full-body poses. Shallow and deep learning approaches are compared using the same input vector configurations. Additionally, the inclusion of acceleration input is evaluated. The results show that a shallow learning approach can estimate full-body poses with a similar accuracy (~6 cm) to that of a deep learning approach (~7 cm). However, the jerk errors are smaller using the deep learning approach, which can be the effect of explicit recurrent modelling. Furthermore, it is shown that the delay using a shallow learning approach (72 ms) is smaller than that of a deep learning approach (117 ms). Full article
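
The stacked input vector can be formed by concatenating frames from a symmetric past/future window, as in this small sketch; the window size and feature dimensions are invented, and using future frames is precisely what introduces the delay discussed above.

```python
import numpy as np

def stack_window(frames: np.ndarray, half: int = 3) -> np.ndarray:
    """frames: (T, D) sensor data -> (T - 2*half, D * (2*half + 1)) stacked."""
    T, D = frames.shape
    out = [frames[t - half:t + half + 1].reshape(-1)   # past..current..future
           for t in range(half, T - half)]
    return np.array(out)

imu = np.random.rand(100, 35)      # e.g., 5 IMUs x 7 values per frame, assumed
X = stack_window(imu, half=3)
print(X.shape)                     # (94, 245): 7 frames of 35 values each
```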

19 pages, 9429 KiB  
Article
Logistic Regression for Machine Learning in Process Tomography
by Tomasz Rymarczyk, Edward Kozłowski, Grzegorz Kłosowski and Konrad Niderla
Sensors 2019, 19(15), 3400; https://doi.org/10.3390/s19153400 - 02 Aug 2019
Cited by 123 | Viewed by 10739
Abstract
The main goal of the research presented in this paper was to develop a refined machine learning algorithm for industrial tomography applications. The article presents algorithms based on logistic regression in relation to image reconstruction using electrical impedance tomography (EIT) and ultrasound transmission tomography (UST). The test object was a tank filled with water in which reconstructed objects were placed. For both EIT and UST, a novel approach was used in which each pixel of the output image was reconstructed by a separately trained prediction system. Therefore, it was necessary to use many predictive systems whose number corresponds to the number of pixels of the output image. Thanks to this approach, the underdetermined problem was changed to an overdetermined one. To reduce the number of predictors in logistic regression by removing irrelevant and mutually correlated entries, the elastic net method was used. The developed algorithm, which reconstructs images pixel by pixel, is insensitive to the shape, number, and position of the reconstructed objects. In order to assess the quality of the mappings obtained with the new algorithm, appropriate metrics were used: compatibility ratio (CR) and relative error (RE). The obtained results enabled the assessment of the usefulness of logistic regression in the reconstruction of EIT and UST images. Full article
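
A minimal sketch of the pixel-by-pixel scheme, one elastic-net-regularized logistic regression per output pixel, with synthetic data standing in for the EIT/UST measurements (all dimensions and hyperparameters are assumptions):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n_meas, n_pixels, n_train = 96, 16, 300
X = rng.standard_normal((n_train, n_meas))        # measurement vectors
W = rng.standard_normal((n_meas, n_pixels))
Y = (X @ W > 0).astype(int)                       # synthetic pixel labels

# one separately trained predictor per output pixel
models = [
    LogisticRegression(penalty="elasticnet", solver="saga",
                       l1_ratio=0.5, C=1.0, max_iter=2000).fit(X, Y[:, p])
    for p in range(n_pixels)
]
image = np.array([m.predict(X[:1])[0] for m in models])
print(image.reshape(4, 4))                        # reconstructed 4x4 toy image
```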

18 pages, 3667 KiB  
Article
A Combined Offline and Online Algorithm for Real-Time and Long-Term Classification of Sheep Behaviour: Novel Approach for Precision Livestock Farming
by Jorge A. Vázquez-Diosdado, Veronica Paul, Keith A Ellis, David Coates, Radhika Loomba and Jasmeet Kaler
Sensors 2019, 19(14), 3201; https://doi.org/10.3390/s19143201 - 20 Jul 2019
Cited by 34 | Viewed by 5210
Abstract
Real-time and long-term behavioural monitoring systems in precision livestock farming have huge potential to improve welfare and productivity for the better health of farm animals. However, some of the biggest challenges for long-term monitoring systems relate to "concept drift", which occurs when systems are presented with challenging new or changing conditions, and/or in scenarios where training data are not accurately reflective of live sensed data. This study presents a combined offline and online learning algorithm which deals with concept drift and which the authors deem a useful mechanism for long-term in-the-field monitoring systems. The proposed algorithm classifies three relevant sheep behaviours using information from an embedded edge device that includes tri-axial accelerometer and tri-axial gyroscope sensors. The proposed approach is reported for the first time in precision livestock behaviour monitoring and demonstrates improvement in classifying relevant behaviour in sheep, in real time, under dynamically changing conditions. Full article
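
One common way to realize such offline-plus-online operation is an incrementally updated classifier: warm-start it offline, then keep updating as field data arrives. The sketch below uses scikit-learn's SGDClassifier.partial_fit with random data standing in for accelerometer/gyroscope features; it illustrates the pattern, not the authors' algorithm.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(3)
classes = np.array([0, 1, 2])                 # three sheep behaviours, say

X0, y0 = rng.standard_normal((500, 12)), rng.integers(0, 3, 500)
online = SGDClassifier(loss="log_loss")
online.partial_fit(X0, y0, classes=classes)   # offline warm start

for _ in range(50):                           # field operation: drifted data
    Xb = rng.standard_normal((20, 12)) + 0.5  # shifted feature distribution
    yb = rng.integers(0, 3, 20)
    online.partial_fit(Xb, yb)                # incremental adaptation
print(online.score(Xb, yb))
```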

20 pages, 5192 KiB  
Article
The Novel Sensor Network Structure for Classification Processing Based on the Machine Learning Method of the ACGAN
by Yuantao Chen, Jiajun Tao, Jin Wang, Xi Chen, Jingbo Xie, Jie Xiong and Kai Yang
Sensors 2019, 19(14), 3145; https://doi.org/10.3390/s19143145 - 17 Jul 2019
Cited by 17 | Viewed by 3498 | Retraction
Abstract
To address the problem of unstable training and poor accuracy in image classification algorithms based on generative adversarial networks (GAN), a novel sensor network structure for classification processing using auxiliary classifier generative adversarial networks (ACGAN) is proposed in this paper. Firstly, the real/fake discrimination of sensor samples in the network has been canceled at the output layer of the discriminative network and only the posterior probability estimation of the sample tag is outputted. Secondly, by regarding the real sensor samples as supervised data and the generative sensor samples as labeled fake data, we have reconstructed the loss function of the generator and discriminator by using the real/fake attributes of sensor samples and the cross-entropy loss function of the label. Thirdly, the pooling and caching method has been introduced into the discriminator to enable more effective extraction of the classification features. Finally, feature matching has been added to the discriminative network to ensure the diversity of the generative sensor samples. Experimental results have shown that the proposed algorithm (CP-ACGAN) achieves better classification accuracy on the MNIST dataset, CIFAR10 dataset and CIFAR100 dataset than other solutions. Moreover, when compared with the ACGAN and CNN classification algorithms, which have the same deep network structure as CP-ACGAN, the proposed method continues to achieve better classification effects and stability than other main existing sensor solutions. Full article

17 pages, 4045 KiB  
Article
A Cascade Ensemble Learning Model for Human Activity Recognition with Smartphones
by Shoujiang Xu, Qingfeng Tang, Linpeng Jin and Zhigeng Pan
Sensors 2019, 19(10), 2307; https://doi.org/10.3390/s19102307 - 19 May 2019
Cited by 27 | Viewed by 4539
Abstract
Human activity recognition (HAR) has gained much attention in recent years due to its high demand in different domains. In this paper, a novel HAR system based on a cascade ensemble learning (CELearning) model is proposed. Each layer of the proposed model is comprised of eXtreme Gradient Boosting (XGBoost), Random Forest, Extremely Randomized Trees (ExtraTrees), and Softmax Regression, and the model goes deeper layer by layer. The initial input vectors sampled from smartphone accelerometer and gyroscope sensors are trained separately by four different classifiers in the first layer, and the probability vectors representing the different classes to which each sample belongs are obtained. Both the initial input data and the probability vectors are concatenated together and considered as input to the next layer's classifiers, and eventually the final prediction is obtained according to the classifiers of the last layer. This system achieved satisfying classification accuracy on two public datasets of HAR based on smartphone accelerometer and gyroscope sensors. The experimental results show that the proposed approach achieves better classification accuracy for HAR compared to existing state-of-the-art methods, and the training process of the model is simple and efficient. Full article
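
One cascade layer can be sketched as follows: four classifiers emit class-probability vectors that are concatenated with the raw input for the next layer. GradientBoostingClassifier and LogisticRegression stand in for XGBoost and Softmax Regression to keep the sketch dependency-free; the data are synthetic.

```python
import numpy as np
from sklearn.ensemble import (RandomForestClassifier, ExtraTreesClassifier,
                              GradientBoostingClassifier)
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
X, y = rng.standard_normal((300, 20)), rng.integers(0, 6, 300)  # 6 activities

def cascade_layer(X_in, y):
    clfs = [GradientBoostingClassifier(), RandomForestClassifier(),
            ExtraTreesClassifier(), LogisticRegression(max_iter=1000)]
    probs = [c.fit(X_in, y).predict_proba(X_in) for c in clfs]
    return np.hstack([X_in] + probs)       # augmented input for next layer

X1 = cascade_layer(X, y)                   # layer 1
X2 = cascade_layer(X1, y)                  # layer 2, and so on
print(X.shape, X1.shape, X2.shape)         # feature width grows layer by layer
```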

13 pages, 2108 KiB  
Article
Contextual Action Cues from Camera Sensor for Multi-Stream Action Recognition
by Jongkwang Hong, Bora Cho, Yong Won Hong and Hyeran Byun
Sensors 2019, 19(6), 1382; https://doi.org/10.3390/s19061382 - 20 Mar 2019
Cited by 18 | Viewed by 3418
Abstract
In action recognition research, the two primary types of information are appearance and motion information, which are learned from RGB images through visual sensors. However, depending on the action characteristics, contextual information, such as the existence of specific objects or globally shared information in the image, becomes vital information for defining the action. For example, the existence of the ball is vital information distinguishing "kicking" from "running". Furthermore, some actions share typical global abstract poses, which can be used as a key to classify actions. Based on these observations, we propose a multi-stream network model, which incorporates spatial, temporal, and contextual cues in the image for action recognition. We experimented on the proposed method using C3D or inflated 3D ConvNet (I3D) as a backbone network on two different action recognition datasets. As a result, we observed an overall improvement in accuracy, demonstrating the effectiveness of our proposed method. Full article

18 pages, 1733 KiB  
Article
Estimation of Pedestrian Pose Orientation Using Soft Target Training Based on Teacher–Student Framework
by DuYeong Heo, Jae Yeal Nam and Byoung Chul Ko
Sensors 2019, 19(5), 1147; https://doi.org/10.3390/s19051147 - 06 Mar 2019
Cited by 11 | Viewed by 3689
Abstract
Semi-supervised learning is known to achieve better generalisation than a model learned solely from labelled data. Therefore, we propose a new method for estimating a pedestrian pose orientation using a soft-target method, which is a type of semi-supervised learning method. Because a convolutional neural network (CNN) based pose orientation estimation requires large numbers of parameters and operations, we apply the teacher–student algorithm to generate a compressed student model with high accuracy and compactness resembling that of the teacher model by combining a deep network with a random forest. After the teacher model is generated using hard target data, the softened outputs (soft-target data) of the teacher model are used for training the student model. Moreover, the orientation of the pedestrian has specific shape patterns, and a wavelet transform is applied to the input image as a pre-processing step owing to its good spatial frequency localisation property and the ability to preserve both the spatial information and gradient information of an image. For a benchmark dataset considering real driving situations based on a single camera, we used the TUD and KITTI datasets. We applied the proposed algorithm to various driving images in the datasets, and the results indicate that its classification performance with regard to the pose orientation is better than that of other state-of-the-art methods based on a CNN. In addition, the computational speed of the proposed student model is faster than that of other deep CNNs owing to the shorter model structure with a smaller number of parameters. Full article
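
The soft-target step can be illustrated with a toy distillation loop: teacher logits are softened by a temperature and the student is trained against a blend of soft and hard targets. The linear student, temperature, and blend weight below are all invented for illustration and are not the paper's configuration.

```python
import numpy as np

def softmax(z, T=1.0):
    z = z / T
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

rng = np.random.default_rng(5)
X = rng.standard_normal((256, 10))
teacher_W = rng.standard_normal((10, 8))      # pretend teacher, 8 orientations
teacher_logits = X @ teacher_W
y = teacher_logits.argmax(axis=1)             # hard targets
y_onehot = np.eye(8)[y]

T, alpha, lr = 4.0, 0.7, 0.5
W = np.zeros((10, 8))                         # linear student
for _ in range(300):
    p_soft = softmax(X @ W, T)                # student output, softened
    p_hard = softmax(X @ W)                   # student output, T = 1
    soft_t = softmax(teacher_logits, T)       # teacher soft targets
    grad = X.T @ (alpha * (p_soft - soft_t) + (1 - alpha) * (p_hard - y_onehot))
    W -= lr * grad / len(X)                   # cross-entropy gradient step

print((softmax(X @ W).argmax(axis=1) == y).mean())  # agreement with teacher
```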

12 pages, 8465 KiB  
Article
A Deep Convolutional Neural Network Inspired by Auditory Perception for Underwater Acoustic Target Recognition
by Honghui Yang, Junhao Li, Sheng Shen and Guanghui Xu
Sensors 2019, 19(5), 1104; https://doi.org/10.3390/s19051104 - 04 Mar 2019
Cited by 60 | Viewed by 4679
Abstract
Underwater acoustic target recognition (UATR) using ship-radiated noise faces big challenges due to the complex marine environment. In this paper, inspired by neural mechanisms of auditory perception, a new end-to-end deep neural network named the auditory perception inspired Deep Convolutional Neural Network (ADCNN) is proposed for UATR. In the ADCNN model, inspired by the frequency component perception neural mechanism, a bank of multi-scale deep convolution filters is designed to decompose the raw time-domain signal into signals with different frequency components. Inspired by the plasticity neural mechanism, the parameters of the deep convolution filters are initialized randomly and then learned and optimized for UATR. Then, max-pooling layers and fully connected layers extract features from each decomposed signal. Finally, in the fusion layers, features from each decomposed signal are merged and deep feature representations are extracted to classify underwater acoustic targets. The ADCNN model simulates the deep acoustic information processing structure of the auditory system. Experimental results show that the proposed model can decompose, model, and classify ship-radiated noise signals efficiently. It achieves a classification accuracy of 81.96%, which is the highest in the contrast experiments. The experimental results show that the auditory perception inspired deep learning method has encouraging potential to improve the classification performance of UATR. Full article
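
The multi-scale decomposition can be pictured as parallel 1D filter banks with different kernel lengths; random (untrained) NumPy filters below stand in for the learned ones, and all sizes are assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)

def conv_bank(signal: np.ndarray, kernel_sizes=(8, 32, 128), n_filters=4):
    """Return one (n_filters, T) feature map per kernel scale ('same' mode)."""
    maps = []
    for k in kernel_sizes:
        # short kernels respond to high frequencies, long kernels to low ones
        kernels = rng.standard_normal((n_filters, k)) / np.sqrt(k)
        maps.append(np.stack([np.convolve(signal, w, mode="same")
                              for w in kernels]))
    return maps

x = rng.standard_normal(2048)      # raw waveform, e.g., ship-radiated noise
for m in conv_bank(x):
    print(m.shape)                 # (4, 2048) feature map at each scale
```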

15 pages, 5193 KiB  
Article
Vision Sensor Based Fuzzy System for Intelligent Vehicles
by Kwangsoo Kim, Yangho Kim and Sooyeong Kwak
Sensors 2019, 19(4), 855; https://doi.org/10.3390/s19040855 - 19 Feb 2019
Cited by 5 | Viewed by 3439
Abstract
The automotive industry and many researchers have become interested in the development of pedestrian protection systems in recent years. In particular, vision-based methods for predicting pedestrian intentions are now being actively studied to improve the performance of pedestrian protection systems. In this paper, we propose a vision-based system that can detect pedestrians using an on-dash camera in the car and can then analyze their movements to determine the probability of collision. Information about pedestrians, including position, distance, movement direction, and magnitude, is extracted using computer vision technologies, and, using this information, a fuzzy rule-based system makes a judgement on the pedestrian's risk level. To verify the function of the proposed system, we built several test datasets, collected by ourselves, in high-density regions where vehicles and pedestrians mix closely. The true positive rate of the experimental results was about 86%, which shows the validity of the proposed system. Full article
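
A toy fuzzy rule base conveys the flavour of such a risk judgement: triangular memberships, max-min rule activation, and weighted-average defuzzification. The breakpoints, rules, and variables here are invented, not the paper's.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    return max(min((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0.0)

def risk_level(distance_m: float, approach_speed: float) -> float:
    near = tri(distance_m, 0, 0, 15)            # pedestrian close to vehicle
    fast = tri(approach_speed, 0.5, 2.0, 2.0)   # moving quickly toward lane
    slow = 1.0 - fast
    # Rule 1: near AND fast -> high risk (1.0); Rule 2: near AND slow -> medium (0.5)
    r1, r2 = min(near, fast), min(near, slow)
    if r1 + r2 == 0:
        return 0.0
    return (r1 * 1.0 + r2 * 0.5) / (r1 + r2)    # weighted-average defuzzification

print(risk_level(5.0, 1.8))    # close and fast -> high risk
print(risk_level(30.0, 1.8))   # far away -> zero risk
```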

Review


34 pages, 797 KiB  
Review
Multiple Physiological Signals Fusion Techniques for Improving Heartbeat Detection: A Review
by Javier Tejedor, Constantino A. García, David G. Márquez, Rafael Raya and Abraham Otero
Sensors 2019, 19(21), 4708; https://doi.org/10.3390/s19214708 - 29 Oct 2019
Cited by 21 | Viewed by 4746
Abstract
This paper presents a review of the techniques found in the literature that aim to achieve robust heartbeat detection by fusing multi-modal physiological signals (e.g., electrocardiogram (ECG), blood pressure (BP), arterial blood pressure (ABP), stroke volume (SV), photoplethysmogram (PPG), electroencephalogram (EEG), electromyogram (EMG), and electrooculogram (EOG), among others). Techniques typically employ ECG, BP, and ABP, whose use has been shown to obtain the best performance under challenging conditions. SV, PPG, EMG, EEG, and EOG signals can help increase performance when included within the fusion. Filtering, signal normalization, and resampling are common preprocessing steps. Delay correction between the heartbeats obtained over some of the physiological signals must also be considered, as must signal-quality assessment to retain the best signal(s). Fusion is usually accomplished by exploiting regularities in the RR intervals; by selecting the most promising signal for detection at every moment; by a voting process; or by performing simultaneous detection and fusion using Bayesian techniques, hidden Markov models, or neural networks. Based on the results of the review, guidelines to facilitate future comparison of the performance of the different proposals are given, and promising future lines of research are pointed out. Full article
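
The voting-style fusion can be sketched as clustering per-channel beat detections on the time axis and keeping clusters with enough votes; the tolerance and the toy beat times below are assumptions.

```python
import numpy as np

def fuse_beats(channels, tol=0.15, min_votes=2):
    """channels: list of 1D arrays of beat times (s); returns fused beat times."""
    events = np.sort(np.concatenate(channels))
    fused, cluster = [], [events[0]]
    for t in events[1:]:
        if t - cluster[-1] <= tol:
            cluster.append(t)                   # same underlying beat
        else:
            if len(cluster) >= min_votes:       # enough channels agree
                fused.append(np.mean(cluster))  # consensus timestamp
            cluster = [t]
    if len(cluster) >= min_votes:
        fused.append(np.mean(cluster))
    return np.array(fused)

ecg = np.array([0.80, 1.62, 2.41, 3.20])
bp  = np.array([0.84, 1.66, 2.46, 3.25, 3.90])   # one spurious detection
eeg = np.array([0.79, 2.43])                     # misses some beats
print(fuse_beats([ecg, bp, eeg]))                # spurious 3.90 s beat dropped
```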