
Artificial Intelligence and Machine Learning in Sensors Networks

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Intelligent Sensors".

Deadline for manuscript submissions: closed (15 October 2018) | Viewed by 136926

Special Issue Editors


Prof. Dr. Juan Manuel Corchado Rodríguez
Guest Editor
1. BISITE Research Group, University of Salamanca, 37007 Salamanca, Spain
2. Air Institute, IoT Digital Innovation Hub, 37188 Salamanca, Spain
3. Department of Electronics, Information and Communication, Faculty of Engineering, Osaka Institute of Technology, Osaka 535-8585, Japan
Interests: artificial intelligence; smart cities; smart grids

Prof. Dr. Enrique Herrera
Guest Editor
Andalusian Research Institute in Data Science and Computational Intelligence, Department of Computer Science and AI, University of Granada, 18071 Granada, Spain
Interests: intelligent decision making; artificial intelligence; computational intelligence

Special Issue Information

Dear Colleagues,

At present, a growing number of solutions provide systems based on Artificial Intelligence (AI) and Machine Learning (ML), facilitating the creation of new products and services in many different fields. Sensor networks (SNs) are undergoing great expansion and development, and the combination of AI and SNs is now a reality that is going to change our lives. The integration of these two technologies benefits other areas such as Industry 4.0, the Internet of Things, domotic systems, etc. Furthermore, SNs are widely used to collect environmental parameters in homes, buildings, vehicles, etc., where they serve as a source of information that aids the decision-making process and, in particular, allows systems to learn and to monitor activity. New AI and ML real-time and execution-time algorithms are needed, as well as different strategies to embed these algorithms in sensors. New clustering and classification techniques, reinforcement learning methods, and data quality approaches are required, as well as distributed AI algorithms.

This Special Issue calls for innovative work that explores new frontiers and challenges in applying AI algorithms to SNs. As mentioned above, this includes new machine learning models, distributed AI proposals, hybrid AI systems, etc., as well as case studies and reviews of the state of the art.

The topics of interest include, but are not limited to:

  • Artificial Intelligence models for Sensor Networks.
  • Machine Learning models for Sensor Networks.
  • Clustering and classification algorithms for SNs.
  • Deep and reinforcement learning for SNs.
  • Intelligent processing algorithms for SNs.
  • Intelligent image processing algorithms for SNs.
  • Big Data analytics for data processing from SNs.
  • Fuzzy Systems proposals for SNs.
  • Expert Systems for SNs.
  • Hybrid Systems for SNs.
  • Intelligent real time algorithms for SNs.
  • Intelligent execution time algorithms for SNs.
  • Intelligent security proposals for WSNs.
  • Blockchain in WSNs.
  • Multi-Agent Systems.
  • Organization-Based Multi-Agent Systems.
  • Virtual Organizations.
  • Applications of AI in SN domains: energy, IoT, Industry 4.0, etc.

Prof. Dr. Juan Manuel Corchado Rodríguez
Prof. Dr. Enrique Herrera
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Machine Learning
  • Artificial Intelligence
  • Learning
  • Fuzzy
  • ANN

Published Papers (25 papers)


Research

14 pages, 6851 KiB  
Article
Single Image Super-Resolution Based on Global Dense Feature Fusion Convolutional Network
by Wang Xu, Renwen Chen, Bin Huang, Xiang Zhang and Chuan Liu
Sensors 2019, 19(2), 316; https://doi.org/10.3390/s19020316 - 14 Jan 2019
Cited by 11 | Viewed by 3949
Abstract
Deep neural networks (DNNs) have recently been widely adopted in single image super-resolution (SISR) with great success. As a network goes deeper, intermediate features become hierarchical. However, most SISR methods based on DNNs do not make full use of the hierarchical features. The features cannot be read directly by the subsequent layers; therefore, the previous hierarchical information has little influence on the subsequent layer output, and the performance is relatively poor. To address this issue, a novel global dense feature fusion convolutional network (DFFNet) is proposed, which can take full advantage of global intermediate features. Specifically, a feature fusion block (FFblock) is introduced as the basic module. Each block can directly read raw global features from previous ones and then learn the feature spatial correlation and channel correlation between features in a holistic way, leading to a continuous global information memory mechanism. Experiments on the benchmarks show that the proposed DFFNet achieves favorable performance against state-of-the-art methods. Full article
(This article belongs to the Special Issue Artificial Intelligence and Machine Learning in Sensors Networks)
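The feature-fusion idea in this abstract can be sketched independently of the paper's exact architecture: a block concatenates the raw feature maps of all previous blocks along the channel axis and fuses them with a 1×1 convolution, i.e., a per-pixel linear projection. The following NumPy sketch is only an illustration of that structure (shapes and the fusion weight are arbitrary, not the authors' trained network):

```python
import numpy as np

def feature_fusion_block(prev_features, weight):
    """Sketch of an FFblock: concatenate the feature maps of all previous
    blocks along the channel axis, then fuse them with a 1x1 convolution
    (equivalent to one matrix multiply per pixel)."""
    # prev_features: list of (C, H, W) arrays; weight: (C_out, C_total)
    stacked = np.concatenate(prev_features, axis=0)   # (C_total, H, W)
    c_total, h, w = stacked.shape
    flat = stacked.reshape(c_total, h * w)            # one pixel per column
    fused = weight @ flat                             # 1x1 conv == matmul
    return fused.reshape(weight.shape[0], h, w)

rng = np.random.default_rng(0)
f1 = rng.standard_normal((4, 8, 8))
f2 = rng.standard_normal((4, 8, 8))
w = rng.standard_normal((4, 8))      # fuse 8 input channels down to 4
out = feature_fusion_block([f1, f2], w)
print(out.shape)  # (4, 8, 8)
```

In the actual network the projection weights are learned end-to-end and followed by nonlinearities; the point here is only that later blocks read the raw, unreduced features of every earlier block.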

18 pages, 6319 KiB  
Article
Object Tracking Algorithm Based on Dual Color Feature Fusion with Dimension Reduction
by Shuo Hu, Yanan Ge, Jianglong Han and Xuguang Zhang
Sensors 2019, 19(1), 73; https://doi.org/10.3390/s19010073 - 25 Dec 2018
Cited by 5 | Viewed by 3359
Abstract
Aiming at the poor robustness and low effectiveness of target tracking in complex scenes when single color features are used, an object-tracking algorithm based on dual color feature fusion with dimension reduction is proposed within the Correlation Filter (CF)-based tracking framework. First, Color Name (CN) and Color Histogram (CH) features are extracted from the input image; the template and the candidate region are then correlated by CF-based methods, yielding the CH response and the CN response of the target region. A self-adaptive fusion strategy linearly fuses the CH response and the CN response into a dual color feature response that carries both global color distribution information and main color information. Finally, the target position is estimated as the maximum of the fused response map. The proposed method performs the fusion within the framework of the Staple algorithm and applies Principal Component Analysis (PCA) for dimension reduction on the scale estimation; the complexity of the algorithm is thereby reduced, and the tracking performance is further improved. Quantitative and qualitative evaluations on challenging benchmark sequences show that the proposed algorithm has better tracking accuracy and robustness than other state-of-the-art tracking algorithms in complex scenarios. Full article
(This article belongs to the Special Issue Artificial Intelligence and Machine Learning in Sensors Networks)
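The core fusion-and-locate step described in the abstract reduces to a weighted sum of two response maps followed by an argmax. A minimal sketch (the fixed weight `gamma` is illustrative; the paper adapts the fusion weight per frame):

```python
import numpy as np

def fuse_responses(r_ch, r_cn, gamma=0.3):
    """Linearly fuse the Color Histogram (CH) and Color Name (CN)
    response maps; the estimated target position is the peak of the
    fused map."""
    fused = (1.0 - gamma) * r_ch + gamma * r_cn
    pos = np.unravel_index(int(np.argmax(fused)), fused.shape)
    return fused, tuple(int(i) for i in pos)

r_ch = np.zeros((5, 5)); r_ch[2, 3] = 1.0   # CH response peaks at (2, 3)
r_cn = np.zeros((5, 5)); r_cn[2, 3] = 0.8   # CN response agrees
fused, pos = fuse_responses(r_ch, r_cn)
print(pos)  # (2, 3)
```

When the two maps disagree, the fused map trades off global color distribution (CH) against main color information (CN) according to the weight.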

21 pages, 6783 KiB  
Article
MBOSS: A Symbolic Representation of Human Activity Recognition Using Mobile Sensors
by Kevin G. Montero Quispe, Wesllen Sousa Lima, Daniel Macêdo Batista and Eduardo Souto
Sensors 2018, 18(12), 4354; https://doi.org/10.3390/s18124354 - 10 Dec 2018
Cited by 18 | Viewed by 4048
Abstract
Human activity recognition (HAR) through sensors embedded in smartphones has allowed for the development of systems that are capable of detecting and monitoring human behavior. However, such systems have been affected by the high consumption of computational resources (e.g., memory and processing) needed to effectively recognize activities. In addition, existing HAR systems are mostly based on supervised classification techniques, in which the feature extraction process is done manually, and depends on the knowledge of a specialist. To overcome these limitations, this paper proposes a new method for recognizing human activities based on symbolic representation algorithms. The method, called “Multivariate Bag-Of-SFA-Symbols” (MBOSS), aims to increase the efficiency of HAR systems and maintain accuracy levels similar to those of conventional systems based on time and frequency domain features. The experiments conducted on three public datasets showed that MBOSS performed the best in terms of accuracy, processing time, and memory consumption. Full article
(This article belongs to the Special Issue Artificial Intelligence and Machine Learning in Sensors Networks)
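The "SFA symbols" at the heart of BOSS-style methods turn a numeric window into a short word: keep the first DFT coefficients and quantize each real/imaginary part into a letter. The sketch below uses fixed, hypothetical bin edges for illustration; MBOSS and its relatives learn per-coefficient edges from training data (Multiple Coefficient Binning):

```python
import numpy as np

def sfa_word(window, n_coeffs=3, edges=(-1.0, 0.0, 1.0), letters="abcd"):
    """Sketch of a Symbolic Fourier Approximation (SFA) word: keep the
    first DFT coefficients of a window and quantise each real/imaginary
    part into one of len(letters) symbols."""
    spec = np.fft.rfft(np.asarray(window, dtype=float))[:n_coeffs]
    parts = np.ravel([(c.real, c.imag) for c in spec])
    return "".join(letters[np.searchsorted(edges, v)] for v in parts)

word = sfa_word([0.1, 0.5, 0.9, 0.5, 0.1, -0.5, -0.9, -0.5])
print(word)
```

A bag-of-words classifier such as MBOSS then slides this over the signal (one word per window, per sensor axis) and compares histograms of the resulting words instead of raw samples, which is what yields the memory and processing savings reported above.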

22 pages, 3039 KiB  
Article
Demodulation of Chaos Phase Modulation Spread Spectrum Signals Using Machine Learning Methods and Its Evaluation for Underwater Acoustic Communication
by Chao Li, Franck Marzani and Fan Yang
Sensors 2018, 18(12), 4217; https://doi.org/10.3390/s18124217 - 01 Dec 2018
Cited by 5 | Viewed by 3814
Abstract
Chaos phase modulation sequences are complex sequences with a constant envelope, which have recently been used for direct-sequence spread spectrum underwater acoustic communication. They are considered ideal spreading codes for their benefits in terms of large code resource quantity, good correlation characteristics, and high security. However, demodulating this underwater communication signal is a challenging task due to complex underwater environments. This paper addresses the problem as a target classification task and conceives a machine learning-based demodulation scheme. The proposed solution is implemented and optimized on a multi-core central processing unit (CPU) platform, then evaluated with replay simulation datasets. In the experiments, time variation, the multi-path effect, propagation loss, and random noise were considered as distortions. According to the results, compared to the reference algorithms, our method achieves greater reliability with better temporal efficiency. Full article
(This article belongs to the Special Issue Artificial Intelligence and Machine Learning in Sensors Networks)

19 pages, 3874 KiB  
Article
Localization Reliability Improvement Using Deep Gaussian Process Regression Model
by Fei Teng, Wenyuan Tao and Chung-Ming Own
Sensors 2018, 18(12), 4164; https://doi.org/10.3390/s18124164 - 27 Nov 2018
Cited by 8 | Viewed by 2897
Abstract
With the widespread use of the Global Positioning System, indoor positioning technology has attracted increasing attention. Many systems with distinct deployment costs and positioning accuracies have been developed over the past decade for indoor positioning. The method based on received signal strength (RSS) is the most widely used. However, manually measuring RSS values to build a fingerprint database is costly and time-consuming, and it is impractical in a dynamic environment with a large positioning area. In this study, we propose an indoor positioning system based on the deep Gaussian process regression (DGPR) model. This nonparametric model only needs to measure part of the reference points, thus reducing the time and cost required for data collection. The model converts the RSS values into four types of characterizing values as input data and then predicts the position coordinates using DGPR. Finally, the position coordinates are optimized through reinforcement learning. Several experiments were conducted in a MATLAB-simulated environment and in physical environments at Tianjin University, examining different environments, different kernels, and positioning accuracy. The results showed that the proposed method could not only retain the positioning accuracy but also save the computation time required for location estimation. Full article
(This article belongs to the Special Issue Artificial Intelligence and Machine Learning in Sensors Networks)

25 pages, 4562 KiB  
Article
Human Activity Recognition Based on Symbolic Representation Algorithms for Inertial Sensors
by Wesllen Sousa Lima, Hendrio L. De Souza Bragança, Kevin G. Montero Quispe and Eduardo J. Pereira Souto
Sensors 2018, 18(11), 4045; https://doi.org/10.3390/s18114045 - 20 Nov 2018
Cited by 14 | Viewed by 3942
Abstract
Mobile sensing has allowed the emergence of a variety of solutions related to the monitoring and recognition of human activities (HAR). Such solutions have been implemented in smartphones for the purpose of better understanding human behavior. However, such solutions still suffer from the limitations of the computing resources found on smartphones. In this sense, the HAR area has focused on the development of solutions of low computational cost. In general, the strategies used in the solutions are based on shallow and deep learning algorithms. The problem is that not all of these strategies are feasible for implementation in smartphones due to the high computational cost required, mainly, by the steps of data preparation and the training of classification models. In this context, this article evaluates a new set of alternative strategies based on Symbolic Aggregate Approximation (SAX) and Symbolic Fourier Approximation (SFA) algorithms with the purpose of developing solutions with low computational cost in terms of memory and processing. In addition, this article also evaluates some classification algorithms adapted to manipulate symbolic data, such as SAX-VSM, BOSS, BOSS-VS and WEASEL. Experiments were performed on the UCI-HAR, SHOAIB and WISDM databases commonly used in the literature to validate HAR solutions based on smartphones. The results show that the symbolic representation algorithms are faster in the feature extraction phase, on average, by 84.81%, and reduce the consumption of memory space, on average, by 94.48%, and they have accuracy rates equivalent to conventional algorithms. Full article
(This article belongs to the Special Issue Artificial Intelligence and Machine Learning in Sensors Networks)
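SAX, one of the two symbolic representations evaluated above, is simple enough to sketch end to end: z-normalize the window, average it into a few segments (Piecewise Aggregate Approximation), then map each segment mean to a letter using the equiprobable breakpoints of the standard normal distribution. A stdlib-only sketch for a 4-letter alphabet:

```python
import statistics

def sax_word(series, n_segments=4, alphabet="abcd"):
    """Symbolic Aggregate approXimation: z-normalise, PAA-average each
    of n_segments chunks, then map each mean to a letter via the
    Gaussian breakpoints for a 4-symbol alphabet."""
    mean = statistics.fmean(series)
    std = statistics.pstdev(series) or 1.0       # guard constant windows
    z = [(v - mean) / std for v in series]
    seg = len(z) // n_segments
    paa = [statistics.fmean(z[i * seg:(i + 1) * seg])
           for i in range(n_segments)]
    breakpoints = [-0.67, 0.0, 0.67]             # equiprobable under N(0, 1)
    return "".join(alphabet[sum(v > b for b in breakpoints)] for v in paa)

print(sax_word([1, 1, 2, 2, 8, 8, 9, 9]))  # aadd
```

Classifiers such as SAX-VSM then work on histograms of these words rather than on raw samples, which is what makes the feature extraction phase so much cheaper on a smartphone.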

25 pages, 7975 KiB  
Article
An EigenECG Network Approach Based on PCANet for Personal Identification from ECG Signal
by Jae-Neung Lee, Yeong-Hyeon Byeon, Sung-Bum Pan and Keun-Chang Kwak
Sensors 2018, 18(11), 4024; https://doi.org/10.3390/s18114024 - 18 Nov 2018
Cited by 16 | Viewed by 3413
Abstract
We herein propose an EigenECG Network (EECGNet) based on the principal component analysis network (PCANet) for personal identification from electrocardiogram (ECG) biosignal data. The EECGNet consists of three stages. In the first stage, ECG signals are preprocessed by normalization and spike removal, and the R peak points in the preprocessed signals are detected. Subsequently, the ECG signals are transformed into two-dimensional images to use as the input to the EECGNet. We then perform patch-mean removal and apply the PCA algorithm, as in the PCANet, to the transformed two-dimensional images. The second stage is almost the same as the first, with the mean removal and PCA process repeated in the cascaded network. In the final stage, binary quantization, block sliding, and histogram computation are performed. Thus, the EECGNet performs well without the use of back-propagation to obtain features from the visual content. We constructed a Chosun University (CU)-ECG database from an ECG sensor implemented by ourselves, and additionally used the well-known MIT Beth Israel Hospital (BIH) ECG database. The experimental results clearly reveal the good performance and effectiveness of the proposed method compared with conventional algorithms such as PCA, the auto-encoder (AE), the extreme learning machine (ELM), and the ensemble extreme learning machine (EELM). Full article
(This article belongs to the Special Issue Artificial Intelligence and Machine Learning in Sensors Networks)
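The "patch-mean removal + PCA" stage that PCANet-style networks repeat can be sketched directly: collect all image patches, subtract each patch's own mean, and keep the leading PCA eigenvectors as convolution filters, with no back-propagation involved. A NumPy sketch on random images (patch size and filter count are illustrative):

```python
import numpy as np

def pca_filters(images, patch=3, n_filters=4):
    """One PCANet-style stage: gather all patches, remove each patch's
    mean, and return the leading PCA eigenvectors reshaped as filters."""
    patches = []
    for img in images:
        h, w = img.shape
        for i in range(h - patch + 1):
            for j in range(w - patch + 1):
                p = img[i:i + patch, j:j + patch].ravel()
                patches.append(p - p.mean())        # patch-mean removal
    x = np.stack(patches)                           # (N, patch*patch)
    cov = x.T @ x / len(x)
    vals, vecs = np.linalg.eigh(cov)                # ascending eigenvalues
    top = vecs[:, ::-1][:, :n_filters]              # leading eigenvectors
    return top.T.reshape(n_filters, patch, patch)

rng = np.random.default_rng(1)
filters = pca_filters([rng.standard_normal((8, 8)) for _ in range(3)])
print(filters.shape)  # (4, 3, 3)
```

The second stage of such a network repeats this step on the filter responses of the first, and the final stage binarizes and histograms the responses to form the feature vector.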

16 pages, 2260 KiB  
Article
3DAirSig: A Framework for Enabling In-Air Signatures Using a Multi-Modal Depth Sensor
by Jameel Malik, Ahmed Elhayek, Sheraz Ahmed, Faisal Shafait, Muhammad Imran Malik and Didier Stricker
Sensors 2018, 18(11), 3872; https://doi.org/10.3390/s18113872 - 10 Nov 2018
Cited by 26 | Viewed by 4790
Abstract
In-air signature is a new modality which is essential for user authentication and access control in noncontact mode and has been actively studied in recent years. However, it has been treated as a conventional online signature, which is essentially a 2D spatial representation. Notably, this modality bears a lot more potential due to an important hidden depth feature. Existing methods for in-air signature verification neither capture this unique depth feature explicitly nor fully explore its potential in verification. Moreover, these methods are based on heuristic approaches for fingertip or hand palm center detection, which are not feasible in practice. Inspired by the great progress in deep-learning-based hand pose estimation, we propose a real-time in-air signature acquisition method which estimates hand joint positions in 3D using a single depth image. The predicted 3D position of fingertip is recorded for each frame. We present four different implementations of a verification module, which are based on the extracted depth and spatial features. An ablation study was performed to explore the impact of the depth feature in particular. For matching, we employed the most commonly used multidimensional dynamic time warping (MD-DTW) algorithm. We created a new database which contains 600 signatures recorded from 15 different subjects. Extensive evaluations were performed on our database. Our method, called 3DAirSig, achieved an equal error rate (EER) of 0.46%. Experiments showed that depth itself is an important feature, which is sufficient for in-air signature verification. Full article
(This article belongs to the Special Issue Artificial Intelligence and Machine Learning in Sensors Networks)
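MD-DTW, the matcher named in the abstract, is standard DTW where each sequence element is a feature vector (here, e.g., the fingertip's x, y, depth per frame) and the local cost is the distance between vectors. A stdlib-only sketch of the basic dynamic program:

```python
import math

def md_dtw(a, b):
    """Multidimensional dynamic time warping distance between two
    sequences of feature vectors, with Euclidean local cost."""
    n, m = len(a), len(b)
    inf = float("inf")
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = math.dist(a[i - 1], b[j - 1])
            d[i][j] = cost + min(d[i - 1][j],      # insertion
                                 d[i][j - 1],      # deletion
                                 d[i - 1][j - 1])  # match
    return d[n][m]

sig = [(0, 0, 0), (1, 1, 1), (2, 2, 2)]
print(md_dtw(sig, sig))  # 0.0 for identical signatures
```

Verification then thresholds this distance between a probe signature and the enrolled templates; the ablation in the paper amounts to including or excluding the depth coordinate from the vectors.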

11 pages, 2811 KiB  
Article
Nondestructive Inspection of Reinforced Concrete Utility Poles with ISOMAP and Random Forest
by Saeed Ullah, Minjoong Jeong and Woosang Lee
Sensors 2018, 18(10), 3463; https://doi.org/10.3390/s18103463 - 15 Oct 2018
Cited by 15 | Viewed by 8337
Abstract
Reinforced concrete poles are very popular in transmission lines due to their economic efficiency. However, these poles have structural safety issues in their service terms that are caused by cracks, corrosion, deterioration, and short-circuiting of internal reinforcing steel wires. Therefore, they must be periodically inspected to evaluate their structural safety. There are many methods of performing external inspection after installation at an actual site. However, on-site nondestructive safety inspection of steel reinforcement wires inside poles is very difficult. In this study, we developed an application that classifies the magnetic field signals of multiple channels, as measured from the actual poles. Initially, the signal data were gathered by inserting sensors into the poles, and these data were then used to learn the patterns of safe and damaged features. These features were then processed with the isometric feature mapping (ISOMAP) dimensionality reduction algorithm. Subsequently, the resulting reduced data were processed with a random forest classification algorithm. The proposed method could elucidate whether the internal wires of the poles were broken or not according to actual sensor data. This method can be applied for evaluating the structural integrity of concrete poles in combination with portable devices for signal measurement (under development). Full article
(This article belongs to the Special Issue Artificial Intelligence and Machine Learning in Sensors Networks)

13 pages, 2917 KiB  
Article
Unsupervised Machine Learning for Advanced Tolerance Monitoring of Wire Electrical Discharge Machining of Disc Turbine Fir-Tree Slots
by Jun Wang, Jose A. Sanchez, Izaro Ayesta and Jon A. Iturrioz
Sensors 2018, 18(10), 3359; https://doi.org/10.3390/s18103359 - 08 Oct 2018
Cited by 14 | Viewed by 4173
Abstract
Manufacturing more efficient low-pressure turbines has become a topic of primary importance for aerospace companies. Specifically, wire electrical discharge machining (WEDM) of disc turbine fir-tree slots has attracted increasing interest in recent years. However, important issues must still be addressed for optimum application of the WEDM process to fir-tree slot production. The current work presents a novel approach to tolerance monitoring based on unsupervised machine learning methods, using the distribution of ionization time as a variable. The need for time-consuming experiments to set up threshold values of the monitoring signal is avoided by using K-means and hierarchical clustering. The developments have been tested in the WEDM of a generic fir-tree slot under industrial conditions. Results show that 100% of the zones classified into Clusters 1 and 2 are related to short-circuit situations, and 100% of the zones classified in Clusters 3 and 5 lie within the tolerance band of ±15 μm. Finally, the nine regions classified in Cluster 4 correspond to situations in which the wire is moving too far away from the part surface. These results are strongly in accord with the tolerance distribution as measured by a coordinate measuring machine. Full article
(This article belongs to the Special Issue Artificial Intelligence and Machine Learning in Sensors Networks)
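The unsupervised step above replaces hand-tuned signal thresholds with clustering. A plain 1-D K-means sketch shows the idea on made-up per-zone ionization-time statistics (two well-separated behaviour groups; the paper uses five clusters and real WEDM signals):

```python
def kmeans_1d(values, k=2, iters=20):
    """Plain 1-D K-means: assign each value to its nearest center,
    then recompute centers as group means, repeated for `iters` rounds."""
    centers = sorted(values)[:: max(1, len(values) // k)][:k]  # spread init
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for v in values:
            nearest = min(range(k), key=lambda c: abs(v - centers[c]))
            groups[nearest].append(v)
        centers = [sum(g) / len(g) if g else centers[i]
                   for i, g in enumerate(groups)]
    return centers

centers = sorted(kmeans_1d([1.0, 1.2, 0.9, 10.0, 10.5, 9.8], k=2))
print(centers)
```

Once the clusters are fixed, each machined zone is labelled by its nearest center, and labels (rather than absolute thresholds) are mapped to conditions such as short-circuit or in-tolerance.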

15 pages, 1038 KiB  
Article
Network Distance-Based Simulated Annealing and Fuzzy Clustering for Sensor Placement Ensuring Observability and Minimal Relative Degree
by Daniel Leitold, Agnes Vathy-Fogarassy and Janos Abonyi
Sensors 2018, 18(9), 3096; https://doi.org/10.3390/s18093096 - 14 Sep 2018
Cited by 17 | Viewed by 3056
Abstract
Network science-based analysis of the observability of dynamical systems has been a focus of attention over the past five years. The maximum matching-based approach provides a simple tool to determine the minimum number of sensors and their positions. However, the resulting proportion of sensors is particularly small when compared to the size of the system, and, although structural observability is ensured, the system demands additional sensors to provide the small relative order needed for fast and robust process monitoring and control. In this paper, two clustering and simulated annealing-based methodologies are proposed to assign additional sensors to the dynamical systems. The proposed methodologies simplify the observation of the system and decrease its relative order. The usefulness of the proposed method is justified in a sensor-placement problem of a heat exchanger network. The results show that the relative order of the observability is decreased significantly by an increase in the number of additional sensors. Full article
(This article belongs to the Special Issue Artificial Intelligence and Machine Learning in Sensors Networks)

14 pages, 3240 KiB  
Article
Signal Processing for Time Domain Wavelengths of Ultra-Weak FBGs Array in Perimeter Security Monitoring Based on Spark Streaming
by Zhenhao Yu, Fang Liu, Yinquan Yuan, Sihan Li and Zhengying Li
Sensors 2018, 18(9), 2937; https://doi.org/10.3390/s18092937 - 04 Sep 2018
Cited by 7 | Viewed by 3200
Abstract
To detect perimeter intrusion accurately and quickly, stream computing technology was used to improve real-time data processing in perimeter intrusion detection systems. Based on the traditional density-based spatial clustering of applications with noise (T-DBSCAN) algorithm, which depends on manual adjustment of neighborhood parameters, an adaptive-parameters DBSCAN (AP-DBSCAN) method that can achieve unsupervised calculations was proposed. The proposed AP-DBSCAN method was implemented on a Spark Streaming platform to deal with data stream collection and real-time analysis, as well as to judge and identify the different types of intrusion. A number of sensing and processing experiments were performed, and the experimental data indicated that AP-DBSCAN on the Spark Streaming platform calibrated the adaptive parameters well and achieved the same accuracy as the T-DBSCAN method without the manual setting of neighborhood parameters, in addition to performing well in perimeter intrusion detection systems. Full article
(This article belongs to the Special Issue Artificial Intelligence and Machine Learning in Sensors Networks)
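For reference, the baseline that AP-DBSCAN automates is plain DBSCAN, whose two neighborhood parameters (`eps`, `min_pts`) are normally set by hand. A minimal scalar-data sketch (AP-DBSCAN's contribution is choosing these parameters adaptively; here they are still manual):

```python
def dbscan_1d(points, eps, min_pts):
    """Minimal DBSCAN on scalar samples: labels >= 0 are cluster ids,
    -1 marks noise."""
    labels = [None] * len(points)
    cluster = -1
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        neigh = [j for j in range(len(points))
                 if abs(points[j] - points[i]) <= eps]
        if len(neigh) < min_pts:
            labels[i] = -1                      # not a core point: noise
            continue
        cluster += 1
        labels[i] = cluster
        queue = [j for j in neigh if j != i]
        while queue:                            # expand the cluster
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster             # noise becomes border point
            if labels[j] is not None:
                continue
            labels[j] = cluster
            neigh_j = [m for m in range(len(points))
                       if abs(points[m] - points[j]) <= eps]
            if len(neigh_j) >= min_pts:         # core point: keep expanding
                queue.extend(m for m in neigh_j if labels[m] is None)
    return labels

print(dbscan_1d([1.0, 1.1, 1.2, 5.0, 5.1, 9.9], eps=0.3, min_pts=2))
# [0, 0, 0, 1, 1, -1]
```

In the intrusion-detection setting, the cluster structure of the time-domain wavelength stream distinguishes event types, while isolated samples fall out as noise.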

19 pages, 630 KiB  
Article
Real-Time Task Assignment Approach Leveraging Reinforcement Learning with Evolution Strategies for Long-Term Latency Minimization in Fog Computing
by Long Mai, Nhu-Ngoc Dao and Minho Park
Sensors 2018, 18(9), 2830; https://doi.org/10.3390/s18092830 - 27 Aug 2018
Cited by 36 | Viewed by 4920
Abstract
The emerging fog computing technology is characterized by an ultralow latency response, which benefits a massive number of time-sensitive services and applications in the Internet of Things (IoT) era. To this end, the fog computing infrastructure must minimize latencies for both the service delivery and execution phases. While the transmission latency significantly depends on external factors (e.g., channel bandwidth, communication resources, and interference), the computation latency can be considered an internal issue that the fog computing infrastructure could actively self-handle. From this viewpoint, we propose a reinforcement learning approach that utilizes evolution strategies for real-time task assignment among fog servers to minimize the total computation latency over a long-term period. Experimental results demonstrate that the proposed approach reduces the latency by approximately 16.1% compared to the existing methods. Additionally, the proposed learning algorithm has low computational complexity and operates effectively in parallel; therefore, it is especially appropriate for implementation on modern heterogeneous computing platforms. Full article
(This article belongs to the Special Issue Artificial Intelligence and Machine Learning in Sensors Networks)
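Evolution strategies, the optimizer named in the abstract, estimate a policy gradient from reward-weighted random perturbations of the parameters, which is why each perturbation can be evaluated in parallel. A self-contained sketch on a toy quadratic reward with a known optimum (the reward, optimum, and hyperparameters are illustrative; in the paper the reward would be the negative task-assignment latency):

```python
import random

def es_step(theta, reward, pop=50, sigma=0.1, lr=0.05):
    """One evolution-strategies update: sample antithetic Gaussian
    perturbations, score each with the reward, and move theta along
    the reward-weighted perturbation directions."""
    grad = [0.0] * len(theta)
    for _ in range(pop):
        eps = [random.gauss(0, 1) for _ in theta]
        r_plus = reward([t + sigma * e for t, e in zip(theta, eps)])
        r_minus = reward([t - sigma * e for t, e in zip(theta, eps)])
        for i, e in enumerate(eps):
            grad[i] += 0.5 * (r_plus - r_minus) * e   # antithetic estimate
    return [t + lr * g / (pop * sigma) for t, g in zip(theta, grad)]

# Toy reward with a known optimum at [1, 2], purely for illustration.
reward = lambda th: -((th[0] - 1.0) ** 2 + (th[1] - 2.0) ** 2)
random.seed(0)
theta = [0.0, 0.0]
for _ in range(300):
    theta = es_step(theta, reward)
print(theta)  # converges to approximately [1.0, 2.0]
```

No back-propagation through the latency model is needed: only black-box reward evaluations, which suits a fog controller that can merely observe measured latencies.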

15 pages, 1756 KiB  
Article
Decentralized Online Simultaneous Localization and Mapping for Multi-Agent Systems
by Andrés C. Jiménez, Vicente García-Díaz, Rubén González-Crespo and Sandro Bolaños
Sensors 2018, 18(8), 2612; https://doi.org/10.3390/s18082612 - 09 Aug 2018
Cited by 7 | Viewed by 3265
Abstract
Planning tasks performed by a robotic agent (RA) require prior access to a map of the environment and the position where the agent is located. This creates a problem when the agent is placed in a new environment. To solve it, the RA must execute the task known as Simultaneous Localization and Mapping (SLAM), which locates the agent in the new environment while generating the map at the same time, either geometrically or topologically. One of the major problems in SLAM is the amount of memory required for the RA to store the details of the environment map. In addition, environment data capture needs a robust processing unit to handle data representation, which in turn is reflected in a bigger RA unit with higher energy use and production costs. This article presents the design of a system capable of a decentralized implementation of SLAM, based on a set of wireless agents capable of storing and distributing the map as it is generated by the RA. The proposed system was validated in an environment with a surface area of 25 m², in which it was capable of generating the topological map online and without relying on external units connected to the system. Full article
(This article belongs to the Special Issue Artificial Intelligence and Machine Learning in Sensors Networks)
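The central design point — the topological map lives on the wireless agents rather than on the robotic agent — can be sketched minimally. The hash-based node partitioning and the class names below are illustrative assumptions, not the paper's actual protocol:

```python
# Toy sketch: the topological map is an adjacency list whose nodes are
# scattered across wireless agents; the robotic agent never holds the whole
# map, it asks whichever agent owns a node for that node's neighbours.
class WirelessAgent:
    def __init__(self):
        self.partial_map = {}  # node -> set of neighbouring nodes

    def store_edge(self, a, b):
        self.partial_map.setdefault(a, set()).add(b)

class DistributedMap:
    def __init__(self, agents):
        self.agents = agents

    def _owner(self, node):
        # deterministic assignment of a node to one agent (hash partitioning)
        return self.agents[hash(node) % len(self.agents)]

    def add_edge(self, a, b):
        # store the undirected edge on the owners of both endpoints
        self._owner(a).store_edge(a, b)
        self._owner(b).store_edge(b, a)

    def neighbours(self, node):
        return self._owner(node).partial_map.get(node, set())

agents = [WirelessAgent() for _ in range(3)]
dmap = DistributedMap(agents)
for a, b in [("hall", "kitchen"), ("hall", "office"), ("office", "lab")]:
    dmap.add_edge(a, b)
```

The RA only ever issues `add_edge` and `neighbours` calls, so its own memory footprint stays constant regardless of the map's size, which is the motivation the abstract describes.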

17 pages, 1552 KiB  
Article
Attributes’ Importance for Zero-Shot Pose-Classification Based on Wearable Sensors
by Hiroki Ohashi, Mohammad Al-Naser, Sheraz Ahmed, Katsuyuki Nakamura, Takuto Sato and Andreas Dengel
Sensors 2018, 18(8), 2485; https://doi.org/10.3390/s18082485 - 01 Aug 2018
Cited by 16 | Viewed by 4311
Abstract
This paper presents a simple yet effective method for improving the performance of zero-shot learning (ZSL). ZSL classifies instances of unseen classes, for which no training data are available, by utilizing the attributes of the classes. Conventional ZSL methods treat all the available attributes equally, but this sometimes causes misclassification, because an attribute that is effective for classifying instances of one class is not always effective for another class. In this case, the metric for classifying the latter class can be undesirably influenced by the irrelevant attribute. This paper solves this problem by taking the importance of each attribute for each class into account when calculating the metric. In addition to proposing this new method, this paper also contributes a dataset for pose classification based on wearable sensors, named HDPoseDS. It contains 22 classes of poses performed by 10 subjects wearing 31 IMU sensors across the full body. To the best of our knowledge, it is the richest wearable-sensor dataset, especially in terms of sensor density, and it is therefore suitable for studying zero-shot pose/action recognition. The presented method was evaluated on HDPoseDS and achieved a relative improvement of 5.9% over the best baseline method. Full article
(This article belongs to the Special Issue Artificial Intelligence and Machine Learning in Sensors Networks)
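The attribute-importance idea can be sketched as a weighted distance to per-class attribute prototypes. The attribute names, prototype values, and weights below are invented for illustration; the paper's actual importance estimation is more involved.

```python
# Toy sketch of attribute-importance weighting in ZSL: each class is described
# by an attribute vector, and a per-class weight vector says how much each
# attribute matters when scoring that class (all values are illustrative).
CLASS_ATTRS = {
    "sitting": [1.0, 0.0, 0.2],   # attributes: knees-bent, arms-raised, leaning
    "stretch": [0.1, 1.0, 0.1],
}
CLASS_WEIGHTS = {
    "sitting": [1.0, 0.2, 0.5],   # arms-raised is nearly irrelevant for "sitting"
    "stretch": [0.3, 1.0, 0.3],
}

def weighted_distance(pred_attrs, cls):
    attrs, w = CLASS_ATTRS[cls], CLASS_WEIGHTS[cls]
    return sum(wi * (p - a) ** 2 for wi, p, a in zip(w, pred_attrs, attrs))

def classify(pred_attrs):
    # pick the class whose importance-weighted attribute prototype is closest
    return min(CLASS_ATTRS, key=lambda c: weighted_distance(pred_attrs, c))
```

With the unweighted distance, the input `[0.9, 0.9, 0.2]` would be pulled toward "stretch" by the irrelevant arms-raised attribute; downweighting it for "sitting" yields the intended label, which is the failure mode the abstract describes.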

26 pages, 10453 KiB  
Article
Improving Classification Algorithms by Considering Score Series in Wireless Acoustic Sensor Networks
by Amalia Luque, Javier Romero-Lemos, Alejandro Carrasco and Julio Barbancho
Sensors 2018, 18(8), 2465; https://doi.org/10.3390/s18082465 - 30 Jul 2018
Cited by 6 | Viewed by 3129
Abstract
The reduction in the size, power consumption and price of many sensor devices has enabled the deployment of numerous sensor networks that can be used to monitor and control several aspects of various habitats. More specifically, the analysis of sounds has attracted huge interest in urban and wildlife environments, where the classification of the different signals has become a major issue. Various algorithms have been described for this purpose; a number of them frame the sound and classify these frames, while others take advantage of the sequential information embedded in a sound signal. In this paper, a new algorithm is proposed that, while maintaining the frame-classification advantages, adds a new phase that considers and classifies the score series derived after frame labelling. These score series are represented using cepstral coefficients and classified using standard machine-learning classifiers. The proposed algorithm has been applied to a dataset of anuran calls and its results compared to the performance obtained in previous experiments on sensor networks. The main outcome of our research is that the consideration of score series strongly outperforms other algorithms and attains outstanding performance despite the noisy background commonly encountered in this kind of application. Full article
(This article belongs to the Special Issue Artificial Intelligence and Machine Learning in Sensors Networks)
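The score-series phase can be sketched as follows: take the series of per-frame scores for a class, log-compress it, and summarise it with a DCT, which is essentially how cepstral coefficients are obtained from a spectrum. A minimal sketch with invented example scores (a real pipeline would feed these features to a standard classifier):

```python
import math

def dct(series, n_coeff):
    """Type-II DCT: the "cepstral" summary of a frame-score series."""
    N = len(series)
    return [sum(s * math.cos(math.pi * k * (2 * i + 1) / (2 * N))
                for i, s in enumerate(series))
            for k in range(n_coeff)]

def score_series_features(frame_scores, n_coeff=4, eps=1e-6):
    # log-compress the per-frame classifier scores, then decorrelate them with
    # a DCT, mirroring how cepstral coefficients summarise a spectrum
    logged = [math.log(s + eps) for s in frame_scores]
    return dct(logged, n_coeff)

# e.g. the score a frame classifier assigned to "species A" over 8 frames
feats = score_series_features([0.9, 0.8, 0.85, 0.2, 0.15, 0.8, 0.9, 0.88])
```

The first coefficient captures the overall score level and the higher ones capture how the score fluctuates across frames, so the temporal pattern of frame labels becomes a fixed-length feature vector.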

22 pages, 8784 KiB  
Article
Voiceprint Identification for Limited Dataset Using the Deep Migration Hybrid Model Based on Transfer Learning
by Cunwei Sun, Yuxin Yang, Chang Wen, Kai Xie and Fangqing Wen
Sensors 2018, 18(7), 2399; https://doi.org/10.3390/s18072399 - 23 Jul 2018
Cited by 30 | Viewed by 5928
Abstract
The convolutional neural network (CNN) has made great strides in the area of voiceprint recognition, but it needs a huge number of data samples to train a deep neural network. In practice, it is too difficult to obtain a large number of training samples, and the network cannot reach a good convergence state with a limited dataset. To solve this problem, a new method using a deep migration hybrid model is put forward, which makes it easier to realize voiceprint recognition for small samples. Firstly, it uses transfer learning to transfer a network trained on a large voiceprint dataset to our limited voiceprint dataset for further training, with the fully-connected layers of the pre-trained model replaced by restricted Boltzmann machine layers. Secondly, data augmentation is adopted to increase the number of voiceprint samples. Finally, we introduce a fast batch normalization algorithm to improve the speed of network convergence and shorten the training time. Our new voiceprint recognition approach uses the TLCNN-RBM (convolutional neural network mixed with a restricted Boltzmann machine, based on transfer learning) model, a deep migration hybrid model that achieves an average accuracy of over 97%, which is higher than that of either CNN or the TL-CNN network (convolutional neural network based on transfer learning). Thus, an effective method for voiceprint recognition with small samples has been provided. Full article
(This article belongs to the Special Issue Artificial Intelligence and Machine Learning in Sensors Networks)

27 pages, 6021 KiB  
Article
Robust Face Recognition Using the Deep C2D-CNN Model Based on Decision-Level Fusion
by Jing Li, Tao Qiu, Chang Wen, Kai Xie and Fang-Qing Wen
Sensors 2018, 18(7), 2080; https://doi.org/10.3390/s18072080 - 28 Jun 2018
Cited by 64 | Viewed by 8642
Abstract
Given that facial features contain a wide range of identification information and cannot be completely represented by a single feature, the fusion of multiple features is particularly significant for achieving robust face recognition performance, especially when there is a big difference between the test sets and the training sets. This has been proven in both traditional and deep learning approaches. In this work, we propose a novel method named C2D-CNN (color 2-dimensional principal component analysis (2DPCA)-convolutional neural network). C2D-CNN combines the features learnt from the original pixels with the image representation learnt by the CNN, and then performs decision-level fusion, which can significantly improve the performance of face recognition. Furthermore, a new CNN model is proposed: firstly, we introduce a normalization layer into the CNN to speed up network convergence and shorten the training time. Secondly, a layered activation function is introduced to make the activation function adaptive to the normalized data. Finally, probabilistic max-pooling is applied so that feature information is preserved to the maximum extent while maintaining feature invariance. Experimental results show that, compared with state-of-the-art methods, our method performs better and addresses the low recognition accuracy caused by the difference between test and training datasets. Full article
(This article belongs to the Special Issue Artificial Intelligence and Machine Learning in Sensors Networks)
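Decision-level fusion itself is a small operation: each recogniser emits per-identity scores, and the final decision is taken on a combination of those scores rather than on combined features. A minimal sketch with an assumed fixed fusion weight and made-up identities and scores (the paper's actual combination rule may differ):

```python
# Toy sketch of decision-level fusion: two recognisers each produce a
# per-identity score vector; fusion combines the decisions' scores,
# not the underlying features (the weight 0.7 is illustrative).
def fuse(scores_cnn, scores_pca, w_cnn=0.7):
    return [w_cnn * c + (1 - w_cnn) * p for c, p in zip(scores_cnn, scores_pca)]

def identify(scores_cnn, scores_pca, identities):
    fused = fuse(scores_cnn, scores_pca)
    return identities[fused.index(max(fused))]

ids = ["alice", "bob", "carol"]
# the CNN is unsure between alice and bob; the pixel-based model breaks the tie
who = identify([0.48, 0.47, 0.05], [0.30, 0.60, 0.10], ids)
```

This is the sense in which fusing two complementary representations helps when the CNN alone is uncertain: the second model only needs to be informative on the cases the first one finds ambiguous.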

18 pages, 13325 KiB  
Article
Online Model Updating and Dynamic Learning Rate-Based Robust Object Tracking
by Md Mojahidul Islam, Guoqing Hu and Qianbo Liu
Sensors 2018, 18(7), 2046; https://doi.org/10.3390/s18072046 - 26 Jun 2018
Cited by 8 | Viewed by 3554
Abstract
Robust visual tracking is a significant and challenging issue in computer vision-related research fields and has attracted an immense amount of attention from researchers. Owing to its many practical applications, numerous algorithms have been introduced. It is considered a challenging problem due to the unpredictability of various real-time situations, such as illumination variations, occlusion, fast motion, deformation, and scale variation, given that only the initial target position is known. To address these matters, we used a kernelized-correlation-filter-based translation filter with the integration of multiple features, such as the histogram of oriented gradients (HOG) and color attributes. These powerful features are useful for differentiating the target from the surrounding background and are effective under motion blur and illumination variations. To minimize the scale variation problem, we designed a correlation-filter-based scale filter. The proposed adaptive model-updating and dynamic learning rate strategies, based on a peak-to-sidelobe ratio, effectively reduce model-drifting problems by avoiding noisy appearance changes. The experimental results show that our method provides the best performance compared to other methods, with a distance precision score of 79.9%, an overlap success score of 59.0%, and an average running speed of 74 frames per second on the object tracking benchmark (OTB-2015). Full article
(This article belongs to the Special Issue Artificial Intelligence and Machine Learning in Sensors Networks)
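The peak-to-sidelobe ratio (PSR) heuristic at the heart of the dynamic learning rate can be sketched directly: a sharp correlation peak (high PSR) means a confident detection, so the appearance model may be updated faster; a flat response suggests occlusion or noise, so updating is suppressed. The exclusion window, threshold, and rates below are illustrative choices, not the paper's values.

```python
# PSR = (peak - mean of sidelobe region) / (std of sidelobe region),
# where the sidelobe region is the response map minus a small window
# around the peak.
def psr(response, peak_exclude=1):
    h, w = len(response), len(response[0])
    flat = [(response[r][c], r, c) for r in range(h) for c in range(w)]
    peak, pr, pc = max(flat)
    sidelobe = [v for v, r, c in flat
                if abs(r - pr) > peak_exclude or abs(c - pc) > peak_exclude]
    mean = sum(sidelobe) / len(sidelobe)
    var = sum((v - mean) ** 2 for v in sidelobe) / len(sidelobe)
    return (peak - mean) / (var ** 0.5 + 1e-12)

def learning_rate(psr_value, lr_high=0.02, lr_low=0.0, threshold=6.0):
    # freeze model updates when the detection looks unreliable
    return lr_high if psr_value >= threshold else lr_low
```

A sharp single-peak response yields a very large PSR and a normal update, while a uniform response yields PSR ≈ 0 and no update, which is exactly the drift-avoidance behaviour the abstract describes.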

19 pages, 14263 KiB  
Article
An Efficient Neural-Network-Based Microseismic Monitoring Platform for Hydraulic Fracture on an Edge Computing Architecture
by Xiaopu Zhang, Jun Lin, Zubin Chen, Feng Sun, Xi Zhu and Gengfa Fang
Sensors 2018, 18(6), 1828; https://doi.org/10.3390/s18061828 - 05 Jun 2018
Cited by 18 | Viewed by 5563
Abstract
Microseismic monitoring is one of the most critical technologies for hydraulic fracturing in oil and gas production. There are two major challenges in detecting events accurately and efficiently. One is how to achieve high accuracy given a poor signal-to-noise ratio (SNR); the other concerns real-time data transmission. Taking these challenges into consideration, an edge-computing-based platform, namely Edge-to-Center LearnReduce, is presented in this work. The platform consists of a data center and many edge components. At the data center, a neural network model combining a convolutional neural network (CNN) and long short-term memory (LSTM) is designed and trained using previously obtained data. Once the model is fully trained, it is sent to the edge components for event detection and data reduction. At each edge component, probabilistic inference is added to the neural network model to improve its accuracy. Finally, the reduced data are delivered to the data center. Based on experimental results, a high detection accuracy (over 96%) with about 90% less transmitted data was achieved by using the proposed approach on a microseismic monitoring system. These results show that the platform can simultaneously improve the accuracy and efficiency of microseismic monitoring. Full article
(This article belongs to the Special Issue Artificial Intelligence and Machine Learning in Sensors Networks)

21 pages, 13639 KiB  
Article
Agreement Technologies for Energy Optimization at Home
by Alfonso González-Briones, Pablo Chamoso, Fernando De La Prieta, Yves Demazeau and Juan M. Corchado
Sensors 2018, 18(5), 1633; https://doi.org/10.3390/s18051633 - 19 May 2018
Cited by 34 | Viewed by 4479
Abstract
Nowadays, it is becoming increasingly common to deploy sensors in public buildings or homes with the aim of obtaining data from the environment and taking decisions that help to save energy. Many current state-of-the-art systems make decisions considering solely the environmental factors that cause the consumption of energy. These systems are successful at optimizing energy consumption; however, they do not adapt to the preferences of users and their comfort. Any system that is to be used by end-users should consider factors that affect their wellbeing. Thus, this article proposes an energy-saving system which, apart from considering the environmental conditions, also adapts to the preferences of inhabitants. The architecture is based on a Multi-Agent System (MAS) whose agents use Agreement Technologies (AT) to perform a negotiation process between the comfort preferences of the users and the degree of optimization that the system can achieve according to these preferences. A case study was conducted in an office building, showing that the proposed system achieved average energy savings of 17.15%. Full article
(This article belongs to the Special Issue Artificial Intelligence and Machine Learning in Sensors Networks)

14 pages, 2264 KiB  
Article
Residual Error Based Anomaly Detection Using Auto-Encoder in SMD Machine Sound
by Dong Yul Oh and Il Dong Yun
Sensors 2018, 18(5), 1308; https://doi.org/10.3390/s18051308 - 24 Apr 2018
Cited by 88 | Viewed by 10628
Abstract
Detecting an anomaly or an abnormal situation from given noise is highly useful in an environment where constantly verifying and monitoring a machine is required. As deep learning algorithms are further developed, current studies have focused on this problem. However, there are too many variables to define anomalies, and the human annotation for a large collection of abnormal data labeled at the class-level is very labor-intensive. In this paper, we propose to detect abnormal operation sounds or outliers in a very complex machine along with reducing the data-driven annotation cost. The architecture of the proposed model is based on an auto-encoder, and it uses the residual error, which stands for its reconstruction quality, to identify the anomaly. We assess our model using Surface-Mounted Device (SMD) machine sound, which is very complex, as experimental data, and state-of-the-art performance is successfully achieved for anomaly detection. Full article
(This article belongs to the Special Issue Artificial Intelligence and Machine Learning in Sensors Networks)
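The residual-error criterion is independent of the network's internals: reconstruct the input, score it by its reconstruction error, and flag inputs whose error exceeds a threshold calibrated on normal data. In the sketch below, a trivial mean-pattern "reconstructor" stands in for the trained auto-encoder, and the data and the 3-sigma threshold are illustrative assumptions.

```python
# Residual-error anomaly scoring: a normal input reconstructs well (small
# residual); an anomalous one does not. The stand-in reconstructor simply
# returns the mean normal pattern in place of decode(encode(x)).
NORMAL = [[0.9, 1.1, 1.0], [1.0, 0.9, 1.1], [1.1, 1.0, 0.9]]  # normal sound windows
MEAN = [sum(col) / len(NORMAL) for col in zip(*NORMAL)]

def reconstruct(x):
    # placeholder for the trained auto-encoder's decode(encode(x))
    return MEAN

def residual_error(x):
    return sum((xi - ri) ** 2 for xi, ri in zip(x, reconstruct(x)))

# calibrate the threshold on residuals of known-normal data only,
# so no abnormal labels are ever needed
errs = [residual_error(x) for x in NORMAL]
mu = sum(errs) / len(errs)
sd = (sum((e - mu) ** 2 for e in errs) / len(errs)) ** 0.5
THRESHOLD = mu + 3 * sd

def is_anomaly(x):
    return residual_error(x) > THRESHOLD
```

The key point, matching the abstract, is that calibration uses only normal data, which is why the approach avoids the labor-intensive annotation of abnormal classes.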

14 pages, 12873 KiB  
Article
Application of Deep Learning Architectures for Accurate and Rapid Detection of Internal Mechanical Damage of Blueberry Using Hyperspectral Transmittance Data
by Zhaodi Wang, Menghan Hu and Guangtao Zhai
Sensors 2018, 18(4), 1126; https://doi.org/10.3390/s18041126 - 07 Apr 2018
Cited by 110 | Viewed by 9869
Abstract
Deep learning has become a widely used and powerful tool in many research fields, although not yet so much in agricultural technologies. In this work, two deep convolutional neural networks (CNNs), namely Residual Network (ResNet) and its improved version ResNeXt, are used to detect internal mechanical damage of blueberries using hyperspectral transmittance data. The original structure and size of the hypercubes are adapted for deep CNN training, and to ensure that the models are applicable to hypercubes, we adjust the number of filters in the convolutional layers. Moreover, a total of five traditional machine learning algorithms, namely Sequential Minimal Optimization (SMO), Linear Regression (LR), Random Forest (RF), Bagging and Multilayer Perceptron (MLP), are run as comparison experiments. In terms of model assessment, k-fold cross-validation is used to verify that model performance does not vary across different partitions of the dataset. In real-world applications, selling damaged berries leads to a greater loss than discarding sound ones. Thus, precision, recall, and F1-score are also used as evaluation indicators alongside accuracy to quantify the false positive rate; the first three indicators are seldom used by investigators in the agricultural engineering domain. Furthermore, ROC curves and precision-recall curves are plotted to visualize the performance of the classifiers. The fine-tuned ResNet/ResNeXt achieve average accuracies and F1-scores of 0.8844/0.8784 and 0.8952/0.8905, respectively. The classifiers SMO/LR/RF/Bagging/MLP obtain average accuracies and F1-scores of 0.8082/0.7606/0.7314/0.7113/0.7827 and 0.8268/0.7796/0.7529/0.7339/0.7971, respectively. The two deep learning models achieve better classification performance than the traditional machine learning methods. Classifying each testing sample takes only 5.2 ms and 6.5 ms for ResNet and ResNeXt, respectively, indicating that the deep learning framework has great potential for online fruit sorting. The results of this study demonstrate the potential of deep CNNs for analyzing the internal mechanical damage of fruit. Full article
(This article belongs to the Special Issue Artificial Intelligence and Machine Learning in Sensors Networks)
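The evaluation indicators the study emphasises are straightforward to compute from confusion-matrix counts. A minimal sketch with "damaged" as the positive class (the label names are illustrative): precision penalises sound berries flagged as damaged, while recall penalises damaged berries that slip through as sound.

```python
def prf(y_true, y_pred, positive="damaged"):
    """Precision, recall, and F1-score for one positive class."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```

Because selling a damaged berry is costlier than discarding a sound one, recall on the "damaged" class is the number a sorting line would watch most closely, which is why accuracy alone is insufficient here.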

15 pages, 13503 KiB  
Article
Comparative Performance Analysis of Support Vector Machine, Random Forest, Logistic Regression and k-Nearest Neighbours in Rainbow Trout (Oncorhynchus Mykiss) Classification Using Image-Based Features
by Mohammadmehdi Saberioon, Petr Císař, Laurent Labbé, Pavel Souček, Pablo Pelissier and Thierry Kerneis
Sensors 2018, 18(4), 1027; https://doi.org/10.3390/s18041027 - 29 Mar 2018
Cited by 49 | Viewed by 7160
Abstract
The main aim of this study was to develop a new objective method for evaluating the impacts of different diets on live fish skin using image-based features. In total, one hundred and sixty rainbow trout (Oncorhynchus mykiss) were fed either a fish-meal based diet (80 fish) or a 100% plant-based diet (80 fish) and photographed using a consumer-grade digital camera. Twenty-three colour features and four texture features were extracted. Four different classification methods were used to evaluate fish diets: Random Forest (RF), Support Vector Machine (SVM), Logistic Regression (LR) and k-Nearest Neighbours (k-NN). The SVM with a radial basis kernel provided the best classifier, with a correct classification rate (CCR) of 82% and a Kappa coefficient of 0.65. Although both the LR and RF methods were less accurate than the SVM, they achieved good classification, with CCRs of 75% and 70%, respectively. The k-NN was the least accurate classification model (40%). Overall, it can be concluded that consumer-grade digital cameras can be employed as fast, accurate and non-invasive sensors for classifying rainbow trout based on their diets. Furthermore, there was a close association between the image-based features and the diet the fish received during cultivation. These procedures can be used as a non-invasive, accurate and precise approach for monitoring fish status during cultivation by evaluating a diet's effects on fish skin. Full article
(This article belongs to the Special Issue Artificial Intelligence and Machine Learning in Sensors Networks)
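The two headline metrics, the CCR and the Kappa coefficient, can be sketched in a few lines. Kappa discounts the agreement a classifier would reach by chance, as estimated from the marginal label frequencies, which is why it is reported alongside the raw CCR (the labels below are illustrative stand-ins for the two diets):

```python
def ccr(y_true, y_pred):
    """Correct classification rate: the fraction of matching labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def kappa(y_true, y_pred):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    n = len(y_true)
    observed = ccr(y_true, y_pred)
    labels = set(y_true) | set(y_pred)
    # chance agreement from the marginal label frequencies
    expected = sum((y_true.count(l) / n) * (y_pred.count(l) / n) for l in labels)
    return (observed - expected) / (1 - expected)
```

A degenerate classifier that always predicts the majority diet can still post a respectable CCR on a balanced dataset, but its kappa collapses to 0, which is the failure mode the coefficient is designed to expose.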

20 pages, 2529 KiB  
Article
An Internet of Things System for Underground Mine Air Quality Pollutant Prediction Based on Azure Machine Learning
by ByungWan Jo and Rana Muhammad Asad Khan
Sensors 2018, 18(4), 930; https://doi.org/10.3390/s18040930 - 21 Mar 2018
Cited by 62 | Viewed by 11198
Abstract
The implementation of wireless sensor networks (WSNs) for monitoring the complex, dynamic, and harsh environment of underground coal mines (UCMs) is sought around the world to enhance safety. However, previously developed smart systems are limited to monitoring or, in a few cases, can report events. Therefore, this study introduces a reliable, efficient, and cost-effective internet of things (IoT) system for air quality monitoring with the newly added features of assessment and pollutant prediction. The system comprises sensor modules, communication protocols, and a base station running Azure Machine Learning (AML) Studio. Arduino-based sensor modules measuring eight different parameters were installed at separate locations of an operational UCM. Based on the sensed data, the proposed system assesses mine air quality in terms of the mine environment index (MEI). Principal component analysis (PCA) identified CH4, CO, SO2, and H2S as the gases most significantly affecting mine air quality. The results of the PCA were fed into an artificial neural network (ANN) model in AML Studio, which enabled the prediction of the MEI. An optimum number of neurons was determined for both the actual input and the PCA-based input parameters. The results showed a better performance of the PCA-based ANN for MEI prediction, with R2 and RMSE values of 0.6654 and 0.2104, respectively. Therefore, the proposed Arduino- and AML-based system enhances mine environmental safety by quickly assessing and predicting mine air quality. Full article
(This article belongs to the Special Issue Artificial Intelligence and Machine Learning in Sensors Networks)
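The reported regression metrics, R² and RMSE, are easy to compute from predictions; a minimal sketch (the example values in the test are made up, not the study's data):

```python
def rmse(y_true, y_pred):
    """Root-mean-square error between observed and predicted MEI values."""
    return (sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)) ** 0.5

def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 minus residual over total variance."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot
```

Note that R² can go negative when a model predicts worse than the constant mean, so an R² of 0.6654 indicates the PCA-based ANN explains roughly two-thirds of the MEI variance.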