Intelligent Information Processing for Sensors and IoT Communications

A special issue of Information (ISSN 2078-2489). This special issue belongs to the section "Information and Communications Technology".

Deadline for manuscript submissions: 31 January 2025 | Viewed by 16661

Special Issue Editor


Prof. Dr. Zahir M. Hussain
Guest Editor
School of Engineering, Edith Cowan University, Joondalup, WA 6027, Australia
Interests: signal processing; artificial intelligence; electronics; sensors; applied mathematics

Special Issue Information

Dear Colleagues,

For the last three decades, digital signal processing (DSP) and artificial intelligence (AI), supported by remarkable developments in nanotechnology, have led to a revolution in digital communications and smart systems and inspired major advances in many fields. A new era for civilization has begun, giving hope for solving the challenging problems of human life. Today, sensor signal processing, supported by rapid progress in deep learning, is driving a new revolution with significant impact across applications, in conjunction with the latest developments in the Internet of Things (IoT). Most existing applications rely on complex digital systems for information processing; in some settings, however, such as wireless sensor networks, systems face limited storage, power, or computation capabilities, forcing conventional, complex techniques to be reconsidered. This Special Issue of Information will focus on smart signal processing and/or AI techniques that enable efficient processing in sensor networks and IoT-related systems.

Topics to be covered include, but are not limited to, the following, aimed at sensor support, wireless sensor networks (WSNs), and IoT applications:

  • Deep learning and adaptive DSP techniques;
  • Biomedical applications;
  • Energy-efficient techniques and designs;
  • Low-cost hardware designs;
  • Efficient routing and transmission protocols;
  • Security techniques;
  • Computationally efficient techniques;
  • Communication techniques and systems.

Prof. Dr. Zahir M. Hussain
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Information is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • deep learning
  • signal processing
  • sensors
  • IoT
  • security
  • wireless sensor networks
  • communication systems
  • biomedical engineering

Published Papers (9 papers)


Research


20 pages, 9186 KiB  
Article
IoT-Based SHM Using Digital Twins for Interoperable and Scalable Decentralized Smart Sensing Systems
by Jiahang Chen, Jan Reitz, Rebecca Richstein, Kai-Uwe Schröder and Jürgen Roßmann
Information 2024, 15(3), 121; https://doi.org/10.3390/info15030121 - 20 Feb 2024
Viewed by 1149
Abstract
Advancing digitalization is reaching the realm of lightweight construction and structural–mechanical components. Through the synergistic combination of distributed sensors and intelligent evaluation algorithms, traditional structures evolve into smart sensing systems. In this context, Structural Health Monitoring (SHM) plays a key role in managing potential risks to human safety and environmental integrity due to structural failures by providing analysis, localization, and records of the structure’s loading and damage conditions. The establishment of networks between sensors and data-processing units via Internet of Things (IoT) technologies is an elementary prerequisite for the integration of SHM into smart sensing systems. However, this integration of SHM faces significant restrictions due to the scalability challenges of smart sensing systems and IoT-specific issues, including communication security and interoperability. To address these issues, this paper presents a comprehensive methodological framework aimed at facilitating the scalable integration of objects, ranging from components via systems to clusters, into SHM systems. Furthermore, we detail a prototypical implementation of the conceptually developed framework, demonstrating a structural component and its corresponding Digital Twin. Here, real-time-capable deformation- and strain-based monitoring of the structure is achieved, showcasing the practical applicability of the proposed framework.
(This article belongs to the Special Issue Intelligent Information Processing for Sensors and IoT Communications)
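The framework is described at a conceptual level in the abstract; as a rough illustration of the kind of component-level digital twin it targets, the following hedged Python sketch aggregates strain-gauge readings for one structural component and flags limit exceedances. All class names, field names, and the strain threshold are hypothetical and not taken from the paper.

```python
from dataclasses import dataclass, field
from statistics import mean
from typing import List

@dataclass
class StrainReading:
    sensor_id: str
    microstrain: float  # measured strain in micro-strain units

@dataclass
class ComponentTwin:
    """Hypothetical digital twin of one structural component (illustrative only)."""
    component_id: str
    strain_limit: float = 1500.0                      # assumed allowable micro-strain
    readings: List[StrainReading] = field(default_factory=list)

    def ingest(self, reading: StrainReading) -> None:
        # In a deployed system this would arrive over an IoT transport (e.g. MQTT).
        self.readings.append(reading)

    def health_report(self) -> dict:
        if not self.readings:
            return {"component": self.component_id, "status": "no data"}
        peak = max(r.microstrain for r in self.readings)
        return {
            "component": self.component_id,
            "mean_microstrain": mean(r.microstrain for r in self.readings),
            "peak_microstrain": peak,
            "status": "alert" if peak > self.strain_limit else "ok",
        }

twin = ComponentTwin("wing-spar-03")
twin.ingest(StrainReading("sg-1", 820.0))
twin.ingest(StrainReading("sg-2", 1640.0))
print(twin.health_report())
```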

16 pages, 8616 KiB  
Article
Enhancing Pedestrian Tracking in Autonomous Vehicles by Using Advanced Deep Learning Techniques
by Majdi Sukkar, Madhu Shukla, Dinesh Kumar, Vassilis C. Gerogiannis, Andreas Kanavos and Biswaranjan Acharya
Information 2024, 15(2), 104; https://doi.org/10.3390/info15020104 - 09 Feb 2024
Viewed by 2095
Abstract
Effective collision risk reduction in autonomous vehicles relies on robust and straightforward pedestrian tracking. Challenges posed by occlusion and switching scenarios significantly impede the reliability of pedestrian tracking. In the current study, we strive to enhance the reliability and efficacy of pedestrian tracking in complex scenarios. In particular, we introduce a new pedestrian tracking algorithm that leverages both the YOLOv8 (You Only Look Once) object detector and the StrongSORT algorithm, an advanced deep learning multi-object tracking (MOT) method. Our findings demonstrate that StrongSORT, an enhanced version of the DeepSORT MOT algorithm, substantially improves tracking accuracy through meticulous hyperparameter tuning. Overall, the experimental results reveal that the proposed algorithm is an effective and efficient method for pedestrian tracking, particularly in the complex scenarios encountered in the MOT16 and MOT17 datasets. The combined use of YOLOv8 and StrongSORT contributes to enhanced tracking results, emphasizing the synergistic relationship between the detection and tracking modules.
(This article belongs to the Special Issue Intelligent Information Processing for Sensors and IoT Communications)
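For readers unfamiliar with the detect-then-track pattern, the following is a minimal, hedged Python sketch of that loop: YOLOv8 produces per-frame pedestrian detections, and a StrongSORT-style tracker would associate them across frames. The input video file is hypothetical, the tracker object is left as a placeholder because implementations (e.g., in the boxmot package) differ in their constructor signatures, and none of the paper's tuned hyperparameters are reproduced here.

```python
import cv2
import numpy as np
from ultralytics import YOLO  # assumes the `ultralytics` package is installed

detector = YOLO("yolov8n.pt")                 # small pretrained model, for illustration
cap = cv2.VideoCapture("pedestrians.mp4")     # hypothetical input clip

# tracker = StrongSORT(...)  # e.g. from the `boxmot` package; API varies by release

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    result = detector(frame, verbose=False)[0]
    boxes = result.boxes
    # Keep only the "person" class (index 0 in COCO) above a confidence threshold.
    keep = (boxes.cls.cpu().numpy() == 0) & (boxes.conf.cpu().numpy() > 0.4)
    dets = np.hstack([
        boxes.xyxy.cpu().numpy()[keep],
        boxes.conf.cpu().numpy()[keep, None],
    ])  # rows of [x1, y1, x2, y2, score]
    # tracks = tracker.update(dets, frame)  # StrongSORT-style association step
    print(f"{len(dets)} pedestrian detections in this frame")
cap.release()
```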

22 pages, 3482 KiB  
Article
Optimal Radio Propagation Modeling and Parametric Tuning Using Optimization Algorithms
by Joseph Isabona, Agbotiname Lucky Imoize, Oluwasayo Akinloye Akinwumi, Okiemute Roberts Omasheye, Emughedi Oghu, Cheng-Chi Lee and Chun-Ta Li
Information 2023, 14(11), 621; https://doi.org/10.3390/info14110621 - 19 Nov 2023
Viewed by 1247
Abstract
Benchmarking different optimization algorithms is a demanding task, particularly for network-based cellular communication systems. The design and management process of these systems involves many stochastic variables and complex design parameters that demand unbiased estimation and analysis. Though several optimization algorithms exist for different parametric modeling and tuning tasks, an in-depth evaluation of their functional performance has not been adequately addressed, especially for cellular communication systems. In this paper, nine key numerical and global optimization algorithms, comprising Gauss–Newton (GN), gradient descent (GD), Genetic Algorithm (GA), Levenberg–Marquardt (LM), Quasi-Newton (QN), Trust-Region–Dog-Leg (TR), pattern search (PAS), Simulated Annealing (SA), and particle swarm (PS), have been benchmarked against measured data. The experimental data were taken from different radio signal propagation terrains around four eNodeB cells. In order to assist the radio frequency (RF) engineer in selecting the most suitable optimization method for parametric model tuning, three-fold benchmarking criteria comprising the Accuracy Profile Benchmark (APB), Function Evaluation Benchmark (FEB), and Execution Speed Benchmark (ESB) were employed. The APB and FEB were quantitatively compared against the measured data for fair benchmarking. By the APB performance criterion, the QN achieved the best results, with values of 98.34, 97.31, 97.44, and 96.65% in locations 1–4. The GD attained the worst performance, with the lowest APB values of 98.25, 95.45, 96.10, and 95.70 in the tested locations. In terms of objective function values and their evaluation counts, the QN algorithm shows the fewest function counts of 44, 44, 56, and 44 and the lowest objective values of 80.85, 37.77, 54.69, and 41.24, thus attaining the best optimization results across the study locations. The worst performance was attained by the GD, with objective values of 86.45, 39.58, 76.66, and 54.27, respectively. Though the objective values achieved with the global optimization methods PAS, GA, PS, and SA are relatively small compared to the QN, their function evaluation counts are high: the PAS, GA, PS, and SA recorded 1367, 2550, 3450, and 2818 function evaluations, respectively. Overall, the QN algorithm achieves the best optimization, and it can serve as a reference for RF engineers in selecting suitable optimization methods for propagation modeling and parametric tuning.
(This article belongs to the Special Issue Intelligent Information Processing for Sensors and IoT Communications)
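To make the notion of "parametric model tuning" concrete, here is a minimal, hedged Python sketch of fitting a standard log-distance path-loss model to drive-test-style samples with one of the benchmarked optimizers (Levenberg–Marquardt). The distances and path-loss values below are synthetic placeholders, not the paper's measurements, and only this single optimizer is shown.

```python
import numpy as np
from scipy.optimize import least_squares

d0 = 100.0                                            # reference distance in metres
d = np.array([150, 300, 500, 800, 1200, 1800], dtype=float)
measured_pl = np.array([98.2, 108.5, 114.9, 121.7, 127.4, 133.0])  # dB, synthetic

def residuals(theta):
    pl0, n = theta                                    # tuned parameters: PL0 and path-loss exponent
    model = pl0 + 10.0 * n * np.log10(d / d0)         # PL(d) = PL0 + 10 n log10(d/d0)
    return model - measured_pl

fit = least_squares(residuals, x0=[90.0, 2.0], method="lm")  # Levenberg-Marquardt
pl0_hat, n_hat = fit.x
rmse = np.sqrt(np.mean(fit.fun ** 2))
print(f"PL0 = {pl0_hat:.1f} dB, n = {n_hat:.2f}, RMSE = {rmse:.2f} dB, "
      f"function evaluations = {fit.nfev}")
```

Repeating this fit with different solvers (and logging accuracy, function-evaluation count, and runtime) mirrors the APB/FEB/ESB comparison the authors perform.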

19 pages, 3807 KiB  
Article
BGP Dataset-Based Malicious User Activity Detection Using Machine Learning
by Hansol Park, Kookjin Kim, Dongil Shin and Dongkyoo Shin
Information 2023, 14(9), 501; https://doi.org/10.3390/info14090501 - 13 Sep 2023
Cited by 1 | Viewed by 1384
Abstract
Recent advances in the Internet and digital technology have brought a wide variety of activities into cyberspace, but they have also brought a surge in cyberattacks, making it more important than ever to detect and prevent them. In this study, a method is proposed to detect anomalies in cyberspace by converting BGP (Border Gateway Protocol) data, through a tokenizer, into numerical data that can be learned by machine learning (ML) models. BGP data comprise a mix of numeric and textual fields, making them challenging for ML models to learn. To convert the data into a numerical format, a tokenizer, a preprocessing technique from Natural Language Processing (NLP), was employed. This process goes beyond merely replacing letters with numbers; its objective is to preserve the patterns and characteristics of the data. The Synthetic Minority Over-sampling Technique (SMOTE) was subsequently applied to address the issue of imbalanced data. Anomaly detection experiments were conducted using various ML algorithms, such as the One-Class Support Vector Machine (One-SVM), Convolutional Neural Network–Long Short-Term Memory (CNN–LSTM), Random Forest (RF), and Autoencoder (AE), and excellent detection performance was demonstrated. In the experiments, the AE model performed best, with an F1-score of 0.99. In terms of the Area Under the Receiver Operating Characteristic (AUROC) curve, all ML models achieved good performance, averaging over 90%. This research is expected to contribute to improved cybersecurity, as it enables the detection and monitoring of cyber anomalies caused by malicious users through BGP data.
(This article belongs to the Special Issue Intelligent Information Processing for Sensors and IoT Communications)
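The following hedged Python sketch shows the shape of such a pipeline: mixed BGP-like fields are mapped to integer tokens, the classes are rebalanced with SMOTE, and one of the evaluated classifiers (Random Forest) is trained. The field names, the toy records, and the tokenisation scheme are illustrative assumptions, not the paper's dataset or exact tokenizer.

```python
import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)

def tokenize(record):
    """Map a (origin AS, peer, AS-path length) record to numeric features,
    keeping a per-field vocabulary so repeated values share a token."""
    vocab = tokenize.vocab
    return [vocab.setdefault(("as", record[0]), len(vocab)),
            vocab.setdefault(("peer", record[1]), len(vocab)),
            record[2]]
tokenize.vocab = {}

# Synthetic "BGP-like" records; label 1 marks the (rare) anomalous updates.
normal = [(f"AS{rng.integers(1, 50)}", f"peer{rng.integers(1, 10)}", int(rng.integers(2, 6)))
          for _ in range(200)]
abnormal = [(f"AS{rng.integers(900, 999)}", f"peer{rng.integers(1, 10)}", int(rng.integers(8, 15)))
            for _ in range(20)]
X = np.array([tokenize(r) for r in normal + abnormal], dtype=float)
y = np.array([0] * len(normal) + [1] * len(abnormal))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_tr, y_tr)   # rebalance the minority class

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_bal, y_bal)
print("F1 on held-out records:", round(f1_score(y_te, clf.predict(X_te)), 3))
```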

19 pages, 6886 KiB  
Article
Theme Mapping and Bibliometric Analysis of Two Decades of Smart Farming
by Tri Kushartadi, Aditya Eka Mulyono, Azhari Haris Al Hamdi, Muhammad Afif Rizki, Muhammad Anwar Sadat Faidar, Wirawan Dwi Harsanto, Muhammad Suryanegara and Muhamad Asvial
Information 2023, 14(7), 396; https://doi.org/10.3390/info14070396 - 11 Jul 2023
Cited by 6 | Viewed by 1825
Abstract
The estimated global population for 2050 is 9 billion, which implies an increase in food demand. Agriculture is the primary source of food production worldwide, and improving its efficiency and productivity through integration with information and communication technology systems, so-called “smart farming”, is a promising approach to optimizing the food supply. This research employed bibliometric analysis techniques to investigate smart farming trends, identify their potential benefits, and analyze research insights. Data were collected from 1141 publications in the Scopus database for the period 1997–2021 and were analyzed using VOSviewer, which quantified the connections between the articles using the co-citation unit, resulting in a mapping of 10 clusters ranging from agriculture to soil moisture. Finally, the analysis focuses on three major themes of smart farming, namely the IoT; blockchain and agricultural robots; and smart agriculture, crops, and irrigation.
(This article belongs to the Special Issue Intelligent Information Processing for Sensors and IoT Communications)
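As a toy illustration of the co-occurrence style of counting behind such bibliometric maps, the Python sketch below tallies how often keyword pairs appear together across records and ranks the strongest links. The records are made up; the paper itself worked from 1141 Scopus records and used VOSviewer's co-citation unit rather than keyword co-occurrence.

```python
from itertools import combinations
from collections import Counter

records = [
    {"smart farming", "IoT", "irrigation"},
    {"smart farming", "IoT", "wireless sensor networks"},
    {"blockchain", "agricultural robots", "IoT"},
    {"soil moisture", "irrigation", "wireless sensor networks"},
]

pair_counts = Counter()
for keywords in records:
    for a, b in combinations(sorted(keywords), 2):
        pair_counts[(a, b)] += 1            # count each co-occurring keyword pair once per record

for (a, b), n in pair_counts.most_common(3):
    print(f"{a} -- {b}: co-occurs in {n} records")
```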

23 pages, 6366 KiB  
Article
Oriented Crossover in Genetic Algorithms for Computer Networks Optimization
by Furkan Rabee and Zahir M. Hussain
Information 2023, 14(5), 276; https://doi.org/10.3390/info14050276 - 05 May 2023
Viewed by 2244
Abstract
Optimization using genetic algorithms (GA) is a well-known strategy in several scientific disciplines. The crossover is an essential operator of the genetic algorithm, and developing effective forms of this operator has been an active area of research. In this work, a new crossover operator is proposed. This operator relies on an enriched description of the chromosome, with a new structure for the alleles of the parents. It is suggested that each allele has two attitudes: one attitude contrasts with the other, and together they complement the allele. Thus, where one attitude is good, the other should be bad. This is suitable for many systems that contain desired parameters and undesired parameters. The proposed crossover improves the desired attitudes and dampens the undesired ones. It is carried out in two stages: the first stage is a mating step applied to the two attitudes within one parent, improving one attitude at the expense of the other; the second stage, following this first improvement, mates the different parents. Hence, two concurrent improvement steps are applied. Simulation experiments show improvement in the fitness function. The proposed crossover could be helpful in different fields, especially for optimizing routing algorithms and network protocols, an application that has been tested as a case study in this work.
(This article belongs to the Special Issue Intelligent Information Processing for Sensors and IoT Communications)
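One possible reading of the two-stage operator, sketched in Python purely for illustration: each gene carries two complementary "attitudes" (desired, undesired), stage one shifts weight toward the desired attitude within each parent, and stage two performs an ordinary exchange between the adjusted parents. The encoding, the shift factor, and the single-point exchange are assumptions, not the paper's exact scheme.

```python
import random

def intra_parent_step(chromosome, shift=0.2):
    """Stage 1: within one parent, improve the desired attitude at the
    expense of the undesired one, keeping each allele's total constant."""
    out = []
    for desired, undesired in chromosome:
        delta = shift * undesired
        out.append((desired + delta, undesired - delta))
    return out

def inter_parent_step(parent_a, parent_b):
    """Stage 2: single-point crossover between the two adjusted parents."""
    point = random.randint(1, len(parent_a) - 1)
    return parent_a[:point] + parent_b[point:], parent_b[:point] + parent_a[point:]

def oriented_crossover(parent_a, parent_b):
    return inter_parent_step(intra_parent_step(parent_a), intra_parent_step(parent_b))

random.seed(1)
# Each gene: (desired attitude, undesired attitude), e.g. (throughput gain, delay cost).
p1 = [(0.6, 0.4), (0.3, 0.7), (0.8, 0.2)]
p2 = [(0.5, 0.5), (0.9, 0.1), (0.2, 0.8)]
child1, child2 = oriented_crossover(p1, p2)
print(child1)
print(child2)
```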

18 pages, 2795 KiB  
Article
A Helium Speech Unscrambling Algorithm Based on Deep Learning
by Yonghong Chen and Shibing Zhang
Information 2023, 14(3), 189; https://doi.org/10.3390/info14030189 - 17 Mar 2023
Cited by 1 | Viewed by 1411
Abstract
Helium speech, the speech produced by deep-sea divers breathing a high-pressure helium–oxygen mixture, is almost unintelligible. To accurately unscramble helium speech, a neural network based on deep learning is proposed. First, an isolated helium speech corpus and a continuous helium speech corpus in a normal atmosphere are constructed, and an algorithm to automatically generate label files is proposed. Then, a convolutional neural network (CNN), connectionist temporal classification (CTC), and a transformer are combined into a speech recognition network. Finally, an optimization algorithm is proposed to improve the recognition of continuous helium speech, combining depth-wise separable convolution (DSC), a gated linear unit (GLU), and a feedforward neural network (FNN). The experimental results show that the accuracy of the algorithm combining the CNN, CTC, and the transformer is 91.38%, and the optimization algorithm improves the accuracy of continuous helium speech recognition by 9.26%.
(This article belongs to the Special Issue Intelligent Information Processing for Sensors and IoT Communications)
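For orientation, here is a hedged PyTorch sketch of a CTC training step of the kind such a recognizer uses: a small 1-D convolutional front end over acoustic features followed by CTC loss over a label vocabulary. The layer sizes, vocabulary size, and dummy inputs are placeholders, and the paper's transformer block and optimization algorithm are not reproduced.

```python
import torch
import torch.nn as nn

vocab_size = 30            # assumed label set size (blank + symbols)
feat_dim, T, batch = 40, 120, 4

encoder = nn.Sequential(
    nn.Conv1d(feat_dim, 128, kernel_size=3, padding=1),
    nn.BatchNorm1d(128),
    nn.ReLU(),
    nn.Conv1d(128, vocab_size, kernel_size=3, padding=1),
)
ctc = nn.CTCLoss(blank=0)

features = torch.randn(batch, feat_dim, T)           # stand-in for helium-speech features
logits = encoder(features)                           # (batch, vocab, T)
log_probs = logits.permute(2, 0, 1).log_softmax(-1)  # CTC expects (T, batch, vocab)

targets = torch.randint(1, vocab_size, (batch, 12))  # dummy transcripts, 12 labels each
input_lengths = torch.full((batch,), T, dtype=torch.long)
target_lengths = torch.full((batch,), 12, dtype=torch.long)

loss = ctc(log_probs, targets, input_lengths, target_lengths)
loss.backward()
print("CTC loss:", float(loss))
```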

43 pages, 23297 KiB  
Article
Instantaneous Frequency Estimation of FM Signals under Gaussian and Symmetric α-Stable Noise: Deep Learning versus Time–Frequency Analysis
by Huda Saleem Razzaq and Zahir M. Hussain
Information 2023, 14(1), 18; https://doi.org/10.3390/info14010018 - 28 Dec 2022
Cited by 1 | Viewed by 2467
Abstract
Deep learning (DL) and machine learning (ML) are widely used in many fields but are rarely used in the frequency estimation (FE) and slope estimation (SE) of signals. Frequency and slope estimation for frequency-modulated (FM) and single-tone sinusoidal signals is essential in various applications, such as wireless communications, sound navigation and ranging (SONAR), and radio detection and ranging (RADAR) measurements. This work proposes a novel instantaneous frequency estimation technique for linear FM (LFM) sinusoidal signals using deep learning. Deep neural networks (DNNs) and convolutional neural networks (CNNs), classes of artificial neural networks (ANNs), are used for the frequency and slope estimation of LFM signals under additive white Gaussian noise (AWGN) and additive symmetric alpha-stable noise (SαSN). The DNN is composed of input, output, and two hidden layers, where the numbers of nodes in the first and second hidden layers are 25 and 8, respectively. The CNN comprises an input layer and many hidden layers, including convolution, batch normalization, ReLU, max pooling, fully connected, and dropout layers; the output stage consists of a fully connected layer followed by softmax and classification layers. SαS distributions model the impulsive noise disturbances found in many communication environments, such as marine systems; they lack a closed-form probability density function (PDF), except in specific cases, and have infinite second-order statistics, hence the geometric SNR (GSNR) is used in this work to quantify the effect of noise in a mixture of Gaussian and SαS noise processes. The DNN is a machine learning classifier with few layers, used to reduce FE and SE complexity; the CNN is a deep learning classifier designed with many layers and proves more accurate than the DNN when dealing with big data and finding optimal features. Simulation results show that SαS noise can be much more harmful to the FE and SE of FM signals than Gaussian noise. DL and ML can significantly reduce FE complexity, memory cost, and power consumption compared to classical FE based on time–frequency analysis, which are important requirements for many systems, such as some Internet of Things (IoT) sensor applications. After training the CNN for frequency and slope estimation of LFM signals, its performance (in terms of accuracy) remains good at very low signal-to-noise ratios where time–frequency distributions (TFDs) fail, giving more than 20 dB difference in the GSNR working range compared to classical spectrogram-based estimation, and over 15 dB difference compared with the Viterbi-based estimate.
(This article belongs to the Special Issue Intelligent Information Processing for Sensors and IoT Communications)
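As context for the comparison, the following hedged Python sketch reproduces only the classical baseline mentioned above: estimating the instantaneous frequency of a noisy linear-FM signal from spectrogram peaks, with impulsive symmetric α-stable noise added. The DNN/CNN estimators and the GSNR calibration are not reproduced, and the signal and noise parameters are illustrative.

```python
import numpy as np
from scipy.signal import spectrogram
from scipy.stats import levy_stable

fs = 8000.0
t = np.arange(0, 1.0, 1 / fs)
f0, slope = 500.0, 1500.0                        # start frequency (Hz), chirp rate (Hz/s)
signal = np.cos(2 * np.pi * (f0 * t + 0.5 * slope * t**2))

# Impulsive symmetric alpha-stable noise (beta = 0 gives the symmetric case).
noise = 0.3 * levy_stable.rvs(alpha=1.5, beta=0.0, size=t.size, random_state=0)
noisy = signal + noise

f, tt, S = spectrogram(noisy, fs=fs, nperseg=256, noverlap=192)
if_estimate = f[np.argmax(S, axis=0)]            # ridge of the spectrogram
if_true = f0 + slope * tt                        # true IF of the linear chirp
print("mean absolute IF error (Hz):",
      round(float(np.mean(np.abs(if_estimate - if_true))), 1))
```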

Review


21 pages, 1706 KiB  
Review
Compressive Sensing in Image/Video Compression: Sampling, Coding, Reconstruction, and Codec Optimization
by Jinjia Zhou and Jian Yang
Information 2024, 15(2), 75; https://doi.org/10.3390/info15020075 - 26 Jan 2024
Viewed by 1462
Abstract
Compressive Sensing (CS) has emerged as a transformative technique in image compression, offering innovative solutions to challenges in efficient signal representation and acquisition. This paper provides a comprehensive exploration of the key components within the domain of CS applied to image and video compression. We delve into the fundamental principles of CS, highlighting its ability to efficiently capture and represent sparse signals. The sampling strategies employed in image compression applications are examined, emphasizing the role of CS in optimizing the acquisition of visual data. The measurement coding techniques leveraging the sparsity of signals are discussed, showcasing their impact on reducing data redundancy and storage requirements. Reconstruction algorithms play a pivotal role in CS, and this article reviews state-of-the-art methods, ensuring a high-fidelity reconstruction of visual information. Additionally, we explore the intricate optimization between the CS encoder and decoder, shedding light on advancements that enhance the efficiency and performance of compression techniques in different scenarios. Through a comprehensive analysis of these components, this review aims to provide a holistic understanding of the applications, challenges, and potential optimizations in employing CS for image and video compression tasks.
(This article belongs to the Special Issue Intelligent Information Processing for Sensors and IoT Communications)
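A minimal, hedged Python sketch of the pipeline the review covers: a sparse signal is sampled with a random Gaussian measurement matrix and then reconstructed with a greedy solver (orthogonal matching pursuit). The dimensions, sparsity level, and solver choice are toy values for illustration only.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
n, m, k = 256, 80, 8                      # signal length, number of measurements, sparsity

x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.normal(size=k)   # k-sparse signal

Phi = rng.normal(size=(m, n)) / np.sqrt(m)   # sensing matrix (m << n measurements)
y = Phi @ x                                  # compressive measurements

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k).fit(Phi, y)
x_hat = omp.coef_
print("relative reconstruction error:",
      round(float(np.linalg.norm(x - x_hat) / np.linalg.norm(x)), 4))
```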
