Intelligent Distributed Resource Allocation in Wireless Sensor Networks (WSNs)

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Networks".

Deadline for manuscript submissions: closed (31 January 2023) | Viewed by 16202

Special Issue Editors


Guest Editor
Department of Computer Engineering, Chosun University, Gwangju 61452, Republic of Korea
Interests: wireless communication systems; network security and privacy; IoT applications; privacy-preserving machine learning; AI-based communication and networking

Guest Editor
Department of Electrical and Electronic Engineering, University of Hertfordshire, Hatfield AL10 9EU, UK
Interests: machine learning; radio resource management; vehicular networks; industrial networks; smart cities

Special Issue Information

Dear Colleagues,

With the recent development of Internet technology and the advancement of various wireless access technologies, IoT systems built on wireless sensor networks (WSNs) are being used in vertical fields such as healthcare, manufacturing, transportation, agriculture, smart cities, and smart homes. Depending on the vertical application and the types of sensors involved, communication links in WSNs are established using different Radio Access Technologies (RATs). Since the performance of a WSN depends on many factors, including the radio interface, channel frequency, and transmit power, efficient resource allocation mechanisms are of considerable importance for supporting the required QoS and maximizing network performance.

Moreover, artificial intelligence (AI) is one of the most promising approaches for resolving challenging technical problems in wireless communication environments. AI and machine learning (ML) techniques are being applied successfully to a growing number of decision-making problems (e.g., resource optimization and network management) in complex wireless networks. Through online and offline learning and optimization, AI is expected to deliver remarkable performance improvements for vertical applications by adapting to complex and dynamic wireless network environments. In particular, AI/ML can efficiently analyze large volumes of data, discover patterns and underlying structures, and make appropriate decisions while adapting to changes and uncertainties in the environment. In this regard, AI/ML is a very promising tool for managing complex resource allocation problems in wireless sensor networks.

This Special Issue focuses on techniques for advanced WSNs and invites review and research articles on the state-of-the-art development of WSNs for emerging applications. Appropriate topics include, but are not limited to:

  • AI-based resource allocation techniques in wireless sensor and IoT networks;
  • Intelligent distributed resource allocation for WSNs;
  • Collaborative multi-agent resource allocation for WSNs (e.g., federated learning);
  • Optimization models for resource allocations for heterogeneous WSNs;
  • Understanding the complexity of the interworking of WSNs and cognitive radio networks (CRNs) in IoT;
  • Optimal use of both the licensed and unlicensed channels for WSNs—resource allocation and scheduling;
  • Multi-objective resource allocation schemes for WSNs;
  • Wireless-specific security, privacy, and authentication;
  • Wireless sensor networks for a specific vertical field (e.g., e-health, smart cities, etc.).

Prof. Dr. Seokjoo Shin
Dr. Haeyoung Lee
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • wireless sensor networks
  • resource management
  • artificial intelligence/machine learning
  • intelligent distributed resource allocation
  • collaborative multi-agent resource allocation
  • optimization models
  • multi-objective resource allocation

Published Papers (8 papers)


Research


19 pages, 3449 KiB  
Article
A Hybrid Edge-Cloud System for Networking Service Components Optimization Using the Internet of Things
by Souvik Pal, N. Z. Jhanjhi, Azmi Shawkat Abdulbaqi, D. Akila, Abdulaleem Ali Almazroi and Faisal S. Alsubaei
Electronics 2023, 12(3), 649; https://doi.org/10.3390/electronics12030649 - 28 Jan 2023
Cited by 11 | Viewed by 1398
Abstract
The need for data is growing steadily due to big data technologies and the rapid expansion of the Internet, and the volume of data being generated is creating a significant need for data analysis. The Internet of Things (IoT) model has emerged as a crucial element for edge platforms. An IoT system faces serious performance issues due to the enormous volume of data produced by many connected devices. Edge-cloud computing and network function virtualization (NFV) techniques are potential methods to improve resource utilization and the adaptability of responsive services in an IoT system. In the edge environment, many IoT applications are composed of multiple service components, and the data exchanged among these components introduces significant transmission latency that affects the performance of the entire network. As a result, this research proposes a new optimization technique for the placement of IoT service components in hybrid edge-cloud systems, namely the IoT-based Service Components Optimization Model (IoT-SCOM), with the minimization of transmission latency as the optimization objective. The IoT-SCOM model is constructed and optimized to choose the deployment option with the lowest guaranteed delay. The experimental findings demonstrate that the IoT-SCOM approach achieves greater accuracy and effectiveness for data-intensive service component placement in the edge-cloud environment than existing methods and the stochastic optimization technique.
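To make the placement idea above concrete, the following Python sketch shows a greedy, latency-first assignment of service components to edge and cloud nodes. It is an editorial illustration only, not the IoT-SCOM algorithm from the paper; the node names, latencies, and capacities are hypothetical.

```python
# Illustrative sketch only: greedy latency-driven placement of IoT service
# components onto edge/cloud nodes. Not the paper's IoT-SCOM formulation.
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    link_latency_ms: float  # assumed one-way latency to the component's data source
    capacity: int           # how many more components the node can host

def place_components(components, nodes):
    """Assign each component to the feasible node with the lowest latency."""
    placement = {}
    for comp in components:
        candidates = [n for n in nodes if n.capacity > 0]
        best = min(candidates, key=lambda n: n.link_latency_ms)
        placement[comp] = best.name
        best.capacity -= 1
    return placement

nodes = [Node("edge-1", 5.0, 2), Node("edge-2", 8.0, 2), Node("cloud", 40.0, 100)]
print(place_components(["ingest", "filter", "analytics"], nodes))
```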

23 pages, 1812 KiB  
Article
Many-to-Many Data Aggregation Scheduling Based on Multi-Agent Learning for Multi-Channel WSN
by Yao Lu, Keweiqi Wang and Erbao He
Electronics 2022, 11(20), 3356; https://doi.org/10.3390/electronics11203356 - 18 Oct 2022
Cited by 1 | Viewed by 1376
Abstract
Many-to-many data aggregation has become an indispensable technique for executing multiple applications simultaneously with less data traffic and less energy consumption in a multi-channel WSN (wireless sensor network). Efficiently allocating a time slot and channel to each node is one of the most critical problems for many-to-many data aggregation in multi-channel WSNs, and this paper presents a new distributed, conflict-free scheduling method to solve it. The many-to-many data aggregation scheduling process is abstracted as a decentralized partially observable Markov decision process in a multi-agent system. By embedding cooperative multi-agent learning, sensor nodes with only group-level observability work in a distributed manner: they cooperate and exploit local feedback information to automatically learn the optimal scheduling strategy and then select the best time slot and channel for wireless communication. Simulation results show that the new scheduling method outperforms existing methods.
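As a rough illustration of the multi-agent idea described above (and not the paper's Dec-POMDP formulation), the sketch below lets each sensor node run an independent, bandit-style Q-learner that picks a (time slot, channel) pair and is penalized when its choice collides with another node's. The node count, slot/channel grid, and learning parameters are made up, and interference is assumed to be fully connected.

```python
# Toy independent Q-learning for slot/channel selection (illustrative only).
import random
from collections import defaultdict

SLOTS, CHANNELS = 4, 3
ACTIONS = [(s, c) for s in range(SLOTS) for c in range(CHANNELS)]
ALPHA, EPSILON, EPISODES = 0.1, 0.1, 2000

q_tables = [defaultdict(float) for _ in range(5)]  # one Q-table per sensor node

for _ in range(EPISODES):
    choices = []
    for table in q_tables:  # epsilon-greedy action selection per node
        if random.random() < EPSILON:
            choices.append(random.choice(ACTIONS))
        else:
            choices.append(max(ACTIONS, key=lambda a: table[a]))
    for table, action in zip(q_tables, choices):
        # +1 for a conflict-free (slot, channel), -1 when another node chose the same
        reward = -1.0 if choices.count(action) > 1 else 1.0
        table[action] += ALPHA * (reward - table[action])

print([max(ACTIONS, key=lambda a: t[a]) for t in q_tables])  # learned preferences
```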

11 pages, 514 KiB  
Article
Machine-Learning-Based Approach for Virtual Machine Allocation and Migration
by Suruchi Talwani, Jimmy Singla, Gauri Mathur, Navneet Malik, N. Z Jhanjhi, Mehedi Masud and Sultan Aljahdali
Electronics 2022, 11(19), 3249; https://doi.org/10.3390/electronics11193249 - 09 Oct 2022
Cited by 8 | Viewed by 1978
Abstract
Due to its ability to supply reliable, robust, and scalable computational power, cloud computing is becoming increasingly popular in industry, government, and academia. High-speed networks connect both virtual and physical machines in cloud computing data centres. The system's dynamic provisioning environment depends on the computing resource requirements of end users, so the operational costs of a data centre are relatively high. To meet service level agreements (SLAs), it is essential to assign an appropriate maximum amount of resources. Virtualization is a fundamental technology in cloud computing: it helps cloud providers manage data centre resources effectively and improves resource usage by creating several virtual machine (VM) instances. Furthermore, VMs can be dynamically consolidated onto a few physical nodes based on current resource requirements using live migration while still meeting SLAs. Unoptimised and inefficient VM consolidation can, however, degrade performance when an application is exposed to varying workloads. This paper introduces a new machine-learning-based approach for dynamically consolidating VMs based on adaptive predictions of usage thresholds to maintain acceptable SLA levels. Dynamic data generated at runtime were used to validate the efficiency of the proposed technique against other machine learning algorithms.
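The following hedged sketch illustrates one plausible form of threshold-driven consolidation: an upper CPU-utilisation threshold is adapted from recent host history (mean plus a multiple of the standard deviation), and the smallest VMs are flagged for migration until the host falls back under it. It is not the paper's machine-learning predictor; the constants and loads are hypothetical.

```python
# Illustrative threshold-based VM consolidation decision (not the paper's method).
import statistics

def adaptive_threshold(history, k=1.5, cap=0.9):
    """Predict an upper CPU-utilisation threshold from recent samples."""
    if len(history) < 2:
        return cap
    return min(cap, statistics.mean(history) + k * statistics.pstdev(history))

def vms_to_migrate(vm_loads, history):
    """Return VM ids to migrate so the host falls back under the threshold."""
    threshold = adaptive_threshold(history)
    load = sum(vm_loads.values())
    selected = []
    for vm_id, use in sorted(vm_loads.items(), key=lambda x: x[1]):
        if load <= threshold:
            break
        selected.append(vm_id)
        load -= use
    return selected

history = [0.55, 0.62, 0.70, 0.78, 0.85]            # recent host CPU samples
vm_loads = {"vm1": 0.35, "vm2": 0.30, "vm3": 0.30}  # per-VM CPU shares (hypothetical)
print(vms_to_migrate(vm_loads, history))            # e.g. ['vm2']
```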

13 pages, 3365 KiB  
Article
Defect Synthesis Using Latent Mapping Adversarial Network for Automated Visual Inspection
by Seunghwan Song, Kyuchang Chang, Kio Yun, Changdong Jun and Jun-Geol Baek
Electronics 2022, 11(17), 2763; https://doi.org/10.3390/electronics11172763 - 01 Sep 2022
Cited by 5 | Viewed by 1706
Abstract
In Industry 4.0, Internet of Things (IoT) technologies are expanding and advanced smart factories are being developed. To build automated visual inspection (AVI) and make steel manufacturing smarter, detecting defects in products in real time and accurately diagnosing product quality are essential. As in many manufacturing industries, the steel manufacturing process suffers from a class imbalance problem: far fewer defect images are available than normal images. This study develops a new image synthesis methodology for the steel manufacturing industry called the latent mapping adversarial network. Inspired by the style-based generative adversarial network (StyleGAN) architecture, we construct a mapping network for the latent space, which makes it possible to synthesize defect images of various sizes. We identify the most suitable loss function and optimize the proposed method in terms of convergence and computational cost. The experimental results demonstrate the competitive performance of the proposed model compared with traditional models, with a classification accuracy of 92.42% and an F-score of 93.15%. Consequently, the data imbalance problem is addressed, and higher productivity in steel products is expected.
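As a sketch of the mapping-network idea (assuming PyTorch is available), the snippet below builds a small MLP that maps a sampled latent vector z to an intermediate latent w, in the spirit of the StyleGAN mapping stage that the paper builds on. The layer sizes are illustrative, not the paper's.

```python
# Minimal "mapping network" sketch: z -> w via a small MLP (illustrative sizes).
import torch
import torch.nn as nn

class MappingNetwork(nn.Module):
    def __init__(self, z_dim=128, w_dim=128, layers=4):
        super().__init__()
        blocks = []
        for _ in range(layers):
            blocks += [nn.Linear(z_dim, w_dim), nn.LeakyReLU(0.2)]
            z_dim = w_dim
        self.net = nn.Sequential(*blocks)

    def forward(self, z):
        return self.net(z)

w = MappingNetwork()(torch.randn(8, 128))  # 8 latent codes -> 8 style vectors
print(w.shape)                             # torch.Size([8, 128])
```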

23 pages, 4930 KiB  
Article
A Perceptual Encryption-Based Image Communication System for Deep Learning-Based Tuberculosis Diagnosis Using Healthcare Cloud Services
by Ijaz Ahmad and Seokjoo Shin
Electronics 2022, 11(16), 2514; https://doi.org/10.3390/electronics11162514 - 11 Aug 2022
Cited by 12 | Viewed by 1798
Abstract
Block-based perceptual encryption (PE) algorithms are becoming popular for multimedia data protection because of their low computational demands and format compliance with the JPEG standard. In conventional methods, a color image as input is a prerequisite for using the smaller block sizes that give better security. However, in domains such as medical image processing, the unavailability of color images makes these PE methods inadequate for secure transmission and storage. Therefore, this study proposes a PE method that is applicable to both color and grayscale images. In the proposed method, efficiency is achieved by using a smaller block size only in the encryption steps that have a negligible effect on the compressibility of an image. The analyses show that the proposed system offers better security with only a 12% higher bitrate requirement, as opposed to 113% in conventional methods. As an application of the proposed method, we consider a smart hospital that uses healthcare cloud services to outsource its deep learning (DL) computation and storage needs. An EfficientNetV2-based model is implemented for automatic tuberculosis (TB) diagnosis in chest X-ray images. In addition, we propose a noise-based data augmentation method to address data deficiency in medical image analysis. As a result, the model accuracy was improved by 10%.
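The sketch below illustrates the general block-based PE recipe on a grayscale image: divide the image into blocks, scramble the block positions with a keyed permutation, and apply a keyed negative-positive transform per block. It is a simplified illustration, not the paper's JPEG-compliant scheme; the block size and key handling are assumptions.

```python
# Simplified block-based perceptual encryption for a grayscale image (illustrative).
import numpy as np

def perceptual_encrypt(img, block=16, seed=42):
    rng = np.random.default_rng(seed)          # the seed plays the role of the secret key
    h, w = (d - d % block for d in img.shape)  # crop to a whole number of blocks
    img = img[:h, :w].copy()
    blocks = [img[r:r + block, c:c + block]
              for r in range(0, h, block) for c in range(0, w, block)]
    order = rng.permutation(len(blocks))       # keyed block-position scrambling
    flips = rng.integers(0, 2, len(blocks))    # keyed negative-positive transform
    out = np.empty_like(img)
    for i, src in enumerate(order):
        b = blocks[src]
        if flips[i]:
            b = 255 - b
        r, c = divmod(i, w // block)
        out[r * block:(r + 1) * block, c * block:(c + 1) * block] = b
    return out

img = (np.arange(256 * 256) % 256).astype(np.uint8).reshape(256, 256)
print(perceptual_encrypt(img).shape)  # (256, 256)
```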

13 pages, 2341 KiB  
Article
AI-Based Resource Allocation Techniques in Wireless Sensor Internet of Things Networks in Energy Efficiency with Data Optimization
by Quazi Warisha Ahmed, Shruti Garg, Amrita Rai, Manikandan Ramachandran, Noor Zaman Jhanjhi, Mehedi Masud and Mohammed Baz
Electronics 2022, 11(13), 2071; https://doi.org/10.3390/electronics11132071 - 01 Jul 2022
Cited by 19 | Viewed by 2832
Abstract
For the past few years, the IoT (Internet of Things)-based constrained WSN (wireless sensor network) has attracted a great deal of attention and progress in pursuit of improved resource utilisation and service delivery. For data transfer between heterogeneous devices, IoT requires a strong communication network and an optimally placed, energy-efficient WSN. This study uses deep learning architectures to provide a unique resource allocation method for wireless sensor IoT networks, targeting energy efficiency and data optimization. Energy efficiency (EE) and spectral efficiency (SE) are two competing optimization goals in this setting. The network's energy efficiency is improved by a deep neural network based on whale optimization, and the data are optimized with a heuristic multi-objective firefly algorithm. The proposed method is applied to optimal power allocation and relay selection in a cooperative multi-hop network topology. The best resource allocation is achieved by minimizing the overall transmit power, and the best relay selection is accomplished by meeting Quality of Service (QoS) requirements, resulting in an energy-efficient protocol. The simulation results demonstrate the suggested model's competitive performance compared to traditional models, with a throughput of 96%, energy efficiency of 95%, QoS of 75%, spectrum efficiency of 85%, and network lifetime of 91%.
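As a toy illustration of QoS-constrained relay selection (not the paper's whale/firefly optimisation), the sketch below computes, for each candidate relay, the minimum transmit power that keeps the weaker hop above an SNR target and then picks the relay needing the least power. The channel gains, noise power, and SNR target are hypothetical values.

```python
# Toy QoS-constrained relay selection by minimum required transmit power.
SNR_TARGET = 10.0    # required QoS (linear SNR)
NOISE_POWER = 1e-9   # watts

relays = {                        # (source->relay gain, relay->destination gain)
    "relay-A": (2.0e-7, 1.5e-7),
    "relay-B": (5.0e-8, 4.0e-7),
    "relay-C": (3.0e-7, 2.5e-7),
}

def min_power(gains):
    """Power needed so the weaker of the two hops still meets the SNR target."""
    worst_gain = min(gains)
    return SNR_TARGET * NOISE_POWER / worst_gain

best = min(relays, key=lambda r: min_power(relays[r]))
print(best, f"{min_power(relays[best]):.2e} W")  # relay-C 4.00e-02 W
```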

Review


40 pages, 1033 KiB  
Review
Survey on Multi-Path Routing Protocols of Underwater Wireless Sensor Networks: Advancement and Applications
by Iftekharul Islam Shovon and Seokjoo Shin
Electronics 2022, 11(21), 3467; https://doi.org/10.3390/electronics11213467 - 26 Oct 2022
Cited by 9 | Viewed by 2238
Abstract
Underwater wireless sensor networks (UWSNs) are a prominent research topic in academia and industry, with many applications such as ocean, seismic, environmental, and seabed exploration. The main challenges in deploying UWSNs are high ocean interference and noise, which result in longer propagation times, low bandwidth, and changes in network topology. Routing protocols have been identified as an efficient way to mitigate these problems. Over the years, several protocols have been proposed in this direction, and among them the most popular are those that use multi-path propagation. However, there is a lack of studies that compile and trace the advancement of UWSN multi-path routing protocols over the years, so a structured overview of the existing protocols is needed. In this study, we present a comprehensive survey of UWSN multi-path routing protocols and categorize them into three main categories: energy-based, geographic-information-based, and data-based routing protocols. Furthermore, we sub-classify them into several categories and identify their advantages and disadvantages. In addition, we identify UWSN applications, discuss open challenges, and compare the protocols. The findings of our study will allow researchers to better understand the different categories of UWSN multi-path routing protocols in terms of their scope, advantages, and limitations.

20 pages, 1936 KiB  
Review
A Systematic Review on the Energy Efficiency of Dynamic Clustering in a Heterogeneous Environment of Wireless Sensor Networks (WSNs)
by Mohammed F. Alomari, Moamin A. Mahmoud and Ramona Ramli
Electronics 2022, 11(18), 2837; https://doi.org/10.3390/electronics11182837 - 08 Sep 2022
Cited by 6 | Viewed by 1867
Abstract
Wireless sensor networks (WSNs) have a variety of applications, including military systems, health monitoring, natural disaster response, smartphones, and other surveillance systems. The primary purpose of sensor nodes is to collect data unattended in hostile environments; they are deployed in large numbers and operate independently. Because of their limited capabilities, their power supply is often limited, so nodes are grouped into clusters to increase communication efficiency. In WSNs, two classes of routing protocols are possible: flat protocols and hierarchical (clustering) protocols. Because of their significant role in minimizing energy consumption, hierarchical methods have become very popular. In cluster-based methods, nodes are organized into clusters, and the sensor node with the most resources is appointed as the cluster head (CH). In this paper, we present a Systematic Literature Review (SLR) explaining the difficulties in developing cluster-based methods, the critical factors for clustering, and hierarchical clustering protocols. The most important criteria for a WSN routing protocol are energy consumption and network lifetime. Focusing on energy consumption, different cluster-based methods were analyzed to determine which technique should be deployed, using specific criteria to support the selection process. Additionally, the pros and cons of different protocols are listed along with their relevance to specific scenarios. To identify these protocols, a systematic literature review was conducted of research studies published from 2010 to 2021, with 30 papers analyzed in the final phase. Based on the results of this SLR, several issues need to be further investigated with respect to the interaction of the candidate techniques with the Internet of Things (IoT) and Vehicular Ad Hoc Networks (VANETs).
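As a minimal illustration of the resource-based cluster-head selection mentioned above (not any specific surveyed protocol), the snippet below appoints the node with the highest residual energy in each cluster as its CH; the clusters and energy values are made up.

```python
# Pick the node with the most residual energy in each cluster as cluster head.
clusters = {
    "cluster-1": {"n1": 0.82, "n2": 0.45, "n3": 0.67},  # node -> residual energy (J)
    "cluster-2": {"n4": 0.30, "n5": 0.91},
}

cluster_heads = {cid: max(nodes, key=nodes.get) for cid, nodes in clusters.items()}
print(cluster_heads)  # {'cluster-1': 'n1', 'cluster-2': 'n5'}
```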
