Network doi: 10.3390/network4010006
Authors: João J. O. Pires
Optical backbone networks, characterized by using optical fibers as a transmission medium, constitute the fundamental infrastructure employed today by network operators to deliver services to users. As network capacity is one of the key factors influencing optical network performance, it is important to comprehend its limitations and have the capability to estimate its value. In this context, we revisit the concept of capacity from various perspectives, including channel capacity, link capacity, and network capacity, thus providing an integrated view of the problem within the framework of the backbone tier. Hence, we review the fundamental concepts behind optical networks, along with the basic physical phenomena present in optical fiber transmission, and provide methodologies for estimating the different types of capacities, mainly using simple formulations. In particular, we propose a method to evaluate the network capacity that relies on the optical reach to account for physical layer aspects, in conjunction with capacitated routing techniques for traffic routing. We apply this method to three reference networks and obtain capacities ranging from tens to hundreds of terabits/s. Whenever possible, we also compare our results with published experimental data to understand how they relate.
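As a rough companion to the capacity formulations discussed above, the sketch below estimates per-channel and WDM link capacity from the Shannon limit for a dual-polarization coherent channel. The symbol rate, SNR, and channel count are illustrative placeholders, not figures from the paper.

```python
import math

def channel_capacity_gbps(symbol_rate_gbaud: float, snr_db: float) -> float:
    """Shannon capacity of a dual-polarization coherent optical channel."""
    snr = 10 ** (snr_db / 10)                           # linear SNR
    return 2 * symbol_rate_gbaud * math.log2(1 + snr)   # factor 2: polarizations

def link_capacity_tbps(n_channels: int, per_channel_gbps: float) -> float:
    """Aggregate capacity of a WDM link carrying n_channels wavelengths."""
    return n_channels * per_channel_gbps / 1000

# Example: 64 GBaud channels at 20 dB SNR over an 80-channel C-band link
c = channel_capacity_gbps(64, 20)
print(f"per-channel: {c:.0f} Gbit/s, link: {link_capacity_tbps(80, c):.1f} Tbit/s")
```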
Network doi: 10.3390/network4010005
Authors: Paraskevi Christodoulou Konstantinos Limniotis
Data protection issues stemming from the use of machine learning algorithms that are used in automated decision-making systems are discussed in this paper. More precisely, the main challenges in this area are presented, putting emphasis on how important it is to simultaneously ensure the accuracy of the algorithms as well as privacy and personal data protection for the individuals whose data are used for training the corresponding models. In this respect, we also discuss how specific well-known data protection attacks that can be mounted in processes based on such algorithms are associated with a lack of specific legal safeguards; to this end, the General Data Protection Regulation (GDPR) is used as the basis for our evaluation. In relation to these attacks, some important privacy-enhancing techniques in this field are also surveyed. Moreover, focusing explicitly on deep learning algorithms as a type of machine learning algorithm, we further elaborate on one such privacy-enhancing technique, namely, the application of differential privacy to the training dataset. In this respect, we present, through an extensive set of experiments, the main difficulties that occur if one needs to demonstrate that such a privacy-enhancing technique is, indeed, sufficient to mitigate all the risks for the fundamental rights of individuals. More precisely, although we manage—by the proper configuration of several algorithms’ parameters—to achieve accuracy at about 90% for specific privacy thresholds, it becomes evident that even these values for accuracy and privacy may be unacceptable if a deep learning algorithm is to be used for making decisions concerning individuals. The paper concludes with a discussion of the current challenges and future steps, both from a legal as well as from a technical perspective.
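The privacy-enhancing technique elaborated above rests on noisy training; below is a minimal numpy sketch of the core DP-SGD step (per-sample gradient clipping plus calibrated Gaussian noise). The parameter names and values are illustrative assumptions, and the paper's exact training setup may differ.

```python
import numpy as np

def dp_sgd_step(per_sample_grads, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """One differentially private gradient step: clip each per-sample
    gradient to clip_norm, average, then add calibrated Gaussian noise."""
    rng = rng or np.random.default_rng()
    clipped = []
    for g in per_sample_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    mean = np.mean(clipped, axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip_norm / len(clipped),
                       size=mean.shape)
    return mean + noise

grads = [np.random.randn(10) for _ in range(32)]   # stand-in per-sample gradients
print(dp_sgd_step(grads)[:3])
```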
Network doi: 10.3390/network4010004
Authors: Herbert Maosa Karim Ouazzane Mohamed Chahine Ghanem
An intrusion detection system (IDS) performs post-compromise detection of security breaches whenever preventive measures such as firewalls fail to avert an attack. However, these systems raise a vast number of alerts that must be analyzed and triaged by security analysts. This process is largely manual, tedious, and time-consuming. Alert correlation is a technique that reduces the number of intrusion alerts by aggregating alerts that are similar in some way. However, the correlation is performed outside the IDS through third-party systems and tools, after the IDS has already generated a high volume of alerts. These third-party systems add to the complexity of security operations. In this paper, we build on the highly researched area of alert and event correlation by developing a novel hierarchical event correlation model that promises to reduce the number of alerts issued by an intrusion detection system. This is achieved by correlating the events before the IDS classifies them. The proposed model takes the best features from similarity- and graph-based correlation techniques to deliver an ensemble capability not possible with either approach separately. Further, we propose a correlation process for events rather than alerts, as is the case in the current state of the art. We also develop our own correlation and clustering algorithm, tailored to the correlation and clustering of network event data. The model is implemented as a proof of concept, with experiments run on standard intrusion detection datasets. The correlation achieves an 87% data reduction through aggregation, producing nearly 21,000 clusters in about 30 s.
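The paper's tailor-made correlation and clustering algorithm is not reproduced here, but a generic similarity-based baseline conveys the idea: events whose attributes overlap enough are aggregated into one cluster. The field names and weights below are assumptions for illustration.

```python
def event_similarity(e1: dict, e2: dict, weights=None) -> float:
    """Weighted attribute match across event fields (src, dst, port, proto)."""
    weights = weights or {"src": 0.35, "dst": 0.35, "dport": 0.2, "proto": 0.1}
    return sum(w for k, w in weights.items() if e1.get(k) == e2.get(k))

def cluster_events(events, threshold=0.6):
    """Greedy single-pass clustering: attach an event to the first cluster
    whose representative is similar enough, else open a new cluster."""
    clusters = []
    for ev in events:
        for c in clusters:
            if event_similarity(ev, c[0]) >= threshold:
                c.append(ev)
                break
        else:
            clusters.append([ev])
    return clusters

events = [{"src": "10.0.0.1", "dst": "10.0.0.9", "dport": 22, "proto": "tcp"},
          {"src": "10.0.0.1", "dst": "10.0.0.9", "dport": 23, "proto": "tcp"},
          {"src": "10.0.0.7", "dst": "10.0.0.2", "dport": 80, "proto": "tcp"}]
print(len(cluster_events(events)), "clusters")   # the first two events merge
```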
Network doi: 10.3390/network4010003
Authors: Nadia Niknami Jie Wu
With the surge in cyber attacks, there is a pressing need for more robust network intrusion detection systems (IDSs). These IDSs perform at their best when they can monitor all the traffic coursing through the network, especially within a software-defined network (SDN). In an SDN configuration, the control plane and data plane operate independently, facilitating dynamic control over network flows. Typically, an IDS application resides in the control plane, or a centrally located network IDS transmits security reports to the controller. However, the controller, equipped with various control applications, may encounter challenges when analyzing substantial data, especially in the face of high traffic volumes. To enhance the processing power, detection rates, and alleviate the controller’s burden, deploying multiple instances of IDS across the data plane is recommended. While deploying IDS on individual switches within the data plane undoubtedly enhances detection rates, the associated costs of installing one at each switch raise concerns. To address this challenge, this paper proposes the deployment of IDS chains across the data plane to boost detection rates while preventing controller overload. The controller directs incoming traffic through alternative paths, incorporating IDS chains; however, potential delays from retransmitting traffic through an IDS chain could extend the journey to the destination. To address these delays and optimize flow distribution, our study proposes a method to balance flow assignments to specific IDS chains with minimal delay. Our approach is validated through comprehensive testing and evaluation using a test bed and trace-based simulation, demonstrating its effectiveness in reducing delays and hop counts across various traffic scenarios.
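To illustrate balancing flow assignments to IDS chains with minimal delay, here is a greedy sketch: each flow goes to the feasible chain with the lowest utilization-scaled delay. The delay model, capacities, and chain names are illustrative assumptions, not the paper's formulation.

```python
def assign_flows(flows, chains):
    """Greedily map each flow to the IDS chain adding the least delay,
    subject to chain capacity. flows: list of (flow_id, rate);
    chains: list of dicts with name, base_delay, capacity, load."""
    plan = {}
    for fid, rate in sorted(flows, key=lambda f: -f[1]):   # big flows first
        best = min(
            (c for c in chains if c["load"] + rate <= c["capacity"]),
            key=lambda c: c["base_delay"] * (1 + c["load"] / c["capacity"]),
        )
        best["load"] += rate
        plan[fid] = best["name"]
    return plan

chains = [{"name": "chain-A", "base_delay": 2.0, "capacity": 100, "load": 0},
          {"name": "chain-B", "base_delay": 3.5, "capacity": 150, "load": 0}]
print(assign_flows([("f1", 40), ("f2", 70), ("f3", 30)], chains))
```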
Network doi: 10.3390/network4010002
Authors: Oliver J. Hall Stavros Shiaeles Fudong Li
With the ever-increasing advancement of blockchain technology, security is a significant concern when substantial investments are involved. This paper explores known smart contract exploits used in previous and current years. The purpose of this research is to provide a point of reference for users interacting with blockchain technology and for smart contract developers. The primary research gathered in this paper analyses unique smart contracts deployed on a blockchain, investigating the Solidity code involved and the transactions on the ledger linked to these contracts. A disparity was found between the techniques used in 2021 and those used in 2023, after Ethereum moved from a Proof-of-Work blockchain to a Proof-of-Stake one, demonstrating that as blockchain technology advances, so does the level of effort bad actors exert to steal funds from users. The research concludes that as users become more wary of malicious smart contracts, bad actors continue to develop more sophisticated techniques to defraud users. Although this paper outlines many of the techniques currently used by bad actors, it is recommended that users who continue to interact with smart contracts consistently stay up to date with emerging exploits.
Network doi: 10.3390/network4010001
Authors: Hanin Almutairi Ning Zhang
Low-Power and Lossy Networks (LLNs) have grown rapidly in recent years owing to the increased adoption of Internet of Things (IoT) and Machine-to-Machine (M2M) applications across various industries, including smart homes, industrial automation, healthcare, and smart cities. Owing to the characteristics of LLNs, such as lossy channels and limited power, generic routing solutions designed for non-LLNs may not be adequate in terms of delivery reliability and routing efficiency. Consequently, the Routing Protocol for Low-Power and Lossy Networks (RPL) was designed. Several RPL objective functions have been proposed to enhance routing reliability in LLNs. This paper analyses these solutions against performance and security requirements to identify their limitations. Firstly, it discusses the characteristics and security issues of LLNs and their impact on packet delivery reliability and routing efficiency. Secondly, it provides a comprehensive analysis of routing solutions and identifies existing limitations. Thirdly, based on these limitations, this paper highlights the need for a reliable and efficient path-finding solution for LLNs.
Network doi: 10.3390/network3040026
Authors: Hamza Chahed Andreas Kassler
Time-Sensitive Networking (TSN) is a set of Ethernet standards aimed at improving determinism in packet delivery for converged networks. The main goal is to provide mechanisms that enable low and predictable transmission latency and high availability for demanding applications such as real-time audio/video streaming, automotive, and industrial control. To provide the required guarantees, TSN integrates different traffic shaping mechanisms, including 802.1Qbv, 802.1Qch, and 802.1Qcr, allowing for the coexistence of traffic classes with different priorities on the same network. Achieving the required quality of service (QoS) level requires proper selection and configuration of these shaping mechanisms, which is difficult due to the diversity in the requirements of the coexisting streams under potential end-system-induced jitter. This paper discusses the suitability of the TSN traffic shaping mechanisms for the different traffic types and analyzes the TSN network configuration problem, i.e., finding the optimal path and shaper configurations for all TSN elements in the network to provide the required QoS. It also discusses the goals, constraints, and challenges of time-aware scheduling and elaborates on the evaluation criteria for both the network-wide schedules and the scheduling algorithms that derive the configurations, presenting a common ground for comparing the different approaches. Finally, we analyze the evolution of the scheduling task, identify shortcomings, and suggest future research directions.
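As a concrete hint of what configuring the 802.1Qbv time-aware shaper involves, the sketch below builds a naive gate control list that gives each time-critical stream an exclusive transmission window within the cycle. Stream names, queue indices, and window lengths are hypothetical.

```python
def build_gcl(streams, cycle_us=1000):
    """Build a naive 802.1Qbv gate control list: reserve one exclusive
    window per time-critical stream, then open all gates for best effort.
    streams: list of (name, queue_index, tx_time_us), highest priority first."""
    gcl, t = [], 0
    for name, queue, tx in streams:
        gates = 1 << queue                   # bitmask: only this queue's gate open
        gcl.append({"t_start_us": t, "gates": f"{gates:08b}", "stream": name})
        t += tx
    gcl.append({"t_start_us": t, "gates": "11111111", "stream": "best-effort"})
    assert t < cycle_us, "schedule exceeds cycle"
    return gcl

for entry in build_gcl([("ctrl-1", 7, 50), ("video", 5, 200)]):
    print(entry)
```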
Network doi: 10.3390/network3040025
Authors: Nadia Niknami Avinash Srinivasan Ken St. Germain Jie Wu
The rise of the Internet of Things (IoT) has opened up exciting possibilities for new applications. One such novel application is the modernization of maritime communications. Effective maritime communication is vital for ensuring the safety of crew members, vessels, and cargo. The maritime industry is responsible for the transportation of a significant portion of global trade, and as such, the efficient and secure transfer of information is essential to maintain the flow of goods and services. With the increasing complexity of maritime operations, technological advancements such as unmanned aerial vehicles (UAVs), autonomous underwater vehicles (AUVs), and the Internet of Ships (IoS) have been introduced to enhance communication and operational efficiency. However, these technologies also bring new challenges in terms of security and network management. Compromised IT systems, with escalated privileges, can potentially enable easy and ready access to operational technology (OT) systems and networks with the same privileges, with an increased risk of zero-day attacks. In this paper, we first provide a review of the current state and modalities of maritime communications. We then review the current adoption of software-defined radios (SDRs) and software-defined networks (SDNs) in the maritime industry and evaluate their impact as maritime IoT enablers. Finally, as a key contribution of this paper, we propose a unified SDN–SDR-driven cross-layer communications framework that leverages the existing SATCOM communications infrastructure, for improved and resilient maritime communications in highly dynamic and resource-constrained environments.
Network doi: 10.3390/network3040024
Authors: Mohamed Ali Setitra Mingyu Fan Bless Lord Y. Agbley Zine El Abidine Bensalem
In the contemporary landscape, Distributed Denial of Service (DDoS) attacks have emerged as an exceedingly pernicious threat, particularly in the context of network management centered around technologies like Software-Defined Networking (SDN). With the increasing intricacy and sophistication of DDoS attacks, the need for effective countermeasures has led to the adoption of Machine Learning (ML) techniques. Nevertheless, despite substantial advancements in this field, challenges persist, adversely affecting the accuracy of ML-based DDoS-detection systems. This article introduces a model designed to detect DDoS attacks. This model leverages a combination of Multilayer Perceptron (MLP) and Convolutional Neural Network (CNN) to enhance the performance of ML-based DDoS-detection systems within SDN environments. We propose utilizing the SHapley Additive exPlanations (SHAP) feature-selection technique and employing a Bayesian optimizer for hyperparameter tuning to optimize our model. To further solidify the relevance of our approach within SDN environments, we evaluate our model by using an open-source SDN dataset known as InSDN. Furthermore, we apply our model to the CICDDoS-2019 dataset. Our experimental results highlight a remarkable overall accuracy of 99.95% with CICDDoS-2019 and an impressive 99.98% accuracy with the InSDN dataset. These outcomes underscore the effectiveness of our proposed DDoS-detection model within SDN environments compared to existing techniques.
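A minimal Keras sketch of an MLP+CNN hybrid in the spirit of the model described is shown below. The layer sizes are assumptions, and SHAP feature selection and Bayesian hyperparameter tuning are omitted; this is a simplification, not the authors' architecture.

```python
import tensorflow as tf

def build_hybrid(n_features: int) -> tf.keras.Model:
    """CNN branch over the feature sequence plus MLP branch over the raw
    features, merged for binary DDoS classification."""
    inp = tf.keras.Input(shape=(n_features, 1))
    c = tf.keras.layers.Conv1D(32, 3, activation="relu")(inp)   # CNN branch
    c = tf.keras.layers.GlobalMaxPooling1D()(c)
    m = tf.keras.layers.Flatten()(inp)                          # MLP branch
    m = tf.keras.layers.Dense(64, activation="relu")(m)
    x = tf.keras.layers.Concatenate()([c, m])
    out = tf.keras.layers.Dense(1, activation="sigmoid")(x)
    model = tf.keras.Model(inp, out)
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

build_hybrid(48).summary()   # 48 is a stand-in feature count
```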
Network doi: 10.3390/network3040023
Authors: Hadeel Alrubayyi Gokop Goteng Mona Jaber
With the expansion of the digital world, the number of Internet of Things (IoT) devices is growing dramatically. IoT devices have limited computational power and small memory. Consequently, existing complex security methods are not suitable for detecting unknown malware attacks in IoT networks. This has become a major concern with the advent of increasingly unpredictable and innovative cyberattacks. In this context, artificial immune systems (AISs) have emerged as an effective malware detection mechanism with low requirements for computation and memory. In this research, we first validate the malware detection results of a recent AIS solution using multiple datasets with different types of malware attacks. Next, we examine the potential gains and limitations of promising AIS solutions under realistic implementation scenarios. We design a realistic IoT framework mimicking real-life IoT system architectures. The objective is to evaluate the AIS solutions' performance with regard to the system constraints. We demonstrate that AIS solutions succeed in detecting unknown malware in the most challenging conditions. Furthermore, the systemic results with different system architectures reveal the AIS solutions' ability to transfer learning between IoT devices. Transfer learning is a pivotal feature in the presence of highly constrained devices in the network. More importantly, this work highlights that previously published AIS performance results, which were obtained in a simulation environment, cannot be taken at face value. In reality, AIS's malware detection accuracy for IoT systems is 91% in the most restricted designed system, compared to the 99% accuracy rate reported in the simulation experiment.
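AIS-based malware detection is commonly built on the negative selection algorithm; a compact numpy sketch under assumed 4-dimensional normalized features is shown below. The detector count and matching radius are illustrative, not the evaluated solution's settings.

```python
import numpy as np

def train_detectors(self_set, n_detectors=200, radius=0.15, dim=4, seed=0):
    """Negative selection: keep random detectors far from all 'self' samples."""
    rng = np.random.default_rng(seed)
    detectors = []
    while len(detectors) < n_detectors:
        d = rng.random(dim)
        if np.min(np.linalg.norm(self_set - d, axis=1)) > radius:
            detectors.append(d)
    return np.array(detectors)

def is_anomalous(sample, detectors, radius=0.15):
    """A sample matching any detector is flagged as non-self (malware-like)."""
    return bool(np.min(np.linalg.norm(detectors - sample, axis=1)) <= radius)

self_set = np.random.default_rng(1).random((100, 4)) * 0.5   # benign region
det = train_detectors(self_set)
print(is_anomalous(np.array([0.9, 0.9, 0.9, 0.9]), det))     # far from self
```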
Network doi: 10.3390/network3040022
Authors: John Baugh Jinhua Guo
Information-Centric Networking (ICN) is a new paradigm of network architecture that treats content rather than hosts as first-class citizens of the network. As part of these architectures, in-network storage devices are essential to provide end users with close copies of popular content, not only to reduce latency and improve the overall experience for the user, but also to reduce network congestion and load on the content producers. To be effective, in-network storage devices, such as content storage routers, should maintain copies of the most popular content objects. Adversaries that wish to reduce this effectiveness can launch cache pollution attacks to eliminate the benefit of the in-network storage device caches. Therefore, it is crucial to protect these devices and ensure the highest hit rate possible. This paper demonstrates Per-Face Popularity approaches to reducing the effects of cache pollution and improving hit rates by normalizing assessed popularity across all faces of content storage routers. The mechanisms that were developed prevent consumers, whether legitimate or malicious, on any single face or small number of faces from overwhelmingly influencing the content objects that remain in the cache. The results demonstrate that per-face approaches generally achieve much better hit rates than currently used cache replacement techniques.
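A toy illustration of the per-face normalization idea: each face's request counts are scaled by that face's total traffic before popularity is ranked, so a single face hammering the cache cannot dominate what stays cached. The class interface is hypothetical, not the paper's mechanism.

```python
from collections import defaultdict

class PerFacePopularityCache:
    """Toy content store: each face's request counts are normalized so no
    single face can dominate the popularity ranking."""
    def __init__(self, capacity=1):
        self.capacity = capacity
        self.counts = defaultdict(lambda: defaultdict(int))  # face -> name -> hits

    def request(self, face: str, name: str):
        self.counts[face][name] += 1

    def cached_set(self):
        score = defaultdict(float)
        for face, tbl in self.counts.items():
            total = sum(tbl.values()) or 1
            for name, hits in tbl.items():
                score[name] += hits / total          # per-face normalization
        ranked = sorted(score, key=score.get, reverse=True)
        return set(ranked[: self.capacity])

cs = PerFacePopularityCache()
for _ in range(100):                  # attacker hammers one face with junk
    cs.request("face0", "/junk")
for face in ("face1", "face2", "face3"):
    for _ in range(5):
        cs.request(face, "/popular/video")
print(cs.cached_set())                # /popular/video outranks the junk
```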
Network doi: 10.3390/network3040021
Authors: Juexing Wang Xiao Zhang Li Xiao Tianxing Li
Smart Agriculture has gained significant attention in recent years due to its benefits for both humans and the environment. However, the high costs associated with commercial devices have prevented some agricultural lands from reaping the advantages of technological advancements. Traditional methods, such as reflectance spectroscopy, offer reliable and repeatable solutions for soil property sensing, but the high costs and redundancy of preprocessing steps limit their on-site applications in real-world scenarios. Recently, RF-based soil sensing systems have opened a new dimension in soil property analysis using IoT-based systems. These systems are not only portable, but also significantly cheaper than traditional methods. In this paper, we carry out a comprehensive review of state-of-the-art soil property sensing, divided into four areas. First, we delve into the fundamental knowledge and studies of reflectance-spectroscopy-based soil sensing, also known as traditional methods. Secondly, we introduce some RF-based IoT soil sensing systems employing a variety of signal types. In the third segment, we introduce the details of sample pretreatment, inference methods, and evaluation metrics. Finally, after analyzing the strengths and weaknesses of the current work, we discuss potential future aspects of soil property sensing.
Network doi: 10.3390/network3040020
Authors: Yujin Nakano Tomofumi Matsuzawa
Ad hoc networks, formed by multiple wireless communication devices without any connection to wired infrastructure or intermediary devices such as access points, are widely used in various situations to construct flexible networks that are not restricted by communication facilities. Ad hoc networks can rarely use existing infrastructure, and no authentication infrastructure is available in these networks to act as a trusted third party. Hence, distinguishing between ordinary and malicious terminals can be challenging. As a result, black hole attacks are among the most serious security threats to Ad hoc On-demand Distance Vector (AODV) routing, one of the most popular routing protocols in mobile ad hoc networks. In this study, we propose a defense method against black hole attacks in which malicious nodes are actively detected to prevent attacks. We applied the proposed method to a network containing nodes engaging in black hole attacks, confirming that the network's performance improves dramatically compared to a network without the proposed method.
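One common style of active black-hole detection is probing with route requests for destinations that do not exist; a node that replies anyway is flagged, since an honest AODV node has no route to a nonexistent destination. The sketch below simulates that logic with stand-in node objects; it is an assumption-laden illustration, not necessarily the authors' exact method.

```python
import random

def probe_black_holes(nodes, n_probes=5, seed=42):
    """Active detection: issue RREQs for fabricated addresses; any node
    that replies to such a probe is flagged as a suspected black hole."""
    random.seed(seed)
    suspects = set()
    for _ in range(n_probes):
        fake_dst = f"10.99.{random.randint(0, 255)}.{random.randint(1, 254)}"
        rreq = {"dst": fake_dst}
        for node in nodes:
            # an honest node has no route to rreq["dst"]; a black hole
            # replies to any RREQ to attract traffic
            if node["replies_to_any_rreq"]:
                suspects.add(node["id"])
    return suspects

nodes = [{"id": "n1", "replies_to_any_rreq": False},
         {"id": "n2", "replies_to_any_rreq": True},    # the attacker
         {"id": "n3", "replies_to_any_rreq": False}]
print(probe_black_holes(nodes))                        # {'n2'}
```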
Network doi: 10.3390/network3030019
Authors: Aljuhara Alshagri Abdulmohsen Mutairi
New versions of the HTTP protocol have been developed to overcome many of the limitations of the original HTTP/1.1 protocol and its underlying transport mechanism over TCP. In this paper, we investigated the performance of modern Internet protocols such as HTTP/2 over TCP and HTTP/3 over QUIC in high-latency satellite links. The goal was to uncover the interaction of new HTTP features, such as parallel streams and an optimized security handshake, with modern congestion control algorithms such as CUBIC and BBR over high-latency links. An experimental satellite network emulation testbed was developed for the evaluation. The study analyzed several user-level web performance metrics, such as average page load time, First Contentful Paint, and Largest Contentful Paint. The results indicate an overhead problem with HTTP/3 that becomes more significant when using a loss-based congestion control algorithm such as CUBIC, which is widely used on the Internet. The results also highlight the significance of the web page structure and how objects are distributed within it. Among the various Internet protocols evaluated, the results show that HTTP/3 over QUIC performs better than HTTP/2 over TCP by an average of 35% in satellite links, specifically with a more aggressive congestion control algorithm such as BBR. This can be attributed to the non-blocking stream multiplexing feature of QUIC and the reduced TLS handshake of HTTP/3.
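A satellite emulation testbed of this kind is often built with Linux tc/netem; the sketch below shapes an interface with GEO-like delay and loss before running page-load tests over each protocol. The interface name and figures are illustrative, and root privileges are required.

```python
import subprocess

def emulate_satellite_link(iface="eth0", delay_ms=300, loss_pct=1.0):
    """Approximate a GEO satellite hop on one interface with tc/netem."""
    subprocess.run(
        ["tc", "qdisc", "replace", "dev", iface, "root", "netem",
         "delay", f"{delay_ms}ms", "loss", f"{loss_pct}%"],
        check=True,
    )

def clear_emulation(iface="eth0"):
    """Remove the netem qdisc, restoring the interface."""
    subprocess.run(["tc", "qdisc", "del", "dev", iface, "root"], check=True)

# emulate_satellite_link()   # then run HTTP/2 vs. HTTP/3 page-load tests
# clear_emulation()
```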
Network doi: 10.3390/network3030018
Authors: Milan Chauhan Stavros Shiaeles
The rapidly growing use of cloud computing raises security concerns. This paper examines cloud security frameworks, addressing cloud-associated issues and suggesting solutions. This research provides greater knowledge of the various frameworks, assisting in making educated decisions about selecting and implementing suitable security measures for cloud-based systems. The study begins by introducing cloud technology, its security issues, the frameworks used to secure infrastructure, and an examination of the various cloud security frameworks available in the industry. A full comparison is performed to assess each framework's focus, scope, approach, strengths, limitations, implementation steps, and the tools required in the implementation process. The frameworks examined in the paper are COBIT 5, NIST (National Institute of Standards and Technology), ISO (International Organization for Standardization), CSA (Cloud Security Alliance) STAR, and the AWS (Amazon Web Services) Well-Architected Framework. The study then identifies and analyzes prevalent cloud security issues, including the attack vectors that are inherent in cloud settings, along with the risk factors of the top cloud security threats and their effects on cloud platforms. Finally, it presents ideas and countermeasures to reduce the observed difficulties.
Network doi: 10.3390/network3030017
Authors: Patikiri Arachchige Don Shehan Nilmantha Wijesekara Subodha Gunawardena
Knowledge-Defined Networking (KDN) necessarily consists of a knowledge plane for the generation of knowledge, typically using machine learning techniques, and the dissemination of knowledge, in order to make knowledge-driven intelligent network decisions. In one sense, KDN can be recognized as knowledge-driven Software-Defined Networking (SDN), with additional management and knowledge planes. On the other hand, KDN encapsulates all knowledge-/intelligence-/cognition-/machine learning-driven networks, emphasizing knowledge generation (KG) and dissemination for making intelligent network decisions, unlike SDN, which emphasizes logical decoupling of the control plane. Blockchain is a technology created for secure and trustworthy decentralized transaction storage and management using a sequence of immutable and linked transactions. The decision-making trustworthiness of a KDN system relies on the trustworthiness of the data, knowledge, and AI model sharing. To this end, a KDN may make use of the capabilities of the blockchain system for trustworthy data, knowledge, and machine learning model sharing, as blockchain transactions prevent repudiation and are immutable, pseudo-anonymous, optionally encrypted, reliable, access-controlled, and tamper-resistant, protecting the sensitivity, integrity, and legitimacy of the sharing entities. Furthermore, blockchain has been integrated with knowledge-based networks for traffic optimization, resource sharing, network administration, access control, privacy protection, traffic filtering, anomaly or intrusion detection, network virtualization, massive data analysis, edge and cloud computing, and data center networking. Although many academics have employed the concept of blockchain in cognitive networks to achieve various objectives, challenges such as high energy consumption, scalability issues, and difficulty processing big data act as barriers to integrating the two concepts. Academics have not yet reviewed blockchain-based network solutions across diverse application categories for knowledge-defined networks in general, which consider knowledge generation and dissemination using various techniques such as machine learning, fuzzy logic, and meta-heuristics. Therefore, this article fills a void in the literature by first reviewing the diverse existing blockchain-based applications in knowledge-based networks, analyzing and comparing the existing works, describing the advantages and difficulties of using blockchain systems in KDN, and, finally, providing propositions based on the identified challenges and presenting prospects for the future.
Network doi: 10.3390/network3030016
Authors: Gia Khanh Tran Takuto Kamei Shoma Tanaka
Localization methods for unknown emitters are used to monitor illegal radio waves. Localization methods using ground-based sensors suffer from degraded accuracy in environments where the path between the emitter and the sensor is non-line-of-sight (NLoS). Therefore, research is being conducted to improve localization accuracy by utilizing Unmanned Aerial Vehicles (UAVs) as sensors to ensure a line-of-sight (LoS) condition. However, UAVs can fly freely in the sky, making it difficult to optimize their flight paths, based on particle swarm optimization (PSO), for efficient and accurate localization. This paper examines the optimization of UAV flight paths to achieve highly efficient and accurate outdoor localization of unknown emitters via two approaches: a circular orbit and a free-path trajectory. Our numerical results reveal the improved localization estimation error performance of the proposed approach. In particular, when evaluated at the 90th percentile of the error's cumulative distribution function (CDF), the proposed approach reaches an error of 28.59 m with a circular orbit and 12.91 m with a free-path orbit, compared to the conventional fixed-sensor case, whose localization estimation error is 55.02 m.
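A minimal PSO loop of the kind used for waypoint optimization is sketched below; the cost function here is a stand-in distance-to-emitter proxy, whereas the paper optimizes localization accuracy under its own error model. All coefficients are generic PSO defaults.

```python
import numpy as np

def pso_waypoint(cost, bounds, n_particles=30, iters=100, seed=0):
    """Minimal PSO: search a 2-D waypoint minimizing a localization-error proxy."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, 2))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.array([cost(p) for p in x])
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, 1))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([cost(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest

emitter = np.array([120.0, 80.0])                  # unknown in practice
print(pso_waypoint(lambda p: np.linalg.norm(p - emitter), (0.0, 200.0)))
```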
Network doi: 10.3390/network3030015
Authors: Pedro Juan Roig Salvador Alcaraz Katja Gilly Cristina Bernad Carlos Juiz
Data centers are receiving more and more attention due to the rapid increase of IoT deployments, which may result in the implementation of smaller facilities closer to the end users as well as larger facilities up in the cloud. In this paper, an arithmetic study has been carried out to measure a coefficient related to both the average number of hops among nodes and the average number of links among devices for a range of typical network topologies fit for data centers. Such topologies are either tree-like or graph-like designs, and the coefficient provides a balance between performance and simplicity, where lower coefficient values indicate a better compromise between the two factors in redundant architectures. The motivation of this contribution is to craft a coefficient that is easy to calculate by applying simple arithmetic operations. This coefficient can be seen as another tool for comparing network topologies in data centers, one that could act as a tie-breaker when selecting a given design if other parameters offer contradictory results.
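One possible reading of such a coefficient, computed with networkx: average hop count between node pairs times average links per device, where lower is better. The exact definition used in the paper may differ, and the topologies below are illustrative.

```python
import networkx as nx

def topology_coefficient(G: nx.Graph) -> float:
    """Illustrative balance coefficient: average shortest-path hop count
    multiplied by the average number of links per device."""
    avg_hops = nx.average_shortest_path_length(G)
    avg_links = 2 * G.number_of_edges() / G.number_of_nodes()
    return avg_hops * avg_links

for name, G in [("hypercube (graph-like)", nx.hypercube_graph(4)),
                ("ring", nx.cycle_graph(16)),
                ("star (tree-like)", nx.star_graph(15))]:
    print(f"{name}: {topology_coefficient(G):.2f}")
```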
Network doi: 10.3390/network3020014
Authors: Sampath Edirisinghe Orga Galagedarage Imali Dias Chathurika Ranaweera
Sixth-generation (6G) mobile technology is currently under development, and is envisioned to fulfill the requirements of a fully connected world, providing ubiquitous wireless connectivity for diverse users and emerging applications. Transformative solutions are expected to drive the surge to accommodate a rapidly growing number of intelligent devices and services. In this regard, wireless local area networks (WLANs) have a major role to play in indoor spaces, from supporting explosive growth in high-bandwidth applications to massive sensor arrays with diverse network requirements. Sixth-generation technology is expected to have a superconvergence of networks, including WLANs, to support this growth in applications in multiple dimensions. To this end, this paper comprehensively reviews the latest developments in diverse WLAN technologies, including WiFi, visible light communication, and optical wireless communication networks, as well as their technical capabilities. This paper also discusses how well these emerging WLANs align with supporting 6G requirements. The analyses presented in the paper provide insight into the research opportunities that need to be investigated to overcome the challenges in integrating WLANs in a 6G ecosystem.
Network doi: 10.3390/network3020013
Authors: Laura Galluccio Joannes Sam Mertens Giacomo Morabito
With the explosion of big data, the implementation of distributed machine learning mechanisms in wireless sensor networks (WSNs) is becoming necessary to reduce the amount of data traveling throughout the network and to identify anomalies promptly and reliably. In WSNs, this need has to be considered alongside the limited energy and processing resources available at the nodes. In this paper, we tackle the resulting complex problem by designing CINE, a multi-criteria protocol for distributed learning in WSNs whose name stands for "Clustered distributed learnIng exploiting Node centrality and residual Energy". More specifically, considering the energy and processing capabilities of nodes, we design a scheme that assumes nodes are partitioned into clusters and selects a central node in each cluster, called the cluster head (CH), that executes the training of the machine learning (ML) model for all the other nodes in the cluster, called cluster members (CMs). CMs are responsible only for inference. Since the CH role consumes more resources, the proposed scheme rotates the CH role among all nodes in the cluster. The protocol has been simulated and tested using real environmental data sets.
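A sketch of the CH selection step: score each node by a weighted mix of residual energy and centrality and pick the maximum, re-running per round with updated energies so the role rotates. The weights and node attributes are assumptions, not the protocol's actual criteria.

```python
def select_cluster_head(nodes, w_energy=0.6, w_centrality=0.4):
    """Pick the CH as the node maximizing a weighted mix of residual
    energy and centrality (weights are illustrative)."""
    def score(n):
        return w_energy * n["residual_energy"] + w_centrality * n["centrality"]
    return max(nodes, key=score)["id"]

cluster = [{"id": "s1", "residual_energy": 0.9, "centrality": 0.2},
           {"id": "s2", "residual_energy": 0.4, "centrality": 0.9},
           {"id": "s3", "residual_energy": 0.8, "centrality": 0.6}]
print(select_cluster_head(cluster))   # s3: best energy/centrality balance
```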
Network doi: 10.3390/network3020012
Authors: Takato Fukugami Tomofumi Matsuzawa
In recent years, Internet traffic has increased with the Internet's widespread use. This can be attributed to the growth of social games on smartphones and video distribution services with increasingly high image quality. In these situations, a routing mechanism is required to control congestion, but most existing routing protocols select a single optimal path. This causes the load to be concentrated on certain links, increasing the risk of congestion. In addition to the optimal path, the network has redundant paths leading to the destination node. In this study, we propose multipath control based on the multi-commodity flow problem. Comparing the proposed method with OSPF, which is single-path control, and OSPF-ECMP, which is multipath control, we confirmed that the proposed method achieves higher packet arrival rates. This is expected to reduce congestion.
Network doi: 10.3390/network3010011
Authors: Muna Al-Saadi Asiya Khan Vasilios Kelefouras David J. Walker Bushra Al-Saadi
Software-defined networks (SDNs) have the capability of controlling the efficient movement of data flows through a network to achieve effective flow management and efficient usage of network resources. Currently, most data center networks (DCNs) suffer from the exploitation of network resources by large flows (elephant flows) that can enter the network at any time, degrading the performance of small flows (mice flows). Therefore, it is crucial to identify such flows and find an appropriate routing path for them in order to improve the network management system. This work proposes an SDN application that finds the best path based on the type of flow, using network performance metrics. These metrics are used to characterize and identify flows as elephant or mice by utilizing unsupervised machine learning (ML) and a thresholding method. A routing algorithm was developed to select the path based on the type of flow. A validation test was performed by testing the proposed framework using different DCN topologies and comparing the performance of an SDN Ryu controller with that of the proposed framework based on three factors: throughput, bandwidth, and data transfer rate. The results show that, 70% of the time, the proposed framework achieves higher performance for different types of flows.
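A minimal sketch of the unsupervised elephant/mice split: cluster per-flow statistics with k-means and label the cluster containing the largest flow as the elephant class. The flow numbers are synthetic, and the paper's feature set and thresholding details may differ.

```python
import numpy as np
from sklearn.cluster import KMeans

# Flow statistics: [bytes, duration_s]; elephants are large and long-lived
flows = np.array([[1.2e9, 120], [9.0e8, 95], [4.0e4, 0.3],
                  [2.5e4, 0.2], [6.1e4, 0.5], [1.5e9, 150]])
X = np.log10(flows + 1)                       # compress the dynamic range

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
elephant_cluster = labels[np.argmax(flows[:, 0])]   # biggest flow's cluster
for f, lab in zip(flows, labels):
    kind = "elephant" if lab == elephant_cluster else "mouse"
    print(f"{f[0]:>12.0f} B  {f[1]:>6.1f} s  -> {kind}")
```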
Network doi: 10.3390/network3010010
Authors: Andrea Piroddi Maurizio Torregiani
In the area of low-power wireless networks, one technology that many researchers are focusing on is positioning methods such as fingerprinting in densely populated urban areas. This work presents an experimental study aimed at quantifying the mean location estimation error in populated areas. Using a dataset provided by the University of Antwerp, a neural network was implemented with the aim of providing the end-device location. In this way, we were able to measure the mean localization error in areas of high urban density. The results obtained show a deviation of less than 150 m in locating the end device. This offset can be decreased to a few meters, provided that there is a greater density of nodes per square meter. This result could enable Internet of Things (IoT) applications to use fingerprinting in place of energy-consuming alternatives.
Network doi: 10.3390/network3010009
Authors: Ricardo Lent
Because the operation of space networks is carefully planned, it is possible to predict future contact opportunities from link budget analysis using the anticipated positions of the nodes over time. In the standard approach to space delay-tolerant networking (DTN), such knowledge is used by contact graph routing (CGR) to decide the paths for data bundles. However, the computation assumes nearly ideal channel conditions, disregarding the impact of convergence layer retransmissions (e.g., as implemented by the Licklider transmission protocol (LTP)). In this paper, the effect of the bundle forwarding time estimation (i.e., the link service time) on routing optimality is analyzed, and an accurate expression for lossy channels is discussed. The analysis is performed first from a general and protocol-agnostic perspective, assuming knowledge of the statistical properties and general features of the contact opportunities. Then, a practical case is studied using the standard space DTN protocol, evaluating the performance improvement of CGR under the proposed forwarding time estimation. The results of this study provide insight into the optimal routing problem for a space DTN and a suggested improvement to the current routing standard.
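The intuition behind a loss-aware service time can be captured with a geometric retransmission model: at loss probability p, a segment needs on average 1/(1 - p) attempts, with each failure costing roughly one timeout. The function below is a protocol-agnostic sketch of that idea, not the paper's exact expression.

```python
def expected_forwarding_time(t_tx: float, rtt: float, p_loss: float,
                             max_tx: int = 10) -> float:
    """Expected LTP-style service time on a lossy channel. With success
    probability q = 1 - p_loss, the expected number of attempts is 1/q,
    and each failed attempt costs about one timeout (~RTT), so
    E[T] ~= t_tx/q + rtt * (1/q - 1)."""
    q = 1.0 - p_loss
    attempts = min(1.0 / q, float(max_tx))   # cap at the retransmit limit
    return t_tx * attempts + rtt * (attempts - 1.0)

# A 10 s bundle over a 1200 s round-trip deep-space link at 10% segment loss
print(f"{expected_forwarding_time(10.0, 1200.0, 0.10):.1f} s")
```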
Network doi: 10.3390/network3010008
Authors: Md Mamunur Rashid Shahriar Usman Khan Fariha Eusufzai Md. Azharuddin Redwan Saifur Rahman Sabuj Mahmoud Elsharief
The Internet of Things (IoT) is a network of electrical devices that are connected to the Internet wirelessly. This group of devices generates a large amount of data containing information about users, which eventually makes the whole system sensitive and prone to malicious attacks. Popular centralized machine learning (ML)-assisted approaches are difficult to apply because they require enormous amounts of data to be held by a central entity, and the rapidly growing number of IoT-connected devices under a centralized ML system could threaten data privacy. Owing to the growing distribution of data over numerous networks of connected devices, decentralized ML solutions are needed. In this paper, we propose a Federated Learning (FL) method for detecting unwanted intrusions to guarantee the protection of IoT networks. This method ensures privacy and security through federated training on local IoT device data. Local IoT clients share only parameter updates with a central global server, which aggregates them and distributes an improved detection algorithm. After each round of FL training, each IoT client receives an updated model from the global server and trains it on their local dataset, so IoT devices can keep their own data private while optimizing the overall model. To evaluate the efficiency of the proposed method, we conducted exhaustive experiments on a new dataset named Edge-IIoTset. The performance evaluation demonstrates the reliability and effectiveness of the proposed intrusion detection model, which achieves an accuracy (92.49%) with the FL method close to that offered by conventional centralized ML models (93.92%).
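The server-side aggregation step the abstract describes is typically FedAvg: the global server averages client updates weighted by local dataset size. A numpy sketch with stand-in client updates:

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """FedAvg: weight each client's parameter tensors by its dataset size."""
    total = sum(client_sizes)
    return [
        sum(w[layer] * (n / total) for w, n in zip(client_weights, client_sizes))
        for layer in range(len(client_weights[0]))
    ]

# Three IoT clients, two parameter tensors each (stand-in values)
clients = [[np.full(4, 1.0), np.full(2, 0.5)],
           [np.full(4, 2.0), np.full(2, 1.5)],
           [np.full(4, 3.0), np.full(2, 2.5)]]
print(fed_avg(clients, client_sizes=[100, 300, 600]))
```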
Network doi: 10.3390/network3010007
Authors: Pedro Juan Roig Salvador Alcaraz Katja Gilly Cristina Bernad Carlos Juiz
Data center organization and optimization present the opportunity to design systems with specific characteristics. In this sense, the combination of artificial intelligence methodology and sustainability may lead to optimal topologies with enhanced features, while taking care of the environment by lowering carbon emissions. In this paper, a model for a field monitoring system is proposed, in which an edge data center topology in the form of a redundant ring is designed, for redundancy purposes, to join together nodes spread far apart. Additionally, a formal algebraic model of the design is presented and verified.
Network doi: 10.3390/network3010006
Authors: Naresh Adhikari Mahalingam Ramkumar
During the past decade, two variants of evolving computing networks have augmented the Internet: (i) the Internet of Things (IoT) and (ii) Blockchain Networks (BCNs). The IoT is a network of heterogeneous digital devices embedded with sensors and software for various automation and monitoring purposes. A Blockchain Network is a broadcast network of computing nodes provisioned for validating digital transactions and recording "well-formed" transactions in a unique data store called a blockchain ledger. The power of a blockchain network is that (ideally) every node maintains its own copy of the ledger and takes part in validating the transactions. Integrating IoT and BCNs brings promising applications in many areas, including education, health, finance, agriculture, industry, and the environment. However, the complex, dynamic, and heterogeneous computing and communication needs of IoT technologies, optionally integrated with blockchain technologies (if mandated), raise several challenges concerning scaling, interoperability, and security goals. In recent years, numerous models integrating IoT with blockchain networks have been proposed, tested, and deployed for businesses, and numerous studies are underway to uncover further applications of IoT and blockchain technology. However, a close look reveals that very few applications successfully cater to the security needs of an enterprise. Needless to say, there is little sense in integrating blockchain technology into an existing IoT deployment that already serves the security needs of an enterprise. In this article, we investigate several frameworks for IoT operations, the applicability of integrating them with blockchain technology, and the security considerations that security personnel must make during the deployment and operation of IoT and BCNs. Furthermore, we discuss the underlying security concerns and recommendations for blockchain-integrated IoT networks.
Network doi: 10.3390/network3010005
Authors: Pedro Juan Roig Salvador Alcaraz Katja Gilly Cristina Bernad Carlos Juiz
Data center organization and optimization are increasingly receiving attention due to the ever-growing deployments of edge and fog computing facilities. The main aim is to achieve a topology that processes traffic flows as fast as possible, one that depends not only on AI-based computing resources but also on the network interconnection among physical hosts. In this paper, graph theory is introduced due to its features related to network connectivity and stability, which lead to more resilient and sustainable deployments, where cage graphs may have an advantage over the rest. In this context, the Petersen graph, a cage graph, is studied as a convenient candidate for small data centers due to its small number of nodes and small network diameter, thus providing an interesting solution for edge and fog data centers.
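The Petersen graph's suitability can be checked directly with networkx, which ships it as a built-in generator; the comments note properties that are standard graph-theory facts rather than claims from the paper.

```python
import networkx as nx

G = nx.petersen_graph()                   # 10 nodes, 15 edges, 3-regular cage
print("nodes:", G.number_of_nodes())      # 10: small enough for edge sites
print("diameter:", nx.diameter(G))        # 2: any two hosts are <= 2 hops apart
print("node connectivity:", nx.node_connectivity(G))
# node connectivity 3: the topology survives any two simultaneous node failures
```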
Network doi: 10.3390/network3010004
Authors: Network Editorial Office Network Editorial Office
High-quality academic publishing is built on rigorous peer review [...]
Network doi: 10.3390/network3010003
Authors: Mikhail Zolotukhin Di Zhang Timo Hämäläinen Parsa Miraghaei
The introduction of 5G technology, along with the exponential growth in connected devices, is expected to pose a challenge for efficient and reliable network resource allocation. Network providers are now required to dynamically create and deploy multiple services that function under various requirements in different vertical sectors while operating on top of the same physical infrastructure. The recent progress in artificial intelligence and machine learning is theorized to be a potential answer to the arising resource allocation challenges. It is therefore expected that future generation mobile networks will heavily depend on their artificial intelligence components, which may become a high-value attack target. In particular, a smart adversary may exploit vulnerabilities of the state-of-the-art machine learning models deployed in a 5G system to initiate an attack. This study focuses on the analysis of adversarial example generation attacks against machine learning based frameworks that may be present in next generation networks. First, the various AI/ML algorithms and the data used for their training and evaluation in mobile networks are discussed. Next, multiple AI/ML applications found in recent scientific papers devoted to 5G are overviewed. After that, existing adversarial example generation attack algorithms are reviewed, and frameworks that employ these algorithms for fuzzing state-of-the-art AI/ML models are summarized. Finally, adversarial example generation attacks against several of the described AI/ML frameworks are presented.
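The canonical adversarial example generator reviewed in this line of work is the Fast Gradient Sign Method (FGSM); below is a numpy sketch against a stand-in linear scorer, where the gradient of the loss with respect to the input is known in closed form.

```python
import numpy as np

def fgsm(x, grad_wrt_x, epsilon=0.05):
    """Fast Gradient Sign Method: perturb the input one epsilon-step in the
    direction that increases the model's loss."""
    return x + epsilon * np.sign(grad_wrt_x)

# Stand-in model: a linear scorer f(x) = w.x, so the gradient w.r.t. x is w
w = np.array([0.8, -0.3, 0.5])
x = np.array([0.2, 0.9, 0.1])
x_adv = fgsm(x, grad_wrt_x=w)
print("clean score:", w @ x, " adversarial score:", w @ x_adv)
```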
Network doi: 10.3390/network3010002
Authors: Dominick Ta Stephanie Booth Rachel Dudukovich
This paper proposes a Software-Defined Delay Tolerant Networking (SDDTN) architecture as a solution to managing large Delay Tolerant Networking (DTN) networks in a scalable manner. This work is motivated by the planned deployments of large DTN networks on the Moon and beyond in deep space. Current space communication involves relatively few nodes and is heavily deterministic and scheduled, which will not be true in the future. It is unclear how these large space DTN networks, consisting of inherently intermittent links, will be able to adapt to dynamically changing network conditions. In addition to the proposed SDDTN architecture, this paper explores data plane programming and the Programming Protocol-Independent Packet Processors (P4) language as a possible method of implementing this SDDTN architecture, enumerates the challenges of this approach, and presents intermediate results.
Network doi: 10.3390/network3010001
Authors: Ali Al-Allawee Pascal Lorenz Abdelhafid Abouaissa Mosleh Abualhaj
Real-time communication has witnessed a dramatic increase in users' daily usage in recent years. In this domain, the Session Initiation Protocol (SIP) is a well-known protocol found to provide trusted services (voice or video) to end users, along with efficiency, scalability, and interoperability. Just like other Internet technologies, SIP stores its related data in databases with a predefined data structure. Recently, SIP technologies have adopted the real advantages of in-memory databases as cache systems to ensure fast database operations during real-time communication. Meanwhile, in industry, several in-memory databases have been implemented with different structures (e.g., query types, data structures, persistency, and key/value sizes). However, there are limited resources and poor recommendations on how to select a proper in-memory database for SIP communications. This paper recommends efficient in-memory databases that are best fitted to SIP servers by evaluating three databases: Memcached, Redis, and Local (OpenSIPS built-in). The evaluation was conducted by experimentally measuring the impact of in-memory operations (store and fetch) on the SIP server under heavy load traffic in different scenarios. To sum up, the evaluation results show that the Local database consumed less memory than Memcached and Redis for read and write operations. When persistency was considered, Memcached was the preferable selection, with a throughput of 25.20 KB/s and a call-response time of 0.763 s.
Network doi: 10.3390/network2040038
Authors: Radheshyam Singh José Soler Tidiane Sylla Leo Mendiboure Marion Berbineau
This paper provides a detailed tutorial for developing a sandbox to emulate coexistence scenarios for road and railway services sharing telecommunication infrastructure using software-defined network (SDN) capabilities. It gives detailed instructions for the creation of a network topology using Mininet-WiFi that can mimic real-life coexistence scenarios between railways and roads. The network elements are programmed and controlled by the ONOS SDN controller. The developed SDN application can differentiate the data traffic of railways and roads; traffic differentiation is carried out using a VLAN tagging mechanism. Further, the paper provides comprehensive information about the different tools used to generate data traffic that emulates the messaging, video streaming, and critical data transmissions of the railway and road domains. It also provides the steps to use SUMO to represent the selected coexistence scenarios graphically.
Network doi: 10.3390/network2040037
Authors: Tariq Daradkeh Anjali Agarwal
Predicting workload demands can help to achieve elastic scaling by optimizing the data center configuration, such that increasing or decreasing data center resources yields an accurate and efficient configuration. Predicting workload and optimizing data center resource configuration are two challenging tasks. In this work, we investigate workload and data center modeling to help predict workload and data center operation, which is used as an experimental environment to evaluate optimized elastic scaling for real data center traces. Three machine learning methods are used and compared with an analytical approach to model the workload and data center actions. Our approach is to use an analytical model as a predictor to evaluate and test the optimization solution set and find the best configuration and scaling actions before applying them to the real data center. The results show that machine learning combined with an analytical approach can help to find the best predicted values of workload demands and to evaluate the scaling and resource capacity required to be provisioned. Machine learning is used to find the optimal configuration and to solve for the elastic scaling boundary values. It aids optimization by reducing elastic scaling violations and configuration time and by categorizing resource configurations with respect to scaling capacity values. The results show that the configuration cost and time are minimized by the best provisioning actions.
Network doi: 10.3390/network2040036
Authors: Garett Fox Rajendra V. Boppana
Machine learning (ML) is frequently used to identify malicious traffic flows on a network. However, the requirement of complex preprocessing of network data to extract features or attributes of interest before applying the ML models restricts their use to offline analysis of previously captured network traffic to identify attacks that have already occurred. This paper applies machine learning analysis for network security with low preprocessing overhead. Raw network data are converted directly into bitmap files and processed through a Two-Dimensional Convolutional Neural Network (2D-CNN) model to identify malicious traffic. The model has high accuracy in detecting various malicious traffic flows, even zero-day attacks, based on testing with three open-source network traffic datasets. The overhead of preprocessing the network data before applying the 2D-CNN model is very low, making it suitable for on-the-fly network traffic analysis for malicious traffic flows.
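The low-overhead preprocessing the paper describes amounts to reinterpreting raw bytes as pixels; below is a sketch of that conversion, where the image side length is an assumed parameter rather than the paper's configuration.

```python
import numpy as np

def packets_to_bitmap(raw_bytes: bytes, side: int = 32) -> np.ndarray:
    """Map raw packet bytes directly onto a side x side grayscale image:
    truncate or zero-pad to side*side bytes, with no feature engineering."""
    buf = np.frombuffer(raw_bytes[: side * side], dtype=np.uint8)
    buf = np.pad(buf, (0, side * side - buf.size))   # zero-pad short flows
    return buf.reshape(side, side)

flow = bytes(range(256)) * 3                 # stand-in captured flow bytes
img = packets_to_bitmap(flow)
print(img.shape, img.dtype)                  # (32, 32) uint8, ready for a 2D-CNN
```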
Network doi: 10.3390/network2040035
Authors: Shaharyar Khan Stuart Madnick
Recent world events and geopolitics have brought the vulnerability of critical infrastructure to cyberattacks to the forefront. While there has been considerable attention to attacks on Information Technology (IT) systems, such as data theft and ransomware, the vulnerabilities and dangers posed by industrial control systems (ICS) have received significantly less attention. What is very different is that industrial control systems can be made to do things that could destroy equipment or even harm people. For example, in 2021 the US encountered a cyberattack on a water treatment plant in Florida that could have resulted in serious injuries or even death. These risks stem from the unique physical characteristics of these industrial systems. In this paper, we present a holistic, integrated safety and security analysis, which we call Cybersafety, based on the STAMP (System-Theoretic Accident Model and Processes) framework, for one such industrial system, an industrial chiller plant, as an example. In this analysis, we identify vulnerabilities emerging from interactions between technology, operator actions, and organizational structure, and we provide recommendations to mitigate the resulting loss scenarios in a systematic manner.
Network doi: 10.3390/network2040034
Authors: Doaa N. Mhawi Haider W. Oleiwi Nagham H. Saeed Heba L. Al-Taie
When it comes to web search, information retrieval (IR) is a critical technique, as the number of web pages continues to grow. However, web users face major problems: retrieved documents unrelated to the user query (i.e., low precision), a lack of relevant document retrieval (i.e., low recall), and the need for acceptable retrieval times and minimal storage space. This paper proposes a novel advanced document-indexing method (ADIM) with an integrated evolutionary algorithm. The proposed IR system includes three main stages. The first stage is preprocessing, which consists of two steps, reading the dataset documents and applying the ADIM, and results in a set of two tables. The second stage is the query searching algorithm, which produces a set of words or keywords and retrieves the related documents. The third stage, the searching algorithm, consists of two steps: a modified genetic algorithm (MGA) with new fitness functions using a cross-point operator and dynamic-length chromosomes, and the adaptive function of the cultural algorithm (CA). The proposed system ranks the documents most relevant to the user query by adding a simple parameter (∝) to the fitness function to guarantee convergence, retrieving the most relevant documents by integrating the MGA with the CA to achieve the best accuracy. The system was simulated using a free dataset called WebKB, containing World Wide Web pages of computer science departments at multiple universities. The dataset is composed of 8280 semi-structured HTML documents. Experimental results and evaluation measurements showed 100% average precision with 98.5236% average recall for 50 test queries, while the average response time was 00.46.74.78 milliseconds with 18.8 MB of memory space for document indexing. The proposed work outperforms comparable approaches in the literature, representing a remarkable advance in the studied field.
Network doi: 10.3390/network2040033
Authors: Maria Papathanasaki Panagiotis Fountas Kostas Kolomvatsos
The ever-increasing demand for services of end-users in the Internet of Things (IoT) often causes great congestion in the nodes dedicated to serving their requests. Such nodes are usually placed at the edge of the network, becoming the intermediates between the IoT infrastructure and Cloud. Edge nodes offer many advantages when adopted to perform processing activities that are realized close to end-users, limiting the latency in the provision of responses. In this article, we attempt to solve the problem of the potential overloading of edge nodes by proposing a mechanism that always keeps free space in their queue to host high-priority processing tasks. We introduce a proactive, self-healing mechanism that utilizes the principles of Fuzzy Logic, in combination with a non-parametric statistical method that reveals the trend of nodes’ loads as depicted by the incoming tasks and their capability to serve them in the minimum possible time. Through our approach, we manage to ensure the uninterrupted service of high-priority tasks, taking into consideration the demand for tasks as well. Based on this approach, we ensure the fastest possible delivery of results to the requestors while keeping the latency for serving high-priority tasks at the lowest possible levels. A set of experimental scenarios is adopted to evaluate the performance of the suggested model by presenting the corresponding numerical results.
Network doi: 10.3390/network2040032
Authors: John Kafke Thiago Viana
Voice over IP (VoIP) is quickly becoming the industry-standard voice communication service. While using an IP-based method of communication has many advantages, it also comes with a new set of challenges: voice networks are now accessible to a multitude of Internet-based attackers from anywhere in the world. Among the most prevalent threats to a VoIP network are Denial-of-Service (DoS) attacks, which consume network bandwidth to congest or disable the communication service. This paper looks at the current state of research into the mitigation of these attacks against VoIP networks, to see whether the mechanisms in place are sufficient. A new framework, titled the "Call Me Maybe" framework, is proposed, combining elements of latency monitoring with dynamic protocol switching to mitigate DoS attacks against VoIP systems. Research on routing VoIP over TCP rather than UDP is integrated into the proposed design, along with a latency monitoring mechanism to detect when the service is under attack. Data gathered from a Cisco Packet Tracer simulation was used to evaluate the effectiveness of the solution. The results show a statistically significant improvement in the response times of voice traffic when using the "Call Me Maybe" framework in a network experiencing a DoS attack. The research and findings therefore aim to contribute to enhancing the security of VoIP and future IP-based voice communication systems.
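A sketch of the latency-monitoring half of such a framework: a rolling RTT average that recommends switching the VoIP transport from UDP to TCP once it crosses a threshold. The class name, window, and threshold are illustrative assumptions, not the paper's calibration.

```python
from collections import deque

class LatencyMonitor:
    """Track recent RTT samples and recommend a transport switch when the
    rolling average suggests a volumetric DoS attack is underway."""
    def __init__(self, window=20, threshold_ms=150.0):
        self.samples = deque(maxlen=window)
        self.threshold_ms = threshold_ms
        self.transport = "udp"

    def record_rtt(self, rtt_ms: float) -> str:
        self.samples.append(rtt_ms)
        avg = sum(self.samples) / len(self.samples)
        self.transport = "tcp" if avg > self.threshold_ms else "udp"
        return self.transport

mon = LatencyMonitor()
for rtt in [40, 45, 50, 300, 420, 390, 510]:     # attack begins mid-stream
    print(rtt, "ms ->", mon.record_rtt(rtt))
```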
Network doi: 10.3390/network2040031
Authors: Babatunde Ojetunde Naoto Egashira Kenta Suzuki Takuya Kurihara Kazuto Yano Yoshinori Suzuki
The rapid growth of the IoT network comes with huge security threats. Network scanning is necessary to identify vulnerable IoT devices connected to IP networks. However, most existing network scanning tools or systems do not consider the burden that scan packet traffic places on the network, especially in IoT networks where resources are limited. It is necessary to know the status of the communication environment and the reason why a network scan failed. Therefore, this paper proposes a multimodel-based approach for estimating the cause of failure or delay of network scanning over wireless networks, where a scan packet or its response may sometimes be dropped or delayed. Specifically, the factors that cause network scanning failure or delay were identified and categorized. Then, using a machine learning algorithm, we introduce multimodel linear discriminant analysis (MM-LDA) to estimate the cause of scan failure or delay based on the results of network scanning. In addition, a one-to-many model and a training data filtering technique were adopted to drastically reduce the estimation error. The goal of the proposed method is to correctly estimate the causes of scan failure or delay for IP-connected devices. The performance of the proposed method was evaluated using computer simulation, assuming a cellular (LTE) network as the targeted IoT wireless network and LTE-connected devices as the targeted IoT devices. The proposed MM-LDA correctly estimates the cause of failure or delay of the network scan with an average probability of 98% in various scenarios, outperforming conventional machine learning classifiers in estimating the cause of scan failure or delay.
]]>Network doi: 10.3390/network2040030
Authors: Ahmed Osama Basil Al-Mashhadani Mu Mu Ali Al-Sharbaz
Quality of experience (QoE) metrics can be used to assess user perception and satisfaction in data services applications delivered over the Internet. QoE is an end-to-end metric, since it depends on both the user’s perception and the service used. Traditionally, network optimization has focused on improving network properties such as the quality of service (QoS). In this paper, we examine adaptive streaming over a software-defined network environment. We aimed to evaluate and study the media streams, the aspects affecting the streams, and the network, ultimately analysing the network’s features and their direct relationship with the perceived QoE. We then use machine learning to build a prediction model based on subjective user experiments. This will help to eliminate future physical experiments and automate the process of predicting QoE.
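A minimal sketch of the QoE-prediction step: a regression model mapping network features to a predicted mean opinion score. The features, the synthetic label function, and the model choice are assumptions for illustration; the paper's actual features come from the SDN environment and its labels from subjective user experiments.

```python
# Sketch: regression from network features to a QoE score; data is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
# Features: [throughput Mbit/s, delay ms, loss %]; label: MOS in [1, 5]
X = rng.uniform([1, 10, 0], [50, 300, 5], size=(500, 3))
mos = 1 + 4 * (X[:, 0] / 50) * (1 - X[:, 2] / 5) * (1 - X[:, 1] / 600)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, mos)
print(model.predict([[40, 30, 0.1]]))  # high throughput, low delay -> high MOS
```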
]]>Network doi: 10.3390/network2040029
Authors: Gia Khanh Tran Masanori Ozasa Jin Nakazato
In the event of a major disaster, base stations in the disaster area will cease to function, making it impossible to obtain life-saving information. Therefore, it is necessary to provide a wireless communication infrastructure as soon as possible. To cope with this situation, we focus on NFV/SDN (Network Function Virtualization/Software-Defined Networking)-enabled UAVs equipped with a wireless communication infrastructure to provide services. The access link between the UAV and the user is assumed to be equipped with a millimeter-wave interface to achieve high throughput. However, the use of millimeter-waves increases the effect of attenuation, making the deployment of UAVs problematic. In addition, if multiple UAVs are deployed in a limited frequency band, co-channel interference will occur between the UAVs, resulting in a decrease in the data rate. Therefore, in this paper, we propose a method that combines UAV placement and frequency division for a non-uniform user distribution in an environment with multiple UAVs. As a result, we find that our placement method improves the offered data rate, in terms of not only the average rate but also the outage user rate.
]]>Network doi: 10.3390/network2030028
Authors: Stavros Karageorgiou Vasileios Karyotis
In this paper, we focus on the dynamics of the spread of malicious software (malware) in multi-layer networks of various types, e.g., cyber-physical systems. Recurring malware has been one of the major challenges in modern networks, and significant research and development has been dedicated to mitigating it. The majority of relevant works have focused on networks characterized by “flat” topologies, namely topologies in which all nodes consist of a single layer, studying the dynamics of propagation of a specific threat or various types of malware over a homogeneous topology. As cyber-physical systems and multi-layer networks in general are gaining in popularity and penetration, more targeted studies are needed. In this work, we focus on the propagation dynamics of recurring malware, namely Susceptible–Infected–Susceptible (SIS type) in multi-layer topologies consisting of combinations of two different types of networks, e.g., a small-world overlaying a random geometric, or other such combinations. We utilize a stochastic modeling framework based on Markov Random Fields for analyzing the propagation dynamics of malware over such networks. Through analysis and simulation, we discover the most vulnerable and the most robust topology among the six considered combinations, as well as a result of rather practical use, namely that the denser the network, the more flexibility it provides for malware mitigation.
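A sketch of a discrete-time SIS simulation over a two-layer topology built as the union of a small-world layer and a random geometric layer on the same node set, one of the combinations the abstract mentions. The infection and recovery probabilities, sizes, and layer parameters are illustrative; the paper's analysis uses a Markov Random Field framework rather than this bare simulation.

```python
# Sketch: SIS malware spread on a two-layer (small-world + geometric) graph.
import random
import networkx as nx

random.seed(0)
n = 200
g1 = nx.watts_strogatz_graph(n, k=4, p=0.1, seed=0)      # small-world layer
g2 = nx.random_geometric_graph(n, radius=0.12, seed=0)   # geometric layer
g = nx.compose(g1, g2)                                   # multi-layer union

beta, delta = 0.15, 0.2        # infection / recovery probabilities per step
infected = {random.randrange(n)}
for _ in range(100):
    new_inf, recovered = set(), set()
    for u in infected:
        if random.random() < delta:
            recovered.add(u)
        for v in g.neighbors(u):
            if v not in infected and random.random() < beta:
                new_inf.add(v)
    infected = (infected | new_inf) - recovered
print(f"steady-state infected fraction ~ {len(infected) / n:.2f}")
```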
]]>Network doi: 10.3390/network2030027
Authors: Dennis Krummacker Benedikt Veith Christoph Fischer Hans Dieter Schotten
As 5G enters the application field of industrial communications, compatibility with technologies of wired deterministic communications such as Time-Sensitive Networking (TSN) needs to be considered during the standardization process. While consideration of underlying integration architectures and basic resource mapping are already part of the standard, necessary traffic forwarding schemes are currently planned to be deployed in additional interfaces located at the edge of a 5G System. This analysis highlights the extent to which internal 5G mechanisms can be used to execute the traffic forwarding of TSN streams according to the requirements of the TSN control plane. It concludes with the recognition that a static prioritization of logical channels is not appropriate for the treatment of TSN streams over the 5G air interface. A novel prioritization mechanism of logical data channels is derived, which enables the execution of TSN-compliant traffic shaping over 5G RAN. Subsequently, a proof of concept is implemented and simulated for evaluation.
]]>Network doi: 10.3390/network2030026
Authors: Ed Kamya Kiyemba Edris Mahdi Aiash Jonathan Loo
The Fifth Generation Mobile Network (5G) is heterogeneous in nature, made up of multiple systems and supported by different technologies. It will be supported by network services such as device-to-device (D2D) communications. This will enable new use cases to provide access to other services within the network and from third-party service providers (SPs). End-users with their user equipment (UE) will be able to access services ubiquitously from multiple SPs that might share infrastructure and security management, whereby implementing security from one domain to another will be a challenge. This highlights the need for a new and effective security approach to address the security of such a complex system. This article proposes a network service security (NSS) modular framework for 5G and beyond that consists of different security levels of the network. It reviews the security issues of D2D communications in 5G, and it is used to address security issues that affect users and SPs in an integrated and heterogeneous network such as the 5G-enabled D2D communications network. The conceptual framework consists of physical layer, network access, service, and D2D security levels. Finally, it recommends security mechanisms to address the security issues at each level of the 5G-enabled D2D communications network.
]]>Network doi: 10.3390/network2030025
Authors: Zongze Li Shuai Wang Qingfeng Lin Yang Li Miaowen Wen Yik-Chung Wu H. Vincent Poor
Reconfigurable intelligent surfaces (RISs) offer the potential to customize the radio propagation environment for wireless networks. To fully exploit the advantages of RISs in wireless systems, the phases of the reflecting elements must be jointly designed with conventional communication resources, such as beamformers, the transmit power, and computation time. However, due to the unique constraints on the phase shifts and the massive numbers of reflecting units and users in large-scale networks, the resulting optimization problems are challenging to solve. This paper provides a review of the current optimization methods and artificial-intelligence-based methods for handling the constraints imposed by RISs and compares them in terms of the solution quality and computational complexity. Future challenges in phase-shift optimization involving RISs are also described, and potential solutions are discussed.
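For the simplest single-user, single-antenna case, the RIS phase shifts that maximize received signal power have a closed form: each element cancels the phase of its cascaded channel. The sketch below demonstrates this under randomly drawn channels; the large-scale, multi-user, discrete-phase cases the paper surveys require the iterative optimization or AI-based methods it reviews.

```python
# Sketch: closed-form phase alignment for a single-user RIS link.
import numpy as np

rng = np.random.default_rng(2)
N = 64                                   # number of reflecting elements
g = (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2)  # BS -> RIS
h = (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2)  # RIS -> user

theta = -np.angle(h * g)                 # align all cascaded paths in phase
received = np.sum(h * np.exp(1j * theta) * g)
print(abs(received), np.sum(np.abs(h * g)))  # equal: coherent combining bound
```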
]]>Network doi: 10.3390/network2030024
Authors: Tomofumi Matsuzawa Kyosuke Ichikawa
The HTTP Alternative Services (Alt-Svc) method is defined as a means to check connectivity in HTTP/3. This method is designed on the premise that communication over old HTTP is guaranteed while the HTTP/3 adoption rate is not yet dominant, and it is considered effective in the early stages of the transition. However, once HTTP/3 adoption has peaked and the transitional period has passed, the uncertainty and redundancy of the Alt-Svc procedure become detrimental. In Alt-Svc, the client first completes an old HTTP connection and then migrates to HTTP/3 if possible; since HTTP/3 is a protocol that eliminates the overhead of the old HTTP handshake (a TCP handshake followed by a TLS handshake), this procedure prevents clients from fully benefiting from HTTP/3’s rapid connection establishment. Therefore, we propose a method that applies the Happy Eyeballs algorithm, which is used for IPv4 and IPv6 connectivity checks, to old HTTP and HTTP/3 connectivity checks. The Happy Eyeballs algorithm performs the two attempts in parallel to eliminate the delay of sequential processing; the proposed method differs from the conventional algorithm in that, even if old HTTP is adopted first, it switches to an HTTP/3 connection whenever connecting via HTTP/3 becomes possible. Results of the evaluation experiments demonstrated that the adoption rate of HTTP/3 increases in environments with high communication latency, because old HTTP performs the TLS handshake after the TCP handshake; with the proposed improvement, HTTP/3 is preferentially selected even in low-latency environments whenever it is selectable.
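A sketch of the Happy Eyeballs-style race the abstract describes, with simulated connect delays standing in for the real TCP+TLS and QUIC handshakes. The switch-over behavior when old HTTP wins the race mirrors the proposal's key difference from classic Happy Eyeballs; all names and delays are illustrative.

```python
# Sketch: race old HTTP against HTTP/3, then switch if HTTP/3 arrives later.
import asyncio

async def connect(proto, delay):
    await asyncio.sleep(delay)   # stand-in for TCP+TLS or QUIC handshakes
    return proto

async def happy_eyeballs(old_delay, h3_delay):
    tasks = {asyncio.create_task(connect("old-HTTP", old_delay)),
             asyncio.create_task(connect("HTTP/3", h3_delay))}
    done, pending = await asyncio.wait(tasks, return_when=asyncio.FIRST_COMPLETED)
    winner = done.pop().result()
    if winner == "old-HTTP" and pending:
        # Unlike classic Happy Eyeballs, keep the HTTP/3 attempt running and
        # switch over once it connects.
        h3 = await next(iter(pending))
        return f"started on {winner}, switched to {h3}"
    for t in pending:
        t.cancel()
    return f"connected via {winner}"

print(asyncio.run(happy_eyeballs(old_delay=0.05, h3_delay=0.08)))
```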
]]>Network doi: 10.3390/network2030023
Authors: Ralph Voltaire J. Dayot In-Ho Ra Hyung-Jin Kim
5G networks have been experiencing challenges in handling the heterogeneity and influx of user requests brought about by the constant emergence of various services. As such, network slicing is considered one of the critical technologies for improving the performance of 5G networks. This technology has shown great potential for enhancing network scalability and dynamic service provisioning through the effective allocation of network resources. This paper presents a Deep Reinforcement Learning-based network slicing scheme to improve resource allocation in 5G networks. First, a Contextual Bandit model for the network slicing process is created, and then a Deep Reinforcement Learning-based network slicing agent (NSA) is developed. The agent’s goal is to maximize every action’s reward by considering the current network state and resource utilization. Additionally, we utilize network theory concepts and methods for node selection, ranking, and mapping. Extensive simulation has been performed to show that the proposed scheme can achieve higher action rewards, resource efficiency, and network throughput compared to other algorithms.
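A sketch of the contextual-bandit view of slicing: contexts are coarse network states, actions are resource-allocation choices, and an epsilon-greedy agent learns per-context action values. The reward function here is a synthetic stand-in; the paper's agent is a deep RL model with a utilization- and throughput-based reward.

```python
# Sketch: epsilon-greedy contextual bandit for slice-size selection.
import random

random.seed(3)
contexts, actions = ["low_load", "high_load"], [0, 1, 2]  # 0..2 = slice sizes
Q = {(c, a): 0.0 for c in contexts for a in actions}
counts = {(c, a): 0 for c in contexts for a in actions}

def reward(context, action):           # synthetic environment
    best = 0 if context == "low_load" else 2
    return 1.0 - 0.4 * abs(action - best) + random.gauss(0, 0.05)

for step in range(5000):
    c = random.choice(contexts)
    a = random.choice(actions) if random.random() < 0.1 else \
        max(actions, key=lambda x: Q[(c, x)])
    counts[(c, a)] += 1
    Q[(c, a)] += (reward(c, a) - Q[(c, a)]) / counts[(c, a)]  # running mean

print({c: max(actions, key=lambda a: Q[(c, a)]) for c in contexts})
# expected: {'low_load': 0, 'high_load': 2}
```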
]]>Network doi: 10.3390/network2020022
Authors: Konstantinos I. Roumeliotis Nikolaos D. Tselikas
App development is a steadily growing industry. Progressive web apps (PWAs) constitute a technology inspired by native and hybrid apps; they use web technologies to create web and mobile apps. Based on a service worker, a caching mechanism, and an app shell, PWAs aim to offer web apps with features and user interfaces similar to those of native apps. Furthermore, technological development has created a greater need for accessibility. An increasing number of websites, even government ones, are overlooking the need for equal access to new technologies among people with disabilities. This article presents, in a systematic review format, both PWAs and web accessibility, and aims to evaluate the accessibility that PWAs provide.
]]>Network doi: 10.3390/network2020021
Authors: Muntadher Alsabah Marwah Abdulrazzaq Naser Basheera M. Mahmmod Sadiq H. Abdulhussain
Future wireless networks will require advanced physical-layer techniques to meet the requirements of Internet of Everything (IoE) applications and massive communication systems. To this end, a massive MIMO (m-MIMO) system is to date considered one of the key technologies for future wireless networks. This is due to the capability of m-MIMO to bring significant improvements in spectral efficiency and energy efficiency. However, designing an efficient downlink (DL) training sequence for fast channel state information (CSI) estimation, i.e., with limited coherence time, in a frequency division duplex (FDD) m-MIMO system when users exhibit different correlation patterns, i.e., span distinct channel covariance matrices, is to date very challenging. Although advanced iterative algorithms have been developed to address this challenge, they exhibit slow convergence speed and thus deliver high latency and computational complexity. To overcome this challenge, we propose a computationally efficient conjugate gradient-descent (CGD) algorithm based on the Riemannian manifold in order to optimize the DL training sequence at the base station (BS), while improving the convergence rate to provide fast CSI estimation for an FDD m-MIMO system. Subsequently, the sum-rate and computational complexity performances of the proposed training solution are compared with those of the state-of-the-art iterative algorithms. The results show that the proposed training solution maximizes the achievable sum rate performance, while delivering a lower overall computational complexity owing to a faster convergence rate in comparison to the state-of-the-art iterative algorithms.
]]>Network doi: 10.3390/network2020020
Authors: Rasmus Rettig Christoph Schöne Frederik Fröhlich Christopher Niemöller
Smart logistics, which combines the capabilities of logistics with methods and techniques from the Internet of Things, Information and Communication Technologies, and the highest levels of automation, is key to addressing the challenges of the 21st century, minimizing emissions while maximizing logistic performance. High-performance cellular networks are a prerequisite to fully leveraging its possibilities. These communication networks were developed based on the need for voice communication and streaming services. While the upcoming requirements are included in the latest versions of cellular networks, the existing infrastructure will have to adapt significantly. This study evaluates the performance of the current state of implementation of cellular networks on the German highway experimentally and analytically. The known indicators RSRP, RSSI, and RSRQ are analyzed spatially, over time, and for different driving conditions. The results indicate a high level of spatial correlation and a sufficient level of confidence, which are needed to ensure consistency and repeatability of these measurements. The procedure and the results can be used to assess the suitability of cellular networks for smart logistics applications and continuously monitor their improvement. The results indicate that the status of the cellular network on the German highway is worse than the network operator’s self-assessment suggests.
]]>Network doi: 10.3390/network2020019
Authors: Yi-Hang Zhu Gilles Callebaut Hatice Çalık Liesbet Van der Perre François Rottenberg
Distributed massive multiple-input multiple-output (D-mMIMO) is one of the key candidate technologies for future wireless networks. A D-mMIMO system has multiple, geographically distributed, access points (APs) jointly serving its users. First of all, this paper reports on where to position these APs to minimize the overall transmit power in actual deployments. As a second contribution, we show that it is essential to take into account both the radiation pattern of the antenna array and the environment information when optimizing AP placement. Neglecting the radiation pattern and environment information, as generally assumed in the literature, can lead to a power penalty in the order of 15 dB and 20 dB, respectively. These results have been obtained by formulating the AP placement optimization problem as a combinatorial optimization problem, which can be solved with different approaches where different channel models are applied. The proposed graph-based channel model drastically lowers the computational time with respect to using a ray-tracing simulator (RTS) for channel evaluation. The performance of the graph-based approach is validated via the RTS, showing that it achieves a 5 dB power saving on average compared with a Euclidean distance-based approach, which is the most commonly used approach in the literature.
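A sketch of the combinatorial flavor of the AP placement problem: greedily pick AP sites that minimize the total transmit power needed to reach every user, under the plain Euclidean-distance path loss that the paper uses as its baseline (its contribution replaces this with radiation-pattern- and environment-aware, graph-based channels). Sizes and the path loss exponent are illustrative.

```python
# Sketch: greedy AP site selection under a Euclidean-distance power metric.
import numpy as np

rng = np.random.default_rng(4)
users = rng.uniform(0, 100, size=(30, 2))        # user positions (m)
sites = rng.uniform(0, 100, size=(20, 2))        # candidate AP sites
n_aps, alpha = 4, 3.5                            # APs to place, path loss exp

def total_power(chosen):
    # Each user is served by its nearest chosen AP; power ~ distance^alpha.
    d = np.linalg.norm(users[:, None, :] - sites[chosen][None, :, :], axis=2)
    return np.sum(d.min(axis=1) ** alpha)

chosen = []
for _ in range(n_aps):
    best = min((s for s in range(len(sites)) if s not in chosen),
               key=lambda s: total_power(chosen + [s]))
    chosen.append(best)
print("selected sites:", chosen, "power metric:", f"{total_power(chosen):.3g}")
```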
]]>Network doi: 10.3390/network2020018
Authors: Hongliang Mao Jie Zhong Siyuan Yu Pei Xiao Xinghao Yang Gaoyuan Lu
Free-space optics (FSO) communication enjoys desirable modulation rates at unexploited frequency bands; however, its application is hindered by atmospheric turbulence, which causes phase shifts in laser links. Although a single deformable mirror (DM) adaptive optics (AO) system is a good solution, its performance remains unsatisfactory when the proportion of tilt aberrations becomes relatively high. This condition happens when the incident angle of the laser beam at the optical receiver dynamically shifts. To tackle this problem, we introduce a cascaded AO architecture comprising a fast steering mirror (FSM) and a DM, based upon which we also propose an atmospheric turbulence compensation algorithm. In this paper, we compare the compensation abilities of the FSM and DM for tilt aberrations. Furthermore, we obtain model matrices for the FSM and DM from a testbed and verify the effectiveness of our approach through simulation. For an incident laser beam disturbed by Kolmogorov-theory-based atmospheric turbulence, where the tilt components make up 80% of the total wavefront aberrations, our proposed architecture compensates the input wavefront to a residual wavefront root mean square (RMS) of 1/16 wavelength, compared to 1/6 wavelength for the single-DM architecture. The study intends to overcome atmospheric turbulence and has the potential to guide the development of future FSO communications.
]]>Network doi: 10.3390/network2020017
Authors: Haider W. Oleiwi Nagham Saeed Hamed Al-Raweshidy
In this paper, cooperative simultaneous wireless information and power transfer (SWIPT) terahertz (THz) multiple-input multiple-output (MIMO) nonorthogonal multiple access (NOMA) is considered. The aim is to improve wireless connectivity, resource management, scalability, and user fairness, as well as to enhance the overall performance and reliability of wireless communications. We optimized the current wireless communication systems by utilizing MIMO-NOMA technology and THz frequencies, exploring the performance and gains obtained. Hence, we developed a path-selection mechanism for the far user to enhance the system performance. The energy harvesting (EH) SWIPT approach used to improve THz communication performance was investigated. Moreover, we proposed a reliable transmission mechanism for non-line-of-sight (NLoS) THz communications in open areas or any location where an intelligent reflecting surface (IRS) cannot be deployed, using cheap decode-forward (DF) relaying instead of an IRS. The performance and scalability of the upgradeable system were examined, using adjustable parameters and the simplest modulation scheme. The system presents a noticeable improvement in energy efficiency (EE) and spectral efficiency (SE), in addition to reliability. Accordingly, the outcome showed an improvement in the overall reliability, SE, EE, and outage probability as compared to the conventional cooperative networks of the recent related work (e.g., cooperative MIMO-NOMA with THz) by multiple times with a simpler design, whereas it outperformed our previous work, i.e., cooperative SWIPT SISO-NOMA with THz, by more than 50%, with a doubled individual user gain. This system reduces the transceiver hardware and improves reliability with increasing transmission rates.
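A worked sketch of the two-user downlink NOMA rates underlying such systems, with successive interference cancellation (SIC): the far user decodes while treating the near user's signal as noise, and the near user cancels the far user's signal first. All numerical values are illustrative, not taken from the paper.

```python
# Sketch: two-user NOMA achievable rates with SIC; toy channel values.
import math

P, N0 = 1.0, 1e-3                    # total power, noise power
g_near, g_far = 1.0, 0.05            # channel gains |h|^2
a_far, a_near = 0.8, 0.2             # power fractions (more to the far user)

sinr_far = a_far * P * g_far / (a_near * P * g_far + N0)
rate_far = math.log2(1 + sinr_far)                   # decoded at the far user
rate_near = math.log2(1 + a_near * P * g_near / N0)  # after SIC at near user
print(f"far user: {rate_far:.2f} bit/s/Hz, near user: {rate_near:.2f} bit/s/Hz")
```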
]]>Network doi: 10.3390/network2020016
Authors: Muneera Alsayegh Tarek Moulahi Abdulatif Alabdulatif Pascal Lorenz
There are significant data privacy implications associated with Electronic Health Records (EHRs) sharing among various untrusted healthcare entities. Recently, blockchain-based EHR sharing systems have shown many benefits. Decentralization, anonymity, unforgeability, and verifiability are all unique properties of blockchain technology. In this paper, we propose a secure, blockchain-based EHR sharing system. After receiving the data owner’s authorization, the data requester can use the data provider’s keyword search to discover relevant EHRs on the EHR consortium blockchain and obtain the re-encryption ciphertext from the proxy server. To attain privacy, access control and data security, the proposed technique uses asymmetric searchable encryption and conditional proxy re-encryption. Likewise, proof of permission serves as the consensus method in consortium blockchains to ensure the system’s availability. The proposed protocol can achieve the specified security goals, according to the security analysis. In addition, we simulate the basic cryptographic operations and implement the developed protocol on the Ethereum platform. The analysis results suggest that the developed protocol is computationally efficient.
]]>Network doi: 10.3390/network2020015
Authors: Issa Dia Ehsan Ahvar Gyu Myoung Lee
Finding an available parking place has been considered a challenge for drivers in large smart cities. In a smart parking application, Artificial Intelligence of Things (AIoT) can help drivers to save searching time and automotive fuel by predicting short-term parking place availability. However, the performance of various Machine Learning and Neural Network-based (MLNN) algorithms for predicting parking segment availability can differ. To find the most suitable MLNN algorithm for the above-mentioned application, this paper evaluates the performance of a set of well-known MLNN algorithms as well as different combinations of them (i.e., known as Ensemble Learning or Voting Classifier) based on real parking datasets. The datasets contain around five million records of the measured parking availability in San Francisco. For evaluation, in addition to the cross validation scores, we consider the resource requirements, simplicity, and execution time (including both training and testing times) of the algorithms. Results show that while some ensemble learning algorithms provide the best performance in terms of validation score, they consume a noticeable amount of computing and time resources. On the other hand, a simple Decision Tree (DT) algorithm provides a much faster execution time than ensemble learning algorithms, while its performance is still acceptable (e.g., DT’s accuracy is less than 1% lower than the best ensemble algorithm). We finally propose and simulate a recommendation system using the DT algorithm. We have found that around 77% of drivers cannot find a free spot in their selected destinations (i.e., street or segment) and estimated that the recommendation system, by introducing alternative closest vacant locations to destinations, can save, in total, 3500 min of drivers’ searching time for 1000 parking spot requests. It can also help to reduce traffic and save a noticeable amount of automotive fuel.
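A sketch of the decision-tree predictor the paper settles on, trained on synthetic stand-ins for the San Francisco parking features (the real dataset has around five million availability records); the feature set and label rule below are assumptions for illustration.

```python
# Sketch: decision-tree availability predictor on synthetic parking data.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(5)
# Features: [hour of day, day of week, street segment id]
X = np.column_stack([rng.integers(0, 24, 5000),
                     rng.integers(0, 7, 5000),
                     rng.integers(0, 50, 5000)])
# Synthetic label: a spot is available outside busy daytime hours
y = ((X[:, 0] < 8) | (X[:, 0] > 19)).astype(int)

clf = DecisionTreeClassifier(max_depth=5).fit(X, y)
print(clf.predict([[21, 2, 10]]))  # evening -> likely available (1)
```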
]]>Network doi: 10.3390/network2020014
Authors: Gizem Akman Philip Ginzboorg Valtteri Niemi
Multi-access edge computing (MEC) is one of the emerging key technologies in fifth generation (5G) mobile networks, providing reduced end-to-end latency for applications and reduced load in the transport network. This paper proposes mechanisms to enhance user privacy in MEC within 5G. We consider a basic MEC usage scenario, where the user accesses an application hosted in the MEC platform via the radio access network of the mobile network operator (MNO). First, we create a system model based on this scenario. Second, we define the adversary model and give the list of privacy requirements for this system model. We also analyze the impact on user privacy when some of the parties in our model share information that is not strictly needed for providing the service. Third, we introduce a privacy-aware access protocol for the system model and analyze this protocol against the privacy requirements.
]]>Network doi: 10.3390/network2010013
Authors: Yingyuan Yang Jiangnan Li Sunshin Lee Xueli Huang Jinyuan Sun
Implicit authentication (IA) transparently authenticates users by utilizing their behavioral data sampled from various sensors. By identifying the illegitimate user through constantly analyzing current users’ behavior, IA adds another layer of protection to the smart device. Due to the diversity of human behavior, existing research tends to utilize multiple features to identify users, which is less efficient. Irrelevant features may increase the system delay and reduce the authentication accuracy. However, dynamically choosing the best suitable features for each user (personal features) requires massive calculation, making it infeasible in the real environment. In this paper, we propose EchoIA to find personal features with a small amount of calculation by leveraging user feedback derived from the correct rate of inputted passwords. By analyzing the feedback, EchoIA can deduce the true identities of current users and achieve human-centered implicit authentication. In the authentication phase, our approach maintains transparency, which is the major advantage of IA. Over the past two years, we conducted a comprehensive experiment to evaluate EchoIA. We compared it with four state-of-the-art IA schemes in terms of authentication accuracy and efficiency. The experiment results show that EchoIA has better authentication accuracy (93%) and lower energy consumption (23-h battery lifetime) than other IA schemes.
]]>Network doi: 10.3390/network2010012
Authors: Md. Amirul Hasan Shanto Binodon Amit Karmaker Md. Mahfuz Reza Md. Abir Hossain
Intra-vehicular communication is an emerging technology that is being explored intensively owing to growing demand from wireless sensor-based applications. To meet upcoming market demands, the transmission reliability and latency of current intra-vehicular communication should be improved significantly to fit the existing 5G and upcoming 6G communication domains. Ultra-Reliable Low-Latency Communication (URLLC) can be widely used to enhance the quality of communication and services of 5G and beyond. The 5G URLLC service is designed for high transmission reliability and minimal data transmission latency. In this paper, a multiple-access scheme named Cluster-based Orthogonal Frequency Subcarrier-based Multiple Access (C-OFSMA) is proposed to meet the stringent 5G URLLC requirements for intra-vehicular data transmission. URLLC demands high reliability of approximately 99.999% for data transmission within an extremely short latency of less than 1 ms. C-OFSMA enhances transmission diversity, which secures more successful data transmissions to fulfill these high requirements and adapt to such a network environment. In C-OFSMA, the available sensors transmit data over specific frequency channels, where frequency selection is random, and special sensors (audio and video) transmit data over dedicated frequency channels. The minimum number of subcarrier channels was evaluated for different arrival rates and different packet duplication conditions in order to achieve 99.999% reliability within an air-interface latency of 0.2 ms. For the fixed frequency channel condition, C-OFSMA and OFSMA were compared in terms of reliability response for different packet duplication settings. Moreover, the optimal number of clusters was also evaluated in terms of the reliability response of the C-OFSMA system.
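A worked example of the reliability target: with per-transmission success probability p, k independent duplicates succeed with probability 1 - (1 - p)^k. The sketch finds the minimum k that reaches the 99.999% URLLC target; the values of p are illustrative, not figures from the paper.

```python
# Sketch: minimum packet duplications to reach the 99.999% URLLC target.
def min_duplicates(p, target=0.99999, k_max=20):
    for k in range(1, k_max + 1):
        if 1 - (1 - p) ** k >= target:
            return k
    return None

for p in (0.9, 0.99, 0.999):
    print(f"p = {p}: need k = {min_duplicates(p)} duplicates")
# p = 0.9 -> k = 5, p = 0.99 -> k = 3, p = 0.999 -> k = 2
```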
]]>Network doi: 10.3390/network2010011
Authors: Stan Wong Bin Han Hans D. Schotten
This article provides a comprehensive overview of basic defenses, security challenges, and attack vectors in deploying multi-network slicing. Network slicing is a revolutionary concept of providing mobile networks on demand, expanding the mobile networking business and services into a new era. The new business paradigm and service opportunities are encouraging vertical industries to join and develop their own mobile network capabilities for enhanced performance coherent with their applications. However, a number of security concerns are also raised in this new era. In this article, we focus on the deployment of multi-network slicing with multi-tenancy. We identify the security concerns and discuss defense approaches such as network slice isolation and insulation in a multi-layer network slicing security model. Furthermore, we identify the importance of appropriately selecting network slice isolation points and propose a generic framework to optimize the isolation policy regarding the implementation cost while guaranteeing the security and performance requirements.
]]>Network doi: 10.3390/network2010010
Authors: Hassan Mistareehi D. Manivannan
Given the enormous interest shown by customers as well as industry in autonomous vehicles, the concept of Internet of Vehicles (IoV) has evolved from Vehicular Ad hoc NETworks (VANETs). VANETs are likely to play an important role in Intelligent Transportation Systems (ITS). VANETs based on fixed infrastructures, called Road Side Units (RSUs), have been extensively studied. Efficient, authenticated message dissemination in VANETs is important for the timely delivery of authentic messages to vehicles in appropriate regions in the VANET. Many of the approaches proposed in the literature use RSUs to collect events (such as accidents and weather conditions) observed by vehicles in their regions, authenticate them, and disseminate them to vehicles in appropriate regions. However, as the number of messages received by RSUs increases in the network, the computation and communication overhead for RSUs related to message authentication and dissemination also increases. We address this issue and propose a low-overhead message authentication and dissemination scheme in this paper. We compare the authentication and message dissemination overhead of our approach with that of an existing approach and also analyze the privacy and security implications of our approach.
]]>Network doi: 10.3390/network2010009
Authors: Frank Akpan Gueltoum Bendiab Stavros Shiaeles Stavros Karamperidis Michalis Michaloliakos
Cyberattacks have been rapidly increasing over the years, resulting in significant financial losses to businesses from recovery costs and regulatory sanctions, as well as collateral damage to reputation and trust. In this respect, the maritime sector, which until now was considered safe due to the lack of Internet connectivity and the isolated nature of ships at sea, is showing a 900% increase in cybersecurity breaches on operational technology as it enters the digital era. Although some research is being conducted in this area, maritime cybersecurity has not been deeply investigated. Hence, this paper provides a close investigation of the landscape of cybersecurity in the maritime sector with the aim of highlighting security problems and challenges. First, it explores the systems available on ships that could be targeted by attackers, their possible vulnerabilities that an attacker could exploit, the consequences if the system is accessed, and actual incidents. Then, it describes and analyses possible mitigation actions that can be utilised in advance to prevent such attacks. Finally, several challenges and open problems are discussed for future research.
]]>Network doi: 10.3390/network2010008
Authors: Shuaibing Lu Jie Wu Jiamei Shi Pengfan Lu Juan Fang Haiming Liu
Mobile edge computing is an emerging paradigm that supplies computation, storage, and networking resources between end devices and traditional cloud data centers. With increased investment of resources, users demand a higher quality-of-service (QoS). However, it is nontrivial to maintain service performance under the erratic activities of end-users. In this paper, we focus on the service placement problem under the continuous provisioning scenario in mobile edge computing for multiple mobile users. We propose a novel dynamic service placement framework based on deep reinforcement learning (DSP-DRL) to optimize the total delay without violating the constraints on physical resources and operational costs. We first formulate service placement under migration conflict as a mixed-integer linear programming (MILP) problem. Then, we propose a new migration conflict resolution mechanism that avoids invalid states in the decision module and approximates the policy according to an introduced migration feasibility factor. Extensive evaluations demonstrate that the proposed dynamic service placement framework outperforms baselines in terms of efficiency and overall latency.
]]>Network doi: 10.3390/network2010007
Authors: Pu Gong Thomas M. Chen Peng Xu
This paper proposes a routing protocol for wireless sensor networks to deal with energy-depleting vampire attacks. This resource-conserving protection against energy-draining (RCPED) protocol is compatible with existing routing protocols to detect abnormal signs of vampire attacks and identify potential attackers. It responds to attacks by selecting routes with the maximum priority, where priority is an indicator of energy efficiency and an estimate of security level, calculated utilizing an analytic hierarchy process (AHP). RCPED does not depend on cryptography, and hence consumes less energy and fewer hardware resources than previous approaches. Simulation results show the benefits of RCPED in terms of energy efficiency and security awareness.
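A sketch of the AHP step used for route priority: derive criterion weights from a pairwise comparison matrix via its principal eigenvector, then score routes as a weighted sum. The criteria, judgments, and route scores below are illustrative assumptions, not the paper's values.

```python
# Sketch: AHP priority weights from a reciprocal pairwise comparison matrix.
import numpy as np

# Criteria: energy efficiency, security level, hop count.
# A[i, j] = how much more important criterion i is than j.
A = np.array([[1.0,  2.0,  4.0],
              [0.5,  1.0,  3.0],
              [0.25, 1/3,  1.0]])

eigvals, eigvecs = np.linalg.eig(A)
principal = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
weights = principal / principal.sum()
print(np.round(weights, 3))  # e.g., ~[0.56, 0.32, 0.12]

# A route's priority is the weighted sum of its normalized criterion scores.
route_scores = np.array([0.8, 0.9, 0.5])
print(f"route priority: {weights @ route_scores:.3f}")
```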
]]>Network doi: 10.3390/network2010006
Authors: Network Editorial Office
Rigorous peer-reviews are the basis of high-quality academic publishing [...]
]]>Network doi: 10.3390/network2010005
Authors: Tieming Geng Laurent Njilla Chin-Tser Huang
With the rapid advancement and wide application of blockchain technology, blockchain consensus protocols, which are the core part of blockchain systems, along with their privacy issues, have drawn much attention from researchers. A key aspect of privacy in the blockchain is the sensitive content of transactions in the permissionless blockchain. Meanwhile, some blockchain applications, such as cryptocurrencies, are based on low-efficiency and high-cost consensus protocols, which may not be practical and feasible for other blockchain applications. In this paper, we propose an efficient and privacy-preserving consensus protocol, called Delegated Proof of Secret Sharing (DPoSS), which is inspired by secure multiparty computation. Specifically, DPoSS first uses polynomial interpolation to select a dealer group from many nodes to maintain the consensus of the blockchain system, in which the dealers in the dealer group take turns packing new blocks. In addition, since the content of transactions is sensitive, our proposed design utilizes verifiable secret sharing to protect transmission privacy and defend against malicious attacks. Extensive experiments show that the proposed consensus protocol achieves fairness during the process of reaching consensus.
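A sketch of the secret-sharing primitive behind DPoSS: Shamir's scheme over a prime field, with share generation via a random polynomial and recovery via Lagrange interpolation at x = 0 (the paper builds a verifiable variant on top of this; parameters here are toy values).

```python
# Sketch: Shamir secret sharing, (k, n)-threshold, over a prime field.
import random

P = 2**127 - 1                 # a Mersenne prime as the field modulus
random.seed(6)

def make_shares(secret, k, n):
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation evaluated at x = 0.
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = make_shares(secret=123456789, k=3, n=5)
print(reconstruct(shares[:3]))   # any 3 of the 5 shares recover 123456789
```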
]]>Network doi: 10.3390/network2010004
Authors: Sunghwan Cho Gaojie Chen Justin P. Coon Pei Xiao
This article highlights challenges associated with securing visible light communication (VLC) systems by using physical layer security (PLS) techniques. Motivated by the achievements in PLS studies for radio frequency (RF) communication, many PLS techniques for VLC systems were also rigorously investigated by tailoring the RF techniques to the VLC environment. However, careful consideration of the inherent differences between RF and VLC systems is still needed. By disregarding these differences, an eavesdropper could be given an opportunity to wiretap the VLC systems, even when PLS techniques are employed to protect them. Crucially, the fact that it is often not possible to know the number and locations of eavesdroppers in real VLC systems may allow eavesdroppers to devise various cooperative eavesdropping methods. By examining a few examples of the possible eavesdropper threats that can occur in VLC systems, this article offers novel insights into the vulnerabilities of state-of-the-art PLS schemes for VLC systems. Although the focus of the paper is mostly on these weaknesses, some potential solutions are also briefly proposed with a view to stimulating discourse in the community.
]]>Network doi: 10.3390/network2010003
Authors: Miguel Rosendo Jorge Granjal
The constant evolution in communication infrastructures will enable new Internet of Things (IoT) applications, particularly in areas that, up to today, have been mostly enabled by closed or proprietary technologies. Such applications will be enabled by a myriad of wireless communication technologies designed for all types of IoT devices, among which are the Long-Range Wide-Area Network (LoRaWAN) or other Low-power and Wide-Area Networks (LPWAN) communication technologies. This applies to many critical environments, such as industrial control and healthcare, where wireless communications are yet to be broadly adopted. Two fundamental requirements to effectively support upcoming critical IoT applications are those of energy management and security. We may note that those are, in fact, contradictory goals. On the one hand, many IoT devices depend on the usage of batteries while, on the other hand, adequate security mechanisms need to be in place to protect devices and communications from threats against their stability and security. With this motivation in mind, we propose a solution to address the management, in tandem, of security and energy in LoRaWAN IoT communication environments. We propose and evaluate an architecture in the context of which adaptation logic is used to manage security and energy dynamically, with the goal of guaranteeing appropriate security, while promoting the lifetime of constrained sensing devices. The proposed solution was implemented and experimentally evaluated and was observed to successfully manage security and energy. Security and energy are managed in line with the requirements of the application at hand, the characteristics of the constrained sensing devices employed, and the detection, as well as the threat, of particular types of attacks.
]]>Network doi: 10.3390/network2010002
Authors: Hai Trieu Le Tran Thanh Lam Nguyen Tuan Anh Nguyen Xuan Son Ha Nghia Duong-Trung
Due to rapid changes in population structure, leading to lower birth rates and faster aging, the demand for blood supply is increasing significantly. In most countries, blood quality and origin are managed by blood management information systems operated by bodies such as national authorities. Nevertheless, the traditional system has limitations in this field, such as a lack of detailed blood information, making it challenging to manage blood quality, supply, and demand. Hence, to solve these issues, this paper proposes a blockchain-based system called BloodChain, an improved system to support blood information management, providing more detailed information about blood, such as blood consumption and disposal. BloodChain exploits private blockchain techniques with a limited number of relatively fast and reliable participants, making them suitable for B2B (Business to Business) transactions. In this paper, we also implement the proposed system based on the architecture of Hyperledger Fabric. The evaluation of BloodChain is performed in several scenarios to prove our proposed model’s effectiveness.
]]>Network doi: 10.3390/network2010001
Authors: Marjo Heikkilä Jani Suomalainen Ossi Saukko Tero Kippola Kalle Lähetkangas Pekka Koskela Juha Kalliovaara Hannu Haapala Juho Pirttiniemi Anastasia Yastrebova Harri Posti
The need for high-quality communications networks is urgent in data-based farming. A particular challenge is how to achieve reliable, cost-efficient, secure, and broadband last-mile data transfer to enable agricultural machine control. The trialed ad hoc private communications networks, built and interconnected with different alternative wireless technologies including 4G, 5G, satellite, and tactical networks, provide interesting practical solutions for connectivity. A remotely controlled tractor is presented as a use case of machine control in the demonstrated private communication network. This paper describes the results of a comparative technology analysis and a field trial in a realistic environment. The study includes the practical implementation of video monitoring and the optimization of the control channel for remote-controlled unmanned agricultural tractors. The findings from this study verify and consolidate the requirements for network technologies and for cybersecurity enablers. They highlight insights into the suitability of different wireless technologies for smart farming and tractor scenarios and identify potential paths for future research.
]]>Network doi: 10.3390/network1030020
Authors: Marius Corici Pousali Chakraborty Thomas Magedanz
With the wide adoption of edge compute infrastructures, an opportunity has arisen to deploy part of the functionality at the edge of the network to enable a localized connectivity service. This development is also supported by the adoption of “on-premises” local 5G networks addressing the needs of different vertical industries and by new standardized infrastructure services such as Mobile Edge Computing (MEC). This article introduces a comprehensive set of deployment options for the 5G network and its network management, complementing MEC with the connectivity service and addressing different classes of use cases and applications. We have also practically implemented and tested the newly introduced options in the form of slices within a standard-based testbed. Our validation proved their feasibility and gave a realistic perspective on their impact. The qualitative assessment of the connectivity service gives a comprehensive overview of which solution would be viable to deploy for each vertical market and for each large-scale operator situation, taking a step forward towards automated distributed 5G deployments.
]]>Network doi: 10.3390/network1030019
Authors: Nelson Batista Rui Melicio Luis Filipe Santos
This paper proposes an aerial data network infrastructure for Large Geographical Area Surveillance Systems. The work presents a review of previous works from the authors, existing technologies in the market, and other scientific work, with the goal of creating a data network supported by Autonomous Tethered Aerostat Airships used for mounting sensors, as a drone deployment base, and for installing meshed data network nodes. The proposed approach for data network infrastructure supports several independent and heterogeneous services from independent, private, and public companies. The presented solution employs Edge Artificial Intelligence (AI) systems for autonomous infrastructure management. The Edge AI used in the presented solution enables the AI management solution to work without the need for a permanent connection to cloud services and is constantly fed by the locally generated sensor data. These systems interact with other network AI services to accomplish coordinated tasks. Blockchain technology services are deployed to ensure secure and auditable decisions and operations, which are validated by the different involved ledgers.
]]>Network doi: 10.3390/network1030018
Authors: Yuan Cao Harsha Kandula Xinrong Li
Bluetooth low energy (BLE)-based location service technology has become one of the fastest growing applications for Bluetooth. Received signal strength (RSS) is often used in localization techniques for ranging or location fingerprinting. However, RSS-based localization solutions have poor performance in multipath environments. In this paper, we present a measurement system designed using multiple ESP32 BLE modules and the Bluetooth mesh networking technology, which is capable of exploiting the space, time, and frequency diversities in measurements. To enable channel-aware multi-device RSS measurements, we also designed a communication protocol to associate channel ID information to advertising messages. Based on channel measurement and analysis, we demonstrate that with a six-receiver configuration and space-time-frequency diversity combining, we can significantly reduce the residual linear regression fitting errors in path loss models. Such a reduction leads to more accurately correlating RSS measurements to the distance between the transmitter and receiver devices and thus to achieving improved performance with the RSS-based localization techniques. More importantly, the reduction in the fitting errors is achieved without differentiating the three advertising channels, making it possible to conveniently implement the proposed six-receiver configuration using commercially available BLE devices and the standard Bluetooth mesh networking protocol stack.
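A sketch of the log-distance path loss fit underlying RSS ranging, illustrating how averaging RSS over multiple receivers reduces the residual error of the linear regression. The data, shadowing model, and the assumption of independent shadowing across receivers are simplifications for illustration; real colocated measurements are partially correlated, which is why the paper's space-time-frequency diversity design matters.

```python
# Sketch: log-distance path loss regression, single link vs. 6-rx averaging.
import numpy as np

rng = np.random.default_rng(7)
d = rng.uniform(1, 30, 200)                       # distances (m)
pl0, n_exp = -40.0, 2.2                           # ref. RSS at 1 m, PL exponent

def rss(sigma):                                   # sigma: shadowing std (dB)
    return pl0 - 10 * n_exp * np.log10(d) + rng.normal(0, sigma, d.size)

for label, samples in [("single link", rss(6.0)),
                       ("6-rx average",
                        np.mean([rss(6.0) for _ in range(6)], axis=0))]:
    slope, intercept = np.polyfit(np.log10(d), samples, 1)
    resid = samples - (slope * np.log10(d) + intercept)
    print(f"{label}: n ~ {-slope / 10:.2f}, residual RMS = {resid.std():.2f} dB")
```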
]]>Network doi: 10.3390/network1030017
Authors: Laith Farhan Rasha Subhi Hameed Asraa Safaa Ahmed Ali Hussein Fadel Waled Gheth Laith Alzubaidi Mohammed A. Fadhel Muthana Al-Amidie
The last decade has witnessed the proliferation of Internet-enabled devices. The Internet of Things (IoT) is becoming ever more pervasive in everyday life, connecting an ever-greater array of diverse physical objects. The key vision of the IoT is to bring a massive number of smart devices together in integrated and interconnected heterogeneous networks, making the Internet even more useful. This paper first gives a brief introduction to the history and evolution of the Internet. Then, it presents the IoT, followed by a list of application domains and enabling technologies. The wireless sensor network (WSN) is revealed as one of the important elements in IoT applications, and the paper describes the relationship between WSNs and the IoT. This research is concerned with developing energy-efficiency techniques for WSNs that enable the IoT. After having identified sources of energy wastage, this paper reviews the literature on the most relevant methods for minimizing the energy exhaustion of IoT and WSNs. We also identify gaps in the existing literature in terms of energy preservation measures that could be addressed in future work. The survey gives a near-complete and up-to-date view of the IoT in the energy field. It provides a summary and recommendations of a large range of energy-efficiency methods proposed in the literature that will help and support future researchers. This manuscript is an extended version based on a summary of the author’s Ph.D. thesis. It gives researchers an introduction to what they need to know about networks, WSNs, and IoT applications from scratch. Thus, the fundamental purpose of this paper is to introduce research trends and recent work on the use of IoT technology and the conclusions reached as a result of the Ph.D. study.
]]>Network doi: 10.3390/network1030016
Authors: AbdulHaseeb Ahmed Sethuraman Trichy Viswanathan MD Rashed Rahman Ashwin Ashok
Optical camera communication is an emerging technology that enables communication using light beams, where information is modulated through optical transmissions from light-emitting diodes (LEDs). This work conducts empirical studies to identify the feasibility and effectiveness of using deep learning models to improve signal reception in camera communication. The key contributions of this work include the investigation of transfer learning and customization of existing models to demodulate the signals transmitted using a single LED by applying the classification models on the camera frames at the receiver. In addition to investigating deep learning methods for demodulating a single VLC transmission, this work evaluates two real-world use-cases for the integration of deep learning in visual multiple-input multiple-output (MIMO), where transmissions from a LED array are decoded on a camera receiver. This paper presents the empirical evaluation of state-of-the-art deep neural network (DNN) architectures that are traditionally used for computer vision applications for camera communication.
]]>Network doi: 10.3390/network1030015
Authors: Andrea Di Domenico Gianluca Perna Martino Trevisan Luca Vassio Danilo Giordano
Cloud gaming is a class of services that promises to revolutionize the videogame market. It allows the user to play a videogame with only essential equipment while using a remote server for the actual execution. The multimedia content is streamed through the network from the server to the user. Hence, this service requires low latency and a large bandwidth to work properly with low response time and high-definition video. Three of the leading tech companies (Google, Sony, and NVIDIA) entered this market with their products, and others, like Microsoft and Amazon, are also launching their platforms. However, these companies have released little information about their cloud gaming operation and how they utilize the network. In this work, we study cloud gaming services from the network point of view. We collect more than 200 packet traces under different application settings and network conditions, ranging from a broadband network to poor mobile network conditions, for three cloud gaming services, namely Stadia from Google, GeForce Now from NVIDIA, and PS Now from Sony. We analyze the employed protocols and the workload that they impose on the network. We find that GeForce Now and Stadia use the RTP protocol to stream the multimedia content, with the latter relying on the standard WebRTC APIs. Depending on the network and video quality, they result in bandwidth-hungry services consuming up to 45 Mbit/s. PS Now instead uses only undocumented protocols and never exceeds 13 Mbit/s. 4G mobile networks can often sustain these loads, while traditional 3G connections struggle. The systems quickly react to deteriorated network conditions, and packet losses up to 5% do not cause a reduction in resolution.
]]>Network doi: 10.3390/network1030014
Authors: Joshua Ogbebor Xiangyu Meng
This paper examines the roles of the matrix weight elements in matrix-weighted consensus. For scalar weights, consensus algorithms dictate that all agents reach consensus when the weighted graph is connected; however, this is not always the case for matrix-weighted graphs. The conditions leading to different types of consensus have been extensively analysed based on the properties of matrix-weighted Laplacians and graph theoretic methods. However, in practice, there is concern about how to pick matrix weights to achieve a desired consensus, or how changes in the elements of the matrix weights affect the consensus algorithm. By selecting the elements in the matrix weights, different clusters may be possible. In this paper, we map the roles of the elements of the matrix weights in the system’s consensus algorithm. We explore the choice of matrix weights to achieve different types of consensus and clustering. Our results are demonstrated on a network of three agents where each agent has three states.
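A sketch of matrix-weighted consensus on three agents with 3-dimensional states: each edge carries a positive semidefinite weight matrix, and a rank-deficient weight leaves the components outside its range uncoupled, producing clustering instead of full consensus. The graph, weights, and step size are illustrative.

```python
# Sketch: matrix-weighted consensus; a rank-deficient edge weight clusters.
import numpy as np

rng = np.random.default_rng(8)
I = np.eye(3)
# Edge weights for the path graph 1-2-3; the weight on edge (2,3) only
# enforces agreement in the first two state components.
W = {(0, 1): 2 * I,
     (1, 2): np.diag([1.0, 1.0, 0.0])}   # third component decoupled

x = rng.normal(size=(3, 3))              # rows: agents, columns: state entries
dt = 0.05
for _ in range(2000):
    dx = np.zeros_like(x)
    for (i, j), Wij in W.items():
        dx[i] += Wij @ (x[j] - x[i])
        dx[j] += Wij @ (x[i] - x[j])
    x += dt * dx

print(np.round(x, 3))  # first two components agree; the third forms clusters
```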
]]>Network doi: 10.3390/network1020013
Authors: Florian Wamser Anika Seufert Andrew Hall Stefan Wunderer Tobias Hoßfeld
Crowdsourced network measurements (CNMs) are becoming increasingly popular as they assess the performance of a mobile network from the end user’s perspective on a large scale. Here, network measurements are performed directly on the end-users’ devices, thus taking advantage of the real-world conditions end-users encounter. However, this type of uncontrolled measurement raises questions about its validity and reliability. The problem lies in the nature of this type of data collection. In CNMs, mobile network subscribers are involved to a large extent in the measurement process, and collect data themselves for the operator. The collection of data on user devices in arbitrary locations and at uncontrolled times requires means to ensure validity and reliability. To address this issue, our paper defines concepts and guidelines for analyzing the precision of CNMs; specifically, the number of measurements required to make valid statements. In addition to the formal definition of the aspect, we illustrate the problem and use an extensive sample data set to show possible assessment approaches. This data set consists of more than 20.4 million crowdsourced mobile measurements from across France, measured by a commercial data provider.
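A worked example of the precision question the paper formalizes: how many crowdsourced samples are needed so that a confidence interval for a mean metric is narrower than a target margin. This uses the standard normal approximation with illustrative numbers, not the paper's data.

```python
# Sketch: required sample count for a target confidence-interval margin.
import math

def required_samples(sigma, margin, confidence=0.95):
    z = {0.90: 1.645, 0.95: 1.960, 0.99: 2.576}[confidence]
    return math.ceil((z * sigma / margin) ** 2)

# e.g., throughput measurements with a 12 Mbit/s std dev, +/- 1 Mbit/s margin
print(required_samples(sigma=12.0, margin=1.0))                   # 554
print(required_samples(sigma=12.0, margin=1.0, confidence=0.99))  # 956
```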
]]>Network doi: 10.3390/network1020012
Authors: Mahshid Mehrabi Shiwei Shen Yilun Hai Vincent Latzko George Koudouridis Xavier Gelabert Martin Reisslein Frank Fitzek
Cooperative edge offloading to nearby end devices via Device-to-Device (D2D) links in edge networks with sliced computing resources has mainly been studied for end devices (helper nodes) that are stationary (or follow predetermined mobility paths) and for independent computation tasks. However, end devices are often mobile, and a given application request commonly requires a set of dependent computation tasks. We formulate a novel model for the cooperative edge offloading of dependent computation tasks to mobile helper nodes. We model the task dependencies with a general task dependency graph. Our model employs the state-of-the-art deep-learning-based PECNet mobility model and offloads a task only when the sojourn time in the coverage area of a helper node or Multi-access Edge Computing (MEC) server is sufficiently long. We formulate the minimization problem for the consumed battery energy for task execution, task data transmission, and waiting for offloaded task results on end devices. We convert the resulting non-convex mixed integer nonlinear programming problem into an equivalent quadratically constrained quadratic programming (QCQP) problem, which we solve via a novel Energy-Efficient Task Offloading (EETO) algorithm. The numerical evaluations indicate that the EETO approach consistently reduces the battery energy consumption across a wide range of task complexities and task completion deadlines and can thus extend the battery lifetimes of mobile devices operating with sliced edge computing resources.
]]>Network doi: 10.3390/network1020011
Authors: Selim Ickin Markus Fiedler Konstantinos Vandikas
The development of Quality of Experience (QoE) models using Machine Learning (ML) is challenging, since it can be difficult to share datasets between research entities to protect the intellectual property of the ML model and the confidentiality of user studies in compliance with data protection regulations such as the General Data Protection Regulation (GDPR). This makes distributed machine learning techniques that do not necessitate sharing of data or attribute names appealing. One suitable use case within the scope of QoE is the task of mapping QoE indicators of perceived quality, such as Mean Opinion Scores (MOS), in a distributed manner. In this article, we present Distributed Ensemble Learning (DEL) and Vertical Federated Learning (vFL) to address this context. Both approaches can be applied to datasets that have different feature sets, i.e., split features. The DEL approach is ML model-agnostic and achieves up to a 12% accuracy improvement by ensembling various generic and specific models. The vFL approach is based on neural networks and achieves on-par accuracy with a conventional fully centralized machine learning model, while exhibiting statistically significant performance that is superior to that of isolated local models, with an average accuracy improvement of 26%. Moreover, energy-efficient vFL with reduced network footprint and training time is obtained by further tuning the model hyper-parameters.
]]>Network doi: 10.3390/network1020010
Authors: Wei Chen H. Vincent Poor
Caching has attracted much attention recently because it holds the promise of scaling the service capability of radio access networks (RANs). We envision that caching will ultimately make next-generation RANs more than bit-pipelines and emerge as a multi-disciplinary area via the union with communications, pricing, recommendation, compression, and computation units. By summarizing cutting-edge caching policies, we trace a common root of their gains to the prolonged transmission time, which is then traded for higher spectral or energy efficiency. To realize caching, the physical layer and higher layers have to function together, with the aid of prediction and memory units, which substantially broadens the concept of cross-layer design to a multi-unit collaboration methodology. We revisit caching from a generalized cross-layer perspective, with a focus on its emerging opportunities, challenges, and theoretical performance limits. To motivate the application and evolution of caching, we conceive a hierarchical pricing infrastructure that provides incentives to network operators and users. To make RANs even more proactive, we design caching and recommendation jointly, showing a user what it might be interested in and what has been done for it. Furthermore, the user-specific demand prediction motivates edge compression and proactive MEC as new applications. The beyond-bit-pipeline RAN is a paradigm shift that brings with it many cross-disciplinary research opportunities.
Network doi: 10.3390/network1020009
Authors: Konstantinos Demertzis Konstantinos Tsiknas Dimitrios Taketzis Dimitrios N. Skoutas Charalabos Skianis Lazaros Iliadis Kyriakos E. Zoiros
Upgrading the existing energy infrastructure to a smart grid necessarily goes through the provision of integrated technological solutions that ensure the interoperability of business processes and reduce the risk of devaluation of systems already in use. Considering the heterogeneity of the current infrastructures, and in order to keep pace with the dynamics of their operating environment, we should aim to reduce their architectural complexity and to add new, more efficient technologies and procedures. Furthermore, the integrated management of the overall ecosystem requires a collaborative integration strategy, which should ensure end-to-end interconnection under specific quality standards, together with the establishment of strict security policies. In this respect, every design detail can be critical to the success or failure of a costly and ambitious project such as that of smart energy networks. This work presents and classifies the communication network standards that have been established for smart grids and should be taken into account in the process of planning and implementing new infrastructures.
Network doi: 10.3390/network1020008
Authors: Ricardo Lent
A cognitive networking approach to the anycast routing problem for delay-tolerant networking (DTN) is proposed. The method is suitable for the space–ground and other domains where communications are recurrently challenged by diverse link impairments, including long propagation delays, communication asymmetry, and lengthy disruptions. The proposed method delivers data bundles with low delays by avoiding, whenever possible, link congestion and long wait times for contacts to become active, and without the need to duplicate data bundles. Network gateways use a spiking neural network (SNN) to decide the optimal outbound link for each bundle. The SNN is regularly updated to reflect the expected cost of the routing decisions, which helps to fine-tune future decisions. The method is decentralized and selects both the anycast group member to be used as the sink and the path to reach that node. A series of experiments were carried out on a network testbed to evaluate the method. The results demonstrate its performance advantage over unicast routing, which serves as the baseline since anycast routing is not yet supported by the current DTN standard (Contact Graph Routing). The proposed approach yields improved performance for space applications that require as-fast-as-possible data returns.
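As a rough illustration of the decide-then-update loop described above, the sketch below replaces the paper's SNN with a plain per-link running-cost table (an epsilon-greedy stand-in). All class names, cost values, and rates are hypothetical.

```python
import random

# Illustrative stand-in only: the paper scores outbound links with a
# spiking neural network; a simple running-cost table plays that role
# here just to show the decide-then-update loop.

class LinkSelector:
    def __init__(self, links, epsilon=0.1):
        self.cost = {link: 0.0 for link in links}  # expected delivery cost
        self.epsilon = epsilon                     # exploration rate

    def choose(self):
        if random.random() < self.epsilon:          # occasionally explore
            return random.choice(list(self.cost))
        return min(self.cost, key=self.cost.get)    # else pick cheapest link

    def update(self, link, observed_cost, alpha=0.2):
        # Exponentially weighted update toward the observed routing cost,
        # so future decisions reflect congestion and contact wait times.
        self.cost[link] += alpha * (observed_cost - self.cost[link])

selector = LinkSelector(["link_A", "link_B"])
link = selector.choose()
selector.update(link, observed_cost=3.5)  # e.g., measured bundle delay
```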
Network doi: 10.3390/network1020007
Authors: Charithri Yapa Chamitha de Alwis Madhusanka Liyanage
The emergence of the Energy Internet (EI) demands a restructuring of traditional electricity grids to integrate heterogeneous energy sources, distribution network management with grid intelligence, and big data management. This paradigm shift is considered a breakthrough in the energy industry towards facilitating autonomous and decentralized grid operations while maximizing the utilization of Distributed Generation (DG). Blockchain has been identified as a disruptive technology enabler for the realization of the EI, facilitating reliable, self-operated energy delivery. In this paper, we highlight six key directions for utilizing blockchain capabilities to realize the envisaged EI. We elaborate on the challenges in each direction and highlight the role of blockchain in addressing them. Furthermore, we summarize future research directions towards achieving fully autonomous and decentralized electricity distribution networks, which will be known as the Energy Internet.
Network doi: 10.3390/network1020006
Authors: Ed Kamya Kiyemba Edris Mahdi Aiash Jonathan Loo
Fifth Generation mobile networks (5G) promise to make network services provided by various Service Providers (SPs), such as Mobile Network Operators (MNOs) and third-party SPs, accessible from anywhere by end-users through their User Equipment (UE). These services will be pushed closer to the edge for quick, seamless, and secure access. After being granted access to a service, the end-user will be able to cache and share data with other users. However, SPs should have security measures in place not only to secure the provisioning of and access to those services, but also to restrict what end-users can do with the accessed data, whether in or out of coverage. This can be facilitated by federated service authorization and access control mechanisms that restrict the caching and sharing of data accessed by the UE in different security domains. In this paper, we propose a Data Caching and Sharing Security (DCSS) protocol that leverages federated authorization to provide secure caching and sharing of data from multiple SPs in multiple security domains. We formally verify the proposed DCSS protocol using ProVerif and the applied pi-calculus. Furthermore, a comprehensive security analysis of the security properties of the proposed DCSS protocol is conducted.
Network doi: 10.3390/network1020005
Authors: Divyanshu Pandey Adithya Venugopal Harry Leib
Most modern communication systems, such as those intended for deployment in IoT applications or 5G and beyond networks, utilize multiple domains for transmission and reception at the physical layer. Depending on the application, these domains can include space, time, frequency, users, code sequences, and transmission media, to name a few. As such, the design criteria of future communication systems must be cognizant of the opportunities and challenges that exist in exploiting the multi-domain nature of the signals and systems involved in information transmission. Focusing on the physical layer, this paper presents a novel mathematical framework that uses tensors to represent, design, and analyze multi-domain systems. Various domains can be integrated into the transceiver design scheme using tensors, and tools from multi-linear algebra can be used to develop simultaneous signal processing techniques across all the domains. In particular, we present tensor partial response signaling (TPRS), which allows the introduction of controlled interference within elements of a domain and also across domains. We develop the TPRS system using the tensor contracted convolution to generate a multi-domain signal with desired spectral and cross-spectral properties across domains. In addition, by studying the information-theoretic properties of the multi-domain tensor channel, we present the trade-off between different domains that can be harnessed using this framework. Numerical examples for capacity and mean square error are presented to highlight the domain trade-off revealed by the tensor formulation. Furthermore, an application of the tensor framework to MIMO Generalized Frequency Division Multiplexing (GFDM) is also presented.
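The tensor contraction at the heart of such multi-domain models can be illustrated in a few lines of numpy: a channel tensor acts jointly on two domains of a signal tensor. The dimensions and domain labels below are hypothetical and do not reproduce the paper's TPRS construction.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy multi-domain "channel" acting jointly on two domains of a signal
# tensor (e.g., space and frequency). Dimensions are hypothetical.
x = rng.normal(size=(4, 8))            # signal: 4 antennas x 8 subcarriers
H = rng.normal(size=(4, 8, 4, 8))      # channel coupling both domains

# Contracted product: y[i, j] = sum_{k, l} H[i, j, k, l] * x[k, l],
# i.e., every output element mixes contributions from both domains.
y = np.einsum("ijkl,kl->ij", H, x)
print(y.shape)  # (4, 8)
```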
Network doi: 10.3390/network1010004
Authors: Ehsan Ahvar Shohreh Ahvar Syed Mohsan Raza Jose Manuel Sanchez Vilchez Gyu Myoung Lee
In recent years, the number of objects connected to the internet has increased significantly, and this growth is transforming today's Internet of Things (IoT) into the massive IoT of the future. It is predicted that, within a few years, high communication and computation capacities will be required to meet the demands of massive IoT devices and applications that rely on data sharing and processing. 5G and beyond mobile networks are expected to fulfill part of these requirements by providing data rates of up to terabits per second, and they will be a key enabler in supporting massive IoT and emerging mission-critical applications with strict delay constraints. On the other hand, the next generation of software-defined networking (SDN), together with emerging cloud-related technologies (e.g., fog and edge computing), can play an important role in supporting and implementing the above-mentioned applications. This paper sets out the potential opportunities and important challenges that must be addressed when considering options for using SDN in hybrid cloud-fog systems to support 5G and beyond-enabled applications.
Network doi: 10.3390/network1010003
Authors: Elisa Rojas Joaquin Alvarez-Horcajo Isaias Martinez-Yelmo Jose M. Arco Miguel Briso-Montiano
Today, most user services are based on cloud computing, which leverages data center networks (DCNs) to efficiently route its communications. These networks process high volumes of traffic and require exhaustive failure management. Furthermore, expanding these networks is usually costly due to their constrained designs. In this article, we present enhanced Torii (eTorii), an automatic, scalable, reliable, and flexible multipath routing protocol that aims to meet the demanding requirements of DCNs. We prove that eTorii is, by design, applicable to a wide range of DCNs, or any other type of hierarchical network, able to route with a minimal forwarding table size, and capable of rerouting around failed links on the fly at almost zero cost. A proof of concept of the eTorii protocol has been implemented using the Ryu SDN controller and the Mininet framework. Its evaluation shows that eTorii balances the load and preserves high bandwidth utilization. Thus, it optimizes the use of DCN resources in comparison to other approaches, such as Equal-Cost Multi-Path (ECMP).
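For context, the ECMP baseline mentioned above can be sketched in a few lines: a flow's 5-tuple is hashed onto one of several equal-cost next hops, which balances flows statistically but cannot react to load or failures. Field and hop names below are illustrative; this is not the eTorii algorithm.

```python
import hashlib

# Minimal sketch of hash-based ECMP next-hop selection. Hashing the full
# 5-tuple keeps all packets of a flow on one path while spreading
# different flows across the equal-cost paths.

def ecmp_next_hop(flow, next_hops):
    key = "|".join(str(flow[k]) for k in
                   ("src_ip", "dst_ip", "src_port", "dst_port", "proto"))
    digest = hashlib.sha256(key.encode()).digest()
    return next_hops[int.from_bytes(digest[:4], "big") % len(next_hops)]

flow = {"src_ip": "10.0.0.1", "dst_ip": "10.0.1.2",
        "src_port": 5000, "dst_port": 80, "proto": "tcp"}
print(ecmp_next_hop(flow, ["spine1", "spine2", "spine3", "spine4"]))
```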
Network doi: 10.3390/network1010002
Authors: Nils Morozs Paul D. Mitchell Yuriy Zakharov
This paper investigates the use of underwater acoustic sensor networks (UASNs) for subsea asset monitoring. In particular, we focus on use cases involving the deployment of networks with line topologies, e.g., for monitoring oil and gas pipelines. The Linear Transmit Delay Allocation MAC (LTDA-MAC) protocol facilitates efficient packet scheduling in linear UASNs without clock synchronization at the sensor nodes. It is based on the real-time optimization of a packet schedule for a given network deployment. In this paper, we present a novel greedy algorithm for the real-time optimization of LTDA-MAC schedules. It produces collision-free schedules with significantly shorter frame durations and is 2–3 orders of magnitude more computationally efficient than our previously proposed solution. Simulations of a subsea pipeline monitoring scenario show that, despite the lack of clock synchronization, LTDA-MAC equipped with the proposed schedule optimization algorithm significantly outperforms Spatial TDMA.
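A drastically simplified view of greedy transmit-delay allocation is sketched below: each node is assigned the smallest delay such that its packet's arrival window at the master node avoids all previously scheduled arrivals. Timing values are hypothetical, and this is not the authors' algorithm.

```python
# Greedy toy scheduler for a line network sharing one acoustic channel.
# Arrival time at the master = transmit delay + propagation delay, so the
# smallest non-colliding delay is found by nudging until the window is free.

def greedy_delays(prop_delays, packet_dur):
    scheduled = []   # (arrival_start, arrival_end) at the master node
    delays = []
    for prop in prop_delays:            # nodes ordered along the pipeline
        delay = 0.0
        while any(delay + prop < end and delay + prop + packet_dur > start
                  for start, end in scheduled):
            delay += 0.001              # nudge until the window is free
        scheduled.append((delay + prop, delay + prop + packet_dur))
        delays.append(delay)
    return delays

# Example: 4 nodes at increasing ranges, 150 ms packets (toy values)
print(greedy_delays([0.10, 0.20, 0.30, 0.40], packet_dur=0.15))
```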
Network doi: 10.3390/network1010001
Authors: Alexey Vinel
Network (ISSN 2673-8732) provides full coverage of all topics of interest involved in the networking area [...]