Network, Volume 3, Issue 1 (March 2023) – 11 articles

Cover Story: The introduction of 5G technology, along with the exponential growth in connected devices, is expected to pose a challenge for efficient and reliable network resource allocation. Recent progress in artificial intelligence and machine learning is theorised to be a potential solution to this problem. It is therefore expected that future-generation mobile networks will depend heavily on their artificial intelligence components, which may make those components a high-value attack target. Our study therefore focuses on the analysis of adversarial example generation attacks against machine-learning-based frameworks that may be present in next-generation networks.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open it.
21 pages, 6348 KiB  
Article
SDN-Based Routing Framework for Elephant and Mice Flows Using Unsupervised Machine Learning
by Muna Al-Saadi, Asiya Khan, Vasilios Kelefouras, David J. Walker and Bushra Al-Saadi
Network 2023, 3(1), 218-238; https://doi.org/10.3390/network3010011 - 02 Mar 2023
Cited by 3 | Viewed by 2167
Abstract
Software-defined networks (SDNs) can control how data flows move through a network, enabling effective flow management and efficient use of network resources. Currently, most data center networks (DCNs) suffer from large flows (elephant flows) that may enter the network at any time and monopolize resources, degrading the performance of small flows (mice flows). It is therefore crucial to identify each flow type and select an appropriate routing path for it in order to improve network management. This work proposes an SDN application that finds the best path based on the type of flow, using network performance metrics. These metrics are used to characterize and classify flows as elephant or mice by combining unsupervised machine learning (ML) with a thresholding method, and a routing algorithm was developed to select the path according to the flow type. The framework was validated on different DCN topologies by comparing the performance of a standard SDN Ryu controller with that of the proposed framework in terms of three factors: throughput, bandwidth, and data transfer rate. The results show that the proposed framework performs better 70% of the time across the different flow types.
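To make the flow-classification step concrete, the sketch below shows one common way to separate elephant and mice flows by combining unsupervised clustering with a byte-count threshold, in the spirit of the approach the abstract describes. The feature set, threshold value, and toy flow records are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): elephant/mice separation with
# unsupervised clustering plus a simple byte-count threshold.
import numpy as np
from sklearn.cluster import KMeans

# Each row: [bytes transferred, duration (s), packet count] for one flow
flows = np.array([
    [1.2e9, 30.0, 9.0e5],   # likely elephant
    [4.0e3,  0.2, 12.0],    # likely mouse
    [8.0e8, 25.0, 6.0e5],
    [2.5e3,  0.1, 8.0],
])

# Cluster the flows into two groups on log-scaled features
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(np.log1p(flows))

# Call the cluster with the larger mean byte count the "elephant" cluster
elephant_cluster = int(flows[labels == 1][:, 0].mean() > flows[labels == 0][:, 0].mean())

ELEPHANT_BYTES = 10e6  # illustrative threshold: flows above 10 MB are elephants
for f, lab in zip(flows, labels):
    is_elephant = lab == elephant_cluster or f[0] > ELEPHANT_BYTES
    print("elephant" if is_elephant else "mouse", f)
```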
19 pages, 5893 KiB  
Article
Machine Learning Applied to LoRaWAN Network for Improving Fingerprint Localization Accuracy in Dense Urban Areas
by Andrea Piroddi and Maurizio Torregiani
Network 2023, 3(1), 199-217; https://doi.org/10.3390/network3010010 - 09 Feb 2023
Cited by 1 | Viewed by 1826
Abstract
In the area of low-power wireless networks, many researchers are focusing on positioning methods such as fingerprinting in densely populated urban areas. This work presents an experimental study aimed at quantifying the mean location estimation error in such areas. Using a dataset provided by the University of Antwerp, a neural network was implemented to estimate the end-device location, allowing the mean localization error in areas of high urban density to be measured. The results show a deviation of less than 150 m in locating the end device, and this error can be reduced to a few meters provided there is a greater density of nodes per square meter. This result could enable Internet of Things (IoT) applications to use fingerprinting in place of more energy-consuming alternatives.
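As a rough illustration of fingerprint localization, the sketch below trains a small neural network to map per-gateway RSSI readings to a position estimate. The gateway count, synthetic data, and hyperparameters are assumptions for the example, not the paper's dataset or model.

```python
# Minimal sketch (assumed setup, not the paper's model): mapping LoRaWAN RSSI
# fingerprints from several gateways to a (latitude, longitude) estimate.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n_samples, n_gateways = 500, 8

# Synthetic training data: RSSI per gateway (dBm) -> position
rssi = rng.uniform(-130, -60, size=(n_samples, n_gateways))
positions = rng.uniform([51.20, 4.38], [51.23, 4.44], size=(n_samples, 2))

model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
model.fit(rssi, positions)

# Estimate the position for a new fingerprint
test_rssi = rng.uniform(-130, -60, size=(1, n_gateways))
print("estimated position (lat, lon):", model.predict(test_rssi)[0])
```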
19 pages, 742 KiB  
Article
Improving Bundle Routing in a Space DTN by Approximating the Transmission Time of the Reliable LTP
by Ricardo Lent
Network 2023, 3(1), 180-198; https://doi.org/10.3390/network3010009 - 03 Feb 2023
Viewed by 1520
Abstract
Because the operation of space networks is carefully planned, it is possible to predict future contact opportunities from link budget analysis using the anticipated positions of the nodes over time. In the standard approach to space delay-tolerant networking (DTN), such knowledge is used by contact graph routing (CGR) to decide the paths for data bundles. However, the computation assumes nearly ideal channel conditions, disregarding the impact of convergence-layer retransmissions (e.g., as implemented by the Licklider transmission protocol (LTP)). In this paper, the effect of the bundle forwarding time estimate (i.e., the link service time) on routing optimality is analyzed, and an accurate expression for lossy channels is discussed. The analysis is performed first from a general, protocol-agnostic perspective, assuming knowledge of the statistical properties and general features of the contact opportunities. A practical case is then studied using the standard space DTN protocols, evaluating the performance improvement of CGR under the proposed forwarding time estimation. The results provide insight into the optimal routing problem for a space DTN and suggest an improvement to the current routing standard.
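To show why retransmissions matter for the forwarding time estimate, the back-of-the-envelope sketch below computes an expected delivery time for a bundle over a reliable LTP-like service when segments are lost independently and retransmitted in later rounds. This is not the paper's derived expression; the loss model and all parameter values are illustrative assumptions.

```python
# Rough expected forwarding time over a lossy link with round-based
# retransmission (illustrative model, not the paper's analysis).

def expected_ltp_time(n_segments, p_loss, seg_time, rtt, max_rounds=50):
    """Each round sends the still-missing segments, then waits one round trip
    for the report segment before retransmitting what was lost."""
    expected = 0.0
    missing = float(n_segments)
    for _ in range(max_rounds):
        if missing < 1e-9:
            break
        expected += missing * seg_time + rtt   # send remaining segments + wait for report
        missing *= p_loss                      # on average, a fraction p_loss is lost again
    return expected

# Example: 100 segments of 10 ms each, 10% loss, 8-minute one-way light time
print(expected_ltp_time(n_segments=100, p_loss=0.1, seg_time=0.01, rtt=2 * 8 * 60))
```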
22 pages, 1811 KiB  
Article
A Federated Learning-Based Approach for Improving Intrusion Detection in Industrial Internet of Things Networks
by Md Mamunur Rashid, Shahriar Usman Khan, Fariha Eusufzai, Md. Azharuddin Redwan, Saifur Rahman Sabuj and Mahmoud Elsharief
Network 2023, 3(1), 158-179; https://doi.org/10.3390/network3010008 - 30 Jan 2023
Cited by 23 | Viewed by 4176
Abstract
The Internet of Things (IoT) is a network of electronic devices connected to the Internet, often wirelessly. These devices generate large amounts of data containing information about users, which makes the whole system sensitive and prone to malicious attacks. The rapid growth in IoT-connected devices also means that centralized machine learning (ML) approaches, which require enormous amounts of data to be gathered in a central entity, are difficult to apply and can threaten data privacy. Owing to the growing distribution of data over numerous networks of connected devices, decentralized ML solutions are needed. In this paper, we propose a Federated Learning (FL) method for detecting unwanted intrusions in order to protect IoT networks. The method ensures privacy and security through federated training on local IoT device data: local IoT clients share only parameter updates with a central global server, which aggregates them and distributes an improved detection model. After each round of FL training, each IoT client receives the updated model from the global server and trains it on its local dataset, so devices keep their data private while still contributing to the overall model. To evaluate the efficiency of the proposed method, we conducted exhaustive experiments on a new dataset named Edge-IIoTset. The evaluation demonstrates the reliability and effectiveness of the proposed intrusion detection model, with the FL method achieving an accuracy of 92.49%, close to the 93.92% achieved by a conventional centralized ML model.
(This article belongs to the Special Issue Networking Technologies for Cyber-Physical Systems)
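The aggregation step the abstract describes follows the general federated averaging pattern: the server combines client updates weighted by how much local data each client holds. The toy round below sketches that idea; the client count, parameter vectors, and dataset sizes are made up for illustration and are not the authors' configuration.

```python
# Minimal federated-averaging sketch (illustrative, not the paper's code).
import numpy as np

def fed_avg(client_weights, client_sizes):
    """Weighted average of client model parameters (one flat vector each)."""
    sizes = np.asarray(client_sizes, dtype=float)
    stacked = np.stack(client_weights)
    return (stacked * (sizes / sizes.sum())[:, None]).sum(axis=0)

# Toy round: three IoT clients with differently sized local datasets
clients = [np.array([0.2, -1.0, 0.5]),
           np.array([0.1, -0.8, 0.7]),
           np.array([0.3, -1.2, 0.4])]
global_model = fed_avg(clients, client_sizes=[1000, 250, 4000])
print(global_model)   # broadcast back to the clients for the next round
```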
16 pages, 407 KiB  
Article
Formal Algebraic Model of an Edge Data Center with a Redundant Ring Topology
by Pedro Juan Roig, Salvador Alcaraz, Katja Gilly, Cristina Bernad and Carlos Juiz
Network 2023, 3(1), 142-157; https://doi.org/10.3390/network3010007 - 30 Jan 2023
Cited by 3 | Viewed by 1549
Abstract
Data center organization and optimization present the opportunity to design systems with specific characteristics. In this sense, combining artificial intelligence methodology with sustainability may lead to optimal topologies with enhanced features, while taking care of the environment by lowering carbon emissions. In this paper, a model for a field monitoring system is proposed, in which an edge data center topology in the form of a redundant ring is designed to join together nodes spread far apart while providing redundancy. Additionally, a formal algebraic model of the design is presented and verified.
(This article belongs to the Special Issue Emerging Networks and Systems for Edge Computing)
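The redundancy property of a ring is easy to demonstrate: removing any single node still leaves the remaining nodes connected. The snippet below checks that with a small graph; it is only an illustration of the topology's fault tolerance, not the paper's formal algebraic model, and the node count is an assumption.

```python
# Illustrative check: a ring of edge nodes stays connected after any single
# node failure (not the paper's algebraic verification).
import networkx as nx

ring = nx.cycle_graph(8)          # 8 edge data-center nodes joined in a ring
for failed in ring.nodes:
    survivors = ring.copy()
    survivors.remove_node(failed)
    assert nx.is_connected(survivors)
print("ring tolerates any single node failure")
```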
27 pages, 721 KiB  
Article
IoT and Blockchain Integration: Applications, Opportunities, and Challenges
by Naresh Adhikari and Mahalingam Ramkumar
Network 2023, 3(1), 115-141; https://doi.org/10.3390/network3010006 - 24 Jan 2023
Cited by 7 | Viewed by 3696
Abstract
During the recent decade, two variants of evolving computing networks have augmented the Internet: (i) the Internet of Things (IoT) and (ii) Blockchain Networks (BCNs). The IoT is a network of heterogeneous digital devices embedded with sensors and software for various automation and monitoring purposes. A blockchain network is a broadcast network of computing nodes provisioned for validating digital transactions and recording the "well-formed" transactions in a unique data store called a blockchain ledger. The power of a blockchain network is that (ideally) every node maintains its own copy of the ledger and takes part in validating transactions. Integrating IoT and BCNs brings promising applications in many areas, including education, health, finance, agriculture, industry, and the environment. However, the complex, dynamic, and heterogeneous computing and communication needs of IoT technologies, optionally integrated with blockchain technologies where mandated, pose several challenges to scaling, interoperability, and security. In recent years, numerous models integrating IoT with blockchain networks have been proposed, tested, and deployed for businesses, and numerous studies are underway to uncover further applications of IoT and blockchain technology. A close look, however, reveals that very few applications successfully cater to the security needs of an enterprise; needless to say, it makes little sense to integrate blockchain technology into an existing IoT deployment that already serves the enterprise's security needs. In this article, we investigate several frameworks for IoT operations, the applicability of integrating them with blockchain technology, and the security considerations that must be made during the deployment and operation of IoT and BCNs. Furthermore, we discuss the underlying security concerns and recommendations for blockchain-integrated IoT networks.
22 pages, 459 KiB  
Article
Edge Data Center Organization and Optimization by Using Cage Graphs
by Pedro Juan Roig, Salvador Alcaraz, Katja Gilly, Cristina Bernad and Carlos Juiz
Network 2023, 3(1), 93-114; https://doi.org/10.3390/network3010005 - 18 Jan 2023
Cited by 2 | Viewed by 1452
Abstract
Data center organization and optimization are receiving increasing attention due to the ever-growing deployments of edge and fog computing facilities. The main aim is to achieve a topology that processes traffic flows as fast as possible, which depends not only on AI-based computing resources but also on the network interconnection among physical hosts. In this paper, graph theory is introduced because of its relevance to network connectivity and stability, which leads to more resilient and sustainable deployments, and cage graphs in particular may have an advantage over other topologies. In this context, the Petersen graph, a cage graph, is studied as a convenient candidate for small data centers due to its small number of nodes and small network diameter, making it an interesting solution for edge and fog data centers.
(This article belongs to the Special Issue Advances in Edge and Cloud Computing)
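The properties that make the Petersen graph attractive here can be checked directly: it has 10 nodes, every node has exactly 3 links, and any two nodes are at most 2 hops apart. The short snippet below verifies this with networkx as an illustration; it is not taken from the paper.

```python
# Quick check of the Petersen graph properties highlighted in the abstract.
import networkx as nx

g = nx.petersen_graph()
print(g.number_of_nodes())               # 10 nodes
print(set(dict(g.degree()).values()))    # {3}: 3-regular, so 3 links per node
print(nx.diameter(g))                    # 2: any two nodes are at most 2 hops apart
```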
2 pages, 146 KiB  
Editorial
Acknowledgment to the Reviewers of Network in 2022
by Network Editorial Office
Network 2023, 3(1), 91-92; https://doi.org/10.3390/network3010004 - 17 Jan 2023
Viewed by 996
Abstract
High-quality academic publishing is built on rigorous peer review [...]
52 pages, 590 KiB  
Review
On Attacking Future 5G Networks with Adversarial Examples: Survey
by Mikhail Zolotukhin, Di Zhang, Timo Hämäläinen and Parsa Miraghaei
Network 2023, 3(1), 39-90; https://doi.org/10.3390/network3010003 - 30 Dec 2022
Cited by 2 | Viewed by 3451
Abstract
The introduction of 5G technology, along with the exponential growth in connected devices, is expected to pose a challenge for efficient and reliable network resource allocation. Network providers are now required to dynamically create and deploy multiple services that operate under various requirements in different vertical sectors while running on top of the same physical infrastructure. Recent progress in artificial intelligence and machine learning is theorized to be a potential answer to these emerging resource allocation challenges. It is therefore expected that future-generation mobile networks will depend heavily on their artificial intelligence components, which may make those components a high-value attack target. In particular, a smart adversary may exploit vulnerabilities in the state-of-the-art machine learning models deployed in a 5G system to mount an attack. This study focuses on the analysis of adversarial example generation attacks against machine-learning-based frameworks that may be present in next-generation networks. First, the various AI/ML algorithms and the data used for their training and evaluation in mobile networks are discussed. Next, multiple AI/ML applications found in recent scientific papers devoted to 5G are reviewed. After that, existing attack algorithms based on adversarial example generation are reviewed, and frameworks that employ these algorithms for fuzzing state-of-the-art AI/ML models are summarised. Finally, adversarial example generation attacks against several of the described AI/ML frameworks are presented.
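One of the simplest adversarial example generation techniques covered by surveys of this kind is the fast gradient sign method (FGSM): perturb the input in the direction of the loss gradient's sign within a small budget. The sketch below applies it to a toy logistic-regression "traffic classifier"; the model weights, feature vector, and epsilon are illustrative assumptions and are unrelated to the frameworks the paper evaluates.

```python
# FGSM on a toy logistic-regression classifier (illustrative only).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.array([0.8, -1.5, 0.3])      # toy classifier weights
b = 0.1
x = np.array([1.0, 0.5, -0.2])      # a benign feature vector
y = 1.0                             # its true label

# Gradient of the cross-entropy loss with respect to the input
grad_x = (sigmoid(w @ x + b) - y) * w

eps = 0.1
x_adv = x + eps * np.sign(grad_x)   # FGSM step within an L-infinity ball

print("clean score:      ", sigmoid(w @ x + b))
print("adversarial score:", sigmoid(w @ x_adv + b))   # pushed away from the true label
```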
24 pages, 1495 KiB  
Article
Towards Software-Defined Delay Tolerant Networks
by Dominick Ta, Stephanie Booth and Rachel Dudukovich
Network 2023, 3(1), 15-38; https://doi.org/10.3390/network3010002 - 28 Dec 2022
Cited by 3 | Viewed by 2763
Abstract
This paper proposes a Software-Defined Delay Tolerant Networking (SDDTN) architecture as a solution for managing large Delay Tolerant Networking (DTN) networks in a scalable manner. The work is motivated by the planned deployments of large DTN networks on the Moon and beyond in deep space. Current space communication involves relatively few nodes and is heavily deterministic and scheduled, which will not be true in the future, and it is unclear how these large space DTN networks, consisting of inherently intermittent links, will be able to adapt to dynamically changing network conditions. In addition to the proposed SDDTN architecture, this paper explores data plane programming and the Programming Protocol-Independent Packet Processors (P4) language as a possible method of implementing the architecture, enumerates the challenges of this approach, and presents intermediate results.
(This article belongs to the Special Issue Advanced Technologies in Network and Service Management)
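Data plane programming of the kind the abstract mentions revolves around match-action tables that a controller installs and the forwarding plane consults per packet or bundle. The sketch below mimics that idea in plain Python rather than P4, with a table keyed on bundle destination and actions bounded by contact windows; the endpoint IDs, field names, and controller behaviour are hypothetical and not the paper's design.

```python
# Conceptual match-action table for bundle forwarding (not P4, illustrative only).
from dataclasses import dataclass

@dataclass
class Action:
    next_hop: str
    start: float   # contact window start (s)
    end: float     # contact window end (s)

# Entries an SDDTN controller might install
forwarding_table = {
    "ipn:21.0": Action(next_hop="relay-orbiter", start=100.0, end=400.0),
    "ipn:34.0": Action(next_hop="ground-station-1", start=0.0, end=900.0),
}

def forward(dest_eid, now):
    act = forwarding_table.get(dest_eid)
    if act and act.start <= now <= act.end:
        return act.next_hop
    return "store"   # no usable contact: store the bundle and wait

print(forward("ipn:21.0", now=250.0))   # -> relay-orbiter
print(forward("ipn:21.0", now=950.0))   # -> store
```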
14 pages, 1441 KiB  
Article
A Performance Evaluation of In-Memory Databases Operations in Session Initiation Protocol
by Ali Al-Allawee, Pascal Lorenz, Abdelhafid Abouaissa and Mosleh Abualhaj
Network 2023, 3(1), 1-14; https://doi.org/10.3390/network3010001 - 28 Dec 2022
Cited by 1 | Viewed by 1599
Abstract
Real-time communication has seen a dramatic increase in daily usage in recent years. In this domain, the Session Initiation Protocol (SIP) is a well-known protocol that provides trusted voice and video services to end users along with efficiency, scalability, and interoperability. Like other Internet technologies, SIP stores its related data in databases with a predefined data structure. SIP deployments have recently adopted in-memory databases as cache systems to ensure fast database operations during real-time communication. Meanwhile, several in-memory databases are available in industry, implemented with different structures (e.g., query types, data structures, persistency, and key/value sizes), yet there is little guidance on how to select a proper in-memory database for SIP communications. This paper identifies the in-memory databases best suited to SIP servers by evaluating three candidates: Memcached, Redis, and the Local (OpenSIPS built-in) database. The evaluation measures the experimental performance impact of in-memory operations (store and fetch) on the SIP server under heavy load traffic across different scenarios. The results show that the Local database consumed less memory than Memcached and Redis for read and write operations, while, when persistency was considered, Memcached was the preferable choice with a throughput of 25.20 KB/s and a call–response time of 0.763 s.
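To make the store/fetch workload concrete, the sketch below times simple key/value operations against a local Redis instance with the redis-py client, similar in spirit to the cache lookups a SIP server performs per call. It assumes a Redis server running on localhost:6379; the key names, value, and operation count are illustrative and this is not the paper's test harness.

```python
# Minimal store/fetch micro-benchmark against Redis (illustrative, assumed setup).
import time
import redis   # pip install redis; assumes a Redis server on localhost:6379

r = redis.Redis(host="localhost", port=6379)

n = 10_000
start = time.perf_counter()
for i in range(n):
    r.set(f"sip:location:{i}", "sip:alice@192.0.2.10:5060")   # store a binding
store_time = time.perf_counter() - start

start = time.perf_counter()
for i in range(n):
    r.get(f"sip:location:{i}")                                # fetch it back
fetch_time = time.perf_counter() - start

print(f"store: {n / store_time:.0f} ops/s, fetch: {n / fetch_time:.0f} ops/s")
```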