Advanced Technologies in Network and Service Management

A special issue of Network (ISSN 2673-8732).

Deadline for manuscript submissions: closed (31 March 2024) | Viewed by 16592

Special Issue Editors


Guest Editor
Computer Science & Software Engineering Department, Gina Cody School of Engineering, Concordia University, Montreal, QC, Canada
Interests: M2M communication; wireless networks; optical networks; machine learning

Guest Editor
Electrical Engineering Department, University at Buffalo, Buffalo, NY 14260, USA
Interests: Internet of Things; wireless communications; cellular networks (4G/5G); network performance analysis and simulation; communications for the smart grid; optimization; machine learning

Special Issue Information

Dear Colleagues,

Networks and the services they provide keep increasing in speed, size, and complexity, creating ongoing challenges for their management and control. As a result, emerging tools and advanced technologies are being solicited to provide key solutions for reliable and effective network and service management.

Due to the diversity of networks and services, many tools and technologies, ranging from enabling infrastructures to innovative protocols and efficient algorithms, are necessary for scalable and streamlined management. Furthermore, a reliable and cost-effective management solution combines different technologies and exploits new avenues in network and service management. Such a solution is the target of infrastructure and service providers as well as the research community.

This Special Issue invites researchers to contribute original papers in areas including (but not limited to) the following:

  • Machine learning and artificial intelligence for network and service management;
  • Innovative architectures and protocols for network and service management;
  • Innovative technologies for network and service management;
  • Cloud/fog/edge computing;
  • Prototype implementation and testbed experimentation;
  • Software-defined networks (SDN);
  • Network function virtualization (NFV);
  • Network orchestration;
  • Network monitoring and measurements;
  • Data mining and (big) data analysis;
  • Fault management;
  • Network security;
  • Management based on the quality of experience;
  • Energy-aware management;
  • Simulations for network and service management;
  • Analytical model for network and service management;
  • Network optimization.

Dr. Hakim Mellah
Dr. Filippo Malandra
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Network is an international peer-reviewed open access quarterly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1000 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • network management
  • service management
  • software-defined networks
  • network optimization
  • network orchestration
  • network function virtualization

Published Papers (6 papers)


Research

23 pages, 696 KiB  
Article
A Hierarchical Security Event Correlation Model for Real-Time Threat Detection and Response
by Herbert Maosa, Karim Ouazzane and Mohamed Chahine Ghanem
Network 2024, 4(1), 68-90; https://doi.org/10.3390/network4010004 - 11 Feb 2024
Viewed by 610
Abstract
An intrusion detection system (IDS) performs post-compromise detection of security breaches whenever preventive measures such as firewalls do not avert an attack. However, these systems raise a vast number of alerts that must be analyzed and triaged by security analysts, a process that is largely manual, tedious, and time-consuming. Alert correlation is a technique that reduces the number of intrusion alerts by aggregating alerts that are similar in some way. However, correlation is typically performed outside the IDS through third-party systems and tools, after the IDS has already generated a high volume of alerts, and these third-party systems add to the complexity of security operations. In this paper, we build on the highly researched area of alert and event correlation by developing a novel hierarchical event correlation model that promises to reduce the number of alerts issued by an intrusion detection system. This is achieved by correlating events before the IDS classifies them. The proposed model combines the best features of similarity-based and graph-based correlation techniques to deliver an ensemble capability not possible with either approach separately. Further, we propose a correlation process for events rather than alerts, as in the current state of the art, and we develop our own correlation and clustering algorithm tailored to network event data. The model is implemented as a proof of concept, with experiments run on standard intrusion detection datasets. The correlation achieves an 87% data reduction through aggregation, producing nearly 21,000 clusters in about 30 s.
(This article belongs to the Special Issue Advanced Technologies in Network and Service Management)
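The abstract above does not specify the correlation algorithm in detail; as a rough illustration of the similarity-based half of such an approach, the following minimal sketch clusters network events by attribute overlap. The event schema (`src_ip`, `dst_ip`, `dst_port`, `proto`), the similarity measure, and the threshold are all assumptions for illustration, not the authors' algorithm.

```python
def similarity(e1, e2):
    """Fraction of shared attributes between two events (hypothetical schema)."""
    keys = ("src_ip", "dst_ip", "dst_port", "proto")
    return sum(e1[k] == e2[k] for k in keys) / len(keys)

def correlate(events, threshold=0.75):
    """Greedy single-pass clustering: an event joins the first cluster whose
    representative (first member) is at least `threshold`-similar to it."""
    clusters = []
    for e in events:
        for c in clusters:
            if similarity(c[0], e) >= threshold:
                c.append(e)
                break
        else:  # no cluster matched: start a new one
            clusters.append([e])
    return clusters

events = [
    {"src_ip": "10.0.0.1", "dst_ip": "10.0.0.9", "dst_port": 22, "proto": "tcp"},
    {"src_ip": "10.0.0.1", "dst_ip": "10.0.0.9", "dst_port": 22, "proto": "tcp"},
    {"src_ip": "10.0.0.2", "dst_ip": "10.0.0.9", "dst_port": 53, "proto": "udp"},
]
clusters = correlate(events)  # the two identical SSH events aggregate
```

Aggregating similar events into clusters in this way is what yields the kind of data reduction the abstract reports, at the cost of choosing a suitable similarity measure and threshold.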

25 pages, 1100 KiB  
Article
Optimized MLP-CNN Model to Enhance Detecting DDoS Attacks in SDN Environment
by Mohamed Ali Setitra, Mingyu Fan, Bless Lord Y. Agbley and Zine El Abidine Bensalem
Network 2023, 3(4), 538-562; https://doi.org/10.3390/network3040024 - 01 Dec 2023
Cited by 3 | Viewed by 1309
Abstract
In the contemporary landscape, Distributed Denial of Service (DDoS) attacks have emerged as an exceedingly pernicious threat, particularly in the context of network management centered around technologies like Software-Defined Networking (SDN). With the increasing intricacy and sophistication of DDoS attacks, the need for effective countermeasures has led to the adoption of Machine Learning (ML) techniques. Nevertheless, despite substantial advancements in this field, challenges persist, adversely affecting the accuracy of ML-based DDoS-detection systems. This article introduces a model designed to detect DDoS attacks. This model leverages a combination of Multilayer Perceptron (MLP) and Convolutional Neural Network (CNN) to enhance the performance of ML-based DDoS-detection systems within SDN environments. We propose utilizing the SHapley Additive exPlanations (SHAP) feature-selection technique and employing a Bayesian optimizer for hyperparameter tuning to optimize our model. To further solidify the relevance of our approach within SDN environments, we evaluate our model by using an open-source SDN dataset known as InSDN. Furthermore, we apply our model to the CICDDoS-2019 dataset. Our experimental results highlight a remarkable overall accuracy of 99.95% with CICDDoS-2019 and an impressive 99.98% accuracy with the InSDN dataset. These outcomes underscore the effectiveness of our proposed DDoS-detection model within SDN environments compared to existing techniques.
(This article belongs to the Special Issue Advanced Technologies in Network and Service Management)
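The abstract above pairs SHAP feature selection with a Bayesian optimizer for hyperparameter tuning. As a hedged sketch of the tuning loop only, the following substitutes plain random search for the Bayesian optimizer and a toy scoring function for actual model training; `validation_accuracy`, the search ranges, and the hyperparameter names are hypothetical placeholders, not the paper's pipeline.

```python
import random

def validation_accuracy(lr, hidden_units):
    """Hypothetical stand-in for training the MLP-CNN and returning its
    validation accuracy; a real pipeline would fit and evaluate here."""
    return 1.0 - abs(lr - 0.001) * 100 - abs(hidden_units - 64) / 1000

def tune(n_trials=50, seed=0):
    """Random search over the space; a Bayesian optimizer would instead
    pick each candidate adaptively based on previous results."""
    rng = random.Random(seed)
    best_params, best_score = None, float("-inf")
    for _ in range(n_trials):
        lr = 10 ** rng.uniform(-4, -2)           # learning rate, log-uniform
        hidden = rng.choice([32, 64, 128, 256])  # MLP hidden width
        score = validation_accuracy(lr, hidden)
        if score > best_score:
            best_params, best_score = (lr, hidden), score
    return best_params, best_score

params, score = tune()
```

The design difference is that random search evaluates independently drawn candidates, whereas Bayesian optimization models the score surface to spend trials where improvement is most likely.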

24 pages, 1495 KiB  
Article
Towards Software-Defined Delay Tolerant Networks
by Dominick Ta, Stephanie Booth and Rachel Dudukovich
Network 2023, 3(1), 15-38; https://doi.org/10.3390/network3010002 - 28 Dec 2022
Cited by 3 | Viewed by 2877
Abstract
This paper proposes a Software-Defined Delay Tolerant Networking (SDDTN) architecture as a solution to managing large Delay Tolerant Networking (DTN) networks in a scalable manner. This work is motivated by the planned deployments of large DTN networks on the Moon and beyond in deep space. Current space communication involves relatively few nodes and is heavily deterministic and scheduled, which will not be true in the future. It is unclear how these large space DTN networks, consisting of inherently intermittent links, will be able to adapt to dynamically changing network conditions. In addition to the proposed SDDTN architecture, this paper explores data plane programming and the Programming Protocol-Independent Packet Processors (P4) language as a possible method of implementing this SDDTN architecture, enumerates the challenges of this approach, and presents intermediate results.
(This article belongs to the Special Issue Advanced Technologies in Network and Service Management)

27 pages, 3774 KiB  
Article
Cloud Workload and Data Center Analytical Modeling and Optimization Using Deep Machine Learning
by Tariq Daradkeh and Anjali Agarwal
Network 2022, 2(4), 643-669; https://doi.org/10.3390/network2040037 - 18 Nov 2022
Cited by 1 | Viewed by 2501
Abstract
Predicting workload demands can help to achieve elastic scaling by optimizing data center configuration, such that increasing or decreasing data center resources yields an accurate and efficient configuration. Predicting workload and optimizing data center resource configuration are two challenging tasks. In this work, we investigate workload and data center modeling to predict workload and data center operation, using the models as an experimental environment to evaluate optimized elastic scaling against real data center traces. Three machine learning methods are used and compared with an analytical approach to model the workload and data center actions. Our approach uses the analytical model as a predictor to evaluate and test the optimization solution set and to find the best configuration and scaling actions before applying them to the real data center. The results show that machine learning combined with an analytical approach can find the best predictions of workload demands and evaluate the scaling and resource capacity required to be provisioned. Machine learning is used to find the optimal configuration and to solve for the elastic scaling boundary values; it aids optimization by reducing elastic scaling violations and configuration time and by categorizing resource configurations with respect to scaling capacity values. The results show that configuration cost and time are minimized by the best provisioning actions.
(This article belongs to the Special Issue Advanced Technologies in Network and Service Management)
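As a minimal illustration of the predict-then-scale loop the abstract describes, the sketch below forecasts the next workload sample with a moving average (a simple stand-in for the paper's learned and analytical predictors) and maps the forecast to a scaling action; the capacity value and utilization thresholds are illustrative assumptions.

```python
def predict_next(history, window=3):
    """Moving-average forecast of the next workload sample; a simple
    stand-in for the paper's learned and analytical predictors."""
    recent = history[-window:]
    return sum(recent) / len(recent)

def scaling_action(predicted, capacity, low=0.3, high=0.8):
    """Map predicted utilization to an elastic-scaling decision."""
    util = predicted / capacity
    if util > high:
        return "scale_out"
    if util < low:
        return "scale_in"
    return "hold"

history = [120, 150, 180, 240, 300]  # requests/s (illustrative trace)
pred = predict_next(history)         # (180 + 240 + 300) / 3
action = scaling_action(pred, capacity=280)
```

Evaluating candidate thresholds against a model of the data center, rather than against the live system, is what lets scaling decisions be tested before they are applied.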

15 pages, 1175 KiB  
Article
Detection of Malicious Network Flows with Low Preprocessing Overhead
by Garett Fox and Rajendra V. Boppana
Network 2022, 2(4), 628-642; https://doi.org/10.3390/network2040036 - 04 Nov 2022
Cited by 4 | Viewed by 3013
Abstract
Machine learning (ML) is frequently used to identify malicious traffic flows on a network. However, the requirement of complex preprocessing of network data to extract features or attributes of interest before applying the ML models restricts their use to offline analysis of previously captured network traffic to identify attacks that have already occurred. This paper applies machine learning analysis for network security with low preprocessing overhead. Raw network data are converted directly into bitmap files and processed through a Two-Dimensional Convolutional Neural Network (2D-CNN) model to identify malicious traffic. The model has high accuracy in detecting various malicious traffic flows, even zero-day attacks, based on testing with three open-source network traffic datasets. The overhead of preprocessing the network data before applying the 2D-CNN model is very low, making it suitable for on-the-fly network traffic analysis for malicious traffic flows.
(This article belongs to the Special Issue Advanced Technologies in Network and Service Management)
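The low-overhead preprocessing step described above, converting raw network data directly into bitmaps, could look roughly like the following sketch; the 32×32 image size and the zero-padding policy are assumptions for illustration, not the paper's exact parameters.

```python
def flow_to_bitmap(raw_bytes, width=32):
    """Zero-pad (or truncate) raw packet bytes to width*width values and
    reshape them into a square grayscale bitmap for a 2D-CNN input."""
    size = width * width
    buf = raw_bytes[:size].ljust(size, b"\x00")  # pad short flows with zeros
    return [list(buf[r * width:(r + 1) * width]) for r in range(width)]

# 40 bytes of repeated IPv4-header-like bytes, padded out to a 32x32 image
img = flow_to_bitmap(b"\x45\x00\x00\x3c" * 10)
```

Because this mapping needs no protocol parsing or feature extraction, it keeps per-flow preprocessing cost near the cost of a memory copy, which is what makes on-the-fly analysis plausible.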

23 pages, 9511 KiB  
Article
An Efficient Information Retrieval System Using Evolutionary Algorithms
by Doaa N. Mhawi, Haider W. Oleiwi, Nagham H. Saeed and Heba L. Al-Taie
Network 2022, 2(4), 583-605; https://doi.org/10.3390/network2040034 - 28 Oct 2022
Cited by 5 | Viewed by 4769
Abstract
Information retrieval (IR) is a critical technique for web search as the number of web pages keeps growing. However, web users face major problems: retrieval of documents unrelated to the user query (i.e., low precision), a lack of relevant document retrieval (i.e., low recall), unacceptable retrieval times, and excessive storage requirements. This paper proposes a novel advanced document-indexing method (ADIM) with an integrated evolutionary algorithm. The proposed IR system comprises three main stages. The first stage is preprocessing, which consists of two steps, dataset document reading and the ADIM itself, and produces a set of two index tables. The second stage is the query-searching algorithm, which produces a set of keywords and retrieves the related documents. The third stage is the searching algorithm, a modified genetic algorithm (MGA) with new fitness functions using a cross-point operator and dynamic-length chromosomes, combined with the adaptive function of the cultural algorithm (CA). The proposed system ranks the documents most relevant to the user query by adding a simple parameter (∝) to the fitness function to guarantee convergence, integrating the MGA with the CA to achieve the best accuracy. The system was simulated on the free WebKb dataset, which contains worldwide web pages of computer science departments at multiple universities and is composed of 8280 semi-structured HTML documents. Experimental results and evaluation measurements showed 100% average precision and 98.5236% average recall for 50 test queries, with an average response time of 00.46.74.78 milliseconds and 18.8 MB of memory space for document indexing. The proposed approach outperforms comparable methods in the literature.
(This article belongs to the Special Issue Advanced Technologies in Network and Service Management)
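As a loose illustration of ranking documents with a GA-style fitness function, the sketch below scores documents by query-term overlap and shows one possible placement of a small convergence parameter (here called `alpha`, standing in for the paper's ∝). The fitness form, the example documents, and the parameter's role are illustrative assumptions, not the paper's MGA/CA design.

```python
def fitness(doc_terms, query_terms, alpha=0.01, generation=0):
    """Toy fitness: query-term overlap plus a small alpha-scaled term;
    the paper's actual MGA/CA fitness functions are more involved."""
    overlap = len(set(doc_terms) & set(query_terms))
    score = overlap / max(len(query_terms), 1)
    return score + alpha * generation  # alpha nudges scores across generations

docs = {
    "d1": ["information", "retrieval", "index"],
    "d2": ["genetic", "algorithm", "retrieval"],
    "d3": ["network", "routing"],
}
query = ["information", "retrieval"]
ranked = sorted(docs, key=lambda d: fitness(docs[d], query), reverse=True)
```

In a full GA, this fitness would drive selection and crossover over candidate result sets rather than being applied once per document as here.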
