Article

ABLA: Application-Based Load-Balanced Approach for Adaptive Mapping of Datacenter Networks

by Ahmad Nahar Quttoum, Ayoub Alsarhan, Abidalrahman Moh’d, Osama Alshareet, Suhieb Nawaf, Fawaz Khasawneh, Mohammad Aljaidi, Mohammed Alshammari and Anjali Awasthi
1 Department of Computer Engineering, Faculty of Engineering, The Hashemite University, Zarqa 13133, Jordan
2 Department of Information Technology, Faculty of Prince Al-Hussein Bin AbdAllah II for Information Technology, The Hashemite University, Zarqa 13133, Jordan
3 Department of Mathematics and Computer Science, Eastern Illinois University, Charleston, IL 61920, USA
4 Concordia Institute for Information Systems Engineering, Concordia University, Montreal, QC H3G 1S6, Canada
5 School of Computing and Informatics, Al Hussein Technical University, Amman 11831, Jordan
6 Department of Computer Science, Faculty of Information Technology, Zarqa University, Zarqa 13110, Jordan
7 Department of Computer Science, Faculty of Computing and Information Technology, Northern Border University, Rafha 91431, Saudi Arabia
* Authors to whom correspondence should be addressed.
Electronics 2023, 12(17), 3689; https://doi.org/10.3390/electronics12173689
Submission received: 30 July 2023 / Revised: 21 August 2023 / Accepted: 28 August 2023 / Published: 31 August 2023
(This article belongs to the Special Issue State-of-the-Art Technology in Cloud and Network Infrastructure)

Abstract

Cloud-based services are growing more rapidly than ever, and so is the management challenge on their providers’ side. Cloud-based datacenter networks are built with nodes of huge processing power, connected by high bandwidth capacities to carry their interior traffic requirements. However, such cloud networks still have limits that are imposed not necessarily by their physical components, but by the resource management schemes being deployed. Traditionally, for an institution to provide services, it needs to have its own datacenter facility that interconnects its servers through a topology that matches its desired administrative policies and scaling objectives. With the theme of cloud-based IaaS, such datacenter topologies can be created virtually over the cloud. Nowadays, a significant share of the institutions that provide our daily services host their infrastructures on cloud platforms. Therefore, the resources of such cloud networks need to be efficiently utilized in order to keep their performance and hosting prices competitive. A typical datacenter network mainly consists of server nodes and network links. Besides the resources of the server nodes, the network bandwidth resources are considered a crucial determinant of the whole datacenter’s performance. Indeed, a server without sufficient bandwidth capacity is almost useless. Proposals in the literature present schemes for resource utilization on only one side of the problem at a time: the nodes or the links. Working in isolation can never deliver efficient mapping solutions. ABLA is an Application-Based, Load-balanced Approach for adaptive mapping. ABLA’s methodology tackles both sides of the datacenter, the nodes and the links. It starts by (1) breaking down the node resource requirements of the applications requested to be hosted over the virtual server machines, and (2) reading the topological connectivity and bandwidth requirements of each virtual node toward all other nodes in the virtual datacenter topology. Compared to other models in the literature, the simulation results show that our proposed ABLA model provides complete mapping services via load-balanced hosting networks. This allows for competitive hosting prices, with higher performance and service satisfaction rates.

1. Introduction and Problem Statement

The adoption of e-service formats in almost all aspects of life is growing every day. This option has been adopted by most of the institutions that provide our daily services, from education, marketing, and banking to public governmental institutions. However, successful e-service adoption depends heavily on the underlying computer network infrastructure. Service servers do not work in isolation; they are connected to their clients, and to other servers when the service so requires. In this context, for a datacenter network to work properly, healthy network connections are required both internally and externally [1]. The building and use of such hosting datacenter networks need to be carefully carried out in order to deliver the anticipated success for those demanding e-service platforms. Indeed, the hierarchically structured datacenter network topologies necessitate special resource management techniques that may differ from those employed in other places on the Internet.
Nowadays, a significant share of the institutions that provide our daily services host their infrastructures on cloud platforms. Due to the varying resource requirements of the datacenter servers, along with the aggregate bandwidth limits imposed by their structures, physical datacenter networks usually run at poor utilization rates [1]. Accordingly, for clients, the option of cloud-based datacenter networks has proved to be a cost-effective and reliable alternative to privately owned networks. On the other hand, for cloud providers, the challenge of resource management in such cloud-based networks may become greater. Indeed, resource fragmentation occurs among the different hosted virtual datacenter networks, and the dynamic arrival of hosting requests makes the resource management process even more difficult, not to mention the network resources required to interconnect the nodes of each hosted datacenter. Placement models may help in choosing optimal mapping places (i.e., hosting server machines) to host the required Virtual server Machines (VMs), with satisfactory utilization rates for the hosting datacenter nodes. However, this may be further constrained by the goals of load-balancing and bandwidth utilization. Node hosting involves a set of resources required for the node to run, such as processing capacity, main memory, disk space, and input/output, among others. Working on an optimization model that considers the aforementioned set of node resources would impose high levels of complexity, with too many restrictions and constraints to satisfy. This is at the nodes’ level only; how complex would it become when considering the links that interconnect the datacenter nodes too?

1.1. Problem

In this work, we tackle the problem of mapping virtual datacenter networks over cloud-based datacenters run by service providers. Here, the mapping process of a virtual datacenter includes both (1) the nodes (i.e., server machines) and (2) the network links (i.e., bandwidth resources) that interconnect the server nodes together. Due to their complexity, mapping models in the literature have considered them separately [2], ending with suboptimal mapping schemes. To reduce this complexity, our model considers, as part of the mapping process, the applications required to run over the requested virtual datacenter network machines. By doing so, we can reduce the matrix of resources mentioned before to one or two resource types per application. We also choose to consider the network connectivity among the hosting nodes. This includes the limited link bandwidth resources that interconnect the server nodes. Therefore, our ABLA model is an Application-Based, Load-balanced Approach through which a mapping decision for a couple of nodes guarantees the availability of the link bandwidth resources necessary to interconnect the chosen nodes. Hence, based on such ties between the datacenter resources of nodes and links, we developed our model with reference to the specific attributes that are common in many datacenter network structures.

1.2. Datacenter Requirements of Nodes and Links

Different structures of datacenter networks have been proposed in the literature and, to some extent, deployed in real-life sites. Some structures are switch-centric, and others are server-centric [3]. From a network management perspective, performance optimization of such networks is a key point for the success of the services provided by the company/organization that owns the datacenter. The resources of such physically built networks are costly, and huge investments in expensive assets may be made while real utilization rates remain modest. Recently, the theme of cloud-service datacenter networks has become a dominant mainstream platform that provides the service of hosting others’ datacenter networks in a virtual format, a cloud service model known as Infrastructure as a Service (IaaS). Well-known companies like Microsoft and Amazon are examples of such service providers. For such cloud-based platforms to be efficient, and to provide the required services at prices that compete with those of traditional datacenter networks, the process of resource (i.e., node and link) mapping needs to be well engineered. This requires the cloud-service providers to invest in balancing the physical nodes’ requirements with the communication resources of the network links among the nodes.

1.2.1. Node Requirements

Varying sorts of services may require varying types of node resources. Indeed, some services may need heavy CPU processing capacities to run while others may not; they may only need disk space or input/output resources with little processing power [4]. Scientific simulations and forecasting models are examples of applications that need high processing capacities but low disk space and input/output requirements. On the other hand, backup services like those provided by Google and Apple can be considered examples of applications that require high disk space and input/output resources but minimal processing capacities. This motivates an adaptive mapping scheme that considers the different resource requirements of the different applications running on the virtual datacenter VM nodes to be hosted. In the literature, traditional VM mapping algorithms are mostly greedy heuristics. These algorithms make their mapping decisions based on the availability of a single resource at the hosting physical machine [5]. Such algorithms are simple and accordingly attractive; however, when mapping is based on multiple types of resources, they become complex [6]. Existing VM mapping proposals have mostly tackled the objectives of reducing power consumption, maximizing revenue and profit rates, satisfying migration requirements, and attaining the contracted service level agreements. In such combinatorial problems, the need to output an exact solution at the scale of the whole network makes them hard to solve. It has been hoped that this would change with the help of Quantum computing networks [7,8]. However, for now, to lessen such complexity, several approximation approaches may be followed. The authors of [9] proposed solving the placement problem of virtual network function (vNF) service chains through a distributed Markov approximation model with a matching algorithm, such that energy and networking costs are reduced. In the same context, the authors of [10] proposed a greedy mapping algorithm for service embeddings, and in [11], a model that considers both placement and scheduling is proposed to shorten the run time of the hosted applications.
Unlike traditional mapping schemes that consider node resource availability in general, our model’s methodology considers the application type first and accordingly defines the specific node resource requirements as an input to the mapping process.

1.2.2. Link Requirements

Nodes in datacenter networks do not work in isolation; instead, they need to be interconnected by sufficient amounts of bandwidth resources to satisfy their running applications and connectivity requirements [12]. Mostly, datacenter networks work with scaling objectives that are met by relying on path redundancy besides other structural attributes. However, path selection in traditional mapping models does not consider choosing the nodes that better satisfy a balanced mapping solution [4]. In this context, as background from the related literature on network traffic and management proposals for link bandwidth capacities, the authors of [13] proposed a traffic-differentiated load balancing model that allows for adaptive traffic switching. In their proposal, the network switches are configured to adapt the switching granularity of the incoming flows according to different classes of service. Therefore, they have three classes of service for three different types of traffic, with requirements that vary among latency, throughput, and best-effort delivery. Moreover, in terms of traffic load balancing in datacenter networks, the authors of [14,15] proposed addressing the datacenter flows at the packet level: they divide the network flows into packets and route the packets throughout the datacenter switches according to their queue lengths. Such a packet-switching approach can somewhat balance the traffic loads over the network links; however, it results in out-of-order delivery, and the loads at the datacenter servers are still not considered. Similarly, but at the flow level, the authors of [16,17] proposed a traffic routing scheme that routes the network flows over circuit routes. Compared to [14,15], this may allow for in-order delivery, but it is still an inflexible routing scheme with no guarantees of satisfying time and traffic-balancing objectives [18]. The authors of [19] presented a traffic admission scheme that helps in meeting service-level objectives. In their work, they proposed a classification model that classifies the datacenter network traffic according to the remote procedure calls (RPCs) and their service-level objectives (SLOs). In this way, they give priority to time- and throughput-sensitive flows without letting best-effort traffic compete with them.
The process of choosing these two components of the optimal mapping solution is usually carried out separately. In ABLA, we work toward a well-planned mapping scheme that finds the best possible maps, covering the network server nodes as well as the links that interconnect them, while keeping the load on the whole cloud network as balanced as possible.

1.3. Paper Contribution

Through this work, we are presenting ABLA, a proposed model for an adaptive mapping of a virtual datacenter network over the cloud infrastructures. Briefly, this should help the cloud service operators with a solution that allows for:
  • Adaptive mapping: where it maps the different allocation requests of the virtual datacenter networks according to the different applications required by each machine of the requested datacenter. Therefore, the mapping process takes into account the resource requirements that vary from one application to another.
  • Complete solution: in the sense that node mapping decisions are not carried out in isolation; bandwidth availability over the network links is examined as well, and accordingly, the mapping decisions provide a placement map that chooses both the network nodes and the links that interconnect them.
  • Load-balanced mapping: as among the candidate nodes to host a particular application, the model chooses those nodes that satisfy the connection requirements and allow for greater resource availability in order to keep the end-to-end loads in the hosting cloud datacenter networks balanced.

1.4. Paper Organization

The rest of this paper is organized as follows: Section 2 presents related work from the literature. We discuss our proposed model in Section 3, its definitions and mathematical modelling in Section 3.1 and Section 3.2. Section 4 presents the benchmark First Available First Map (FAFM) model, and then the simulation results are presented and discussed in Section 5. Finally, Section 6 concludes this paper.

2. Related Work

Several studies and proposed models for a diversity of applications are found in the literature. Most of them help in classifying the requirements of each sort of application, which later helps in scheduling them for better performance and resource utilization. Some use algorithms like k-means clustering [20], while others follow the required QoS classes to create a probability function for the expected resource preferences [21]. In [22], the authors chose to consider the tasks’ completion times in their proposed model to classify the anticipated resource requirements.
The work of [23] presents a model that plans the task assignments and any capacity modification plans by reading the workloads on the hosts’ resources, and accordingly classifies the appropriate task mapping decisions. As in [20], they used k-means clustering, which is a rather complex choice for such combinatorial problems. In [24], the authors proposed an interesting scheduling method that dynamically reads the loads on the cloud resources and accordingly builds separate queues, one for each type of resource (i.e., CPU, memory, and I/O). In the same way, their model classifies the demands based on their resource requirements into similar queues. After that, the model interlaces the queues together in a way that considers load-balancing. Another work that presents a task classification model is found in [2], in which the authors use the CloudSim platform to simulate their proposal of VM classification. Both [2] and [24] show interesting approaches for VM placement; however, they consider the nodes’ resources only, without any attention to the links’ bandwidth resources, which are crucial to maintain efficient connectivity among the cloud datacenter network nodes. Indeed, considering load-balancing over the cloud nodes only does not necessarily deliver optimal placement maps, as uneven loads over the network links may impose performance bottlenecks that negatively affect the whole datacenter.
The authors in [25] proposed a resource allocation model which, besides load-balancing, claims to provide for cloud profit maximization and SLA commitments [26,27]. In this context, we may expect profit maximization to somehow contradict the goals of load-balancing. Indeed, it may be more profitable to drive the processing loads and the various nodes’ requirements toward certain zones of the datacenter and shut down the nodes in the other zones. This may allow for savings in power, cooling, and administration overheads, which results in higher profit in the end. However, would this guarantee the optimal performance and the desired client satisfaction rates? The work in [28] presents an overview of recent research on resource utilization in datacenter networks from a machine learning (ML) perspective, covering flow prediction and flow classification techniques, and sheds light on opportunities that may help in solving datacenter-related challenges.

3. The Model

A cloud datacenter network allows for several “on-demand” virtual-format services that include both platforms and software solutions. Moreover, it extends to deliver Infrastructure as a Service (IaaS). In the context of IaaS, efficient service allocation decisions need to consider the whole set of infrastructural components, from nodes to edges. In our terminology, this breaks down into: (1) resources of the hosting server nodes, and (2) resources over the network links and switching components that are needed to interconnect those server nodes together over the cloud datacenter network. Therefore, in ABLA, as demonstrated in Figure 1, we present our proposed solution through a mapping scheme that takes each machine’s required application (i.e., the one to be hosted on it) as an identifier that helps in defining the required resources for the virtual machines to be mapped over the physical cloud server machines.
Traditionally, mapping models for such a problem do not carry out the mapping of both node and link resources together. Instead, they carry them out separately. However, we found that they are both closely coupled, and any solution that does not consider them both as a joint factor lacks efficiency and optimality. Accordingly, in ABLA, we developed our mapping model with two factors to consider: (1) Resources of the hosting cloud server nodes and (2) resources of the cloud links that interconnect the candidate cloud nodes together.

3.1. Model Definitions

Our proposed model assumes that the IaaS requests of virtual datacenter networks are submitted by the clients with predefined topologies, along with the application/service types to run over such virtual network infrastructures. Accordingly, the model: (1) reads the required datacenter “topology” to map, (2) classifies the required “applications”, and (3) finds the required “path attributes” to interconnect the applications’ nodes. To do so, we set the following definitions for our mapping model:
Definition 1. 
Cloud Datacenter Network: a datacenter network mainly consists of a collection of nodes $S^c$ that are interconnected through a set of links $\zeta^c$. Therefore, in this work, we choose to represent the hosting cloud datacenter network as a weighted graph $G^c$ that consists of the sets $S^c$ and $\zeta^c$ together. The “weight” term here refers to the availability metrics used to show the status of the examined resources, whether a node or a link. As a simulation test bed, we chose to build a fat-tree structured cloud datacenter [12] as a hosting network. Fat-tree structures are arranged in pods, wherein the pod degree defines the total network capacity of servers. A network that is arranged in $k$ pods allows for the following: each pod contains $k$ switches with $k$ ports each; among the pod switches, $k/2$ are aggregate and $k/2$ are edge switches, with each edge switch interconnected to $k/2$ servers, which results in a datacenter connecting $k^2$ servers altogether [29]. As an example, as depicted in Figure 2, the fat-tree datacenter we built has degree $k=4$, supporting connectivity to 16 different servers through a set of network links, using 16 switches distributed among the three levels of edge, aggregate, and core. It allows for up to four paths between any couple of servers residing in different pods.
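To make the pod arithmetic above concrete, the following is a minimal C++ sketch (a hypothetical helper, not part of the paper’s code) that derives the element counts of a k-ary fat-tree from the per-pod description; the core-switch count follows the common $(k/2)^2$ fat-tree convention, which is an assumption rather than a figure stated in the text.

```cpp
// Sketch of the fat-tree sizing used in Definition 1 (assumed helper).
#include <iostream>

struct FatTreeDims {
    int pods, switchesPerPod, edgePerPod, aggPerPod,
        serversPerEdge, coreSwitches, servers, corePathsBetweenPods;
};

FatTreeDims fatTreeDims(int k) {
    FatTreeDims d;
    d.pods = k;                       // a k-ary fat-tree is arranged in k pods
    d.switchesPerPod = k;             // k switches per pod, k ports each
    d.edgePerPod = k / 2;             // half of the pod switches are edge
    d.aggPerPod = k / 2;              // the other half are aggregate
    d.serversPerEdge = k / 2;         // each edge switch connects k/2 servers
    d.coreSwitches = (k / 2) * (k / 2);               // assumed convention
    d.servers = d.pods * d.edgePerPod * d.serversPerEdge;  // 16 for k = 4
    d.corePathsBetweenPods = (k / 2) * (k / 2);            // 4 for k = 4
    return d;
}

int main() {
    FatTreeDims d = fatTreeDims(4);
    std::cout << "servers: " << d.servers
              << ", core paths between pods: " << d.corePathsBetweenPods << "\n";
}
```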
Definition 2. 
Datacenter Server Nodes: a server node in a cloud datacenter network $s_i^c$ is described by the resources it contains. For hosting services, we are mainly concerned with its resource availability. In our work, we model that as a row vector that lists the real-time status of the node’s resource availability. More precisely, this vector presents numerical values that show the node’s current status regarding the available processing capacity $\rho_{s_i^c}$, disk space $\delta_{s_i^c}$, and input/output capacity $\sigma_{s_i^c}$. In this context, we choose to set the specifications shown in Table 1 for every new server node in our cloud datacenter network. These can be easily modified, and varying specifications for the nodes may also be set.
For the simulation environment, variable usage rates are assumed to simulate the case of having “dynamic” loads hosted over the hosting cloud datacenter network at a given time slot t.
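As an illustration of Definition 2, the following C++ sketch (names such as NodeAvailability and initCloudNodes are hypothetical) keeps the [ρ, δ, σ] availability vector per server as percentages and initializes it with random, semi-balanced loads in the spirit of the simulation setup described above.

```cpp
// Minimal per-server availability vector of Definition 2 (assumed types).
#include <random>
#include <vector>

struct NodeAvailability {
    double cpu;   // rho: remaining processing capacity (%)
    double disk;  // delta: remaining disk space (%)
    double io;    // sigma: remaining input/output (network) capacity (%)
};

std::vector<NodeAvailability> initCloudNodes(int count, unsigned seed = 42) {
    std::mt19937 gen(seed);
    std::uniform_real_distribution<double> load(30.0, 70.0); // semi-balanced loads
    std::vector<NodeAvailability> nodes(count);
    for (auto &n : nodes) {
        n.cpu  = 100.0 - load(gen);
        n.disk = 100.0 - load(gen);
        n.io   = 100.0 - load(gen);
    }
    return nodes;
}
```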
Definition 3. 
Datacenter Links: nodes in a cloud datacenter network do not work in isolation; rather, they work as a mesh with the other nodes in the network via a set of network links, forming an interconnection fabric that covers the whole datacenter network. Hence, in our model, a collection of network links $\zeta$ together forms a path $\zeta_Z$ that is defined by the couple of nodes it connects.
Definition 4. 
Virtual Datacenter Network: a virtual datacenter network can be considered one form of the IaaS requests that a cloud-based datacenter can fulfill as a service. It can serve in building an infrastructure for many organizations, such as universities, companies, or other business-class institutions. Hosting requests may come in different virtual datacenter network structures; due to presentation limits, in this work we choose to solve the problem for virtual fat-tree structures. Hence, a virtual network is also modelled as a graph $G^v = (S^v, \zeta^v)$, wherein the symbols $S^v$ and $\zeta^v$ represent the network’s virtual nodes and virtual links, respectively.
Definition 5. 
Servers’ Applications: server nodes in virtual datacenter networks (i.e., the hosted VMs) are usually used to run specific applications that serve a specific type of service. Intuitively, applications vary in their resource requirements. Hence, services like data processing, data storage, and data movement vary in their requirements. Therefore, we choose to classify the requests of virtual machines (i.e., those that represent the virtual server nodes of the virtual datacenter network) according to the applications they are required to run. Table 2 shows an example of different applications and the base resource requirements assumed for them to run properly.
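A minimal sketch of how Definition 5 could be encoded (the AppType and baseRequirement names are hypothetical; the values follow the assumed figures of Table 2): each application type maps to a base [processing, disk, network] requirement vector.

```cpp
// Application types of Definition 5 and their assumed base requirements.
enum class AppType { Processing, Storage, Networking };

struct Requirement { double cpuGHz, diskGB, netMbps; };

Requirement baseRequirement(AppType t) {
    switch (t) {
        case AppType::Processing: return {0.8,  500.0,  500.0};
        case AppType::Storage:    return {0.3, 2000.0,  500.0};
        case AppType::Networking: return {0.3,  500.0, 2000.0};
    }
    return {};
}
```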

3.2. Mathematical Modeling

Network engineers of such cloud-based datacenter networks work with the aim of developing a mapping strategy that satisfies the following objectives: (1) To maximize the resource utilization rates of their datacenter networks, (2) to maintain their service commitments to their clients represented in the contracted service level agreements (SLAs), (3) to keep their service prices market-competitive, and (4) to prove service availability represented through low blocking ratios for the received hosting requests. Evidently, from a technical perspective, this requires a load-balanced allocation scheme that maps the hosting requests without imposing any bottlenecks or congestion spots in the network. In our ABLA model, this is mathematically expressed in the following:
$$M_{c,v} = \max \sum_{s^c \in G^c} \sum_{s^v \in G_i^v} \left( w_{\rho_{s_i^v}} \frac{\rho_{s_i^c} - \rho_{s_i^v}}{\rho_{s_i^v}} + w_{\delta_{s_i^v}} \frac{\delta_{s_i^c} - \delta_{s_i^v}}{\delta_{s_i^v}} + w_{\sigma_{s_i^v}} \frac{\sigma_{s_i^c} - \sigma_{s_i^v}}{\sigma_{s_i^v}} \right) \qquad (1)$$
Accordingly, for each virtual machine $s_i^v$ that belongs to the virtual datacenter network $G_i^v$ requiring hosting, the model reads the type of application it needs to run and, therefore, defines the key-determinant weight for the specific type of resource it mainly requires, represented by the weight parameters $w_{\rho_{s_i^v}}$, $w_{\delta_{s_i^v}}$, and $w_{\sigma_{s_i^v}}$. The objective is to choose those cloud nodes that show higher availability values, represented by the results of $M_{c,v}$. This should lead to a load-balanced mapping scheme that delivers higher utilization rates for the available node and link resources in an efficient way. For that efficiency to be achieved, our model is bounded by the following set of constraints.
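The following C++ sketch illustrates the per-pair score behind Equation (1) under our reading of it, i.e., that each term is the candidate node’s relative resource surplus over the virtual node’s requirement, weighted by the application-dependent parameters. The Res struct and the surplus form are assumptions for illustration, not the authors’ implementation.

```cpp
// Per-(virtual node, cloud node) availability score in the spirit of Eq. (1).
struct Res { double cpu, disk, io; };   // rho, delta, sigma

double ablaScore(const Res &required, const Res &available,
                 const Res &weights /* w_rho, w_delta, w_sigma per app type */) {
    auto surplus = [](double avail, double need) {
        return need > 0.0 ? (avail - need) / need : 0.0;  // relative surplus
    };
    return weights.cpu  * surplus(available.cpu,  required.cpu)
         + weights.disk * surplus(available.disk, required.disk)
         + weights.io   * surplus(available.io,   required.io);
}
```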

3.2.1. Application-Based Mapping

For each virtual server node $s^v$ in the requested virtual datacenter network $G^v$, the model reads its vector of required resources $s_i^v$ shown in (2), and accordingly finds a mapping node $s^c$ in $G^c$ that best matches the required vector $s_i^v$. This is defined according to the real-time loads of the hosting nodes in $G^c$, which are found for each physical node $s_i^c$ in the matrix $G^c$ shown in (3). According to the application type, namely processing, storage, or networking, defined by the vector $s_i^v$, the model chooses one key attribute among $[\rho_{s_i^v}, \delta_{s_i^v}, \sigma_{s_i^v}]$ for the hosting process, and then lists the cloud nodes $s^c$ that represent candidate hosting places for the assigned virtual node $s_i^v$ in a new vector named $s_{v_i}^c$, as presented in (4):
$$s_i^v = [\rho_{s_i^v}, \delta_{s_i^v}, \sigma_{s_i^v}] \qquad (2)$$
$$G^c = \begin{bmatrix} \rho_{s_i^c} & \delta_{s_i^c} & \sigma_{s_i^c} \\ \rho_{s_j^c} & \delta_{s_j^c} & \sigma_{s_j^c} \\ \vdots & \vdots & \vdots \\ \rho_{s_k^c} & \delta_{s_k^c} & \sigma_{s_k^c} \end{bmatrix} \qquad (3)$$
$$s_{v_i}^c = [s_x^c, s_y^c, \ldots, s_z^c] \qquad (4)$$
Therefore, the model limits the search space for each virtual node to those physical cloud nodes that have more resources of the type required to run the virtual application. Hence, the search methodology is directed by a specific type of resource within the capacity space of the cloud nodes.
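As an illustration of this application-based filtering (reusing the hypothetical Res and AppType types from the earlier sketches), the following helper limits the search space to cloud nodes that hold enough of the single key resource selected by the application type, producing the index list that plays the role of $s_{v_i}^c$.

```cpp
// Illustrative candidate filtering for Section 3.2.1 (assumed helper).
#include <vector>

std::vector<int> candidateNodes(const std::vector<Res> &cloud,
                                const Res &required, AppType type) {
    std::vector<int> out;
    for (int i = 0; i < (int)cloud.size(); ++i) {
        bool ok = false;
        switch (type) {                       // key attribute per application
            case AppType::Processing: ok = cloud[i].cpu  >= required.cpu;  break;
            case AppType::Storage:    ok = cloud[i].disk >= required.disk; break;
            case AppType::Networking: ok = cloud[i].io   >= required.io;   break;
        }
        if (ok) out.push_back(i);             // index joins the vector s_{v_i}^c
    }
    return out;
}
```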

3.2.2. Topology-Based Mapping

The process of choosing a hosting node among the candidate nodes listed in vector $s_{v_i}^c$ shown in (4) depends on the ratio of resource availability. The goal is to choose the node that gives the highest availability score, while guaranteeing: (1) the existence of hosting places for the other virtual nodes required to fulfill the topological requirements of $G_i^v$, and (2) the availability of link resources to interconnect each candidate hosting node to those other nodes. Therefore, the chosen node $s_{v_i}^c$ needs to satisfy the other topological requirements of $G_i^v$, which include the existence of the other nodes that node $s_i^v$ is connected to in the virtual datacenter network topology $G_i^v$.

3.2.3. Multiple Hosting Is Not Allowed

A virtual node needs to be hosted on one physical node only; multiple hosting is not allowed. To verify that, we defined a binary variable named $H_{s_i^v}^c$ for hosting node $s_i^v$ at the network $G^c$, such that
$$H_{s_i^v}^c = \begin{cases} 1 & \text{if mapping request } s_i^v \text{ is granted at } G^c \\ 0 & \text{otherwise} \end{cases} \qquad (5)$$
This is restricted by the following condition, which guarantees that only one hosting node is assigned to carry each virtual node $s^v$, so that multiple allocations are not allowed:
$$\sum_{s_{v_i} \in G^v} H_{s_i^v}^c \leq 1; \qquad H_{s_i^v}^c \in \{0,1\} \qquad (6)$$

3.2.4. Complete Hosting Map

A hosting request for the virtual datacenter network $G_i^v$ needs to be fully fulfilled to be considered mapped. This includes all the virtual topological requirements and components of nodes and links. To verify that, we defined a binary variable named $H_{G_i^v}^c$ for hosting the topology $G_i^v$ at the network $G^c$, such that
$$H_{G_i^v}^c = \begin{cases} 1 & \text{if mapping for all components of } G_i^v \text{ is available} \\ 0 & \text{otherwise} \end{cases} \qquad (7)$$

3.2.5. Links’ Bandwidth Availability

Besides node availability, link bandwidth and switching capacities to interconnect the candidate nodes in $H_{G_i^v}^c$ are required to be available too. This is constrained by the condition described in (8). Hence, a mapping solution is considered incomplete if sufficient link capacities do not exist to satisfy the nodes’ traffic requirements.
$$H_{G_i^v}^c \leq \sum_{s^v \in G^v} \zeta_{Z_{s_j^v s_i^v}}; \qquad \zeta_{Z_{s_j^v s_i^v}} \in Z_{G_i^v}^c, \; H_{G_i^v}^c \in \{0,1\} \qquad (8)$$
where the paths for all couples of nodes $s^v \in G_i^v$ are stored in a matrix of candidate paths $Z_{G_i^v}^c$, built in accordance with the required structure of $G_i^v$, the set of candidate hosting nodes $H_{G_i^v}^c$, and the link availability of the hosting network $G^c$. The set of candidate node couples, along with their corresponding candidate path connections, is stored in the matrix $HZ_{G_i^v}^{G^c}$.
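A hedged sketch of the completeness check implied by constraint (8) (hypothetical structures, not the paper’s data model): a request is only considered mappable if every virtual link can be carried by at least one candidate physical path whose residual bandwidth covers the demand; otherwise the whole map is rejected rather than accepted partially.

```cpp
// Completeness check of Section 3.2.5 (assumed representation).
#include <vector>

struct CandidatePath { double residualBandwidth; };
struct VirtualLink   { int a, b; double demand;
                       std::vector<CandidatePath> candidates; };

bool completeMappingPossible(const std::vector<VirtualLink> &links) {
    for (const auto &l : links) {
        bool served = false;
        for (const auto &p : l.candidates)
            if (p.residualBandwidth >= l.demand) { served = true; break; }
        if (!served) return false;   // H_{G_i^v}^c = 0: reject partial maps
    }
    return true;                     // H_{G_i^v}^c may be set to 1
}
```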

3.3. ABLA’s Mapping Algorithm

Algorithm 1 below presents the proposed algorithm of ABLA. It summarizes the aforementioned discussion, and as can be clearly seen, the ABLA model runs at the level of the virtual datacenter structure as a whole, taking its nodes’ applications as a key starting point. Besides being adaptive, the model delivers a complete mapping solution that considers both nodes and links’ resources.
Algorithm 1: ABLA mapping algorithm: An Application-Based, Load-Balanced Mapping Approach
  1: input: ABLA reads the cloud datacenter network structure $G_i^c$ first, then:
  2:   it reads: (1) The real-time nodes’ availability status $\rho_{s_i^c}, \delta_{s_i^c}, \sigma_{s_i^c}$;
  3:               (2) The real-time links’ and switching nodes’ status $\zeta_{s_i^c}$;
  4:               (3) The virtual datacenter networks’ structures $G_i^v$ to be hosted;
  5:               (4) The application type required by each virtual node $s_i^v$.
  6: for each node in $G_i^v$, find its resource vector $s_i^v = [\rho_{s_i^v}, \delta_{s_i^v}, \sigma_{s_i^v}]$,
  7:  for each virtual node’s application type $s_i^v$, and according to the
         structural node’s location set by $G^v$;
  8:   find (1) the vector of candidate nodes $s_{v_i}^c$ for the application of node $s_i^v$,
                 (2) relatively, the hosting space $H_{s_i^v}^c$ of the other nodes in $G^v$;
                 (3) link resources to serve the candidate nodes listed in $H_{s_i^v}^c$;
                 (4) the set of candidate node-link couples stored in $HZ_{G_i^v}^{G^c}$.
  9: for each couple of virtual nodes $s_i^v$ and $s_j^v$, do:
  10:   sort the couples of $HZ_{G_i^v}^{G^c}$ in a descending order,
  11:   choose among $HZ_{G_i^v}^{G^c}$ the couples with minimum weight scores,
  12:   update the hosting nodes’ availability status $\rho_{s_i^c}, \delta_{s_i^c}, \sigma_{s_i^c}$,
  13:   update the hosting links’ and switching nodes’ status $\zeta_{s_i^c}$,
  14: output the mapping scheme for the client $G_i^v$,
In the first step, the virtual datacenter network operators are required to submit their structural details $G_i^v = (S^v, \zeta^v)$ to the ABLA model. Initially, the model needs to know the real-time availability status of the hosting cloud datacenter network $G^c$. This is captured at the infrastructure level, covering the nodes’ components besides the links that interconnect the network nodes. Next, it reads the required virtual network specifications and the application types to be hosted. This includes the datacenter network nodes $S^v$ and links $\zeta^v$, too. For the applications, it breaks the requirements down to the components’ level: processing capacity $\rho_{s_i^v}$, disk space $\delta_{s_i^v}$, and networking resources $\sigma_{s_i^v}$.
After that, the model runs its mapping functions to find the vector of candidate nodes $s_{v_i}^c$ suitable to host each application node $s_i^v$. It then verifies its connectivity to the other physical nodes in $G^c$ that are candidates to host the other virtual nodes of $G^v$, in a way that matches the structural requirements revealed in step (1). This includes work at both the nodes’ and links’ levels.
Once connectivity between the structural couples is examined and guaranteed to be delivered, the candidate mapping options are weighted according to specific metric parameters to maintain load-balancing. The model then chooses those options with the least weight values in a way to avoid bottlenecks and congestion spots. Finally, the resource availability vectors are updated to reflect the current status of the hosting network.
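Tying the earlier sketches together, the following assumed composition shows the greedy core of the node-selection step: restrict the candidates by application type, rank them with the Equation (1)-style score, commit the best one, and update its residual resources. The link-availability verification of Section 3.2.5 is omitted here for brevity; this is an illustration, not the released implementation.

```cpp
// Greedy node-selection core (assumed composition of the earlier helpers).
#include <vector>

int mapVirtualNode(std::vector<Res> &cloud, const Res &required,
                   AppType type, const Res &weights) {
    std::vector<int> cands = candidateNodes(cloud, required, type);
    int best = -1;
    double bestScore = 0.0;
    for (int i : cands) {
        double s = ablaScore(required, cloud[i], weights);
        if (best < 0 || s > bestScore) { best = i; bestScore = s; }
    }
    if (best >= 0) {                       // commit the reservation
        cloud[best].cpu  -= required.cpu;
        cloud[best].disk -= required.disk;
        cloud[best].io   -= required.io;
    }
    return best;                           // -1 means no feasible host
}
```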

4. Benchmark Model

Autonomic resource utilization is a self-management theme that generally aims to reduce the management overhead and enhance resource utilization rates. In [30], an autonomic service architecture is proposed to allow for a uniform framework of autonomic resource management in virtual networks. In that proposal, the network resources are allocated to the individual network requests based on their arrival times. A First Ask First Allocate (FAFA) service model is also presented and compared in [31]. Accordingly, the admission decisions merely depend on the requests’ arrival times and the resource availability, regardless of the other allocation requests or the resource dependencies elsewhere in the network. Once a request is admitted, the resource availability information is updated. The admission policy in such a model allows for resource mapping, but with no real consideration of the resource utilization or performance optimization objectives at the network scale. This can be considered an inefficient resource allocation model. Indeed, regardless of the arrival time, resource allocations in such environments need to map any resource request as part of the whole set of requests that arrived for allocation. In this way, the model checks for the best mapping scheme that allows for the desired management objective while satisfying the required performance rates for the hosted network requests. To assess the proposal of ABLA, we chose to compare our results with those of a mapping scheme that follows a First Available First Map (FAFM) model. This simulates an allocation methodology similar to that of the aforementioned FAFA resource allocation model, through which, as shown in Algorithm 2, the model receives the mapping requests for the network nodes and, with no consideration of the required application types or network link requirements, maps the virtual machines to the first available cloud nodes.
Algorithm 2: FAFM mapping algorithm: A First Available First Map Algorithm
  1: input: FAFM reads the cloud datacenter network structure $G_i^c$ first, then:
  2:  it reads: (1) The real-time nodes’ availability status $\rho_{s_i^c}, \delta_{s_i^c}, \sigma_{s_i^c}$;
  3:               (2) The virtual datacenter nodes $S^v$ to be hosted.
  4: for each node $s_i^v$, find its resource vector $s_i^v = [\rho_{s_i^v}, \delta_{s_i^v}, \sigma_{s_i^v}]$,
  5:   choose the first candidate node $s_{v_i}^c$ to host the virtual node $s_i^v$,
  6:   update the hosting nodes’ availability status $\rho_{s_i^c}, \delta_{s_i^c}, \sigma_{s_i^c}$,
  7: output the node-based mapping scheme for every node $s^v \in G_i^v$.
Accordingly, the mapping model first reads the structure of the hosting cloud datacenter network $G_i^c$ and its nodes’ resource availability. Then, it reads the set of virtual nodes to be hosted; for each, it also reads the requirements represented in a row vector of specifications $s_i^v$ as shown in (2). Next, in a node-based manner, for each virtual node $s_i^v$, it runs the mapping model to find a candidate cloud node to host it, scanning left-to-right over the cloud datacenter network nodes $S^c \in G_i^c$. Based on availability, the model runs for each virtual node in a round-robin fashion to find a suitable mapping node that provides enough availability; otherwise, the node is not mapped, and the round starts for the next virtual node in sequence. After working on each virtual node, and for those mapped to cloud nodes, the availability of the chosen cloud resources is updated.
Finally, the model outputs the set of virtual nodes that found a mapping and their corresponding cloud nodes. This may ease the mapping process and require shorter times to run and reveal the mapping decision. However, such a mapping solution proves to be partial, providing (1) no load-balancing and (2) incomplete network solutions.
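For comparison, a minimal sketch of the FAFM behaviour of Algorithm 2 (reusing the hypothetical Res type from the earlier sketches): each virtual node is placed on the first cloud node, scanned left to right, that can absorb its full requirement vector, with no application typing, link-bandwidth check, or load-balancing.

```cpp
// First-available placement in the spirit of Algorithm 2 (assumed helper).
#include <vector>

int fafmMapNode(std::vector<Res> &cloud, const Res &required) {
    for (int i = 0; i < (int)cloud.size(); ++i) {
        if (cloud[i].cpu >= required.cpu && cloud[i].disk >= required.disk &&
            cloud[i].io >= required.io) {
            cloud[i].cpu  -= required.cpu;   // update availability and stop
            cloud[i].disk -= required.disk;
            cloud[i].io   -= required.io;
            return i;
        }
    }
    return -1;                               // node left unmapped
}
```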

5. Simulation Results and Discussion

In this section, we present samples of the results obtained from the simulation we developed using Microsoft Visual Studio 2022 and C++ to assess the performance of our proposed mapping model ABLA. To simulate a cloud-based datacenter network that offers IaaS services, we built a fat-tree datacenter of degree $k=4$ like that shown in Figure 3. Fat-tree topologies are known to allow for recursively scalable fabrics with cost-effective structures [12]. Accordingly, they are a good candidate to provide cloud hosting services. Over this cloud datacenter, we simulate a set of hosting requests for a number of different virtual datacenter networks, all with the same degree $k=4$. From the results, we aim to assess the behavior of the proposed mapping model in terms of load-balancing over the whole cloud datacenter network’s nodes and links.
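To indicate how such a run could be driven (an assumed composition of the earlier sketches, with made-up percentage demands and application lists; this is not the actual simulation code), the snippet below initializes 16 hosting servers, serves four virtual requests node by node with the ABLA-style helper, and prints the residual CPU availability per server for inspection.

```cpp
// Assumed simulation driver built from the earlier illustrative helpers.
#include <iostream>
#include <vector>

int main() {
    std::vector<Res> cloud;                 // 16 hosting servers (Table 3-like state)
    for (const auto &n : initCloudNodes(16))
        cloud.push_back({n.cpu, n.disk, n.io});

    // Four virtual requests; per-node application types are assumed here.
    std::vector<AppType> vmApps = {AppType::Processing, AppType::Storage,
                                   AppType::Networking, AppType::Processing};
    for (int req = 0; req < 4; ++req)
        for (AppType app : vmApps) {
            Res need{10.0, 5.0, 8.0};       // assumed percentage demands
            Res w = (app == AppType::Processing) ? Res{2.0, 1.0, 1.0}
                  : (app == AppType::Storage)    ? Res{1.0, 2.0, 1.0}
                                                 : Res{1.0, 1.0, 2.0};
            mapVirtualNode(cloud, need, app, w);
        }
    for (const auto &c : cloud) std::cout << c.cpu << ' ';   // residual CPU per server
    std::cout << '\n';
}
```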

5.1. Discussion

It is worth noting that working at such a scale, mapping a whole datacenter network infrastructure requires bandwidth capacities to be available over the network links that interconnect the server nodes. Without sufficient link capacities, the datacenter nodes would be working in isolation. Therefore, in order to ensure the infrastructural requirements of the virtual datacenter networks, our simulation runs with the goal of providing only complete mapping solutions that satisfy the whole set of infrastructural components. Accordingly, partial mapping solutions are not considered.
To assess the results of the load-balancing objective over the hosting network infrastructure, our simulation discussions will cover both network nodes and network links.

5.1.1. The Hosting Nodes

Table 3 shows the resource availability at the cloud physical network nodes (i.e., the 16 servers) in terms of CPU capacity, RAM, disk space, and network Ethernet card availability. It shows their status before running either of the mapping models, ABLA or FAFM. This status is assumed in a way that represents random, semi-balanced loads distributed over the hosting cloud network.
Upon receiving a new hosting request for a virtual datacenter network, ABLA runs to map the request’s infrastructural requirements, and the availability readings presented in Table 3 are updated accordingly. In parallel, we also run the FAFM model over the same physical infrastructure, and its readings are updated in a separate table. After both models (i.e., ABLA and FAFM) have served four subsequent hosting requests of different virtual datacenter networks, the resulting availability metrics are shown in Figure 4 and Figure 5, respectively.
Figure 4 shows the resultant availability after the load-balancing mapping approach of ABLA has been deployed, starting from the hosting datacenter resource availability presented in Table 3, which lists varying percentages for the 16 simulated servers before running the model. The figure shows that the server nodes of the hosting datacenter network end up used in almost the same range. As an example, Table 3 shows higher availability for the server nodes $s_4^c$, $s_{13}^c$, and $s_{14}^c$, which makes them candidate nodes to host new virtual machines whose resource requirements match what is available at those servers. Indeed, looking at the figure, we can clearly notice that the resources of the physical servers are used in nearly the same range. Taking into account the bounds imposed by (1) the structure of the hosting datacenter network, (2) the required structure of the virtual datacenter networks, (3) the VMs’ application requirements, and (4) their connectivity constraints, such results show a significant improvement in the mapping process. This becomes evident when we compare these readings to those of the FAFM model presented in Figure 5. The readings of the FAFM model are best explained by its mapping methodology, which hosts the required VM at the first available node, regardless of the application type or the load-balancing objectives required at both levels: the nodes and the network links. Moreover, the mapping decisions of the FAFM model provide no guarantee of delivering complete solutions. Hence, partial, incomplete hosting decisions and high blocking ratios for future hosting requests are to be expected.
The trend-line plotted in blue in Figure 4 and in red in Figure 5 shows the average CPU capacity availability across all server nodes of the hosting datacenter network. These are the readings after mapping four different virtual networks. The lines show how load-balanced the ABLA model is compared to FAFM. It is worth noting that adopting a load-balancing model, for such problems in particular, greatly helps in providing far more room and higher performance metrics for new hosting requests to come, not to mention the ability to offer competitive service price units and higher profits. Indeed, with such a model, providers can serve more hosting requests using fewer physical resources, and hence at lower cost units.
The same conclusions can also be drawn from the results shown in Figure 6 and Figure 7. The trend-lines plotted in these figures show the behavior of the average disk space capacity. As with the CPU, the lines show how the ABLA model provides an almost balanced use of the server nodes’ disk space compared to the FAFM model.

5.1.2. The Hosting Links

Network bandwidth resources are considered precious, especially when it comes to datacenter networks. Indeed, in such environments, a running server node with no bandwidth resources to reach it is useless. Hence, in the mapping approach of our model ABLA, we carry out the mapping process while keeping both the network node and link resources balanced. Therefore, the mapping decisions are built in accordance with their potential loads on the nodes and on the links’ bandwidth resources. For the physical datacenter network, and before running any of the mapping models, Figure 8 shows the status of the core switches’ downlink ports, while the status of the aggregate and edge switches’ ports is shown in Figure 9 and Figure 10, respectively.
Having run the ABLA and FAFM models to map the same four requests of the aforementioned virtual datacenter networks, Figure 11 compares the bandwidth loads over the downlink ports of the physical core switches. We can clearly notice the difference in balance between the two examined models. Working on a fat-tree datacenter network infrastructure, we may infer that unbalanced loads over these ports of the network core switches indicate unbalanced loads in the underlying networks they interconnect. Indeed, this is revealed by the results shown in Figure 12 and Figure 13, which show how different the bandwidth loads of the mapped virtual networks are. Reading the results of the FAFM model, we can relate such resultant bandwidth loads to the distribution of the VMs over the hosting cloud network. Moreover, we stress again that, in ABLA, the mapping decisions take into account the application type besides the other objectives of connectivity and load-balancing.

6. Conclusions

Cloud-based environments were founded to deliver cost-effective and reliable services in virtual formats, replacing physical infrastructures, platforms, functions, and even software for individuals and businesses. To keep such an alternative attractive, while Quantum computing is still in its infancy and has yet to show how it may help once integrated with existing technologies, cloud resources need to be efficiently utilized. In these environments, the required types of services are expected to be diverse and dynamic. Therefore, special management schemes are needed to serve the required loads in a way that assures service optimality and availability. To that end, this work presented our proposal ABLA, an Application-Based and Load-balanced Approach for the adaptive mapping of virtual datacenter networks. In ABLA, we proposed a mapping scheme that maps whole virtual datacenter networks to physical ones while keeping the hosting network load-balanced. To do so, the mapping methodology of ABLA starts by reading the application types required to run over the virtual server nodes of the virtual datacenter networks. This helps in defining the types of resources required to run such applications, and therefore also the candidate hosting nodes. Nevertheless, a node in a datacenter cannot run in isolation; hence, ABLA next reads the virtual datacenter topology, which defines the network connection requirements between the virtual servers to host. Accordingly, the difference between ABLA and other proposals in the literature lies in the mapping methodology it follows. It chooses the candidate nodes to host the virtual machines not only based on the nodes’ resource availability, but also on the availability of bandwidth resources over the links that connect the nodes to the rest of the datacenter. Among the candidate mapping places, ABLA chooses those that guarantee load-balancing all over the hosting cloud datacenter network. Compared to the benchmark model FAFM, the simulation results show that ABLA provides for load-balanced hosting datacenter networks, which allows for lower blocking rates and higher availability. Moreover, ABLA’s mapping decisions are complete, allowing for competitive service costs and performance rates.

Author Contributions

Conceptualization, A.N.Q.; methodology, A.N.Q.; software, A.N.Q. and S.N.; validation, A.N.Q.; formal analysis, A.N.Q. and A.M.; investigation, A.N.Q., A.A. (Ayoub Alsarhan), A.M., O.A. and S.N.; resources, A.N.Q., A.A. (Ayoub Alsarhan), A.M., O.A., S.N., F.K., M.A. (Mohammad Aljaidi), M.A. (Mohammed Alshammari), and A.A. (Anjali Awasthi); data curation, A.N.Q. and S.N.; writing original draft preparation, A.N.Q.; review and editing, A.N.Q., A.A. (Ayoub Alsarhan), A.M., O.A., F.K., M.A. (Mohammad Aljaidi), M.A. (Mohammed Alshammari), and A.A. (Anjali Awasthi); visualization, A.N.Q., A.A. (Ayoub Alsarhan), A.M., O.A., S.N., F.K., M.A. (Mohammad Aljaidi), M.A. (Mohammed Alshammari), and A.A. (Anjali Awasthi); supervision, A.N.Q.; project administration, A.N.Q.; funding acquisition, A.N.Q. and M.A. (Mohammed Alshammari). All authors have read and agreed to the published version of the manuscript.

Funding

This work is partially funded by the Deanship of Scientific Research at the Hashemite University. The authors extend their appreciation to the Deanship of Scientific Research at the Northern Border University, Arar, KSA for their contribution in funding this research work through the project number “NBU-FFR-2023-0103”.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest, and the funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

  1. Tso, F.P.; Pezaros, D.P. Improving Data Center Network Utilization Using Near-Optimal Traffic Engineering. IEEE Trans. Parallel Distrib. Syst. 2013, 24, 1139–1148. [Google Scholar] [CrossRef]
  2. Peng, J.; Zhi, X.; Xie, X. Application type based resource allocation strategy in cloud environment. J. Microprocess. Microsystems 2016, 47, 385–391. [Google Scholar] [CrossRef]
  3. Quttoum, A.N. Interconnection Structures, Management and Routing Challenges in Cloud-Service Data Center Networks: A Survey. Int. J. Interact. Mob. Technol. IJIM 2018, 12, 36–60. [Google Scholar] [CrossRef]
  4. Greenberg, A.; Hamilton, J.; Maltz, D.A.; Patel, P. The cost of a cloud: Research problems in data center networks. ACM SIGCOMM Comput. Commun. Rev. 2009, 39, 68–73. [Google Scholar] [CrossRef]
  5. Beloglazov, A.; Buyya, R. Optimal online deterministic algorithms and adaptive heuristics for energy and performance efficient dynamic consolidation of virtual machines in cloud data centers. Concurr. Comput. Pract. Exper. 2012, 24, 1397–1420. [Google Scholar] [CrossRef]
  6. Kayal, P.; Liebeherr, J. Autonomic Service Placement in Fog Computing. In Proceedings of the IEEE 20th International Symposium on “A World of Wireless, Mobile and Multimedia Networks” (WoWMoM), Washington, DC, USA, 10–12 June 2019; pp. 1–9. [Google Scholar]
  7. Au-Yeung, R.; Chancellor, N.; Halffmann, P. NP-hard but no longer hard to solve? Using quantum computing to tackle optimization problems. Front. Quantum Sci. Technol. Sec. Quantum Eng. 2023, 2, 112857. [Google Scholar] [CrossRef]
  8. Hasanpour, M.; Shariat, S.; Barnaghi, P.; Hoseinitabatabaei, S.A.; Vahid, S.; Tafazolli, R. Quantum load balancing in ad hoc networks. Quantum Inf. Process. 2017, 16, 148. [Google Scholar] [CrossRef]
  9. Pham, C.; Tran, N.H.; Ren, S.; Saad, W.; Hong, C.S. Traffic-aware and energy-efficient vnf placement for service chaining: Joint sampling and matching approach. IEEE Trans. Serv. Comput. 2017, 13, 172–185. [Google Scholar] [CrossRef]
  10. Németh, B.; Szalay, M.; Dóka, J.; Rost, M.; Schmid, S.; Toka, L.; Sonkoly, B. Fast and efficient network service embedding method with adaptive offloading to the edge. In Proceedings of the IEEE INFOCOM 2018—IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS), Honolulu, HI, USA, 15–19 April 2018; pp. 178–183. [Google Scholar]
  11. Zeng, D.; Gu, L.; Guo, S.; Cheng, Z.; Yu, S. Joint optimization of task scheduling and image placement in fog computing supported software-defined embedded system. IEEE Trans. Comput. 2016, 65, 3702–3712. [Google Scholar]
  12. Al-Fares, M.; Loukissas, A.; Vahdat, A. A scalable, commodity datacenter network architecture. In Proceedings of the ACM SIGCOMM’08 Conference on Data Communication, Seattle, WA, USA, 17–22 August 2008; pp. 63–74. [Google Scholar]
  13. Wang, J.; Rao, S.; Liu, Y.; Sharma, P.K.; Hu, J. Load balancing for heterogeneous traffic in datacenter networks. J. Netw. Comput. Appl. 2023, 217, 103692. [Google Scholar] [CrossRef]
  14. Dixit, A.; Prakash, P.; Hu, Y.C.; Kompella, R.R. On the impact of packet spraying in data center networks. In Proceedings of the IEEE INFOCOM, Turin, Italy, 14–19 April 2013; pp. 2130–2138. [Google Scholar]
  15. Ghorbani, S.; Yang, Z.; Godfrey, P.B.; Ganjali, Y.; Firoozshahian, A. DRILL: Micro Load Balancing for Low-latency Data Center Networks. In Proceedings of the Conference of the ACM Special Interest Group on Data Communication (SIGCOMM’17), Los Angeles, CA, USA, 21–25 August 2017; Association for Computing Machinery: New York, NY, USA, 2017; pp. 225–238. [Google Scholar]
  16. Alizadeh, M.; Edsall, T.; Dharmapurikar, S.; Vaidyanathan, R.; Chu, K.; Fingerhut, A.; Lam, V.T.; Matus, F.; Pan, R.; Yadav, N.; et al. CONGA: Distributed congestion-aware load balancing for datacenters. In Proceedings of the 2014 ACM Conference on SIGCOMM, Chicago, IL, USA, 17–22 August 2014; Association for Computing Machinery: New York, NY, USA, 2014; pp. 503–514. [Google Scholar]
  17. Vanini, E.; Pan, R.; Alizadeh, M.; Taheri, P.; Edsall, T. Let it flow: Resilient asymmetric load balancing with flowlet switching. In Proceedings of the 14th USENIX Conference on Networked Systems Design and Implementation (NSDI’17), Boston, MA, USA, 27–29 March 2017; USENIX Association: Berkeley, CA, USA, 2017; pp. 407–420. [Google Scholar]
  18. Huang, J.; Lyu, W.; Li, W.; Wang, J.; He, T. Mitigating Packet Reordering for Random Packet Spraying in Data Center Networks. IEEE/ACM Trans. Netw. 2021, 29, 1183–1196. [Google Scholar] [CrossRef]
  19. Zhang, Y.; Kumar, G.; Dukkipati, N.; Wu, X.; Jha, P.; Chowdhury, M.; Vahdat, A. Aequitas: Admission control for performance-critical RPCs in datacenters. In Proceedings of the ACM SIGCOMM 2022 Conference, Amsterdam, The Netherlands, 22–26 August 2022; Association for Computing Machinery: New York, NY, USA, 2022; pp. 1–18. [Google Scholar]
  20. Zhang, Q.; Zhani, M.F.; Boutaba, R.; Hellerstein, J.L. Harmony: Dynamic Heterogeneity-Aware Resource Provisioning in the Cloud. In Proceedings of the 33rd IEEE International Conference on Distributed Computing Systems, Philadelphia, PA, USA, 8–11 July 2013; pp. 510–519. [Google Scholar]
  21. Xu, B.; Zhao, C.; Hu, E.; Hu, B. Job scheduling algorithm based on Berger model in cloud environment. J. Adv. Eng. Softw. 2011, 42, 419–425. [Google Scholar] [CrossRef]
  22. Zhang, Q.; Hellerstein, J.L.; Boutaba, R. Characterizing Task Usage Shapes in Google Compute Clusters. In Proceedings of the Large Scale Distributed Systems Middleware Workshop, LADIS, Seattle, WA, USA, 2–3 September 2011; pp. 1–6. [Google Scholar]
  23. Mishra, A.K.; Hellerstein, K.L.; Cirne, W.; Das, C.R. Towards characterizing cloud backend workloads: Insights from Google compute clusters. ACM Sigmetrics Perform. Eval. Rev. 2010, 37, 34–41. [Google Scholar] [CrossRef]
  24. Zuo, L.; Dong, S.; Shu, L.; Zhu, C.; Han, G. A Multiqueue Interlacing Peak Scheduling Method Based on Tasks’ Classification in Cloud Computing. IEEE Syst. J. 2018, 12, 1518–1530. [Google Scholar] [CrossRef]
  25. Chhabra, S.; Singh, A.K. Dynamic Resource Allocation Method for Load Balance Scheduling over Cloud Data Center Networks. J. Comput. Sci. Distrib. Parallel Clust. Comput. 2022. [Google Scholar] [CrossRef]
  26. Feng, G.; Garg, S.; Buyya, R.; Li, W. Revenue maximization using adaptive resource provisioning in cloud computing environments. In Proceedings of the IEEE International Conference on Grid Computing, Beijing, China, 20–23 September 2012; pp. 192–200. [Google Scholar]
  27. Srinivasan, K.; Yuuw, S.; Adelmeyer, T.J. Dynamic vm migration: Assessing its risks and rewards using a benchmark. In Proceedings of the International SIGSOFT Conference, Karlsruhe, Germany, 14–16 March 2011; pp. 317–322. [Google Scholar]
  28. Li, B.; Wang, T.; Yang, P.; Chen, M.; Hamdi, M. Rethinking Data Center Networks: Machine Learning Enables Network Intelligence. J. Commun. Inf. Netw. 2022, 7, 157–169. [Google Scholar] [CrossRef]
  29. Couto, R.D.; Secci, S.; Campista, M.E.; Costa, L.H. Reliability and Survivability Analysis of Data Center Network Topologies. J. Netw. Syst. Manag. 2016, 24, 346–392. [Google Scholar] [CrossRef]
  30. Cheng, Y.; Farha, R.; Kim, M.S.; Leon-Garcia, A.; Hong, J.W. A generic architecture for autonomic service and network management. Comput. Commun. 2006, 29, 3691–3709. [Google Scholar] [CrossRef]
  31. Quttoum, A.N.; Otrok, H.; Dziong, Z. A collusion-resistant mechanism for autonomic resource management in Virtual Private Networks. J. Comput. Commun. 2010, 33, 2070–2078. [Google Scholar] [CrossRef]
Figure 1. A general demonstration of the mapping process in ABLA.
Figure 2. The hosting fat-tree datacenter network.
Figure 3. A virtual fat-tree datacenter network infrastructure.
Figure 4. Cloud datacenter status after mapping 4 virtual datacenter networks using ABLA.
Figure 5. Cloud datacenter status after mapping 4 virtual datacenter networks using FAFM.
Figure 6. Disk space trend-line after mapping four virtual datacenter networks using ABLA.
Figure 7. Disk space trend-line after mapping four virtual datacenter networks using FAFM.
Figure 8. Bandwidth usage rates at the downlink ports of the core switches (before mapping).
Figure 9. Bandwidth usage rates at the downlink ports of the aggregate switches (before mapping).
Figure 10. Bandwidth usage rates at the downlink ports of the edge switches (before mapping).
Figure 11. ABLA and FAFM models’ bandwidth usage rates at the core switches’ downlink ports.
Figure 12. ABLA and FAFM models’ bandwidth usage rates at the aggregate switches’ downlink port 2.
Figure 13. ABLA and FAFM models’ bandwidth usage rates at the edge switches’ downlink port 1.
Table 1. Cloud servers’ specifications *.

| Processing Capacity | Disk Space | Ethernet |
|---|---|---|
| 3.8 GHz | 24 TB | 10 Gbps |

* These values are only assumed for the sake of simulation, and do not numerically represent the case in real life.
Table 2. Virtual nodes applications’ requirements *.

| Type | Processing Capacity | Disk Space | Ethernet |
|---|---|---|---|
| Processing VM | 0.8 GHz | 500 GB | 500 Mbps |
| Storage VM | 0.3 GHz | 2000 GB | 500 Mbps |
| Networking VM | 0.3 GHz | 500 GB | 2000 Mbps |

* These values are only assumed for the sake of simulation, and do not numerically represent the case in real life.
Table 3. Resource availability at the hosting fat-tree cloud datacenter network *.

| Type | s1 | s2 | s3 | s4 | s5 | s6 | s7 | s8 | s9 | s10 | s11 | s12 | s13 | s14 | s15 | s16 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| CPU | 52 | 58 | 55 | 66 | 47 | 40 | 49 | 46 | 45 | 36 | 42 | 39 | 63 | 64 | 54 | 65 |
| RAM | 44 | 52 | 50 | 60 | 40 | 34 | 41 | 39 | 39 | 33 | 36 | 34 | 56 | 58 | 44 | 53 |
| Disk | 41 | 44 | 50 | 37 | 44 | 36 | 40 | 35 | 42 | 32 | 39 | 35 | 46 | 55 | 50 | 61 |
| Ethernet | 62 | 64 | 64 | 72 | 58 | 47 | 58 | 56 | 54 | 44 | 51 | 45 | 72 | 71 | 64 | 69 |

* These values represent percentages of the whole resource capacity in the physical servers whose specifications are presented in Table 1.