Article

Dynamic Jellyfish Search Algorithm Based on Simulated Annealing and Disruption Operators for Global Optimization with Applications to Cloud Task Scheduling

Ibrahim Attiya, Laith Abualigah, Samah Alshathri, Doaa Elsadek and Mohamed Abd Elaziz

1 Department of Mathematics, Faculty of Science, Zagazig University, Zagazig 44519, Egypt
2 Faculty of Computer Sciences and Informatics, Amman Arab University, Amman 11953, Jordan
3 School of Computer Sciences, Universiti Sains Malaysia, Pulau Pinang 11800, Malaysia
4 Department of Information Technology, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
5 Faculty of Computer Science and Engineering, Galala University, Suez 435611, Egypt
6 Artificial Intelligence Research Center (AIRC), Ajman University, Ajman P.O. Box 346, United Arab Emirates
* Authors to whom correspondence should be addressed.
Mathematics 2022, 10(11), 1894; https://doi.org/10.3390/math10111894
Submission received: 11 March 2022 / Revised: 7 May 2022 / Accepted: 24 May 2022 / Published: 1 June 2022

Abstract

This paper presents a novel dynamic Jellyfish Search Algorithm using Simulated Annealing and a disruption operator, called DJSD. The developed DJSD method incorporates the Simulated Annealing operators into the conventional Jellyfish Search Algorithm in the exploration stage, in a competitive manner, to enhance its ability to discover more feasible regions. This combination is performed dynamically using a fluctuating parameter that represents the characteristics of a hammer. The disruption operator is employed in the exploitation stage to boost the diversity of the candidate solutions throughout the optimization operation and avert the local optima problem. A comprehensive set of experiments is conducted using thirty classical benchmark functions to validate the effectiveness of the proposed DJSD method. The results are compared with those of advanced, well-known metaheuristic approaches. The findings illustrate that the developed DJSD method achieved promising results, discovered new search regions, and found new best solutions. In addition, to further validate the performance of DJSD in solving real-world applications, experiments were conducted to tackle the task scheduling problem in cloud computing applications. The real-world application results demonstrate that DJSD is highly competent in dealing with challenging real applications. Moreover, it achieved high performance compared to other competitors according to several standard evaluation measures, including the fitness function, makespan, and energy consumption.

1. Introduction

Every day, new and complex optimization problems arise in fields such as mathematics, industry, and engineering [1]. As problems become more complex, traditional optimization approaches incur high computational costs and tend to become trapped in local optima while solving them [2]. As a result, scientists have been looking for new techniques to address these problems [3]. Metaheuristic algorithms are promising solutions, proposed by drawing inspiration from the food-finding habits of herd animals or from natural phenomena. Metaheuristic (MH) algorithms have many benefits, including the ability to resist local optima, the use of a gradient-free mechanism, and the provision of rational solutions regardless of problem structure [4].
Mathematical optimization is the process of locating, within the feasible domain of a given problem, an item that has a maximum or minimum value [5]. The advancement of optimization methods is critical, since optimization problems arise in different fields of analysis. The majority of traditional optimization approaches are deterministic and rely on derivative information [6]. However, in real life, determining the optimal values of a problem is not always feasible [7]. Because of their derivative-free behavior and promising optimization ability, MH techniques are becoming increasingly popular. Other benefits of these approaches include flexibility, ease of implementation, and the ability to avoid local optima [8].
Exploration and exploitation are the two main mechanisms used in MH techniques [9]. Exploration refers to the opportunity to discover or visit new areas of the solution space. In contrast, exploitation refers to retrieving valuable knowledge about nearby regions of the search domain already found. The balance between exploitation and exploration determines the quality of the solutions found by any MH. These algorithms operate on either a single candidate solution or a set of candidate solutions: single-agent-oriented approaches are based on a single candidate solution, whereas population-based approaches are based on a group of candidate solutions. MH algorithms are designed to mimic natural phenomena, and they can be divided into three categories depending on their source of inspiration: swarm intelligence-based algorithms, evolutionary algorithms, and physics-based algorithms.
Swarm intelligence has risen to prominence in the world of nature-inspired strategies in recent years [10]. It is also utilized to address real-life optimization problems and is focused on the mutual actions of swarms or colonies of animals [11]. Swarm-based optimization algorithms use the collaborative trial and error approach to find the optimal solution. The Arithmetic Optimization Algorithm (AOA) [10], the Aquila Optimizer (AO) [12], and the Barnacles Mating Optimizer (BMO) [13] are well-known methods in this category. Thus, many complicated optimization problems have been solved using this class of optimization algorithm, such as scheduling problems [14].
Recently, a new swarm-based optimization technique has been proposed, named the Jellyfish Search Algorithm (JSA) [15]. This algorithm emulates the behaviour of a jellyfish swarm in nature. Owing to its characteristics, JSA has been applied to solve different sets of optimization problems. For example, JSA has been used to determine the optimal solutions of global benchmark functions in [15], and its efficiency over other metaheuristic (MH) techniques has been established. Gouda et al. [16] proposed an alternative method for estimating the parameters of the PEM fuel cell. In [17], a multi-objective version of JSA was proposed and applied to solve multi-objective engineering problems. In [18], JSA was implemented to solve the spectrum defragmentation problem and showed better performance than other methods. Chou et al. [19] presented a time-series forecasting method for energy consumption; this method was compared with the teaching-learning-based optimization (TLBO) and symbiotic organism search (SOS) algorithms, and the results of JSA were better than those of both. In addition, JSA has been applied in other fields, such as video watermarking [20].
In previous applications of JSA, its ability to solve different optimization problems has been observed. However, it still suffers from some drawbacks that can affect its performance. For example, it requires more improvement in its ability to balance between the exploration and exploitation phases during the searching process. This motivated us to propose an alternative version of JSA to avoid the limitations of conventional JSA and to apply it as a global optimization technique.
The proposed developed version of JSA, called DJSD, integrates the Jellyfish Search Algorithm operators with dynamic differential annealed optimization [21] and the disruption operator to gain the advantages of both approaches. The proposed method is evaluated using various classical benchmark problems. Its performance is analyzed and compared with other methods solving the same problems. The results prove that the presented method is better than the comparative approaches and that it finds new best solutions for several test cases. In addition, it is further applied as a task scheduling technique in a cloud computing environment.
In conclusion, the following contributions are included in this paper:
  • Enhancing the Jellyfish Search Algorithm using the concept of the dynamic annealed and disruption operator to improve its exploration and exploitation abilities during the searching process.
  • Applying the developed method, named DJSD, as a global optimization method through evaluating it using different classical optimization benchmark functions against other well-known MH methods.
  • Offering an alternative task scheduling method to address cloud task scheduling problems.
The remaining sections of this paper are organized as follows. Section 2 presents the background of the jellyfish optimization algorithm, the Simulated Annealing algorithm, and the disruption operator. Section 3 introduces the steps of the proposed method. The experimental results and discussions are presented in Section 4. Finally, Section 5 concludes the paper.

2. Background

2.1. Jellyfish Search Algorithm

In this section, the basic information of the Jellyfish Search Algorithm (JSA) [15] is given. In general, JSA simulates the behaviour of jellyfish that squeeze water out of their body to move; rising ocean temperatures cause the creation of swarms [15]. The ability of these species to appear almost anywhere in the ocean is due to their movements within a swarm and the ensuing ocean currents that form jellyfish blooms. Since the amount of food at jellyfish-friendly sites varies, the best location would be determined by comparing food proportions [15].
Following [15], the ideas underpinning the jellyfish optimization algorithm can be formulated as:
  • JSA goes around the water looking for food and is drawn to areas with a higher supply of food.
  • The time control mechanism regulates shifting among motion groups. Jellyfish either move inside the swarm or follow the ocean current.
The major steps of metaheuristic algorithms are exploration and exploitation. In JSA, exploration is the motion following an ocean current, exploitation is the motion inside a jellyfish swarm, and a time control mechanism manages the swapping between them. To locate promising regions, the probability of exploration exceeds that of exploitation at the beginning. Over time, the probability of exploitation comes to exceed that of exploration, and the jellyfish then determine the best position within the explored places.

2.1.1. Population Initialization

In most cases, jellyfish populations are initialized at random positions; slow convergence and a propensity to get stuck in local optima due to low population diversity are drawbacks of this strategy [15]. Therefore, many chaotic maps, for example, the tent map, the Liebovitch map, and the logistic map, have been used to improve the composition of the initial population. The logistic map, given in Equation (1), generates a more diverse initial population than random selection and carries a lower risk of premature convergence [15].
$X_{i+1} = \eta X_i (1 - X_i), \quad 0 \le X_0 \le 1$ (1)
where $X_0$ is the initial value of the jellyfish population, $X_i$ denotes the logistic chaotic value of jellyfish location $i$, $\eta = 4.0$, $X_0 \in (0, 1)$, and $X_0 \notin \{0.0, 0.25, 0.5, 0.75, 1.0\}$.
Since the earth is roughly spherical, a jellyfish that leaves the search domain’s boundaries would go back to the direct opposite limit. This mechanism is represented in Equation (2).
$X'_{i,d} = \begin{cases} (X_{i,d} - UB_d) + LB_d & \text{if } X_{i,d} > UB_d \\ (X_{i,d} - LB_d) + UB_d & \text{if } X_{i,d} < LB_d \end{cases}$ (2)
where $X'_{i,d}$ denotes the new value of $X_{i,d}$ at dimension $d$ after checking the limits of the search space (i.e., $UB_d$ and $LB_d$).
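To make the initialization concrete, the following is a minimal NumPy sketch of Equations (1) and (2). The function names, the way $X_0$ is seeded away from the excluded points, and the scaling of the chaotic values from (0, 1) into [LB, UB] are our assumptions rather than details fixed by the paper.

```python
import numpy as np

def logistic_init(n_agents, dim, lb, ub, eta=4.0, seed=None):
    """Initialize a population with the logistic chaotic map (Equation (1)),
    then scale the chaotic values from (0, 1) into [lb, ub]."""
    rng = np.random.default_rng(seed)
    # Draw X0 in (0, 1), away from the excluded points {0, 0.25, 0.5, 0.75, 1}.
    x = rng.uniform(0.01, 0.99, size=dim)
    pop = np.empty((n_agents, dim))
    for i in range(n_agents):
        x = eta * x * (1.0 - x)        # Equation (1): one chaotic iteration
        pop[i] = lb + x * (ub - lb)    # map chaotic values into the search domain
    return pop

def wrap_bounds(x, lb, ub):
    """Spherical boundary handling (Equation (2)): an agent leaving one
    boundary re-enters from the opposite one."""
    x = np.where(x > ub, (x - ub) + lb, x)
    x = np.where(x < lb, (x - lb) + ub, x)
    return x
```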

2.1.2. Exploration Stage: Ocean Current

The jellyfish are drawn to the ocean current because it carries many nutrients; the ocean current's direction ($\vec{TO}$) is calculated as:
$\vec{TO} = \frac{1}{N}\sum_i \vec{TO}_i = \frac{1}{N}\sum_i (X_b - e_c X_i) = X_b - e_c \frac{\sum_i X_i}{N} = X_b - e_c \mu$ (3)
so that $\vec{TO}$ can be written as:
$\vec{TO} = X_b - df, \quad df = e_c \mu$ (4)
In Equation (4), N is the number of jellyfish, $X_b$ is the jellyfish in the swarm with the best fitness value, $e_c$ is the factor governing the attraction, $\mu$ is the average position of the jellyfish, and $df$ is the difference between the current best position and the average position of all jellyfish. Based on the assumption that the jellyfish have a normal spatial distribution in all directions, all jellyfish are likely to be found within a distance of $\pm\beta\sigma$ of the mean position, where $\sigma$ is the standard deviation of the distribution.
$df = \beta \times \sigma \times \mathrm{rand}_f$ (5)
$\sigma = \mu \times \mathrm{rand}_\alpha$ (6)
So,
$df = \mu \times \beta \times \mathrm{rand}_f \times \mathrm{rand}_\alpha = \mu \times \beta \times \mathrm{rand}$ (7)
$e_c = \beta \times \mathrm{rand}$ (8)
$\vec{TO} = X_b - \beta \times \mathrm{rand} \times \mu$ (9)
The value of $X_i$ is updated using Equation (10):
$X_i(t+1) = X_i(t) + \mathrm{rand} \times \vec{TO}$ (10)
Based on the definition of $\vec{TO}$, Equation (10) can be reformulated as:
$X_i(t+1) = X_i(t) + \mathrm{rand} \times (X_b - \beta \times \mathrm{rand} \times \mu)$ (11)
In Equation (11), $\beta > 0$ is a distribution coefficient related to the length of $\vec{TO}$; its value is set according to the sensitivity analysis of the numerical experiments.
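The ocean-current move of Equations (9) and (10) can be sketched as follows. The value $\beta = 3$ is the setting reported in the original JSA paper [15]; the paper at hand only states that $\beta$ is fixed by sensitivity analysis, so treat it as an assumption, as are the function names.

```python
def ocean_current_move(pop, best, beta=3.0, rng=None):
    """Exploration move following the ocean current (Equations (9)-(10)).
    pop: (N, dim) population array; best: the best jellyfish X_b."""
    rng = rng or np.random.default_rng()
    mu = pop.mean(axis=0)                              # mean jellyfish position
    trend = best - beta * rng.random(pop.shape) * mu   # Equation (9)
    return pop + rng.random(pop.shape) * trend         # Equation (10)
```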

2.1.3. Exploitation Stage

Jellyfish move in swarms with passive and active motions (group A and group B, respectively). When the swarm is first forming, most jellyfish move in a group A pattern; over time, they increasingly exhibit active motion. Passive motion is the movement of jellyfish around their own positions, with each jellyfish's updated position being provided as follows.
$X_i(t+1) = X_i(t) + \gamma \times \mathrm{rand} \times (U_b - L_b)$ (12)
where $U_b$ denotes the upper limit, $L_b$ represents the lower limit of the search domain, and $\gamma > 0$ is the motion coefficient.
According to sensitivity analyses in the numerical experiments, $\gamma = 0.1$. To emulate active motion, a jellyfish ($j$) other than the current one ($i$) is chosen at random, and a vector from $X_i$ to $X_j$ is employed to determine the direction of motion. Every $X_i$ in the swarm moves in the best direction to find food using Equations (13)–(16), which mimic the motion direction and update the jellyfish position.
$\mathrm{step} = \mathrm{rand} \times \overrightarrow{\mathrm{Direction}}$ (13)
$\mathrm{step} = X_i(t+1) - X_i(t)$ (14)
$\overrightarrow{\mathrm{Direction}} = \begin{cases} X_j(t) - X_i(t) & \text{if } Fit_i \ge Fit_j \\ X_i(t) - X_j(t) & \text{if } Fit_i < Fit_j \end{cases}$ (15)
where $Fit$ is the fitness value of position $X$.
$X_i(t+1) = X_i(t) + \mathrm{step}$ (16)
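A sketch of the swarm motions of Equations (12), (13), (15), and (16) is given below. The `passive` flag stands in for the TCM-driven choice described in Section 2.1.4, and the function signature is ours.

```python
def swarm_move(pop, fit, i, lb, ub, passive, gamma=0.1, rng=None):
    """Exploitation move inside the swarm for agent i (minimization)."""
    rng = rng or np.random.default_rng()
    xi = pop[i]
    if passive:
        # Passive (group A) motion around the agent's own position, Equation (12)
        return xi + gamma * rng.random(xi.shape) * (ub - lb)
    # Active (group B) motion relative to a randomly chosen jellyfish j != i
    j = rng.integers(len(pop))
    while j == i:
        j = rng.integers(len(pop))
    # Equation (15): move toward j if j is better, away otherwise
    direction = pop[j] - xi if fit[i] >= fit[j] else xi - pop[j]
    return xi + rng.random(xi.shape) * direction       # Equations (13) and (16)
```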

2.1.4. Time Control Mechanism (TCM)

Initially, passive motion is preferred; over time, active motion is favored. To mimic this, a time control mechanism (TCM) organizes how jellyfish switch between moving inside the swarm and following the ocean current. The TCM relies on a time control function $c(t)$, a time-varying random value that ranges from 0 to 1, and a constant $c_o$; Equation (17) represents the TCM.
$c(t) = \left| \left(1 - \frac{t}{t_{max}}\right) \times (2 \times \mathrm{rand} - 1) \right|$ (17)
where $t_{max}$ stands for the maximum number of generations. The quantity $(1 - c(t))$ governs the motion inside a swarm (passive or active): when $\mathrm{rand}(0,1) > (1 - c(t))$, $X$ exhibits passive motion; otherwise, $X$ exhibits active motion.
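The time control function of Equation (17) and the resulting dispatch can be sketched as follows (the function name is ours):

```python
def time_control(t, t_max, rng=None):
    """Time control function c(t) of Equation (17): a random value in [0, 1]
    whose magnitude shrinks, on average, as t approaches t_max."""
    rng = rng or np.random.default_rng()
    return abs((1.0 - t / t_max) * (2.0 * rng.random() - 1.0))

# Dispatch used in Algorithm 1:
#   c(t) >= 0.5     -> follow the ocean current (exploration)
#   rand > 1 - c(t) -> passive motion inside the swarm, Equation (12)
#   otherwise       -> active motion, Equations (15) and (16)
```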
The complete steps of the jellyfish optimization algorithm are given in Algorithm 1.
Algorithm 1 Steps of the jellyfish optimization algorithm
1: Determine the initial parameters: the number of solutions $N$ and the total number of generations $t_{max}$.
2: Construct the population $X_i$ $(i = 1, 2, \ldots, N)$ using the logistic chaotic map.
3: Compute the fitness function $Fit_i$ for each $X_i$.
4: Allocate the best solution $X_b$.
5: Set $t = 1$.
6: repeat
7:     for $i = 1$ to $N$ do
8:         Update $c(t)$ according to Equation (17).
9:         if $c(t) \ge 0.5$ then
10:            Update $\vec{TO}$ using Equation (9).
11:            Update $X_i$ using Equation (11).
12:        else
13:            if $\mathrm{rand} > (1 - c(t))$ then
14:                Update $X_i$ according to Equation (12).
15:            else
16:                Update the direction of $X_i$ according to Equation (15).
17:                Update $X_i$ according to Equation (16).
18:        Check the boundary conditions for $X_i$.
19:        Compute $Fit_i$ for $X_i$ and update $X_b$.
20:     $t = t + 1$.
21: until $t > t_{max}$
22: Output: $X_b$.

2.2. Simulated Annealing Algorithm

The Simulated Annealing (SA) algorithm is a single-solution-based optimization technique simulating the metallurgical annealing process [22,23].
The SA algorithm starts by generating a random initial solution X and derives a new solution Y from its neighborhood. The fitness values of X and Y are then computed and, if $Fit(Y) \le Fit(X)$, then X = Y. Otherwise, SA may still replace X with Y even when Y's fitness is worse than X's; this is determined by the probability p, which is defined as follows:
$p = e^{-\Delta E / T}$ (18)
$\Delta E = \frac{Cost(Y) - Cost(X)}{Cost(Y)}$ (19)
where T stands for the temperature variable, which should start high and steadily decrease in value as iterations progress. The probability of adopting the new solution is denoted by p, and $\Delta E$ is the relative difference between the objective value of the suggested solution Y and that of the current solution X. The SA algorithm is illustrated in Algorithm 2.
Algorithm 2 The SA method
  • Input: initial temperature $T_0$, solution dimension $D$, and maximum number of generations $t_{max}$.
  • Generate the initial solution X.
  • Compute the fitness value $Fit(X)$ to evaluate its quality.
  • Allocate the best solution: $X_b = X$ and $Fit(X_b) = Fit(X)$.
  • Set $t = 1$ and $T = T_0$.
  • while $t < t_{max}$ do
  •     Discover a neighbour Y of X.
  •     Compute $Fit(Y)$.
  •     if $Fit(Y) < Fit(X)$ then
  •         $X = Y$.
  •     else
  •         Compute $\Delta E$ using Equation (19) and p using Equation (18).
  •         if $p \ge \mathrm{rand}$ then
  •             $X = Y$.
  •     if $Fit(X_b) > Fit(X)$ then
  •         $X_b = X$.
  •     Reduce the temperature T and set $t = t + 1$.
  • Output: $X_b$.
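A minimal sketch of the acceptance rule of Equations (18) and (19) follows; taking the relative difference in magnitude and guarding against a zero denominator are our additions for robustness.

```python
import math
import random

def sa_accept(cost_x, cost_y, temperature, rng=random):
    """Metropolis-style acceptance of Equations (18)-(19): always accept an
    improving candidate; otherwise accept with probability p = exp(-dE/T)."""
    if cost_y <= cost_x:
        return True
    # Relative worsening, cf. Equation (19); magnitude taken as an assumption.
    d_e = abs(cost_y - cost_x) / (abs(cost_y) + 1e-12)
    return rng.random() < math.exp(-d_e / temperature)
```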

2.3. Disruption Operator

The preliminaries of the disruption operator ($D_{op}$) are described in this section. $D_{op}$ is inspired by a physical process in astrophysics: when a set of gravitationally bound particles (with total mass m) comes very close to a massive object (with mass M), the group is torn apart [24]. This process can be implemented using the rule in Equation (20) [24]:
$D_{op} = \begin{cases} dist_{i,j} \times \delta\left(-\frac{1}{2}, \frac{1}{2}\right) & \text{if } dist_{i,best} \ge 1 \\ 1 + dist_{i,best} \times \delta\left(-\frac{10^{-16}}{2}, \frac{10^{-16}}{2}\right) & \text{otherwise} \end{cases}$ (20)
where $dist_{i,best}$ represents the Euclidean distance between the best solution and the $i$th solution, $dist_{i,j}$ denotes the Euclidean distance between the $i$th solution and its nearest neighbor (the $j$th solution), and $\delta(a, b)$ represents a random value in the interval $[a, b]$.
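The factor of Equation (20) can be sketched as below. Following [24], the disrupted solution is commonly obtained by multiplying the solution by $D_{op}$; that multiplicative application is an assumption here, as the paper does not spell it out, and the function name is ours.

```python
def disruption_factor(pop, i, best_idx, rng=None):
    """Disruption factor D_op of Equation (20) for solution i."""
    rng = rng or np.random.default_rng()
    dists = np.linalg.norm(pop - pop[i], axis=1)
    dists[i] = np.inf
    j = int(np.argmin(dists))                          # nearest neighbour of i
    dist_ij = np.linalg.norm(pop[i] - pop[j])
    dist_ib = np.linalg.norm(pop[i] - pop[best_idx])
    if dist_ib >= 1.0:
        return dist_ij * rng.uniform(-0.5, 0.5)
    return 1.0 + dist_ib * rng.uniform(-1e-16 / 2, 1e-16 / 2)

# Assumed application, following [24]: X_i <- X_i * D_op
```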

3. Developed Method

The framework of the presented DJSD method is illustrated in Figure 1. The improved DJSD aims to enhance the performance of the traditional JSA using dynamic differential Simulated Annealing and a disruption operator. Each of these techniques is applied to enhance the exploration and exploitation of JSA.
The developed DJSD starts by producing the initial population and then computing the fitness value of each agent in this population. This is followed by determining the best agent, the one with the smallest fitness value. The following process updates the agents according to the time control mechanism (TCM) value, which determines whether the agents will be updated using the ocean current or the jellyfish swarm. In the latter case (i.e., the jellyfish swarm), the traditional operators of the JSA algorithm are applied to update the current agent. Otherwise, a competition between the JSA ocean-current operator, dynamic differential Simulated Annealing, and the $D_{op}$ is used to update the current agent: the agent is updated using either the traditional JSA ocean-current operator or the SA-based move, and the SA mechanism, which decreases the probability of accepting a worse agent as the temperature is reduced, is then applied. Finally, after updating the current population, the $D_{op}$ is used to improve the diversity of X.

3.1. Initial Stage

The developed DJSD starts at this point by constructing an initial population $X$ of $N$ agents $X_i$, formulated as:
$X_i = LB + \mathrm{rand}(1, D) \times (UB - LB)$ (21)
In Equation (21), $\mathrm{rand}(1, D)$ stands for a vector of $D$ random values, and $LB$ and $UB$ refer to the limits of the search domain.

3.2. Updating Stage

At this stage, DJSD starts updating the agents within the current population (X) by calculating the fitness value $Fit_i$ of each agent $X_i$. The next step in DJSD is to allocate the best agent $X_b$, which has the best fitness value $Fit_b$. Then, the TCM value is updated using Equation (17). In cases where $c(t) < 0.5$, the operators of the jellyfish swarm are used to update $X_i$. Otherwise, the combination of the ocean current, SA, and $D_{op}$ is used to enhance $X_i$. This is achieved based on the dynamic characteristics of a hammer during the search for the optimal solution, represented by a parameter that fluctuates between the ocean current and the SA operator. Hence, this process is formulated as:
$X_i^{new} = \begin{cases} X_i + \mathrm{rand} \times (X_b - \mu \times \beta \times \mathrm{rand}) & \text{if } \mathrm{rem}(t, 2) = 1 \\ X_r + f \times (X_{r1} - X_{r2}) & \text{if } \mathrm{rem}(t, 2) = 0 \end{cases}$ (22)
In Equation (22), $f \in [0, 1]$ is a random number and $\mathrm{rem}$ denotes the remainder (modulo) function. $X_{r1}$ and $X_{r2}$ are random solutions chosen from the current population X, while $X_r$ denotes a random solution generated in the search space.
After that, the new value of $X_i$ (i.e., $X_i^{new}$) is accepted outright if it improves the fitness; otherwise, it may still be accepted with a probability p that is higher at elevated temperatures than at low temperatures, as in traditional SA. This process is formulated as follows.
$X_i = \begin{cases} X_i^{new} & \text{if } Fit_i^{new} < Fit_i \\ \text{apply Equations (18) and (19)} & \text{otherwise} \end{cases}$ (23)
The next step is to apply the $D_{op}$ operator to the currently updated population X. However, applying it to every agent would increase the computational time of the developed method. Accordingly, $D_{op}$ is applied probabilistically, according to the following formula:
$X_i = \begin{cases} \text{apply Equation (20)} & \text{if } \mathrm{rand} > 0.5 \\ X_i & \text{otherwise} \end{cases}$ (24)
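Putting Equations (22)–(24) together, one DJSD updating step can be sketched as follows. It reuses `sa_accept` and `disruption_factor` from the earlier sketches, assumes $\beta = 3$ as before, and applies $D_{op}$ multiplicatively; all of these are our assumptions.

```python
def djsd_update(pop, fit, t, lb, ub, temperature, fobj, beta=3.0, rng=None):
    """One competitive updating step of DJSD (Equations (22)-(24)), a sketch.
    Odd iterations use the ocean-current move; even iterations use the
    SA-style differential move."""
    rng = rng or np.random.default_rng()
    n, dim = pop.shape
    mu = pop.mean(axis=0)
    best = pop[np.argmin(fit)]
    for i in range(n):
        if t % 2 == 1:                                   # rem(t, 2) = 1
            cand = pop[i] + rng.random(dim) * (best - mu * beta * rng.random(dim))
        else:                                            # rem(t, 2) = 0
            r1, r2 = rng.choice(n, size=2, replace=False)
            xr = lb + rng.random(dim) * (ub - lb)        # random point in the domain
            cand = xr + rng.random() * (pop[r1] - pop[r2])   # f in [0, 1]
        f_cand = fobj(cand)
        # Equation (23): keep improvements, otherwise fall back to SA acceptance
        if f_cand < fit[i] or sa_accept(fit[i], f_cand, temperature):
            pop[i], fit[i] = cand, f_cand
        # Equation (24): apply the disruption operator with probability 0.5
        if rng.random() > 0.5:
            pop[i] = pop[i] * disruption_factor(pop, i, int(np.argmin(fit)), rng)
            fit[i] = fobj(pop[i])
    return pop, fit
```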

3.3. Terminal Stage

The stop conditions are checked within this stage; if they are not met, the updating stage is repeated. Otherwise, the best solution found so far ( X b ) is returned.

4. Experimental Results and Discussion

In this section, the presented optimizer's performance is evaluated using several experiments and comprehensive comparisons to demonstrate the algorithm's abilities. The implemented experiments use a set of thirty classical benchmark functions. The results of the developed DJSD are compared with those of several metaheuristic optimizers, including the whale optimization algorithm (WOA) [25], artificial ecosystem-based optimization (AEO) [26], the chimp optimization algorithm (Chimp) [27], the firefly algorithm (FA) [28], moth-flame optimization (MFO) [35], and the traditional JSA. The parameter settings of all compared algorithms are given in Table 1; these values are chosen based on the original implementations of the algorithms. Each algorithm is run 30 times with a population size of 30, and the maximum number of iterations per run is set to 1000. The experiments and analyses are executed using MATLAB R2018b on a machine equipped with an Intel Core i5 CPU and 4 GB RAM running Windows 10 64-bit.

4.1. Experimental Series 1: Mathematical Optimization Problems

The major objective of the current experimental series is to assess the ability of the developed method to determine the optimal solution of classical benchmark functions [29]. The description of these benchmark functions is given in Table 2. It is evident from the table that there are two types of functions, namely unimodal (UM) and multimodal (MM). The unimodal type (F1–F10) is used to assess the ability of an MH technique to find the solution inside a search space with a single extremum. Likewise, the multimodal type (F11–F30) is applied to test the capability of an MH method to find the optimal solution inside a search domain containing multiple local optima.

Results and Discussions

The findings of DJSD and other peer algorithms in terms of solving classical global benchmark functions are given in Table 3, Table 4, Table 5 and Table 6. From Table 3, which illustrates the average of the fitness value, it is clear that DJSD outperforms other peer methods in most of the tested functions. More specifically, it achieves the smallest fitness value in eighteen functions, representing 60% of the total tested functions, followed by AEO and FA ranked in the second and third places, respectively. Conversely, the traditional JSA only provides better results than MFO, Chimp, and WOA.
Moreover, it can be observed from the standard deviation (STD) values given in Table 4 that the developed DJSD method is more stable than the other MH techniques, whereas JSA, AEO, MFO, FA, Chimp, and WOA achieve the smallest STD values in 5, 14, 2, 8, 3, and 4 of the 30 functions, respectively. By analyzing the performance of the developed DJSD algorithm in terms of the best fitness value, as provided in Table 5, one can see that AEO has the best fitness value in seventeen functions, followed by DJSD and FA, which provide better results in sixteen and fifteen functions, respectively. In addition, it can be noticed from the worst fitness values given in Table 6 that the proposed DJSD still obtains better results even in its worst case. In particular, it provides smaller fitness values in sixteen functions, followed by AEO. On the other hand, JSA and FA have nearly the same performance, attaining the smallest values in only eight and nine functions, respectively.
Figure 2 and Figure 3 depict the convergence curves for the average of the fitness value over the total number of iterations. It can be observed from these convergence curves that the developed DJSD can converge faster than other MH techniques. For example, F1–F4 and F21–F26 are examples of unimodal and multimodal functions, respectively.
To further analyze the performance of the developed DJSD, a non-parametric test, the Friedman test, was applied to identify whether there is a significant difference between DJSD and the other MH techniques. Table 7 shows the mean rank values produced by the Friedman test for all compared techniques. DJSD accomplishes the best mean rank in terms of the average, STD, and worst fitness values; however, in terms of the best fitness value, DJSD ranks second, behind the AEO algorithm.
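For reference, mean ranks of the kind reported in Table 7 can be reproduced with SciPy's Friedman test; the results matrix below is a random placeholder standing in for the 7 × 30 matrix of average fitness values in Table 3.

```python
import numpy as np
from scipy import stats

# results[m][k]: average fitness of method m on benchmark function k.
methods = ["DJSD", "JSA", "AEO", "MFO", "FA", "Chimp", "WOA"]
results = np.random.rand(7, 30)  # hypothetical placeholder for Table 3

# Friedman test: ranks the methods per function, then compares mean ranks
# (lower fitness -> lower, i.e., better, rank for minimization problems).
statistic, p_value = stats.friedmanchisquare(*results)
mean_ranks = stats.rankdata(results, axis=0).mean(axis=1)
for m, r in zip(methods, mean_ranks):
    print(f"{m}: mean rank = {r:.4f}")
print(f"chi-square = {statistic:.3f}, p = {p_value:.4g}")
```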
In summary, the reported results demonstrate the high ability of DJSD to address global mathematical optimization problems. This can be attributed to the integration of dynamic Simulated Annealing and the disruption operator with JSA.

4.2. Experimental Series 2: Cloud Task Scheduling Problems

Cloud computing is the provision of computing resources and services over the Internet. These services are delivered to cloud consumers under the Service Level Agreement (SLA) specifications. The SLAs are made up of various quality of service (QoS) parameters promised by the cloud provider. Among them are minimal execution time, high performance, service availability, energy consumption, and low prices. These parameters can be considered separately when just one of them is crucial to the system performance or combined when both parameters are related. Keep in mind that task scheduling is a decision-making process that deals with allocating computing resources to tasks, and its primary purpose is to target one or more objectives. Therefore, efficient task scheduling is one of the critical steps to effectively leverage the power of cloud computing [30].

4.2.1. Problem Formulation

The scheduling model for the considered scheduling issue in cloud computing is described as follows. Consider a data center consisting of several physical servers or computational resources. These physical servers may vary in the number of CPU cores (processing elements), memory size, network bandwidth, and storage capacity [31]. These resources can be scaled up or down to meet the required QoS and SLAs. Suppose $VM = \{VM_1, VM_2, \ldots, VM_m\}$ is the set of virtual machines (VMs) available within a data center. Every $VM_j$ has its processing capability measured in MIPS (millions of instructions per second). Suppose $T = \{T_1, T_2, \ldots, T_n\}$ is a collection of user requests (tasks) submitted by cloud subscribers to be performed on the set of VMs. Every task $T_i$ has a length $TL_i$ expressed in millions of instructions (MI).
In this study, an expected time to compute (ETC) matrix is used to keep the time expected to perform a certain task (service) on various VMs [31]. The element $ETC_{ij}$ signifies the ETC of the $i$th task on the $j$th VM, where $1 \le i \le n$ and $1 \le j \le m$.
$ETC_{ij} = \frac{TL_i}{VMP_j}$ (25)
where $TL_i$ represents the length of the $i$th task and $VMP_j$ signifies the processing power of the $j$th VM.
Our objective in this study is to ensure better QoS in terms of makespan and energy efficiency. Makespan is the amount of time taken for the completion of all tasks [32]; therefore, an appropriate mapping of tasks to VMs should achieve a minimal makespan. In general, the makespan (MKS) is computed by the following equation.
$MKS = \max_{j \in \{1, 2, \ldots, m\}} \sum_{i=1}^{n} ETC_{ij}$ (26)
Furthermore, energy consumption refers to the amount of energy consumed by the computing machines. The energy consumption should therefore be minimal to enhance the system performance and provide better QoS to the users. Recall that the energy consumed by $VM_j$ is determined by the energy consumed in the active state plus the energy consumed in the idle state [31]. Additionally, the energy consumption of an idle VM is about 60% of that of its active state [33]. Hence, the energy consumed (in Joules) by $VM_j$ can be determined as:
$Eng(VM_j) = \left(TE_j \times \beta_j + (MKS - TE_j) \times \alpha_j\right) \times VMP_j$ (27)
$\beta_j = 10^{-8} \times VMP_j^2$ (28)
$\alpha_j = 0.6 \times \beta_j$ (29)
where $TE_j$ represents the total execution time of $VM_j$, and $\beta_j$ and $\alpha_j$ denote the energy consumed by $VM_j$ in the active and idle states, respectively. The overall energy consumption ($TEng$) of the cloud system is computed as given in Equation (30).
$TEng = \sum_{j=1}^{m} Eng(VM_j)$ (30)
Since the whole performance of the cloud system is heavily influenced by the makespan and energy consumption factors, our main objective here is to ensure a better makespan with less energy consumption. Therefore, the considered problem is classified as a bi-objective optimization problem. Then, the fitness function is given by:
$F = \lambda \times TEng + (1 - \lambda) \times MKS$ (31)
where $\lambda$ signifies the balance parameter between the two factors of the fitness function. Finally, the goal of our task scheduling is to find the schedule that minimizes F.
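The whole evaluation pipeline of Equations (25)–(31) fits in a few lines. The sketch below is ours, and it reads Equation (26) as the total execution time of the tasks actually assigned to each VM, which is the standard interpretation of makespan; the example data at the end are hypothetical.

```python
import numpy as np

def schedule_fitness(task_len, vm_mips, assign, lam=0.7):
    """Bi-objective fitness of Equation (31) for one candidate schedule.
    task_len[i]: length of task i in MI; vm_mips[j]: speed of VM j in MIPS;
    assign[i]: index of the VM that executes task i."""
    m = len(vm_mips)
    etc = task_len[:, None] / vm_mips[None, :]                    # Equation (25)
    # TE_j: total execution time of the tasks mapped to VM j
    te = np.array([etc[assign == j, j].sum() for j in range(m)])
    mks = te.max()                                                # Equation (26)
    beta = 1e-8 * vm_mips ** 2                                    # Equation (28)
    alpha = 0.6 * beta                                            # Equation (29)
    energy = ((te * beta + (mks - te) * alpha) * vm_mips).sum()   # Eqs. (27), (30)
    return lam * energy + (1.0 - lam) * mks                       # Equation (31)

# Hypothetical example mirroring the paper's setup (20 VMs, synthetic tasks):
rng = np.random.default_rng(0)
tasks = rng.integers(2000, 56001, size=300).astype(float)   # task lengths, MI
vms = rng.integers(1000, 5001, size=20).astype(float)       # VM speeds, MIPS
mapping = rng.integers(0, 20, size=300)                     # random schedule
print(schedule_fitness(tasks, vms, mapping))
```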

4.2.2. Experimental Environment and Datasets

To demonstrate the applicability of the developed DJSD approach, we perform computational experiments using different workload instances. Three workload traces are used to validate the proposed algorithm: a synthetic workload, the HPC2N workload, and the NASA Ames iPSC/860 workload. The synthetic workload contains 1500 tasks varying in length from 2000 to 56,000 MI, created based on a uniform distribution; Table 8 describes it. The real workload traces, HPC2N and NASA Ames, are derived from the "Parallel Workloads Archive" [34]. HPC2N encompasses the statistics of 527,371 tasks, while NASA Ames includes the statistics of 42,264 tasks.
The cloud environment comprises a single data center and 20 VMs with different setups hosted on two host machines in all experiments. The configurations of the host machines and VMs are displayed in Table 9. As evident from Table 9, the fastest and slowest VMs have a processing capacity of 5000 and 1000 MIPS, respectively.

4.2.3. Results and Discussions

In this paper, eight state-of-the-art metaheuristics are chosen as peer algorithms for comparative analysis, including the standard JSA [15], AEO [26], MFO [35], FA [28], Chimp [27], WOA [25], golden jackal optimization (GJO) [36], and SA [23]. Each algorithm is executed with 20 independent runs on each scheduling instance in order to produce more precise estimates of our findings. Moreover, λ is set to 0.7 as our major goal is reducing energy consumption.
To scrutinize the performance behavior of the presented DJSD algorithm, the average fitness values of the nine comparative algorithms are plotted in Figure 4, Figure 5 and Figure 6 for different numbers of tasks and datasets. The x-axes of the given graphs show the number of tasks, whereas the y-axes represent the fitness function's value. In particular, as shown in Figure 4, DJSD achieves better fitness values for the synthetic workload when task sizes range from 300 to 1500. In a similar manner, the comparative results in Figure 5 show that, on the NASA Ames iPSC/860 dataset, DJSD performs much better than the other eight peer algorithms in terms of the fitness function. Additionally, Figure 6 illustrates that when the number of tasks ranges from 1000 to 5000, DJSD performs well on the HPC2N workload. Overall, the curves affirm the superior performance and ability of the presented DJSD approach to identify near-optimal solutions on almost all datasets.
The comparisons of experimental outcomes in terms of the average makespan produced by DJSD, JSA, AEO, MFO, FA, Chimp, WOA, GJO, and SA for the synthetic and real workload traces are given in Figure 7, Figure 8 and Figure 9. In comparison to the traditional JSA and other peer algorithms, the suggested DJSD approach generates the best average makespan for the synthetic workload, as demonstrated in Figure 7. Besides that, when the NASA iPSC workload is employed, Figure 8 shows that DJSD delivers better average makespan values than other peer algorithms. In addition, when considering the HPC2N workload, Figure 9 shows that DJSD attains lower average makespan values compared to existing algorithms. Ultimately, the reported results reveal that DJSD exhibits the best average makespan among the other eight comparative methods for all investigated instances.
Figure 10, Figure 11 and Figure 12 demonstrate the comparison of total energy consumption between the DJSD, JSA, AEO, MFO, FA, Chimp, WOA, GJO, and SA algorithms using the synthetic and real datasets, including HPC2N and NASA. Figure 10 illustrates that, for the synthetic dataset, DJSD consumes the least amount of energy compared to the comparative algorithms. Similarly, Figure 11 and Figure 12 demonstrate that DJSD delivers the lowest energy consumption compared to the available methods for the NASA iPSC and HPC2N workloads, respectively. In brief, the comparison of experimental outcomes indicates that, for all tested instances and datasets, DJSD provides better energy consumption than the other comparative methods.
To summarize, the results mentioned above confirm the benefit of integrating the SA strategy and D o p with the JSA algorithm. Finally, the findings demonstrate that the DJSD algorithm produces better solution diversity and quality, resulting in near-optimal solutions.

5. Conclusions

The artificial Jellyfish Search Algorithm (JSA) is a recent, promising search method that simulates the behaviour of jellyfish in the ocean. It has been applied to solve various optimization problems. However, it faces some problems in the search process while solving complicated problems, particularly the local optima problem and the low diversity of candidate solutions. This paper suggests a novel dynamic search method based on the artificial jellyfish search optimizer with two search techniques (Simulated Annealing and disruption operators), called DJSD. The enhancement of the proposed method occurs in two stages. In the first stage, the Simulated Annealing operators are incorporated into the artificial jellyfish search optimizer to enhance its ability to discover more feasible regions in a competitive manner. This modification is performed dynamically by using a fluctuating parameter representing a hammer's characteristics to keep the solutions diverse and balance the search processes. In the second stage, the disruption operator is employed in the exploitation phase to further improve the diversity of the candidate solutions throughout the optimization process and avert the local optima problem.
Two experiment series were conducted to validate the performance of the proposed DJSD method. In the first series, thirty classical benchmark functions were used to validate the effectiveness of DJSD compared with other well-known search methods. The findings revealed that the suggested DJSD approach obtained encouraging results, discovered new search regions, and found new best solutions for most test cases. In the second series, a set of tests was conducted on the task scheduling problem in cloud computing applications to further prove DJSD's ability to function in real-world applications. The real-world application results confirmed that the proposed DJSD is competent in dealing with challenging real applications. Moreover, it obtained high performance compared to other similar methods according to several standard evaluation measures, including the fitness function, makespan, and energy consumption.
The proposed method can be tested further to find potential improvements in future works. Furthermore, it can be combined with other search methods to further improve its searchability in dealing with complicated problems. Different optimization problems can be tested to investigate the performance of the proposed technique, such as text clustering, photovoltaic cell parameters, engineering and industrial optimization problems, forecasting models, feature selection, image segmentation, and multi-objective problems.

Author Contributions

Conceptualization, I.A., L.A., S.A., D.E. and M.A.E.; methodology, I.A., L.A., S.A., D.E. and M.A.E.; software, I.A., L.A. and M.A.E.; validation, I.A., L.A., S.A., D.E. and M.A.E.; formal analysis, I.A., L.A. and M.A.E.; investigation, I.A., L.A. and M.A.E.; writing—original draft preparation, I.A., L.A., S.A., D.E. and M.A.E.; writing—review and editing, I.A., L.A., S.A., D.E. and M.A.E.; visualization, I.A., L.A. and M.A.E.; supervision, M.A.E.; project administration, I.A., L.A., S.A., D.E. and M.A.E.; and funding acquisition, S.A. All authors have read and agreed to the published version of the manuscript.

Funding

Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2022R197), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.

Data Availability Statement

The data are available upon request from the corresponding author.

Acknowledgments

Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2022R197), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Alshinwan, M.; Abualigah, L.; Shehab, M.; Abd Elaziz, M.; Khasawneh, A.M.; Alabool, H.; Al Hamad, H. Dragonfly algorithm: A comprehensive survey of its results, variants, and applications. Multimed. Tools Appl. 2021, 80, 14979–15016. [Google Scholar] [CrossRef]
  2. Xia, W.; Wu, Z. An effective hybrid optimization approach for multi-objective flexible job-shop scheduling problems. Comput. Ind. Eng. 2005, 48, 409–425. [Google Scholar] [CrossRef]
  3. He, Q.; Wang, L. An effective co-evolutionary particle swarm optimization for constrained engineering design problems. Eng. Appl. Artif. Intell. 2007, 20, 89–99. [Google Scholar] [CrossRef]
  4. Karakoyun, M.; Ozkis, A.; Kodaz, H. A new algorithm based on gray wolf optimizer and shuffled frog leaping algorithm to solve the multi-objective optimization problems. Appl. Soft Comput. 2020, 96, 106560. [Google Scholar] [CrossRef]
  5. Gupta, S.; Deep, K. A memory-based grey wolf optimizer for global optimization tasks. Appl. Soft Comput. 2020, 93, 106367. [Google Scholar] [CrossRef]
  6. Schuëller, G.I.; Jensen, H.A. Computational methods in optimization considering uncertainties—An overview. Comput. Methods Appl. Mech. Eng. 2008, 198, 2–13. [Google Scholar] [CrossRef]
  7. Geem, Z.W.; Kim, J.H.; Loganathan, G.V. A new heuristic optimization algorithm: Harmony search. Simulation 2001, 76, 60–68. [Google Scholar] [CrossRef]
  8. Afshari, H.; Hare, W.; Tesfamariam, S. Constrained multi-objective optimization algorithms: Review and comparison with application in reinforced concrete structures. Appl. Soft Comput. 2019, 83, 105631. [Google Scholar] [CrossRef]
  9. Gupta, S.; Deep, K.; Mirjalili, S. An efficient equilibrium optimizer with mutation strategy for numerical optimization. Appl. Soft Comput. 2020, 96, 106542. [Google Scholar] [CrossRef]
  10. Abualigah, L.; Diabat, A.; Mirjalili, S.; Abd Elaziz, M.; Gandomi, A.H. The arithmetic optimization algorithm. Comput. Methods Appl. Mech. Eng. 2021, 376, 113609. [Google Scholar] [CrossRef]
  11. Bansal, J.C.; Singh, S. A better exploration strategy in Grey Wolf Optimizer. J. Ambient. Intell. Humaniz. Comput. 2021, 12, 1099–1118. [Google Scholar] [CrossRef]
  12. Abualigah, L.; Yousri, D.; Abd Elaziz, M.; Ewees, A.A.; Al-qaness, M.A.; Gandomi, A.H. Aquila Optimizer: A novel meta-heuristic optimization Algorithm. Comput. Ind. Eng. 2021, 157, 107250. [Google Scholar] [CrossRef]
  13. Sulaiman, M.H.; Mustaffa, Z.; Saari, M.M.; Daniyal, H. Barnacles mating optimizer: A new bio-inspired algorithm for solving engineering optimization problems. Eng. Appl. Artif. Intell. 2020, 87, 103330. [Google Scholar] [CrossRef]
  14. Abd Elaziz, M.; Attiya, I. An improved Henry gas solubility optimization algorithm for task scheduling in cloud computing. Artif. Intell. Rev. 2021, 54, 3599–3637. [Google Scholar] [CrossRef]
  15. Chou, J.S.; Truong, D.N. A novel metaheuristic optimizer inspired by behavior of jellyfish in ocean. Appl. Math. Comput. 2021, 389, 125535. [Google Scholar] [CrossRef]
  16. Gouda, E.A.; Kotb, M.F.; El-Fergany, A.A. Jellyfish search algorithm for extracting unknown parameters of PEM fuel cell models: Steady-state performance and analysis. Energy 2021, 221, 119836. [Google Scholar] [CrossRef]
  17. Chou, J.S.; Truong, D.N. Multiobjective optimization inspired by behavior of jellyfish for solving structural design problems. Chaos Solitons Fractals 2020, 135, 109738. [Google Scholar] [CrossRef]
  18. Manivannan, S.; Selvakumar, S. A Spectrum Defragmentation Algorithm Using Jellyfish Optimization Technique in Elastic Optical Network (EON). Wirel. Pers. Commun. 2021, 1–19. [Google Scholar] [CrossRef]
  19. Chou, J.S.; Truong, D.N. Multistep energy consumption forecasting by metaheuristic optimization of time-series analysis and machine learning. Int. J. Energy Res. 2021, 45, 4581–4612. [Google Scholar] [CrossRef]
  20. Dhevanandhini, G.; Yamuna, G. An Efficient Lossless Video Watermarking Extraction Process with Multiple Watermarks Using Artificial Jellyfish Algorithm. Turk. J. Comput. Math. Educ. (TURCOMAT) 2021, 12, 3048–3055. [Google Scholar]
  21. Ghafil, H.N.; Jármai, K. Dynamic differential annealed optimization: New metaheuristic optimization algorithm for engineering applications. Appl. Soft Comput. 2020, 93, 106392. [Google Scholar] [CrossRef]
  22. Bertsimas, D.; Tsitsiklis, J. Simulated annealing. Stat. Sci. 1993, 8, 10–15. [Google Scholar] [CrossRef]
  23. Kirkpatrick, S.; Gelatt, C.D.; Vecchi, M.P. Optimization by simulated annealing. Science 1983, 220, 671–680. [Google Scholar] [CrossRef] [PubMed]
  24. Ibrahim, R.A.; Abd Elaziz, M.; Lu, S. Chaotic opposition-based grey-wolf optimization algorithm based on differential evolution and disruption operator for global optimization. Expert Syst. Appl. 2018, 108, 1–27. [Google Scholar] [CrossRef]
  25. Mirjalili, S.; Lewis, A. The whale optimization algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [Google Scholar] [CrossRef]
  26. Zhao, W.; Wang, L.; Zhang, Z. Artificial ecosystem-based optimization: A novel nature-inspired meta-heuristic algorithm. Neural Comput. Appl. 2020, 32, 9383–9425. [Google Scholar] [CrossRef]
  27. Khishe, M.; Mosavi, M.R. Chimp optimization algorithm. Expert Syst. Appl. 2020, 149, 113338. [Google Scholar] [CrossRef]
  28. Yang, X.S. Firefly Algorithms for Multimodal Optimization. In Stochastic Algorithms: Foundations and Applications; Watanabe, O., Zeugmann, T., Eds.; Springer: Berlin/Heidelberg, Germany, 2009; pp. 169–178. [Google Scholar]
  29. Suganthan, P.N.; Hansen, N.; Liang, J.J.; Deb, K.; Chen, Y.P.; Auger, A.; Tiwari, S. Problem Definitions and Evaluation Criteria for the CEC 2005 Special Session on Real-Parameter Optimization. KanGAL Rep. 2005, 2005005, 2005. [Google Scholar]
  30. Attiya, I.; Abualigah, L.; Elsadek, D.; Chelloug, S.A.; Abd Elaziz, M. An Intelligent Chimp Optimizer for Scheduling of IoT Application Tasks in Fog Computing. Mathematics 2022, 10, 1100. [Google Scholar] [CrossRef]
  31. Attiya, I.; Elaziz, M.A.; Abualigah, L.; Nguyen, T.N.; Abd El-Latif, A.A. An Improved Hybrid Swarm Intelligence for Scheduling IoT Application Tasks in the Cloud. IEEE Trans. Ind. Inform. 2022. [Google Scholar] [CrossRef]
  32. Attiya, I.; Zhang, X.; Yang, X. TCSA: A dynamic job scheduling algorithm for computational grids. In Proceedings of the 2016 First IEEE International Conference on Computer Communication and the Internet (ICCCI), Wuhan, China, 13–15 October 2016; pp. 408–412. [Google Scholar]
  33. Mishra, S.K.; Puthal, D.; Rodrigues, J.J.P.C.; Sahoo, B.; Dutkiewicz, E. Sustainable Service Allocation Using a Metaheuristic Technique in a Fog Server for Industrial Applications. IEEE Trans. Ind. Inform. 2018, 14, 4497–4506. [Google Scholar] [CrossRef]
  34. Parallel Workloads Archive. Available online: http://www.cse.huji.ac.il/labs/parallel/workload/logs.html (accessed on 28 April 2021).
  35. Mirjalili, S. Moth-flame optimization algorithm: A novel nature-inspired heuristic paradigm. Knowl.-Based Syst. 2015, 89, 228–249. [Google Scholar] [CrossRef]
  36. Chopra, N.; Ansari, M.M. Golden jackal optimization: A novel nature-inspired optimizer for engineering applications. Expert Syst. Appl. 2022, 198, 116924. [Google Scholar] [CrossRef]
Figure 1. Schematic flowchart of the developed DJSD method.
Figure 2. Convergence curves of each approach for F1–F6.
Figure 3. Convergence curves of each approach for F21, F23, F24, F25, F26, and F27.
Figure 4. Convergence curve for the synthetic workload.
Figure 5. Convergence curve for NASA iPSC real workload.
Figure 6. Convergence curve for HPC2N real workload.
Figure 7. Average makespan for the synthetic workload.
Figure 8. Average makespan for NASA iPSC real workload.
Figure 9. Average makespan for HPC2N real workload.
Figure 10. Total energy consumption for the synthetic workload.
Figure 11. Total energy consumption for NASA iPSC real workload.
Figure 12. Total energy consumption for HPC2N real workload.
Table 1. The value of each parameter of the compared methods.

| Algorithm | Parameter Values |
|---|---|
| DJSD | $\gamma = 0.1$, $\eta = 4.0$ |
| JSA | $\gamma = 0.1$, $\eta = 4.0$ |
| AEO | $rand_1, rand_2, rand_3, rand_4 \in [0, 1]$ |
| MFO | $a \in [-2, -1]$, spiral factor $b = 1$ |
| FA | $\beta_0 = 1$, $\gamma_{FA} \in [0.01, 100]$, $\alpha_{FA} \in [0, 1]$ |
| Chimp | $a \in [-1, 1]$, $f$ decreased from 2 to 0 |
| WOA | $a \in [0, 2]$, $b = 1$, $l \in [-1, 1]$ |
Table 2. Formulation of global problems.

| ID | Formula of Function | LW | UW | Dim | Type |
|---|---|---|---|---|---|
| F1 | $f(x)=\sum_{i=1}^{n} x_i^2$ | −100 | 100 | 30 | UM |
| F2 | $f(x)=\sum_{i=1}^{n}\lvert x_i\rvert+\prod_{i=1}^{n}\lvert x_i\rvert$ | −10 | 10 | 30 | UM |
| F3 | $f(x)=\sum_{i=1}^{n}\left(\sum_{j=1}^{i} x_j\right)^2$ | −100 | 100 | 30 | UM |
| F4 | $f(x)=\max_i\{\lvert x_i\rvert, 1\le i\le n\}$ | −100 | 100 | 30 | UM |
| F5 | $f(x)=\sum_{i=1}^{n-1}\left[100(x_{i+1}-x_i^2)^2+(x_i-1)^2\right]$ | −30 | 30 | 30 | UM |
| F6 | $f(x)=\sum_{i=1}^{n}([x_i+0.5])^2$ | −100 | 100 | 30 | UM |
| F7 | $f(x)=\sum_{i=1}^{n} i x_i^4 + \mathrm{random}[0,1)$ | −1.28 | 1.28 | 30 | UM |
| F8 | $f(x)=\sum_{i=1}^{n} i x_i^2$ | −10 | 10 | 30 | UM |
| F9 | $f(x)=\sum_{i=1}^{n} i x_i^4$ | −1.28 | 1.28 | 30 | UM |
| F10 | $f(x)=\sum_{i=1}^{n}\lvert x_i\rvert^{i+1}$ | −1 | 1 | 30 | UM |
| F11 | $f(x)=\sum_{i=1}^{n} -x_i\sin\left(\sqrt{\lvert x_i\rvert}\right)$ | −500 | 500 | 30 | MM |
| F12 | $f(x)=\sum_{i=1}^{n}\left[x_i^2-10\cos(2\pi x_i)+10\right]$ | −5.12 | 5.12 | 30 | MM |
| F13 | $f(x)=-20\exp\left(-0.2\sqrt{\frac{1}{n}\sum_{i=1}^{n}x_i^2}\right)-\exp\left(\frac{1}{n}\sum_{i=1}^{n}\cos(2\pi x_i)\right)+20+e$ | −32 | 32 | 30 | MM |
| F14 | $f(x)=\frac{1}{4000}\sum_{i=1}^{n}x_i^2-\prod_{i=1}^{n}\cos\left(\frac{x_i}{\sqrt{i}}\right)+1$ | −600 | 600 | 30 | MM |
| F15 | $f(x)=\frac{\pi}{n}\left\{10\sin^2(\pi y_1)+\sum_{i=1}^{n-1}(y_i-1)^2\left[1+10\sin^2(\pi y_{i+1})\right]+(y_n-1)^2\right\}+\sum_{i=1}^{n}u(x_i,10,100,4)$ | −50 | 50 | 30 | MM |
| F16 | $f(x)=0.1\left\{\sin^2(3\pi x_1)+\sum_{i=1}^{n-1}(x_i-1)^2\left[1+\sin^2(3\pi x_{i+1})\right]+(x_n-1)^2\left[1+\sin^2(2\pi x_n)\right]\right\}+\sum_{i=1}^{n}u(x_i,5,100,4)$ | −50 | 50 | 30 | MM |
| F17 | $f(x)=\sum_{i=1}^{n-1}(x_i-1)^2\left[1+\sin^2(3\pi x_{i+1})\right]+\sin^2(3\pi x_1)+\lvert x_n-1\rvert\left[1+\sin^2(3\pi x_n)\right]$ | −10 | 10 | 30 | MM |
| F18 | $f(x)=\sum_{i=1}^{n}\lvert x_i\sin(x_i)+0.1x_i\rvert$ | −10 | 10 | 30 | MM |
| F19 | $f(x)=0.1n-\left(0.1\sum_{i=1}^{n}\cos(5\pi x_i)-\sum_{i=1}^{n}x_i^2\right)$ | −1 | 1 | 30 | MM |
| F20 | $f(x)=\sum_{i=1}^{n}x_i^2+\left(\sum_{i=1}^{n}0.5ix_i\right)^2+\left(\sum_{i=1}^{n}0.5ix_i\right)^4$ | −5 | 10 | 30 | MM |
| F21 | $f(x)=\sum_{i=2}^{n}\left(0.5+\frac{\sin^2\left(\sqrt{100x_{i-1}^2+x_i^2}\right)-0.5}{1+0.001\left(x_{i-1}^2-2x_{i-1}x_i+x_i^2\right)^2}\right)$ | −5 | 10 | 30 | MM |
| F22 | $f(x)=0.1\left(\sin^2(3\pi x_1)+\sum_{i=1}^{n-1}(x_i-1)^2\left(1+\sin^2(3\pi x_{i+1})\right)+(x_n-1)^2\left(1+\sin^2(3\pi x_n)\right)\right)$ | −5 | 5 | 30 | MM |
| F23 | $f(x)=\sum_{i=1}^{n}\left(10^6\right)^{(i-1)/(n-1)}x_i^2$ | −100 | 100 | 30 | MM |
| F24 | $f(x)=(-1)^{n+1}\prod_{i=1}^{n}\cos(x_i)\exp\left(-\sum_{i=1}^{n}(x_i-\pi)^2\right)$ | −100 | 100 | 30 | MM |
| F25 | $f(x)=1-\cos\left(2\pi\sqrt{\sum_{i=1}^{n}x_i^2}\right)+0.1\sqrt{\sum_{i=1}^{n}x_i^2}$ | −100 | 100 | 30 | MM |
| F26 | $f(x)=0.5+\frac{\sin^2\left(\sqrt{\sum_{i=1}^{n}x_i^2}\right)-0.5}{\left(1+0.001\sum_{i=1}^{n}x_i^2\right)^2}$ | −100 | 100 | 30 | MM |
| F27 | $f(x)=\left(\frac{1}{500}+\sum_{j=1}^{25}\frac{1}{j+\sum_{i=1}^{2}(x_i-a_{ij})^6}\right)^{-1}$ | −65.536 | 65.536 | 2 | MM |
| F28 | $f(x)=\sum_{i=1}^{11}\left[a_i-\frac{x_1(b_i^2+b_ix_2)}{b_i^2+b_ix_3+x_4}\right]^2$ | −5 | 5 | 4 | MM |
| F29 | $f(x)=4x_1^2-2.1x_1^4+\frac{1}{3}x_1^6+x_1x_2-4x_2^2+4x_2^4$ | −5 | 5 | 2 | MM |
| F30 | $f(x)=\left(x_2-\frac{5.1}{4\pi^2}x_1^2+\frac{5}{\pi}x_1-6\right)^2+10\left(1-\frac{1}{8\pi}\right)\cos x_1+10$ | −5 | 5 | 2 | MM |

Here $y_i = 1 + \frac{x_i+1}{4}$ and $u(x_i, a, k, m) = \begin{cases} k(x_i-a)^m & x_i > a \\ 0 & -a \le x_i \le a \\ k(-x_i-a)^m & x_i < -a \end{cases}$.
Table 3. Average of fitness value obtained by each algorithm.

| | DJSD | JSA | AEO | MFO | FA | Chimp | WOA |
|---|---|---|---|---|---|---|---|
| F1 | 0.00E+00 | 3.71E-61 | 0.00E+00 | 2800.05 | 3.46E-20 | 5.3E-102 | 4.7E-148 |
| F2 | 0.00E+00 | 1.7E-25 | 5.5E-163 | 39.22343 | 2.82E-11 | 2.42E-59 | 6.3E-106 |
| F3 | 0.00E+00 | 14.62956 | 0.00E+00 | 18238.91 | 3.72E-20 | 4.12E-83 | 3.33E-10 |
| F4 | 0.00E+00 | 1.3E-36 | 3.5E-162 | 74.64607 | 1.4E-10 | 1.66E-39 | 2.88E-06 |
| F5 | 0.02752 | 0.039121 | 21.70953 | 205702 | 2.76E-13 | 1.850126 | 13.89417 |
| F6 | 0.016425 | 1.72E-09 | 6.46E-07 | 3620.098 | 3.56E-20 | 0.000822 | 1.12E-06 |
| F7 | 1.66E-05 | 0.00021 | 0.000425 | 0.143259 | 2.29E-05 | 0.000235 | 0.000737 |
| F8 | 0.00E+00 | 6.6E-12 | 0.00E+00 | 660.0002 | 8.21E-22 | 6.5E-103 | 2.4E-153 |
| F9 | 0.00E+00 | 1.17E-57 | 0.00E+00 | 3.435974 | 3E-47 | 1.7E-181 | 5.4E-230 |
| F10 | 0.00E+00 | 3.1E-101 | 9.5E-297 | 3.11E-09 | 2.48E-15 | 9.23E-96 | 6.4E-141 |
| F11 | −1632.06 | −953.103 | −1295.09 | −1358.43 | −214.577 | −217.608 | −217.608 |
| F12 | 0.00E+00 | 0.001876 | 0.00E+00 | 175.4167 | 0.00E+00 | 0.117139 | 0.00E+00 |
| F13 | 8.9E-16 | 3.45E-15 | 8.9E-16 | 15.96415 | 1.23E-10 | 4.3E-15 | 2.59E-15 |
| F14 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 18.24552 | 0.012442 | 0.033193 | 0.007454 |
| F15 | 0.018289 | 3.2E-10 | 4.06E-08 | 56.90585 | 1E-21 | 0.000577 | 2.61E-05 |
| F16 | 0.004851 | 1.12E-10 | 0.203345 | 558,082.7 | 2.3E-21 | 0.136374 | 4.53E-05 |
| F17 | 0.006209 | 2.57E-07 | 0.023806 | 56.18722 | 3.7E-14 | 0.74102 | 0.007575 |
| F18 | 0.00E+00 | 0.000611 | 9.5E-168 | 6.644079 | 3.03E-12 | 0.022677 | 0.030925 |
| F19 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.765822 | 0.00E+00 | 0.00E+00 | 0.00E+00 |
| F20 | 0.00E+00 | 0.134279 | 4.6E-304 | 356.8576 | 3.31E-22 | 1.2E-106 | 5.33E-47 |
| F21 | −29 | −8.14357 | −25.2567 | −6.91495 | −2.9989 | −2.90701 | −2.99528 |
| F22 | 0.002068 | 4E-09 | 2.31E-06 | 27.34682 | 1.6E-22 | 0.283575 | 1.54E-05 |
| F23 | 0.00E+00 | 13.79083 | 0.00E+00 | 57385344 | 1.35E-16 | 9.6E-102 | 3.4E-140 |
| F24 | −0.98907 | 0.00E+00 | −0.67793 | 0.00E+00 | 0.00E+00 | −0.34884 | −0.35958 |
| F25 | 0.00E+00 | 0.099873 | 1.2E-159 | 8.919873 | 0.05493 | 0.097249 | 0.079904 |
| F26 | 0.00E+00 | 0.009716 | 0.00E+00 | 0.49962 | 0.009716 | 0.009716 | 0.009262 |
| F27 | 1.039757 | 0.998 | 0.998 | 3.705114 | 0.998 | 0.998008 | 1.785747 |
| F28 | 0.000324 | 0.00031 | 0.000353 | 0.004177 | 0.000353 | 0.00127 | 0.00084 |
| F29 | −1.03158 | −1.0316 | −1.0316 | −1.0316 | −1.0316 | −1.03162 | −1.03163 |
| F30 | 0.398318 | 0.39789 | 0.39789 | 0.39789 | 0.39789 | 0.398106 | 0.397889 |
Table 4. Standard deviation (STD) of fitness value obtained by each algorithm.

| | DJSD | JSA | AEO | MFO | FA | Chimp | WOA |
|---|---|---|---|---|---|---|---|
| F1 | 0.00E+00 | 1.78E-60 | 0.00E+00 | 4582.544 | 1.73E-20 | 2.6E-101 | 2.3E-147 |
| F2 | 0.00E+00 | 4.78E-25 | 2.2E-162 | 20.95827 | 1.13E-11 | 1.21E-58 | 2.6E-105 |
| F3 | 0.00E+00 | 60.96466 | 0.00E+00 | 13.08201 | 1.74E-20 | 1.18E-82 | 1.48E-09 |
| F4 | 0.00E+00 | 5.42E-36 | 1.4E-161 | 7.189487 | 3.5E-11 | 4.76E-39 | 1.42E-05 |
| F5 | 0.040261 | 0.114029 | 0.758328 | 15.987014 | 9.9E-13 | 0.517673 | 65.63104 |
| F6 | 0.008016 | 3.88E-09 | 2.04E-06 | 7043.193 | 2.06E-20 | 0.00043 | 1.08E-06 |
| F7 | 1.3E-05 | 0.000136 | 0.00041 | 0.065556 | 1.72E-05 | 0.000178 | 0.000828 |
| F8 | 0.00E+00 | 3.18E-11 | 0.00E+00 | 842.1203 | 4.87E-22 | 3.2E-102 | 9.1E-153 |
| F9 | 0.00E+00 | 5.87E-57 | 0.00E+00 | 10.69595 | 2.64E-47 | 0.00E+00 | 0.00E+00 |
| F10 | 0.00E+00 | 1.6E-100 | 0.00E+00 | 1E-08 | 1.61E-15 | 3.44E-95 | 2.3E-140 |
| F11 | 4.67E-13 | 93.09167 | 116.4478 | 115.9915 | 13.55914 | 8.7E-14 | 8.7E-14 |
| F12 | 0.00E+00 | 0.004866 | 0.00E+00 | 39.89599 | 0.00E+00 | 0.5103 | 0.00E+00 |
| F13 | 0.00E+00 | 1.63E-15 | 0.00E+00 | 5.58406 | 3.04E-11 | 7.11E-16 | 1.81E-15 |
| F14 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 36.80455 | 0.006164 | 0.037899 | 0.024534 |
| F15 | 0.013575 | 1.43E-09 | 9.34E-08 | 266.8043 | 7.1E-22 | 0.000302 | 2.98E-05 |
| F16 | 0.006616 | 1.88E-10 | 0.535908 | 2.790243 | 1E-21 | 0.094932 | 4.87E-05 |
| F17 | 0.00547 | 9.15E-07 | 0.044597 | 87.93472 | 1.92E-14 | 0.778326 | 0.021809 |
| F18 | 0.00E+00 | 0.001582 | 0.00E+00 | 6.635905 | 7.19E-13 | 0.113023 | 0.069025 |
| F19 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.55367 | 0.00E+00 | 0.00E+00 | 0.00E+00 |
| F20 | 0.00E+00 | 0.382123 | 0.00E+00 | 112.4312 | 3.03E-22 | 5E-106 | 2.28E-46 |
| F21 | 1E-04 | 3.641219 | 3.247221 | 2.0739 | 0.001684 | 0.056524 | 0.011708 |
| F22 | 0.002482 | 7.74E-09 | 9.78E-06 | 27.93048 | 7.2E-23 | 0.455526 | 1.92E-05 |
| F23 | 0.00E+00 | 66.21312 | 0 | 63.143829 | 7.81E-17 | 3.6E-101 | 1.2E-139 |
| F24 | 0.022494 | 0.00E+00 | 0.418408 | 0.00E+00 | 0.00E+00 | 0.474768 | 0.489318 |
| F25 | 0.00E+00 | 2.58E-09 | 4.8E-159 | 3.685783 | 0.050977 | 0.013124 | 0.049949 |
| F26 | 0.00E+00 | 6.52E-10 | 0.00E+00 | 0.000296 | 2.39E-14 | 3.19E-08 | 0.006859 |
| F27 | 0.077113 | 0.00E+00 | 0.00E+00 | 3.702885 | 1.76E-16 | 8.78E-06 | 1.995907 |
| F28 | 1.17E-05 | 6.85E-19 | 0.000205 | 0.007225 | 0.000205 | 2.55E-05 | 0.000569 |
| F29 | 4.08E-05 | 6.72E-16 | 2.22E-16 | 6.8E-16 | 7.2E-17 | 6.63E-06 | 6.78E-10 |
| F30 | 0.00044 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00016 | 2.82E-06 |
Table 5. Best fitness values obtained by each algorithm.

| | DJSD | JSA | AEO | MFO | FA | Chimp | WOA |
|---|---|---|---|---|---|---|---|
| F1 | 0.00E+00 | 1.73E-75 | 0.00E+00 | 0.000125 | 1.01E-20 | 1E-134 | 1.5E-178 |
| F2 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 |
| F3 | 0.00E+00 | 9.66E-15 | 0.00E+00 | 916.7575 | 8.72E-21 | 3.4E-102 | 7.33E-29 |
| F4 | 0.00E+00 | 6.73E-40 | 1.9E-175 | 52.53843 | 6.3E-11 | 1.63E-53 | 1.18E-17 |
| F5 | 0.00031 | 4.34E-05 | 20.32461 | 53.54365 | 1E-17 | 1.212414 | 0.000112 |
| F6 | 0.004841 | 3.77E-12 | 1.36E-09 | 0.000127 | 5E-21 | 0.000101 | 3.52E-08 |
| F7 | 7.2E-08 | 4.67E-05 | 8.23E-06 | 0.057151 | 4.27E-06 | 1.59E-05 | 2.22E-05 |
| F8 | 0.00E+00 | 3.2E-49 | 0.00E+00 | 0.000147 | 2.25E-23 | 3.2E-143 | 2.7E-177 |
| F9 | 0.00E+00 | 5.5E-136 | 0.00E+00 | 1.99E-10 | 6.99E-49 | 2.8E-252 | 2.8E-269 |
| F10 | 0.00E+00 | 2.4E-124 | 0.00E+00 | 1.19E-17 | 1.53E-16 | 7.2E-139 | 1.2E-163 |
| F11 | −1632.06 | −1171.81 | −1431.69 | −1632.06 | −217.608 | −217.608 | −217.608 |
| F12 | 0.00E+00 | 1.91E-09 | 0.00E+00 | 113.496 | 0.00E+00 | 0.00E+00 | 0.00E+00 |
| F13 | 8.88E-16 | 8.88E-16 | 8.88E-16 | 1.646332 | 6.1E-11 | 8.88E-16 | 8.88E-16 |
| F14 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.000251 | 0.00E+00 | 0.00E+00 | 0.00E+00 |
| F15 | 3.02E-05 | 4.82E-14 | 1.65E-10 | 0.000115 | 2.4E-22 | 7.26E-05 | 1.99E-07 |
| F16 | 9.7E-05 | 1.35E-13 | 1.03E-07 | 0.011984 | 5.6E-22 | 0.000156 | 2.92E-06 |
| F17 | 0.000254 | 6.98E-11 | 4.93E-07 | 6.83E-05 | 9.8E-15 | 0.009356 | 5.02E-05 |
| F18 | 0.00E+00 | 4.53E-11 | 2.5E-180 | 9.42E-05 | 1.63E-12 | 4.01E-69 | 1.2E-116 |
| F19 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.147785 | 0.00E+00 | 0.00E+00 | 0.00E+00 |
| F20 | 0.00E+00 | 3.13E-08 | 0.00E+00 | 138.8081 | 9.86E-23 | 1.8E-126 | 1.52E-67 |
| F21 | −29 | −15.789 | −28.9867 | −10.7271 | −3 | −2.9935 | −3 |
| F22 | 1.22E-06 | 1.08E-11 | 9.76E-11 | 5.07E-06 | 4.6E-23 | 0.001067 | 2.75E-07 |
| F23 | 0.00E+00 | 3.52E-19 | 0.00E+00 | 2487123 | 9.49E-18 | 7.6E-143 | 2.1E-164 |
| F24 | −1 | 0 | −0.99999 | 0.00E+00 | 0.00E+00 | −0.98982 | −0.99999 |
| F25 | 0.00E+00 | 0.099873 | 0.00E+00 | 3.899873 | 3.42E-12 | 0.034255 | 2.55E-78 |
| F26 | 0.00E+00 | 0.009716 | 0.00E+00 | 0.49865 | 0.009716 | 0.009716 | 0.00E+00 |
| F27 | 0.998004 | 0.998 | 0.998 | 0.998 | 0.998 | 0.998004 | 0.998004 |
| F28 | 0.000312 | 0.00031 | 0.00031 | 0.000735 | 0.00031 | 0.001229 | 0.000308 |
| F29 | −1.03163 | −1.0316 | −1.0316 | −1.0316 | −1.0316 | −1.03163 | −1.03163 |
| F30 | 0.397889 | 0.39789 | 0.39789 | 0.39789 | 0.39789 | 0.397899 | 0.397887 |
Table 6. Worst fitness values obtained by each algorithm.

| | DJSD | JSA | AEO | MFO | FA | Chimp | WOA |
|---|---|---|---|---|---|---|---|
| F1 | 0.00E+00 | 8.92E-60 | 0.00E+00 | 10,000 | 8.03E-20 | 1.3E-100 | 1.1E-146 |
| F2 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 |
| F3 | 0.00E+00 | 302.5686 | 0.00E+00 | 46251.87 | 6.74E-20 | 4.51E-82 | 7.41E-09 |
| F4 | 0.00E+00 | 2.72E-35 | 6.4E-161 | 85.78559 | 2.01E-10 | 1.82E-38 | 7.08E-05 |
| F5 | 0.144537 | 0.416692 | 23.76492 | 79943279 | 4.4E-12 | 2.980005 | 328.8432 |
| F6 | 0.038934 | 1.88E-08 | 9.16E-06 | 20200.5 | 8.09E-20 | 0.001767 | 4.35E-06 |
| F7 | 3.9E-05 | 0.000604 | 0.001437 | 0.288575 | 5.88E-05 | 0.000597 | 0.002443 |
| F8 | 0.00E+00 | 1.59E-10 | 0.00E+00 | 3600 | 1.64E-21 | 1.6E-101 | 4.5E-152 |
| F9 | 0.00E+00 | 2.93E-56 | 0.00E+00 | 53.68709 | 1.02E-46 | 4.3E-180 | 1.4E-228 |
| F10 | 0.00E+00 | 7.9E-100 | 1.1E-295 | 4.26E-08 | 5.52E-15 | 1.6E-94 | 1.1E-139 |
| F11 | −1632.06 | −799.48 | −993.194 | −1098.47 | −156.97 | −217.608 | −217.608 |
| F12 | 0.00E+00 | 0.019379 | 0.00E+00 | 265.1552 | 0.00E+00 | 2.53774 | 0.00E+00 |
| F13 | 8.88E-16 | 4.44E-15 | 8.88E-16 | 19.96319 | 1.74E-10 | 4.44E-15 | 4.44E-15 |
| F14 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 90.55646 | 0.024644 | 0.145274 | 0.120817 |
| F15 | 0.046437 | 7.19E-09 | 3.68E-07 | 1337.494 | 2.73E-21 | 0.001278 | 9.44E-05 |
| F16 | 0.020395 | 6.96E-10 | 2.098957 | 13951247 | 4.28E-21 | 0.300018 | 0.000171 |
| F17 | 0.019758 | 4.53E-06 | 0.110464 | 315.2073 | 7.58E-14 | 2.984835 | 0.1112 |
| F18 | 0.00E+00 | 0.007634 | 1.8E-166 | 21.76085 | 4.3E-12 | 0.565188 | 0.231832 |
| F19 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 2.086707 | 0.00E+00 | 0.00E+00 | 0.00E+00 |
| F20 | 0.00E+00 | 1.441498 | 9.2E-303 | 553.6802 | 1.32E-21 | 2.5E-105 | 1.14E-45 |
| F21 | −28.9996 | −1.7987 | −18.2892 | −3.86103 | −2.9944 | −2.78942 | −2.95236 |
| F22 | 0.009314 | 3E-08 | 4.38E-05 | 87.63483 | 3.38E-22 | 1.020128 | 7.21E-05 |
| F23 | 0.00E+00 | 331.3585 | 0 | 2.38E+08 | 2.91E-16 | 1.6E-100 | 5.4E-139 |
| F24 | −0.90466 | 0.00E+00 | −8.4E-06 | 0.00E+00 | 0.00E+00 | −7E-192 | 0.00E+00 |
| F25 | 0.00E+00 | 0.099873 | 2.1E-158 | 16.59987 | 0.099873 | 0.099874 | 0.199873 |
| F26 | 0.00E+00 | 0.009716 | 0.00E+00 | 0.499917 | 0.009716 | 0.009716 | 0.037224 |
| F27 | 1.302557 | 0.998 | 0.998 | 14.56305 | 0.998 | 0.998044 | 10.76318 |
| F28 | 0.000354 | 0.00031 | 0.001223 | 0.020363 | 0.001223 | 0.001311 | 0.002252 |
| F29 | −1.03149 | −1.0316 | −1.0316 | −1.0316 | −1.0316 | −1.0316 | −1.03163 |
| F30 | 0.399697 | 0.39789 | 0.39789 | 0.39789 | 0.39789 | 0.398416 | 0.397899 |
Table 7. Friedman test (mean ranks).

| | DJSD | JSA | AEO | MFO | FA | Chimp | WOA |
|---|---|---|---|---|---|---|---|
| Average | 2.5167 | 3.7833 | 2.6500 | 6.4167 | 3.7500 | 4.8000 | 4.0833 |
| STD | 2.7667 | 3.6833 | 2.9833 | 6.4500 | 3.4333 | 4.3833 | 4.3000 |
| Best | 3.1500 | 4.0000 | 2.6000 | 5.9833 | 3.9333 | 4.7667 | 3.5667 |
| Worst | 2.6167 | 3.8000 | 2.6667 | 6.2500 | 3.5333 | 4.6333 | 4.5000 |
Table 8. Attributes of the synthetic workload.

| Parameter | Value |
|---|---|
| Number of tasks | 300 to 1500 |
| Task length | 2000 to 56,000 MI |
| File size | 400 to 600 MB |
Table 9. Experimental parameter settings.

| Entity | Parameter | Value |
|---|---|---|
| User | No. of users | [100, 200] |
| Datacenter | No. of datacenters | 1 |
| Host | No. of hosts | 2 |
| | Storage space | 1 TB |
| | RAM size | 20 GB |
| | Bandwidth | 10 Gb/s |
| VM | No. of VMs | 20 |
| | Processing power | [1000, 5000] MIPS |
| | Storage space | 10 GB |
| | Memory size | 1 GB |
| | Bandwidth | 1 Gb/s |
| | No. of CPUs | 1 |