Article

An Efficient Trust-Aware Task Scheduling Algorithm in Cloud Computing Using Firefly Optimization

by
Sudheer Mangalampalli
1,*,
Ganesh Reddy Karri
1 and
Ahmed A. Elngar
2
1
School of Computer Science and Engineering, VIT-AP University, Amaravati 522237, India
2
Faculty of Computers and Artificial Intelligence, Beni-Suef University, Beni-Suef 62511, Egypt
*
Author to whom correspondence should be addressed.
Sensors 2023, 23(3), 1384; https://doi.org/10.3390/s23031384
Submission received: 24 December 2022 / Revised: 19 January 2023 / Accepted: 24 January 2023 / Published: 26 January 2023
(This article belongs to the Section Internet of Things)

Abstract

Task scheduling in the cloud computing paradigm poses a challenge for researchers, as the workloads that arrive on cloud platforms are dynamic and heterogeneous, and scheduling these heterogeneous tasks to appropriate virtual resources is therefore difficult. The inappropriate assignment of tasks to virtual resources degrades the quality of service, leads to violations of the SLA metrics, and ultimately erodes the cloud user's trust in the cloud provider. To preserve trust in the cloud provider and to improve the scheduling process in the cloud paradigm, we propose an efficient task scheduling algorithm that considers the priorities of both tasks and virtual machines and thereby schedules tasks accurately to appropriate VMs. The scheduling algorithm is modeled using firefly optimization. The workload for this approach consists of fabricated datasets with different distributions as well as the real-time worklogs of HPC2N and NASA. The algorithm was implemented in the Cloudsim simulation environment, and our proposed approach was compared with the baseline approaches of ACO, PSO, and the GA. The simulation results reveal that our proposed approach has a significant impact over the baseline approaches, minimizing the makespan while improving the availability, success rate, and turnaround efficiency.

1. Introduction

The cloud computing paradigm provides on-demand services that allow users to deploy or migrate their existing on-premises infrastructure into the cloud environment, to develop cloud-native applications, or to make use of existing services already running in the cloud. This is made possible by provisioning a virtual infrastructure to the corresponding users through virtualization technology. Users can access all types of services, such as memory, computing, storage, and network, on demand by subscribing to those services; the subscription agreement between the cloud provider and the user is named the SLA, and the corresponding services must be delivered to users according to it. The cloud computing paradigm has several characteristics [1]. Scalability renders services to users by dynamically scaling up or down and scaling in or out depending on the situation. Resource pooling provides services to cloud users from a shared pool of resources that are logically synchronized with each other when a resource is allocated to a user. All these resources are given to users on demand whenever they request them, and the cloud provider automatically provisions virtual resources to the customers based on the SLA made between them. All services are billed according to their usage under a pay-per-use policy. Whenever a cloud user chooses services in the cloud paradigm, the respective services are provisioned virtually by the cloud provider; to provide these services to customers without any issues, the cloud provider needs a scheduling mechanism that automatically assigns virtual resources to users based on their requests. Therefore, an effective task scheduler is needed in this paradigm to map requests to virtual machines and assign those resources to customers; since the paradigm is used by many users around the world, an automatic provisioning mechanism must be present to serve these large-scale users. The requests arriving at the cloud console are of variable sizes and different types, including text, images, video, streaming data, etc., and they may come from heterogeneous resources. Therefore, an effective task scheduling algorithm is needed to schedule these dynamic, variable, and heterogeneous requests onto precise virtual machines. An ineffective task scheduling mechanism degrades the quality of the services by violating the SLA parameters, and thereby the trust between the cloud provider and the user is degraded; an effective task scheduling algorithm, in contrast, provides a high quality of service by maintaining the SLA and thereby improves the trust between cloud providers and users. Many existing authors have proposed task scheduling mechanisms using metaheuristic approaches, i.e., PSO [2], ACO [3], and the GA [4], because identifying the best possible mapping of tasks to precise virtual machines is an NP-hard problem.
Therefore, in this research, we propose an efficient trust-aware task scheduling algorithm that calculates the priorities of the dynamically arriving tasks and the priorities of virtual resources based on the unit cost of electricity. This research uses the firefly optimization algorithm [5] to model scheduling in the cloud paradigm and concentrates mainly on the trust parameters derived from the SLA metrics, i.e., the availability of virtual resources, the success rate of virtual resources, and the turnaround efficiency. The entire experiment was conducted on Cloudsim, and our proposed approach, TAFFA, was evaluated against the baseline approaches for the above-specified parameters.

Motivation and Contributions

The cloud computing paradigm provides on-demand, flexible, and scalable services to users based on their needs with the help of virtualization. Virtual resources are assigned to customers automatically according to the SLA made between the cloud vendor and the user. To assign these virtual resources precisely, based on the type of requests posed to the cloud console, an efficient task scheduling algorithm is needed. An improper and ineffective task scheduler in the cloud paradigm degrades the quality of service by violating the SLA parameters and thereby degrades trust in the cloud provider. This situation motivated us to develop a task scheduling algorithm that considers the priorities of tasks and of VMs (using the unit cost of electricity) and schedules tasks accordingly while minimizing the makespan and improving the availability, success rate, and turnaround efficiency. The highlights and contributions of our research in this manuscript are listed below.
  • Developed an efficient trust-aware task scheduling algorithm using firefly optimization (TAFFA).
  • The priorities of tasks and of VMs (based on the electricity unit cost) are calculated for effective scheduling.
  • Identified the relation between the makespan and trust based on the SLA parameters.
  • The calculation of trust based on the SLA is identified, i.e., the success rate of the virtual resources, the availability of VMs, and the turnaround efficiency.
  • A deadline constraint is used in this research in order to assign tasks to virtual resources only after the execution of the currently pending tasks.
  • Extensive simulations were carried out on Cloudsim.
  • Fabricated workloads (with different distributions, i.e., uniform, normal, left skewed, and right skewed) and realtime worklogs from HPC2N and NASA are used.
  • The proposed TAFFA is evaluated against PSO, ACO, and the GA; the simulation results show that TAFFA improves the availability, success rate, and turnaround efficiency while minimizing the makespan.
The rest of the manuscript is organized as follows. Section 2 discusses the related works conducted by various authors, Section 3 discusses the problem definition and system architecture, Section 4 discusses the proposed efficient trust-aware task scheduling algorithm in cloud computing using firefly optimization, Section 5 discusses the simulation and results, and Section 6 discusses the conclusion and proposed future work.

2. Related Work

In [6], the authors proposed a task scheduling approach modeled by adaptive PSO, which balances the local and global search by modifying the inertia weights of the particles. It mainly addressed the makespan, throughput, and average resource utilization. The entire simulation was conducted on Cloudsim and evaluated against the baseline approaches; the results revealed that linearly descending adaptive PSO has an impact over the existing mechanisms for the mentioned parameters. Energy consumption is an important parameter and aspect of the cloud computing paradigm. To address it, the authors in [7] proposed a scheduling mechanism based on a variant of PSO in which the algorithm was made adaptive by changing the inertia weight and acceleration coefficient. This approach was implemented on the Cloudsim toolkit and evaluated against the existing max–min and min–min algorithms; from the simulation results, it is evident that the proposed approach shows a significant improvement over the baseline algorithms for the makespan and energy consumption. The cloud computing paradigm is advantageous only when users achieve high satisfaction and resources in the cloud paradigm are properly utilized. The authors in [8] proposed a resource- and deadline-aware task scheduling algorithm based on PSO. It was implemented on Cloudsim and compared against the existing max–min and min–min approaches; the results proved that PSO-RADL outperforms the existing baseline algorithms for the makespan, resource utilization, task rejection, penalty, and total cost. In [9], the authors formulated a task scheduling approach using CSO, based on the behavior of cats, aimed at addressing the makespan, total power cost at datacenters, migration time, and energy consumption. The scheduling model was implemented on Cloudsim, and two types of workloads were used to check the efficacy of the algorithm, i.e., a random workload and real-time worklogs. It was compared against the PSO and CS algorithms, and the results revealed a significant improvement over the baseline algorithms for the mentioned parameters. In [10], the authors formulated a task scheduling mechanism for load balancing by hybridizing the lateral wolf and PSO algorithms. The LW-PSO approach works based on the fitness value of LW. It was implemented on Cloudsim with Python and evaluated against the baseline approaches, showing a significant improvement over these algorithms while minimizing the execution time, turnaround time, and response time. In [11], the authors formulated a scheduling mechanism that addresses the execution time and wait time; tasks are scheduled using the PSO approach while also balancing the load of tasks. The entire experiment was conducted on Gridsim and evaluated with random and real-time cluster computational workloads. It was compared against different baseline approaches, and the results proved that Ba-PSO outperforms the existing approaches for the specified parameters. In [12], the authors formulated a task scheduling mechanism by combining PSO and opposition-based learning, which accelerates the search process of PSO and avoids premature convergence.
It was implemented on Cloudsim and evaluated against different variations of PSO, and the proposed OPSO shows a large impact over the existing approaches in the minimization of the makespan and energy consumption. In [13], a task scheduling mechanism was formulated using the MVO and PSO approaches, where MVO works as the global search process and PSO acts as the local search. It was implemented on the Cloudsim toolkit and compared against the existing MVO and PSO algorithms, and the results revealed that EMVO has a large impact over the baseline algorithms by minimizing the makespan while improving resource utilization. In [14], a task scheduling algorithm was formulated by the hybridization of the GA and FPA. It was simulated on the Cloudsim toolkit, and the addressed parameters were resource utilization, completion time, and computation cost. It was compared against the existing GA and FPA, and the simulation results revealed that GA-FPA outperformed both for the specified parameters. The authors in [15] designed an energy-efficient task scheduling mechanism for a heterogeneous cloud environment. They used a deadline constraint to execute tasks in the specified time before putting a new task into execution mode. This approach was formulated in two stages: the first stage addresses the minimization of the execution time while scheduling tasks, and the second stage addresses the reassignment of tasks based on their priorities to obtain an optimized execution time and energy consumption. It was implemented on Cloudsim and compared against the existing baseline approaches of RC-GA, AMTS, and E-PAGA; from the simulation results, it is evident that the energy consumption of EPETS was minimized by 5 to 20% compared with the baseline algorithms. In [16], a hybrid task scheduling model was developed by the authors to address the makespan, cost, and response time. It was developed by combining two algorithms, i.e., the GA and electro search. The simulations were carried out on Cloudsim, and the model was evaluated against the baseline HPSOGA, ES, and ACO algorithms; the results proved that HSGA was dominant over the existing algorithms for the specified parameters. Fault tolerance can be considered one of the important parameters in the cloud paradigm, as the quality of the cloud services depends on it, and the quality of service is preserved when the cloud provider follows the SLA promptly. For this purpose, the authors in [17] formulated a multi-phase scheduling mechanism using the GA. In the initial phase, the individual fitness value was calculated and considered as the local fitness value; in the next phase, the global fitness value was identified. After calculating both fitness values, the tasks from the different users were mapped optimally to the virtual resources according to the fitness values. The simulation was conducted on Cloudsim, and the approach was evaluated against the baseline GA and its variants; the results showed that MFTGA dominates the existing baseline approaches for SLA violations, the execution time, memory utilization, cost, and energy consumption. When the number of user requests in the cloud paradigm increases, the computation overhead of scheduling tasks onto the respective virtual resources also increases.
Since the tasks in this paradigm arise from various heterogeneous resources, the authors in [18] extracted the features of the tasks and reduced them using the MapReduce framework. The approach was implemented on Cloudsim and modeled by combining the whale optimization algorithm and the GA. It was evaluated against the existing GA and whale optimization approaches and showed a large impact over the baselines by minimizing the processing cost, computation cost, and processing time. In [19], the authors formulated a task scheduling model based on awareness of the energy and cost incurred by the cloud provider. It was modeled using a GA in two phases: in the first phase, the task priorities were calculated, and in the second phase, these tasks were allocated to the respective processors using the GA approach. The experiment was conducted on MATLAB, and the approach was compared against several baseline approaches; it was finally identified that it outperforms the baselines while minimizing the makespan and energy consumption. To achieve high performance in the cloud paradigm, the authors in [20] designed a hybridized task scheduling mechanism in which the GSA is used for the local search and the GA is used for the global search. It was implemented on Cloudsim, and the workload used in this approach is a real-time workflow benchmark dataset of a heterogeneous nature. GSAGA was compared against the Profit-MTC and IWC algorithms; from the results, it is evident that GSAGA has a large impact over the compared approaches, maximizing the processing capacity while minimizing the energy consumption. In [21], a hybridized task scheduling approach was formulated using FPSO and the GA, with the main aim of efficiently balancing tasks among VMs. It was implemented on Cloudsim and compared against the existing PSO and GA; the results revealed that the fuzzy-natured FPSO-GA outperforms the existing approaches by efficiently balancing the load among VMs. In [22], the authors formulated an efficient power-aware task scheduling algorithm using a modified ACO approach. The algorithm was modeled based on Pareto updating, accelerating the convergence using an adaptive probability distribution. The simulations were carried out on Cloudsim and evaluated against the existing state-of-the-art approaches; the results proved that MOTS-ACO is dominant over the existing approaches in distributing tasks and utilizing power efficiently. In [23], the authors designed a hybridized task scheduling algorithm combining ACO and PSO, where PSO was used for the global search and ACO for the local search. The simulations were carried out on Cloudsim, and the algorithm was evaluated against variants of the PSO and ACO approaches; the results show that AC-PSO dominates the existing variants by minimizing the makespan and total cost and increasing resource utilization. In [24], the authors designed a task scheduling approach for minimizing the makespan and improving the performance of the cloud computing paradigm. It was modeled using R-ACO, a reinforced approach through which ant patterns can be rewarded. The experimentation was conducted on MATLAB 2015; the approach was evaluated against the baseline ACO approach, and it was identified that R-ACO minimizes the makespan by 60%. In [25], a load balancing strategy was developed based on the ACO algorithm.
This algorithm aims at the minimization of the makespan and cost. Cloudsim was used as the simulation tool for this approach. It was evaluated against PSO and hybrid approaches, and the results showed that MrLBA dominates the existing algorithms for the specified parameters. In [26], a hybridized task scheduling approach was developed to address energy consumption and resource utilization. This approach was developed using the ACO and BF algorithms and implemented using the Cloudsim toolkit. It was compared with the ACO and BF approaches, and the results showed its dominance in terms of the makespan, energy consumption, and resource utilization over the existing state-of-the-art algorithms. In [27], the authors designed a task scheduling approach based on an adaptive version of the ACO algorithm, generated by using polymorphic ants based on pheromone updating. The algorithm aims at the minimization of the execution cost and the execution time. It was implemented on Cloudsim and evaluated against the state-of-the-art approaches; the results show that AACO outperformed the existing algorithms for the specified parameters. In [28], the authors designed a task scheduling approach for challenges such as workflow intensity, task heterogeneity, and task dependencies. It was modeled using the hybrid HEFT-ACO approach, implemented in the AWS cloud environment, and evaluated against the existing variants of ACO; the results show a large impact over the existing variants in minimizing the makespan and cost. In [29], the authors focused on developing a task scheduling algorithm for computation- and delay-sensitive tasks at the mobile edge cloud for resource allocation, mainly aiming at the efficient utilization of radio and computational resources for users' requests. It was implemented using MATLAB and modeled as a joint scheduling mechanism using ACO and GA approaches, where ACO was used for the local search and the GA for the global search, and it was compared against the existing GA and ACO. The results show that it increased the efficient utilization of resources and allocated them in an optimized manner. In [30], a load balancing mechanism was developed to distribute resources effectively among VMs in the cloud paradigm. Heterogeneous users' requests arrive at the cloud console and, in order to balance these tasks properly onto the corresponding VMs, the authors in [30] used ACO-NN, which selects, based on the nearest neighbor among all the ants, the best candidate solutions for the scheduling problem. It was implemented on Cloudsim and evaluated against the baseline approaches; the results show that it balances the load among the VMs more efficiently than the existing approaches. In [31], the authors used DGWO to model the scheduling of dependent tasks in cloud computing, with the main aim of minimizing the computation and transmission costs. They used the largest order value method to model the scheduler in the cloud paradigm. They used WorkflowSim for their experimentation and compared DGWO against the PSO and BPSO algorithms; the results show that it has a large impact over the baseline approaches for the mentioned parameters.
From Table 1, it can be clearly observed that many of the existing authors in task scheduling addressed parameters such as the execution time, execution cost, response time, energy consumption, resource utilization, and load balancing of the tasks. However, earlier authors failed to address the parameters related to trust in the cloud provider, which is an important aspect of the cloud paradigm. An ineffective task scheduler in the cloud paradigm leads to a poorer quality of service, which in turn damages users' trust in the cloud provider. In the proposed approach, the SLA-based trust parameters, which impact the quality of service and, in turn, trust in a cloud provider, are evaluated. Therefore, an efficient trust-aware task scheduling algorithm based on the SLA parameters is proposed to preserve trust in the cloud provider. In this scheduling approach, the priorities of the incoming tasks and of the VMs are considered, and the tasks are then carefully scheduled onto VMs while minimizing the makespan and improving the availability, success rate, and turnaround efficiency.

3. Problem Definition and System Architecture

This section discusses the problem definition and system architecture of this research. The following subsection presents the problem definition.

3.1. Problem Definition

Consider that a set of tasks $m_t = \{m_1, m_2, m_3, \ldots, m_t\}$ is mapped to virtual resources $n_v = \{n_1, n_2, n_3, \ldots, n_v\}$, which reside in a set of hosts indicated as $ho_q = \{ho_1, ho_2, ho_3, \ldots, ho_q\}$, which in turn reside in datacenters $d_p = \{d_1, d_2, d_3, \ldots, d_p\}$. In this research, the task scheduling problem is defined in such a way that the $m_t$ tasks are mapped to the $n_v$ virtual machines, which in turn are hosted in the $d_p$ datacenters, by considering the priorities of the tasks and VMs based on the unit cost of electricity, while minimizing the makespan and improving the SLA-based trust parameters, i.e., the availability, success rate, and turnaround efficiency.
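To make the problem formulation concrete, the sketch below encodes the sets of tasks, VMs, hosts, and datacenters and the mapping that a scheduler must produce. It is a minimal illustration with field names of our own choosing, not the authors' implementation.

```python
from dataclasses import dataclass
from typing import Dict

@dataclass
class Task:                     # element of the task set m_t
    task_id: int
    length_mi: float            # task length in million instructions (assumed unit)

@dataclass
class VM:                       # element of the VM set n_v
    vm_id: int
    mips: float                 # MIPS per processing element
    num_pes: int
    host_id: int                # host ho_q on which the VM resides

@dataclass
class Host:                     # element of the host set ho_q
    host_id: int
    datacenter_id: int          # datacenter d_p in which the host resides

@dataclass
class Datacenter:               # element of the datacenter set d_p
    dc_id: int
    electricity_unit_cost: float

# A schedule maps each task to the VM chosen for it; the objective is to pick
# this mapping so that the makespan is minimized while the SLA-based trust
# parameters (availability, success rate, turnaround efficiency) improve.
Schedule = Dict[int, int]       # task_id -> vm_id
```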

3.2. System Architecture

This subsection discusses the system architecture in detail; Figure 1 depicts it. Initially, requests from various cloud users arising from heterogeneous resources are submitted to the cloud console. These requests are captured by a cloud broker on behalf of all the users and are submitted to the task manager. The task manager examines the requests of the users and, if a request is valid as per the SLA made between the cloud provider and the users, the service request dispatcher dispatches the corresponding request to the scheduler. In our architecture, we introduced a mechanism at the task manager level where the priorities of tasks are calculated based on the length, run time, and capacity of the tasks. After calculating the task priorities, the priority of each VM is calculated based on the unit cost of electricity. After the priorities are calculated, the corresponding tasks are fed into the execution queue, and the scheduler then sends tasks to the appropriate VMs based on the consumption status of the virtual resources reported by the resource manager, while minimizing the makespan and improving the SLA-based trust parameters. In our research, we posed a deadline constraint such that no other task is scheduled onto a VM while a task is currently being executed on that VM. After describing the system architecture, we modeled our scheduler mathematically within this architecture. Table 2 lists the notations used in the system architecture.
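The simplified sketch below traces this flow (broker, task manager with SLA check and prioritization, execution queue, scheduler) end to end. The class, function names, and the least-loaded VM choice are our own stand-ins for the electricity-cost-aware mapping described above; this is an illustration of the pipeline in Figure 1, not the authors' code.

```python
import heapq
from typing import List, Tuple

class Request:
    """Hypothetical validated user request (stands in for a task)."""
    def __init__(self, task_id: int, length_mi: float, valid_sla: bool = True):
        self.task_id = task_id
        self.length_mi = length_mi
        self.valid_sla = valid_sla

def dispatch(requests: List[Request], vm_capacities: List[float]) -> List[Tuple[int, int]]:
    """Trace of Figure 1: broker -> task manager -> scheduler (illustration only)."""
    # Task manager: drop requests that violate the SLA and prioritize the rest
    # (task size divided by the total VM capacity, loosely following Eq. (6)).
    total_capacity = sum(vm_capacities)
    queue = [(-req.length_mi / total_capacity, req.task_id)
             for req in requests if req.valid_sla]
    heapq.heapify(queue)

    # Scheduler: assign queued tasks to the least-loaded VM reported by the
    # resource manager (approximated here by the accumulated assigned load).
    vm_load = [0.0] * len(vm_capacities)
    schedule = []
    while queue:
        neg_prio, task_id = heapq.heappop(queue)
        vm_id = min(range(len(vm_capacities)), key=lambda v: vm_load[v])
        vm_load[vm_id] += -neg_prio
        schedule.append((task_id, vm_id))
    return schedule

# Example with invented numbers: the third request violates the SLA and is dropped.
print(dispatch([Request(1, 4000), Request(2, 1500), Request(3, 9000, valid_sla=False)],
               [1000.0, 2000.0]))
```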
Initially, in this mathematical modeling, we calculated the current workload of all the VMs. It was calculated using Equation (1).
$load_{n_v} = \sum_{v=1}^{n} load_v$ (1)
where $load_v$ indicates the workload on VM $v$ and the sum runs over all $n$ VMs considered in our work. After calculating the current workload on all the VMs, we calculated the workload on all the physical hosts, as all the VMs are hosted on the considered physical hosts. Therefore, the current workload on all the hosts was calculated using Equation (2).
$load_{ho_q} = \sum_{n_v \in ho_q} load_{n_v}$ (2)
where $load_{ho_q}$ indicates the workload on all the considered hosts.
After calculating the workload on the VMs and hosts, the priorities of the tasks are calculated in order to map the tasks onto the VMs; the priority of a task depends on the length of the task and the processing capacity of the VM on which it runs. The processing capacity of a VM was calculated using Equation (3).
$pr_{n_v} = pr_{no} \times pr_{MIPS}$ (3)
After calculating the processing capacity of a VM, the processing capacity of all the VMs was calculated using Equation (4).
$ToT_{n_v}^{pr} = \sum pr_{n_v}$ (4)
After the calculation of the total processing capacity of all the VMs, the size of the task was calculated using Equation (5).
$m_t^{size} = m_t^{MIPS} \times m_t^{pr}$ (5)
After calculating the task size from Equation (5), the priorities of the tasks were calculated using Equation (6).
$prio_{m_t} = \frac{m_t^{size}}{pr_{n_v}}$ (6)
where $pr_{n_v}$ indicates the processing capacity of a VM and $m_t^{size}$ indicates the size of the corresponding task. At this point, we have evaluated the priorities of the tasks; we also evaluate the priorities of the VMs based on the electricity unit cost, as incoming tasks on the cloud console should be precisely mapped onto the VMs by considering the length of the tasks and the processing capacities of the VMs. Therefore, we calculated the priority of a VM based on the electricity cost using Equation (7); it is defined as the ratio of the highest electricity cost among all the datacenters to the electricity cost at the datacenter hosting that VM.
$prio_{n_v} = \frac{highelecost_{DC}}{DC_{elecost}}$ (7)
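A small worked sketch of Equations (3)–(7) follows. The numbers (two VMs of 1000 and 2000 MIPS with 2 processing elements each, electricity costs of 0.10 and 0.25 per unit at two datacenters) and the function names are illustrative assumptions of ours, not values from the paper.

```python
def vm_capacity(num_pes: int, mips: float) -> float:
    """Equation (3): processing capacity of a VM = number of PEs * MIPS per PE."""
    return num_pes * mips

def total_capacity(vm_caps) -> float:
    """Equation (4): total processing capacity of all VMs."""
    return sum(vm_caps)

def task_size(task_mips: float, task_pes: int) -> float:
    """Equation (5): task size = task MIPS requirement * task processing elements."""
    return task_mips * task_pes

def task_priority(size: float, vm_cap: float) -> float:
    """Equation (6): task priority = task size / VM processing capacity."""
    return size / vm_cap

def vm_priority(dc_cost: float, all_dc_costs) -> float:
    """Equation (7): highest electricity cost among DCs / cost at this VM's DC."""
    return max(all_dc_costs) / dc_cost

# Illustrative numbers only:
caps = [vm_capacity(2, 1000.0), vm_capacity(2, 2000.0)]     # two VMs -> 2000, 4000
dc_costs = [0.10, 0.25]                                     # unit electricity cost per DC
print(total_capacity(caps))                                 # 6000.0
print(task_priority(task_size(500.0, 1), caps[0]))          # 0.25
print([vm_priority(c, dc_costs) for c in dc_costs])         # [2.5, 1.0]: cheaper DC gets higher priority
```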
After the calculation of the priorities of the tasks and the VMs using Equations (6) and (7), the task manager submits these priorities to the scheduler, and the scheduler generates schedules for the corresponding tasks by mapping the high-priority tasks onto VMs in the datacenters with low electricity prices. Every time the scheduler generates schedules based on these priorities, it checks with the resource manager for the status of the virtual resources, i.e., whether they are busy or idle.
After the calculation of these priorities, we identified the SLA-based trust parameters, i.e., the availability, success rate, and turnaround efficiency, which impact the quality of service of a cloud provider; if these SLA parameters are violated, then it has an impact on the trust in the cloud provider.
Initially, the availability of a virtual machine is calculated and it can be defined as the ratio of the number of tasks accepted by the VM to the number of tasks submitted by the user. It is calculated using Equation (8).
$AV_{n_v} = \frac{AV_m}{m_t}$ (8)
Another SLA-based trust parameter that needs to be discussed is the success rate, which is defined as the ratio of the requests successfully executed on a VM to the number of submitted requests over a specified amount of time. It is calculated using Equation (9).
$SR_{n_v} = \frac{S_{m_t}}{AV_m}$ (9)
After calculating the success rate, the turnaround efficiency of a VM is calculated as it is also one of the SLA-based trust parameters. Therefore, it is calculated as a ratio of the estimated turnaround time supplied by a cloud provider to the actual turnaround time generated while scheduling tasks.
It is calculated using the following Equation (10).
$TE_{n_v} = \frac{EST_T}{ACT_T}$ (10)
After the calculation of the SLA-based trust parameters from Equations (8)–(10), the trust in a cloud service provider can be calculated using Equation (11).
$trust_{csp} = X_1 \cdot AV_{n_v} + X_2 \cdot SR_{n_v} + X_3 \cdot TE_{n_v}$ (11)
In Equation (11), $X = \{X_1, X_2, X_3\}$ are the positive weights assigned for the calculation of trust in a cloud service provider, and each $X$ value lies between 0 and 1; these weights are calculated using the method in [32]. In our setting, $X_1 = 0.5$, $X_2 = 0.2$, and $X_3 = 0.1$; these weights vary with respect to the requests that arrive on the cloud console. With these weights and Equation (11), the trust in a cloud service provider is calculated.
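The trust score of Equations (8)–(11) can be reproduced in a few lines. The counts and turnaround times below are invented purely for illustration; the weighted sum uses the example weights $X_1 = 0.5$, $X_2 = 0.2$, $X_3 = 0.1$ quoted above.

```python
def availability(accepted: int, submitted: int) -> float:
    """Equation (8): tasks accepted by the VM / tasks submitted by the user."""
    return accepted / submitted

def success_rate(succeeded: int, accepted: int) -> float:
    """Equation (9): successfully executed requests / accepted requests."""
    return succeeded / accepted

def turnaround_efficiency(estimated_tat: float, actual_tat: float) -> float:
    """Equation (10): estimated turnaround time / actual turnaround time."""
    return estimated_tat / actual_tat

def trust(av: float, sr: float, te: float,
          x1: float = 0.5, x2: float = 0.2, x3: float = 0.1) -> float:
    """Equation (11): weighted combination of the SLA-based trust parameters."""
    return x1 * av + x2 * sr + x3 * te

# Illustrative values only:
av = availability(accepted=95, submitted=100)                   # 0.95
sr = success_rate(succeeded=90, accepted=95)                    # ~0.947
te = turnaround_efficiency(estimated_tat=120, actual_tat=150)   # 0.8
print(round(trust(av, sr, te), 3))                              # ~0.744
```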
After the identification of the trust parameters, the makespan for the set of considered tasks is calculated, as the makespan indirectly affects the quality of service which, in turn, affects the trust in a cloud provider. Therefore, we identified that there is a relationship between the makespan and the SLA-based trust parameters. To calculate the makespan, we first need to identify the execution time of a task on the corresponding VM; we posed a deadline constraint such that when a request is made by a user, it should immediately be assigned to a VM or, otherwise, it should wait until the current task has finished executing on that VM. The execution time of a task can be calculated using Equation (12).
$ex_{m_t} = \frac{m_t^{size}}{pr_{n_v}}$ (12)
After the calculation of the execution time, we calculated the finish time of the corresponding task. As we posed a deadline constraint, every time a request is raised by a user, a VM is assigned after the completion of its current task, or immediately if no task is currently running. The finish time of a task therefore needs to be identified, and because the deadline constraint is imposed, the finish time must always be less than the deadline of the task. We calculated the finish time using Equation (13).
$ft_{m_t} = load_{n_v} + ex_{m_t}$ (13)
To preserve trust in a cloud service provider, it is necessary to finish a task within its deadline. Therefore, the finish time of a task should not exceed the deadline of the task, as expressed in Equation (14).
$ft_{m_t} \le dl_{m_t}$ (14)
Now we can calculate the makespan, which indirectly impacts trust in a cloud provider; it is defined as the maximum finish time over the considered tasks executed on the VMs and is calculated using Equation (15).
$ma_{m_t} = \max\left( ft_{m_t, n_v} \right)$ (15)
$\min ft_{m_t, n_v} = \sum_{t=1}^{m_t} \sum_{v=1}^{n_v} \psi_{tv} \, ft_{m_t, n_v}$ (16)
In Equation (16), $\psi_{tv} = 1$ if task $t$ is assigned to VM $v$; otherwise, it is set to 0.
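To make Equations (12)–(15) concrete, the minimal sketch below (our own variable names and example numbers) computes execution times, finish times under the deadline constraint, and the resulting makespan for a fixed assignment of tasks to VMs.

```python
def execution_time(task_size: float, vm_capacity: float) -> float:
    """Equation (12): execution time of a task on its VM."""
    return task_size / vm_capacity

def schedule_makespan(assignment, task_sizes, vm_caps, deadlines):
    """Finish times per Equation (13) (current VM load + execution time),
    deadline check per Equation (14), makespan per Equation (15)."""
    vm_ready = {v: 0.0 for v in vm_caps}          # time at which each VM becomes free
    finish = {}
    for task, vm in assignment.items():
        ex = execution_time(task_sizes[task], vm_caps[vm])
        finish[task] = vm_ready[vm] + ex          # Eq. (13)
        if finish[task] > deadlines[task]:        # Eq. (14) violated
            raise ValueError(f"task {task} misses its deadline")
        vm_ready[vm] = finish[task]               # one task at a time per VM
    return max(finish.values())                   # Eq. (15): makespan

# Illustrative numbers only:
assignment = {"t1": "v1", "t2": "v1", "t3": "v2"}
sizes = {"t1": 2000.0, "t2": 1000.0, "t3": 3000.0}
caps = {"v1": 1000.0, "v2": 2000.0}
deadlines = {"t1": 5.0, "t2": 5.0, "t3": 5.0}
print(schedule_makespan(assignment, sizes, caps, deadlines))  # 3.0
```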
Up to now, we have modeled the scheduler mathematically; in order to preserve trust and optimize the parameters, we used a firefly optimization algorithm to improve the SLA-based trust parameters and the makespan.

3.3. Fitness Function for Trust-Aware Task Scheduling Algorithm (TAFFA) Using Firefly Optimization

This subsection discusses the fitness function used in our proposed trust-aware task scheduling algorithm. It is calculated using the following Equation (17).
$f(x) = \psi_1 \cdot ma_{m_t} + \psi_2 \cdot AV_{n_v} + \psi_3 \cdot SR_{n_v} + \psi_4 \cdot TE_{n_v}$ (17)
where
$\psi_1 + \psi_2 + \psi_3 + \psi_4 = 1$ (18)
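A direct transcription of Equations (17) and (18) is shown below. The weight values are placeholders chosen only to satisfy the sum-to-one constraint, and the 1/makespan term is our own assumption so that a smaller makespan raises the fitness alongside the three SLA-based trust parameters; the paper states only that the makespan should be minimized while the other terms improve.

```python
def fitness(makespan: float, availability: float, success_rate: float,
            turnaround_eff: float, weights=(0.25, 0.25, 0.25, 0.25)) -> float:
    """Equation (17) under the constraint of Equation (18) (weights sum to 1).
    The inverted makespan term is an assumption of this sketch."""
    w1, w2, w3, w4 = weights
    assert abs(sum(weights) - 1.0) < 1e-9          # Equation (18)
    return (w1 * (1.0 / makespan) + w2 * availability
            + w3 * success_rate + w4 * turnaround_eff)

print(round(fitness(3.0, 0.95, 0.947, 0.8), 3))    # ~0.758 with the example values above
```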
From Equations (17) and (18), we calculated the parameters mentioned above. When this fitness function is evaluated, the makespan needs to be minimized and the availability, success rate, and turnaround efficiency need to be improved. The next section discusses the methodology used in our task scheduler and presents the proposed trust-aware task scheduling algorithm using firefly optimization.

4. Proposed Trust-Aware Task Scheduling Algorithm Using Firefly Optimization

This section discusses the proposed trust-aware task scheduling algorithm based on the firefly optimization algorithm. Initially, we discuss the methodology used to model our scheduling algorithm. The proposed approach was modeled using firefly optimization because it requires a smaller number of iterations and the fireflies traverse much of the problem space, making it suitable for solving NP-hard problems such as task scheduling in the cloud computing paradigm. Firefly optimization is a nature-inspired algorithm that mimics the behavior of fireflies and their attraction patterns based on their flashlight patterns, as discussed in [33]. The basic approach of firefly optimization and how two flies are attracted to each other using flashlights are discussed in [34]. Given their nature, fireflies are well suited to solving NP-hard and multi-dimensional problems [35], which are similar to the task scheduling problem in cloud computing; therefore, we chose the firefly optimization algorithm to solve the task scheduling problem in the cloud paradigm. A firefly is attracted to other flies based on their brightness. In nature, fireflies are attracted to flies of the other gender, which they identify based on the intensity of the flashlights used for communication. In the firefly optimization approach, the following assumptions are made: (1) all flies belong to the same gender and attract each other irrespective of gender; (2) a less-bright firefly is attracted to a brighter firefly, i.e., attractiveness is directly proportional to brightness; and (3) if a fly is brighter than all the other flies in the search space, it moves randomly. Because attractiveness is directly proportional to brightness and the perceived light intensity decreases with distance, fireflies that are farther apart appear less bright to each other. Therefore, to proceed with this approach, we need to calculate the brightness, i.e., the intensity, and the attractiveness.
Initially, the intensity of a firefly can be calculated using Equation (19).
$Int_s = \frac{Int_r}{s^2}$ (19)
Attractiveness depends on the light absorption of the medium and the brightness of the source. Therefore, the light intensity and the light absorption coefficient are related to each other, as indicated in Equation (20).
$Int = Int_0 \cdot e^{-\Omega s^2}$ (20)
After calculating the intensity of the flies, we then need to identify the distance between two flies. This is calculated using Equation (21).
$s = \lVert x_i - x_j \rVert = \sqrt{\sum_{z=1}^{r} \left( x_{i,z} - x_{j,z} \right)^2}$ (21)
where $r$ is the dimension of a firefly.
After the calculation of the distance, we initialize the population of fireflies, disperse them, and define the movement of a firefly $i$ attracted towards $j$, a firefly brighter than $i$, over the number of iterations indicated as $u$. The movement of the firefly is calculated using Equation (22).
$x_{u+1}^{i} = x_{u}^{i} + \Gamma_0 \cdot e^{-\Omega s^2} \left( x_{u}^{j} - x_{u}^{i} \right) + \gamma \, EUR_i$ (22)
After defining the movement of the firefly, we need to calculate the randomized damping constant, which is calculated using Equation (23).
$\gamma = \gamma_0 \cdot \theta^{u}$ (23)
where $\theta$ is a damping constant lying between 0 and 1.
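The firefly update rules of Equations (19)–(23) translate into a few NumPy lines, shown below. The constants ($\Gamma_0$, $\Omega$, $\gamma_0$, $\theta$) are arbitrary demonstration values, and the symbols follow our reconstruction of the extracted equations rather than the authors' code.

```python
import numpy as np

def intensity(source_intensity: float, s: float) -> float:
    """Equation (19): perceived light intensity decays with the square of the distance s."""
    return source_intensity / s ** 2

def attractiveness(base_intensity: float, omega: float, s: float) -> float:
    """Equation (20): intensity with light absorption coefficient omega."""
    return base_intensity * np.exp(-omega * s ** 2)

def distance(x_i: np.ndarray, x_j: np.ndarray) -> float:
    """Equation (21): Euclidean distance between two fireflies of dimension r."""
    return float(np.sqrt(np.sum((x_i - x_j) ** 2)))

def move(x_i: np.ndarray, x_j: np.ndarray, big_gamma_0: float, omega: float,
         gamma_0: float, theta: float, u: int, rng: np.random.Generator) -> np.ndarray:
    """Equation (22): firefly i moves toward the brighter firefly j; the random-step
    coefficient gamma is damped over the iterations u per Equation (23)."""
    s = distance(x_i, x_j)
    gamma = gamma_0 * theta ** u                            # Equation (23)
    random_step = rng.uniform(-0.5, 0.5, size=x_i.shape)    # stands in for EUR_i
    return x_i + attractiveness(big_gamma_0, omega, s) * (x_j - x_i) + gamma * random_step

rng = np.random.default_rng(0)
x_i, x_j = np.array([0.2, 0.8]), np.array([0.6, 0.4])
print(move(x_i, x_j, big_gamma_0=1.0, omega=1.0, gamma_0=0.2, theta=0.95, u=3, rng=rng))
```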

Proposed Trust-Aware Task Scheduling Algorithm (TAFFA) in Cloud Computing Using Firefly Optimization Algorithm

The algorithm starts with a random firefly population and then calculates the priorities of the incoming tasks using Equation (6) and the priorities of the VMs using Equation (7). After calculating the priorities, the fitness function is evaluated using Equation (17). In the next step, the intensity of the fireflies is calculated using Equations (19) and (20). Then, the distance and movement of the fireflies are evaluated using Equations (21) and (22). If a firefly has a better brightness, we calculate the corresponding parameters using Equations (8)–(10) and (15), respectively; if these parameters are optimized, we identify the firefly as the best solution and update its value as the best value. Otherwise, the procedure is repeated for all iterations until the best solution is reached.
Algorithm 1: Trust-Aware Task Scheduling Algorithm (TAFFA) in Cloud Computing Using Firefly Optimization Algorithm.
Input: set of tasks $m_t = \{m_1, m_2, m_3, \ldots, m_t\}$, set of VMs $n_v = \{n_1, n_2, n_3, \ldots, n_v\}$, set of hosts $ho_q = \{ho_1, ho_2, ho_3, \ldots, ho_q\}$, set of datacenters $d_p = \{d_1, d_2, d_3, \ldots, d_p\}$.
Output: schedules generated by TAFFA that minimize the makespan while improving $AV_{n_v}$, $SR_{n_v}$, and $TE_{n_v}$.
Start
Initialize a random firefly population.
Calculate priorities of incoming tasks using Equation (6).
Calculate priorities of VMs using Equation (7).
Evaluate the fitness function in Equation (17).
for every firefly do
    Calculate intensity using Equations (19) and (20).
    Calculate distance and movement of fireflies using Equations (21) and (22).
    if a firefly has more brightness then
        Calculate $ma_{m_t}$, $AV_{n_v}$, $SR_{n_v}$, $TE_{n_v}$ using Equations (8)–(10) and (15), respectively.
        Return best solutions.
    else
        Repeat the same procedure until the best solution is reached, for all iterations.
    end if
end for
Stop
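Putting the pieces together, the sketch below outlines Algorithm 1 as we read it: each firefly encodes a candidate task-to-VM assignment, brightness is the fitness, and dimmer assignments move toward brighter ones via the movement rule of Equation (22). It is a simplified, continuous-encoding sketch under our own assumptions (positions rounded to VM indices; only the makespan term of the fitness is active), not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative problem instance (assumed numbers, not from the paper).
task_sizes = np.array([2000.0, 1500.0, 3000.0, 800.0, 2500.0])   # Eq. (5) sizes
vm_caps    = np.array([1000.0, 2000.0, 1500.0])                  # Eq. (3) capacities

def decode(position: np.ndarray) -> np.ndarray:
    """Map a continuous firefly position to a task -> VM index assignment."""
    return np.clip(np.rint(position), 0, len(vm_caps) - 1).astype(int)

def makespan(assignment: np.ndarray) -> float:
    """Eqs. (12)-(15): per-VM sequential execution, makespan = max finish time."""
    vm_time = np.zeros(len(vm_caps))
    for t, v in enumerate(assignment):
        vm_time[v] += task_sizes[t] / vm_caps[v]
    return vm_time.max()

def fitness(assignment: np.ndarray) -> float:
    """Eq. (17)-style score; only the makespan term is used in this sketch,
    inverted so that brighter fireflies correspond to shorter makespans."""
    return 1.0 / makespan(assignment)

# Firefly optimization loop (Eqs. (19)-(23)), simplified.
n_flies, n_iter = 10, 50
big_gamma_0, omega, gamma_0, theta = 1.0, 1.0, 0.3, 0.95
pos = rng.uniform(0, len(vm_caps) - 1, size=(n_flies, len(task_sizes)))

for u in range(n_iter):
    bright = np.array([fitness(decode(p)) for p in pos])
    gamma = gamma_0 * theta ** u                                  # Eq. (23)
    for i in range(n_flies):
        for j in range(n_flies):
            if bright[j] > bright[i]:                             # move i toward brighter j
                s2 = np.sum((pos[i] - pos[j]) ** 2)               # squared distance, Eq. (21)
                pos[i] = (pos[i]
                          + big_gamma_0 * np.exp(-omega * s2) * (pos[j] - pos[i])  # Eq. (22)
                          + gamma * rng.uniform(-0.5, 0.5, len(task_sizes)))
                pos[i] = np.clip(pos[i], 0, len(vm_caps) - 1)

best = decode(pos[np.argmax([fitness(decode(p)) for p in pos])])
print("best assignment:", best, "makespan:", round(makespan(best), 3))
```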

5. Simulations and Results

This section discusses the extensive simulations conducted using the Cloudsim [36] simulator, which models the entire cloud environment with the support of Java programming. For these simulations, we used two types of workloads: randomized workloads fabricated with different distributions, i.e., uniform, normal, left skewed, and right skewed, and real-time worklogs taken from the HPC2N [37] and NASA [38] computing clusters. Using these two types of workload, we evaluated our proposed TAFFA approach and compared it with the existing ACO, GA, and PSO algorithms.

5.1. Settings Used for Simulation

We used Cloudsim [36] for the entire simulation, running on a machine with 16 GB of RAM, an M1 processor, and macOS, which provides virtualization support. We used randomized fabricated workloads, represented as b01, b02, b03, and b04, with different distributions, as well as the real-time parallel worklog archives from HPC2N [37] and NASA [38]. The randomized distributions are of different categories: b01 indicates a uniform distribution in which all task sizes are equally represented; b02 represents a normal distribution with a higher number of medium tasks and a lower number of small and large tasks; b03 represents a left skewed distribution, which consists of a higher number of medium tasks and a lower number of small tasks; and b04 represents a right skewed distribution, which consists of a higher number of small tasks and a lower number of medium tasks. In our work, the HPC2N worklog is represented as b05 and the NASA worklog as b06. We considered the simulation settings from [39].
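As an example of how the fabricated workload distributions b01–b04 might be generated, the snippet below draws task lengths from uniform, normal, and skewed distributions with NumPy. The task-length range and the skewness parameters are our own assumptions, since the paper does not list them.

```python
import numpy as np

rng = np.random.default_rng(7)
N = 1000                      # number of tasks (cloudlets) to fabricate
LOW, HIGH = 1_000, 100_000    # assumed task-length range in million instructions

def clip(lengths):
    return np.clip(lengths, LOW, HIGH).astype(int)

workloads = {
    # b01: uniform -- all task sizes equally likely
    "b01": clip(rng.uniform(LOW, HIGH, N)),
    # b02: normal -- mostly medium tasks, few small and large ones
    "b02": clip(rng.normal(loc=(LOW + HIGH) / 2, scale=(HIGH - LOW) / 6, size=N)),
    # b03: left skewed -- mass near the upper end, tail toward small tasks
    "b03": clip(HIGH - rng.exponential(scale=(HIGH - LOW) / 4, size=N)),
    # b04: right skewed -- mass near the lower end, tail toward large tasks
    "b04": clip(LOW + rng.exponential(scale=(HIGH - LOW) / 4, size=N)),
}

for name, lengths in workloads.items():
    print(name, "mean length:", int(lengths.mean()))
```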

5.2. Calculation of Makespan

The makespan generated by the proposed TAFFA scheduler is evaluated in this subsection; calculating the makespan is necessary when designing a scheduler, as the effectiveness of any scheduler depends on the makespan it generates, and minimizing the makespan improves the performance of the scheduler. Therefore, we calculated the makespan by providing 100 to 1000 tasks and running the simulation for 50 iterations, using the configuration settings in Table 3 and the workloads b01, b02, b03, b04, b05, and b06, respectively. The proposed TAFFA approach was compared with the baseline ACO, GA, and PSO approaches. Table 4 shows the makespan generated for the different workloads given as input to the TAFFA scheduler, and from Figure 2, it is clearly evident that the proposed TAFFA scheduler outperforms the baseline approaches in terms of the makespan generated over 50 iterations for the considered tasks.

5.3. Calculation of Availability

After calculating the makespan for the proposed TAFFA scheduler, the SLA-based trust parameters were calculated. Availability is an important parameter from the perspectives of both the cloud consumer and the service provider: improving the availability percentage benefits the cloud provider in terms of the quality of service, which improves reliability and, thereby, trust in the cloud provider. Therefore, the availability of the virtual resources was calculated by providing 100 to 1000 tasks and running the simulation for 50 iterations, using the configuration settings in Table 3 and the workloads b01, b02, b03, b04, b05, and b06, respectively. The proposed TAFFA approach was compared with the baseline ACO, GA, and PSO approaches. Table 5 shows the availability obtained for the different workloads given as input to the TAFFA scheduler, and from Figure 3, it is clearly evident that the proposed TAFFA scheduler outperforms the baseline approaches in terms of availability over 50 iterations for the considered tasks.

5.4. Calculation of Success Rate

The success rate of a virtual resource indirectly improves trust in a cloud provider, as the successful execution of tasks improves the quality of service, which in turn minimizes SLA violations and thereby increases trust in the cloud provider. Therefore, the success rate of the virtual resources was calculated by providing 100 to 1000 tasks and running the simulation for 50 iterations, using the configuration settings in Table 3 and the workloads b01, b02, b03, b04, b05, and b06, respectively. The proposed TAFFA was compared with the baseline ACO, GA, and PSO approaches. Table 6 shows the success rate obtained for the different workloads given as input to the TAFFA scheduler, and from Figure 4, it is clearly evident that the proposed TAFFA scheduler outperforms the baseline approaches in terms of the success rate over 50 iterations for the considered tasks.

5.5. Calculation of Turnaround Efficiency

The turnaround efficiency of the virtual resources impacts the quality of service, as it relates to the successful execution of tasks and the time taken to execute and respond to users' requests. When the turnaround efficiency improves, the quality of service improves and, thereby, trust in the cloud provider increases. Therefore, the turnaround efficiency of the virtual resources was calculated by providing 100 to 1000 tasks and running the simulation for 50 iterations, using the configuration settings in Table 3 and the workloads b01, b02, b03, b04, b05, and b06, respectively. The proposed TAFFA approach was compared with the baseline ACO, GA, and PSO approaches. Table 7 shows the turnaround efficiency obtained for the different workloads given as input to the TAFFA scheduler, and from Figure 5, it is clearly evident that the proposed TAFFA scheduler outperforms the baseline approaches in terms of the turnaround efficiency over 50 iterations for the considered tasks.

5.6. Discussion of Results

This subsection analyzes the simulated results and the improvement of the parameters discussed in Section 5.2, Section 5.3, Section 5.4, and Section 5.5. As mentioned above, we used Cloudsim [36] for the extensive simulations, which were carried out with the fabricated datasets and real-time worklogs denoted b01, b02, b03, b04, b05, and b06, respectively. We compared our proposed TAFFA scheduler with the existing baseline algorithms, i.e., ACO, GA, and PSO, by running the simulation for 50 iterations, and the results for the measured parameters were presented in Section 5.2, Section 5.3, Section 5.4, and Section 5.5. From these results, we draw inferences that measure how far the proposed TAFFA improved the parameters and generated schedules effectively. Table 8 presents the improvement of the makespan over the baseline approaches for the different workloads; Table 9 presents the improvement of the availability; Table 10 presents the improvement of the success rate; and Table 11 presents the improvement of the turnaround efficiency over the baseline approaches for the different workloads. Finally, from all these results, we can say that our proposed TAFFA scheduler improves the makespan as well as the SLA-based trust parameters, thereby improving the quality of service and the cloud user's trust in the cloud provider.

5.6.1. Improvement of Makespan

This subsection discusses the improvement of the makespan, as the effectiveness of a task scheduler primarily relies on the makespan; therefore, the minimization of the makespan is considered the primary aspect. The proposed TAFFA was evaluated against the baseline approaches, i.e., ACO, GA, and PSO. We compared the proposed TAFFA with these algorithms because many earlier authors modeled task schedulers with these approaches, whose disadvantages were discussed in the related works, and we considered different workloads, i.e., different statistical distributions and real-time worklogs, for the simulation. As shown in Table 8, the proposed TAFFA reduced the makespan compared with ACO for the workloads b01, b02, b03, b04, b05, and b06 by 25.09%, 27.73%, 25.49%, 11%, 22.55%, and 27.45%, respectively. It reduced the makespan compared with the GA for the workloads b01, b02, b03, b04, b05, and b06 by 37.6%, 26.62%, 30.1%, 19.86%, 32.08%, and 35.75%, respectively, and compared with PSO by 32.38%, 16.92%, 25.65%, 16.74%, 33.48%, and 23.11%, respectively. Therefore, the proposed TAFFA greatly minimizes the makespan over the algorithms mentioned in Table 8.

5.6.2. Improvement of Availability

This subsection discusses the improvement of the availability, as trust in a cloud provider is based on the availability of the virtual resources in the cloud paradigm. The improvement of the availability of virtual resources is related to the quality of service provided by the cloud paradigm, and a degradation of the availability damages the quality of service, which in turn affects trust in the cloud provider. Therefore, the availability of the virtual resources was calculated carefully using the proposed TAFFA, with different statistical distributions and real-time worklogs considered for the simulation. As shown in Table 9, the proposed TAFFA improved the availability over ACO for the workloads b01, b02, b03, b04, b05, and b06 by 27.28%, 27.4%, 34.64%, 42.98%, 53.56%, and 58.49%, respectively. It improved over the GA by 19.54%, 15.32%, 17.93%, 34.82%, 49.96%, and 55.29%, respectively, and over PSO by 22.4%, 19.02%, 24.47%, 22.87%, 29.62%, and 30.25%, respectively. Therefore, the proposed TAFFA greatly improves the availability over the algorithms mentioned in Table 9.

5.6.3. Improvement of Success Rate

This subsection discusses the improvement of the success rate, as trust in a cloud provider is based on the success rate of the virtual resources in the cloud paradigm. The improvement of the success rate of virtual resources is related to the quality of service provided by the cloud paradigm: the success rate of tasks highly impacts the quality of service provided by a cloud provider, which in turn relates to trust in the cloud provider. Therefore, the success rate of the virtual resources was calculated carefully using the proposed TAFFA, with different statistical distributions and real-time worklogs considered for the simulation. As shown in Table 10, the proposed TAFFA improved the success rate over ACO for the workloads b01, b02, b03, b04, b05, and b06 by 30.48%, 66.7%, 53.56%, 56.82%, 60.08%, and 65.14%, respectively. It improved over the GA by 24.84%, 53.79%, 58.11%, 40.23%, 48.43%, and 72.36%, respectively, and over PSO by 24.86%, 33.42%, 38.89%, 49.48%, 33.78%, and 55.55%, respectively. Therefore, the proposed TAFFA greatly improves the success rate over the algorithms mentioned in Table 10.

5.6.4. Improvement of Turnaround Efficiency

This subsection discusses the improvement of the turnaround efficiency, as trust in a cloud provider is based on the turnaround efficiency of the virtual resources in the cloud paradigm. The improvement of the turnaround efficiency of tasks on virtual resources is related to the quality of service provided by the cloud paradigm: the turnaround efficiency highly impacts the quality of service of a cloud provider, which in turn impacts trust in the cloud provider. Therefore, the turnaround efficiency of tasks on the virtual resources was calculated carefully using the proposed TAFFA, with different statistical distributions and real-time worklogs considered for the simulation. As shown in Table 11, the proposed TAFFA improved the turnaround efficiency over ACO for the workloads b01, b02, b03, b04, b05, and b06 by 88.75%, 59.21%, 38.92%, 41.85%, 82.38%, and 37.8%, respectively. It improved over the GA by 71.57%, 51.91%, 36.24%, 26.77%, 73.66%, and 51.41%, respectively, and over PSO by 67.62%, 38.33%, 29.77%, 26.55%, 41.04%, and 33.18%, respectively. Therefore, the proposed TAFFA greatly improves the turnaround efficiency over the algorithms mentioned in Table 11.

6. Conclusions and Future Work

Task scheduling is a challenging aspect of the cloud computing paradigm, as the workloads arriving at the cloud console arise from various heterogeneous resources with different types and different runtime processing capacities; it is therefore difficult for a cloud provider to schedule these tasks onto appropriate virtual resources. The ineffective mapping of tasks onto virtual resources degrades the quality of service, increases the makespan, and, in turn, leads to violations of the SLA between the cloud provider and the user, which damages trust in the cloud provider. Therefore, in order to address the SLA-based trust parameters, we carried out research in which the priorities of all incoming tasks and of the VMs are calculated and the tasks are then precisely mapped onto the VMs. We chose the firefly optimization approach to model our trust-aware task scheduler (TAFFA), and the entire set of simulations was carried out on Cloudsim. We carried out extensive simulations by generating tasks using different randomized task distributions and, to check the efficacy of our approach, we used the standard benchmark worklogs of the HPC2N and NASA computing clusters. Finally, our proposed TAFFA was evaluated against the state-of-the-art algorithms, and from the results, it is evident that TAFFA dominates the existing approaches by minimizing the makespan and improving the availability, success rate, and turnaround efficiency. Our proposed TAFFA approach has some limitations: the tasks are generated from various heterogeneous resources, and the proposed TAFFA is not able to predict upcoming tasks from various users. In the future, we intend to extend our task scheduling algorithm by using artificial intelligence approaches.

Author Contributions

Conceptualization, S.M. and A.A.E.; methodology, G.R.K.; writing—original draft preparation, S.M. and G.R.K.; writing—review and editing, A.A.E.; visualization, G.R.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Saadia, D. Integration of cloud computing, big data, artificial intelligence, and internet of things: Review and open research issues. Int. J. Web-Based Learn. Teach. Technol. 2021, 16, 10–17. [Google Scholar] [CrossRef]
  2. Dubey, K.; Sharma, S. A novel multi-objective CR-PSO task scheduling algorithm with deadline constraint in cloud computing. Sustain. Comput. Inform. Syst. 2021, 32, 100605. [Google Scholar] [CrossRef]
  3. Sharma, N.; Puneet, G. Ant colony based optimization model for QoS-based task scheduling in cloud computing environment. Meas. Sens. 2022, 24, 100531. [Google Scholar] [CrossRef]
  4. Abualigah, L.; Alkhrabsheh, M. Amended hybrid multi-verse optimizer with genetic algorithm for solving task scheduling problem in cloud computing. J. Supercomput. 2021, 78, 740–765. [Google Scholar] [CrossRef]
  5. Aydilek, I.B. A hybrid firefly and particle swarm optimization algorithm for computationally expensive numerical problems. Appl. Soft Comput. J. 2018, 66, 232–249. [Google Scholar] [CrossRef]
  6. Nabi, S.; Ahmad, M.; Ibrahim, M.; Hamam, H. AdPSO: Adaptive PSO-Based Task Scheduling Approach for Cloud Computing. Sensors 2022, 22, 920. [Google Scholar] [CrossRef]
  7. Rani, R.; Ritu, G. Energy efficient task scheduling using adaptive PSO for cloud computing. Int. J. Reason.-Based Intell. Syst. 2021, 13, 50–58. [Google Scholar] [CrossRef]
  8. Nabi, S.; Ahmed, M. PSO-RDAL: Particle swarm optimization-based resource- and deadline-aware dynamic load balancer for deadline constrained cloud tasks. J. Supercomput. 2021, 78, 4624–4654. [Google Scholar] [CrossRef]
  9. Mangalampalli, S.; Swain, S.K.; Mangalampalli, V.K. Multi Objective Task Scheduling in Cloud Computing Using Cat Swarm Optimization Algorithm. Arab. J. Sci. Eng. 2021, 47, 1821–1830. [Google Scholar] [CrossRef]
  10. Malik, M.; Suman. Lateral Wolf Based Particle Swarm Optimization (LW-PSO) for Load Balancing on Cloud Computing. Wirel. Pers. Commun. 2022, 1, 1–20. [Google Scholar] [CrossRef]
  11. Ankita; Sahana, S.K. Ba-PSO: A Balanced PSO to solve multi-objective grid scheduling problem. Appl. Intell. 2021, 52, 4015–4027. [Google Scholar] [CrossRef]
  12. Agarwal, M.; Srivastava, G.M.S. Opposition-based learning inspired particle swarm optimization (OPSO) scheme for task scheduling problem in cloud computing. J. Ambient. Intell. Humaniz. Comput. 2021, 12, 9855–9875. [Google Scholar] [CrossRef]
  13. Shukri, S.E.; Al-Sayyed, R.; Hudaib, A.; Mirjalili, S. Enhanced multi-verse optimizer for task scheduling in cloud computing environments. Expert Syst. Appl. 2020, 168, 114230. [Google Scholar] [CrossRef]
  14. Walia, N.K.; Kaur, N.; Alowaidi, M.; Bhatia, K.S.; Mishra, S.; Sharma, N.K.; Sharma, S.K.; Kaur, H. An energy-efficient hybrid scheduling algorithm for task scheduling in the cloud computing environments. IEEE Access 2021, 9, 117325–117337. [Google Scholar] [CrossRef]
  15. Hussain, M.; Wei, L.F.; Lakhan, A.; Wali, S.; Ali, S.; Hussain, A. Energy and performance-efficient task scheduling in heterogeneous virtualized cloud computing. Sustain. Comput. Inform. Syst. 2021, 30, 100517. [Google Scholar]
  16. Velliangiri, S.; Karthikeyan, P.; Xavier, V.A.; Baswaraj, D. Hybrid electro search with genetic algorithm for task scheduling in cloud computing. Ain Shams Eng. J. 2020, 12, 631–639. [Google Scholar] [CrossRef]
  17. Kanwal, S.; Iqbal, Z.; Al-Turjman, F.; Irtaza, A.; Khan, M.A. Multiphase fault tolerance genetic algorithm for vm and task scheduling in datacenter. Inf. Process. Manag. 2021, 58, 102676. [Google Scholar] [CrossRef]
  18. Sanaj, M.S.; Joe Prathap, P.M. An efficient approach to the map-reduce framework and genetic algorithm based whale optimization algorithm for task scheduling in cloud computing environment. Mater. Today Proceed. 2021, 37, 3199–3208. [Google Scholar] [CrossRef]
  19. Pirozmand, P.; Hosseinabadi, A.A.R.; Farrokhzad, M.; Sadeghilalimi, M.; Mirkamali, S.; Slowik, A. Multi-objective hybrid genetic algorithm for task scheduling problem in cloud computing. Neural Comput. Appl. 2021, 33, 13075–13088. [Google Scholar] [CrossRef]
  20. Pirozmand, P.; Javadpour, A.; Nazarian, H.; Pinto, P.; Mirkamali, S.; Ja’Fari, F. GSAGA: A hybrid algorithm for task scheduling in cloud infrastructure. J. Supercomput. 2022, 78, 17423–17449. [Google Scholar] [CrossRef]
  21. Mirmohseni, S.M.; Tang, C.; Javadpour, A. FPSO-GA: A Fuzzy Metaheuristic Load Balancing Algorithm to Reduce Energy Consumption in Cloud Networks. Wirel. Pers. Commun. 2022, 127, 2799–2821. [Google Scholar] [CrossRef]
  22. Elsedimy, E.; Fahad, A. MOTS-ACO: An improved ant colony optimiser for multi-objective task scheduling optimisation problem in cloud data centres. IET Net. 2022, 11, 43–57. [Google Scholar] [CrossRef]
  23. Dubey, K.; Subhash, C.S. A hybrid multi-faceted task scheduling algorithm for cloud computing environment. Int. J. Syst. Assur. Eng. Manag. 2021, 1, 1–15. [Google Scholar] [CrossRef]
  24. Nalini, J.; Khilar, P.M. Reinforced Ant Colony Optimization for Fault Tolerant Task Allocation in Cloud Environments. Wirel. Pers. Commun. 2021, 121, 2441–2459. [Google Scholar] [CrossRef]
  25. Muteeh, A.; Sardaraz, M.; Tahir, M. MrLBA: Multi-resource load balancing algorithm for cloud computing using ant colony optimization. Clust. Comput. 2021, 24, 3135–3145. [Google Scholar] [CrossRef]
  26. Zambuk, F.U.; Ya’U, A.; Jiya, M.; Ado, N.; Ja’Afaru, B.; Muhammad, A. Efficient Task Scheduling in Cloud Computing using Multi-objective Hybrid Ant Colony Optimization Algorithm for Energy Efficiency. Int. J. Adv. Comput. Sci. Appl. 2021, 12, 450–456. [Google Scholar] [CrossRef]
  27. Liu, H. Research on cloud computing adaptive task scheduling based on ant colony algorithm. Optik 2022, 258, 168677. [Google Scholar] [CrossRef]
  28. Belgacem, A.; Beghdad-Bey, K. Multi-objective workflow scheduling in cloud computing: Trade-off between makespan and cost. Clust. Comput. 2021, 25, 579–595. [Google Scholar] [CrossRef]
  29. Xia, W.; Shen, L. Joint resource allocation at edge cloud based on ant colony optimization and genetic algorithm. Wirel. Pers. Commun. 2021, 117, 355–386. [Google Scholar] [CrossRef]
  30. Mbarek, F.; Mosorov, V. Hybrid Nearest-Neighbor Ant Colony Optimization Algorithm for Enhancing Load Balancing Task Management. Appl. Sci. 2021, 11, 10807. [Google Scholar] [CrossRef]
  31. Abed-Alguni, B.H.; Alawad, N.A. Distributed Grey Wolf Optimizer for scheduling of workflow applications in cloud environments. Appl. Soft Comput. 2021, 102, 107113. [Google Scholar] [CrossRef]
  32. Singh, A.; Kakali, C. A multi-dimensional trust and reputation calculation model for cloud computing environments. IEEE Access 2017, 1, 1–8. [Google Scholar]
  33. Yang, X.-S. A new metaheuristic bat-inspired algorithm. In Nature Inspired Cooperative Strategies for Optimization (NICSO 2010); Springer: Berlin/Heidelberg, Germany, 2010; pp. 65–74. [Google Scholar]
  34. Yang, X.-S. Firefly algorithms for multimodal optimization. In Proceedings of the International Symposium on Stochastic Algorithms, Sapporo, Japan, 26–28 October 2009; Springer: Berlin/Heidelberg, Germany, 2009. [Google Scholar]
  35. Yang, X. Firefly algorithm, Lévy distributions and global optimization. In Research and Development in Intelligent Systems XXVI; Springer: Berlin/Heidelberg, Germany, 2010. [Google Scholar]
  36. Calheiros, R.; Ranjan, R.; Beloglazov, A.; De Rose, C.A.F.; Buyya, R. CloudSim: A toolkit for modeling and simulation of cloud computing environments and evaluation of resource provisioning algorithms. Softw. Pract. Exp. 2010, 41, 23–50. [Google Scholar] [CrossRef]
  37. Ibrahim, A.; Elaziz, M.A.; Xiong, S. Job scheduling in cloud computing using a modified Harris hawks optimization and simulated annealing algorithm. Comput. Intell. Neurosci. 2020, 2020, 3504642. [Google Scholar]
  38. Abualigah, L.; Diabat, A. A novel hybrid antlion optimization algorithm for multi-objective task scheduling problems in cloud computing environments. Cluster Comput. 2021, 24, 205–223. [Google Scholar] [CrossRef]
  39. Madni, S.H.H.; Latiff, M.S.A.; Abdulhamid, S.M.; Ali, J. Hybrid gradient descent cuckoo search (HGDCS) algorithm for resource scheduling in IaaS cloud computing environment. Clust. Comput. 2018, 22, 301–334. [Google Scholar] [CrossRef]
Figure 1. Proposed system architecture.
Figure 2. Calculation of makespan using various workloads: (a) calculation of makespan using uniform distribution; (b) calculation of makespan using normal distribution; (c) calculation of makespan using left skewed distribution; (d) calculation of makespan using right skewed distribution; (e) calculation of makespan using HPC2N workload; (f) calculation of makespan using NASA workload.
Figure 3. Calculation of availability using various workloads: (a) calculation of availability using uniform distribution; (b) calculation of availability using normal distribution; (c) calculation of availability using left skewed distribution; (d) calculation of availability using right skewed distribution; (e) calculation of availability using HPC2N workload; (f) calculation of availability using NASA workload.
Figure 4. Calculation of success rate using various workloads: (a) calculation of success rate using uniform distribution; (b) calculation of success rate using normal distribution; (c) calculation of success rate using left skewed distribution; (d) calculation of success rate using right skewed distribution; (e) calculation of success rate using HPC2N workload; (f) calculation of success rate using NASA workload.
Figure 5. Calculation of turnaround efficiency using various workloads: (a) calculation of turnaround efficiency using uniform distribution; (b) calculation of turnaround efficiency using normal distribution; (c) calculation of turnaround efficiency using left skewed distribution; (d) calculation of turnaround efficiency using right skewed distribution; (e) calculation of turnaround efficiency using HPC2N workload; (f) calculation of turnaround efficiency using NASA workload.
Table 1. Summary of various task scheduling algorithms in cloud computing and addressed parameters.

Authors | Technique Used | Parameters
[6] | Adaptive PSO | Throughput, makespan, resource utilization.
[7] | PSO | Makespan, energy consumption.
[8] | PSO | Resource utilization, makespan, task rejection, penalty cost, total cost.
[9] | CSO | Makespan, total power cost, migration time, energy consumption.
[10] | LW-PSO | Execution time, turnaround time, response time.
[11] | Ba-PSO | Total execution time, wait time, scalability, load balancing of tasks.
[12] | OPSO | Makespan, energy consumption.
[13] | EMVO | Makespan, resource utilization.
[14] | GA-FPA | Resource utilization, computation cost, completion time.
[15] | EPETS | Makespan, energy consumption.
[16] | HSGA | Makespan, cost, response time.
[17] | MFTGA | Fault tolerance, execution time, cost, energy consumption, SLA violation.
[18] | GA-WOA | Throughput, processing cost, computation cost, processing time.
[19] | GAECS | Makespan, energy consumption.
[20] | GSAGA | Processing capacity, energy consumption.
[21] | FPSO-GA | Load balancing of tasks.
[22] | MOTS-ACO | Makespan, turnaround time, load balancing, power efficiency.
[23] | AC-PSO | Makespan, total cost, resource utilization.
[24] | R-ACO | Makespan.
[25] | MR-LBA | Execution time and execution cost.
[26] | ACOBF | Energy consumption, makespan, resource utilization.
[27] | Adaptive ACO | Task completion time, execution cost, load balance of tasks.
[28] | HEFT-ACO | Makespan, cost.
[29] | JRA-ACO-GA | Resource utilization.
[30] | ACO-NN | Load balancing.
[31] | DGWO | Makespan, computation and transmission costs.
Table 2. Notations used in system architecture.

Entity | Meaning
m_t | Set of tasks.
n_v | Set of VMs.
d_p | Set of datacenters.
ho_q | Set of hosts.
load_{n_v} | Workload on virtual resources.
load_{ho_q} | Workload on physical hosts.
pr_{n_v} | Processing capacity of virtual resources.
prio_{m_t} | Priorities of considered tasks.
prio_{n_v} | Priorities of VMs based on electricity unit cost.
ma_{m_t} | Makespan.
dl_{m_t} | Deadline constraint.
ex_{m_t} | Execution time.
ft_{m_t} | Finish time.
AV_{n_v} | Availability of VMs.
SR_{n_v} | Success rate of VMs.
TE_{n_v} | Turnaround efficiency.
trust_{csp} | Trust in a cloud provider.
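For orientation, the trust-related quantities in Table 2 are usually defined along the following lines. The formulas below are a hedged reconstruction written in the Table 2 notation using commonly used forms; in particular, the weighted aggregate for trust_{csp} and its weights w_1, w_2, w_3 are assumptions rather than the paper's exact equations.

```latex
% Commonly used forms, written in the Table 2 notation (assumed, not verbatim from the paper).
ma_{m_t}   = \max_{v \,\in\, n_v} ft_{v}
           % makespan: latest finish time over all VMs
AV_{n_v}   = \frac{\text{number of requests accepted by the VMs}}{\text{number of requests submitted}}
SR_{n_v}   = \frac{\lvert \{\, t \in m_t : ft_{t} \le dl_{t} \,\} \rvert}{\lvert m_t \rvert}
           % fraction of tasks finishing within their deadline
TE_{n_v}   = \frac{\text{expected turnaround time}}{\text{actual turnaround time}}
trust_{csp} = w_1\, AV_{n_v} + w_2\, SR_{n_v} + w_3\, TE_{n_v}, \qquad w_1 + w_2 + w_3 = 1
           % assumed weighted aggregate of the SLA-based trust parameters
```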
Table 3. Configuration settings for simulation.

Name | Quantity
No. of tasks considered | 100–1000
Length of tasks | 900,000
Memory of host | 16 GB
Storage capacity of host | 2 TB
Network bandwidth capacity | 200 Mbps
No. of VMs | 50
Memory capacity of VM | 2 GB
Bandwidth of VM | 20 Mbps
Processing elements | 1100 MIPS
Hypervisor used | Xen
Hypervisor type | Monolithic
Operating system | MAC
Datacenters used | 10
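The configuration in Table 3 maps naturally onto the CloudSim 3.x API. The snippet below is a minimal sketch of how the VMs and cloudlets could be instantiated with those values; the VM image size, the cloudlet file sizes, and the class name SimulationSetupSketch are illustrative assumptions, and the surrounding CloudSim.init(), datacenter, and broker setup is omitted for brevity.

```java
import java.util.ArrayList;
import java.util.List;

import org.cloudbus.cloudsim.Cloudlet;
import org.cloudbus.cloudsim.CloudletSchedulerTimeShared;
import org.cloudbus.cloudsim.UtilizationModel;
import org.cloudbus.cloudsim.UtilizationModelFull;
import org.cloudbus.cloudsim.Vm;

/** Sketch: VMs and cloudlets instantiated with the Table 3 parameters (CloudSim 3.x API). */
public class SimulationSetupSketch {

    // 50 VMs: 1100 MIPS, 1 PE, 2 GB RAM, 20 Mbps bandwidth, Xen hypervisor.
    // The 10 GB image size is an illustrative assumption, not a Table 3 value.
    public static List<Vm> createVms(int brokerId) {
        List<Vm> vms = new ArrayList<>();
        for (int id = 0; id < 50; id++) {
            vms.add(new Vm(id, brokerId, 1100, 1, 2048, 20, 10000, "Xen",
                    new CloudletSchedulerTimeShared()));
        }
        return vms;
    }

    // 100-1000 cloudlets, each with a length of 900,000 MI.
    // The 300-byte input/output file sizes are illustrative assumptions.
    public static List<Cloudlet> createCloudlets(int brokerId, int count) {
        List<Cloudlet> cloudlets = new ArrayList<>();
        UtilizationModel full = new UtilizationModelFull();
        for (int id = 0; id < count; id++) {
            Cloudlet cloudlet = new Cloudlet(id, 900000, 1, 300, 300, full, full, full);
            cloudlet.setUserId(brokerId);
            cloudlets.add(cloudlet);
        }
        return cloudlets;
    }
}
```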
Table 4. Calculation of makespan using various workloads.

No. of Tasks | ACO: q, δ, Best | GA: q, δ, Best | PSO: q, δ, Best | TAFFA: q, δ, Best
b01
100752.81.32732.8767.991.99712.24675.62.43623.88545.872.67521.34
500941.32.88921.21487.322.561421.121267.341.571232.87853.211.78813.67
10001421.60.891398.81656.121.121610.331763.32.441712.45945.320.98912.57
b02
100857.211.78834.12857.21989.342.45912.87753.581.21698.35678.121.67
5001456.32.671408.341456.31425.782.891377.341265.210.691198.321146.661.11
10001854.210.341812.221854.211688.61.431612.661728.760.871689.341254.651.78
b03
100853.71.76812.9753.212.12712.35834.121.28785.23653.540.78612.56
500987.31.98894.671287.652.581187.32986.120.34933.12788.120.56721.12
10001378.62.451312.51457.321.211408.351398.51.971309.34924.580.17886.56
b04
100745.321.23697.86853.572.16797.36876.142.12787.56554.121.77521.45
500843.241.58806.35956.351.32899.79987.122.56897.13824.782.12784.56
10001387.542.161311.321488.431.761418.361247.982.251198.721378.92.741245.34
b05
1001687.32.871623.451745.51.871699.91931.22.141893.5988.461.13954.12
5001923.43.341894.32267.61.381987.342178.53.122012.451688.121.891612.34
10002756.32.182245.83088.50.342987.452923.431.592877.122068.71.351986.21
b06
100887.351.78843.21757.651.27703.56634.211.29589.17578.121.47534.12
500956.881.12895.341189.262.121098.23894.321.87824.66798.121.34712.67
10001278.90.781209.561998.321.091885.121856.231.671826.771098.72.13978.55
Table 5. Calculation of availability using various workloads.

No. of Tasks | ACO: q, δ, Best | GA: q, δ, Best | PSO: q, δ, Best | TAFFA: q, δ, Best
b01
10082.771.1276.5793.40.6881.2589.671.7781.7697.880.8794.78
50074.362.5668.7887.990.8784.3784.321.3679.5795.430.3693.89
100082.181.0978.3578.970.3672.8877.561.2371.3696.570.7895.26
b02
10088.760.9884.3192.640.7990.3288.981.1285.3296.760.8795.12
50079.881.1272.7789.570.4385.3478.161.2976.1195.772.7892.18
100069.891.0467.5377.840.4372.6782.181.9977.4397.890.5796.39
b03
10069.381.8864.3879.782.5676.4574.581.2870.3396.210.7794.59
50076.881.5469.8784.780.8581.2781.382.5778.6795.330.8794.17
100079.581.2977.9985.221.7783.4685.310.8779.9697.550.7195.33
b04
10058.991.7654.7867.371.9363.2169.881.4567.3694.540.5293.21
50077.292.7871.3779.091.5771.3682.551.1180.0297.110.5496.22
100082.461.8478.6781.371.4279.8691.243.7588.6798.310.2697.55
b05
10057.861.6553.4764.991.1259.3376.392.9073.2494.390.7292.66
50064.531.6363.5368.362.3762.1978.113.0867.6697.130.6395.14
100073.321.5470.3874.473.1268.8483.312.1379.5798.410.3596.88
b06
10055.331.1551.7658.180.3451.5771.82.2168.9995.310.2592.16
50063.322.9859.4466.112.1459.8874.321.8771.5696.180.9194.55
100072.312.4669.4679.572.8674.3378.111.1276.8598.420.4196.11
Table 6. Calculation of success rate using various workloads.

No. of Tasks | ACO: q, δ, Best | GA: q, δ, Best | PSO: q, δ, Best | TAFFA: q, δ, Best
b01
10084.552.8379.3386.110.5582.0982.171.3881.3696.310.6193.54
50074.241.3271.8879.241.5377.3379.331.2174.2195.770.5994.11
100068.111.8267.3372.110.7569.1276.410.9272.3397.120.2196.02
b02
10045.871.5442.7756.121.3852.1164.771.7662.9997.350.2192.37
50068.332.1163.6665.212.1762.3578.462.8270.6696.780.4393.71
100072.111.8269.5478.171.6671.2282.871.1278.7797.690.1995.32
b03
10055.161.5452.1857.342.8651.1268.331.5363.1293.310.5791.78
50068.332.2662.1366.151.0861.3173.542.5769.5496.170.2194.09
100077.352.6771.3972.111.7367.3682.122.0977.1397.650.5695.21
b04
10058.560.5754.3367.162.5763.1262.771.2959.3697.961.2494.31
50061.352.1159.8371.881.8868.5769.191.3862.1898.122.1295.67
100073.121.6770.7278.242.3673.5278.332.8971.3898.892.5696.88
b05
10054.881.7750.5664.221.8761.3369.861.3766.7897.870.7896.36
50064.781.8861.6769.881.3465.4673.552.8770.3796.790.5795.32
100077.471.9371.9972.172.0968.2181.261.7779.9898.811.1697.26
b06
10046.771.7745.1448.121.3145.3153.521.2150.2195.880.9294.32
50062.770.8761.3659.712.7654.6868.311.3266.9897.670.3795.12
100078.331.3273.1774.161.8871.2772.311.1070.3298.320.5696.21
Table 7. Calculation of turnaround efficiency using various workloads.

No. of Tasks | ACO: q, δ, Best | GA: q, δ, Best | PSO: q, δ, Best | TAFFA: q, δ, Best
b01
10048.421.3344.1751.131.4849.8358.721.8653.5697.280.5795.27
50056.120.9752.1759.351.7156.1760.361.3758.7796.480.2696.39
100062.772.1858.5366.132.8263.9163.671.3760.3298.170.9297.11
b02
10053.311.2256.7861.821.6459.5668.371.8865.1897.321.8895.36
50061.872.5659.9268.331.2462.1871.371.3569.8896.752.2194.96
100068.991.3662.3769.981.7267.3876.671.4172.5195.771.1296.31
b03
10068.761.0966.3269.171.2566.1272.862.5369.7696.311.4795.17
50070.351.1168.1874.851.6371.2975.471.6574.1898.481.2896.37
100075.210.9873.8779.631.4275.1882.571.8779.2398.760.7697.46
b04
10064.651.2161.3774.172.4770.8777.671.6272.1995.430.1294.98
50071.772.5768.7179.371.5776.7379.372.1575.1896.190.3295.27
100077.311.7573.2183.172.5379.2182.562.3779.6798.470.5696.76
b05
10049.671.2947.3853.260.2751.3765.781.6763.6796.870.4695.21
50052.771.2151.2158.540.7857.3169.111.7968.6696.380.2195.55
100064.361.6261.6660.671.0758.2676.881.8973.2199.080.2198.43
b06
10066.881.7464.9262.272.5958.6570.391.2169.8797.570.6897.01
50073.531.3671.3867.182.1663.1975.771.9571.4896.980.5496.22
100078.281.7376.4975.381.5372.3680.841.8278.3799.150.2698.82
Table 8. Improvement in makespan over existing algorithms.

Workload | ACO | GA | PSO
b01 | 25.09% | 37.6% | 32.38%
b02 | 27.73% | 26.62% | 16.92%
b03 | 25.49% | 30.1% | 25.65%
b04 | 11% | 19.86% | 16.74%
b05 | 22.55% | 32.08% | 33.48%
b06 | 27.45% | 35.75% | 23.11%
Table 9. Improvement in availability over existing algorithms.

Workload | ACO | GA | PSO
b01 | 27.28% | 19.54% | 22.4%
b02 | 27.4% | 15.32% | 19.02%
b03 | 34.64% | 17.93% | 24.47%
b04 | 42.98% | 34.82% | 22.87%
b05 | 53.56% | 49.96% | 29.62%
b06 | 58.49% | 55.29% | 30.25%
Table 10. Improvement in success rate over existing algorithms.

Workload | ACO | GA | PSO
b01 | 30.48% | 24.84% | 24.86%
b02 | 66.7% | 53.79% | 33.42%
b03 | 53.56% | 58.11% | 38.89%
b04 | 56.82% | 40.23% | 49.48%
b05 | 60.08% | 48.43% | 33.78%
b06 | 65.14% | 72.36% | 55.55%
Table 11. Improvement in turnaround efficiency over existing algorithms.

Workload | ACO | GA | PSO
b01 | 88.75% | 71.57% | 67.62%
b02 | 59.21% | 51.91% | 38.33%
b03 | 38.92% | 36.24% | 29.77%
b04 | 41.85% | 26.77% | 26.55%
b05 | 82.38% | 73.66% | 41.04%
b06 | 37.8% | 51.41% | 33.18%
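The percentage gains reported in Tables 8–11 are consistent with the usual relative-improvement calculation, taken over the task counts of each workload block; the form below is stated as an assumption rather than as the authors' exact computation.

```latex
% Assumed form of the reported improvement of TAFFA over a baseline A \in \{ACO, GA, PSO\}.
\mathrm{Imp}_{\text{makespan}}(A) = \frac{ma_{A} - ma_{\text{TAFFA}}}{ma_{A}} \times 100\%
    % makespan: lower is better
\mathrm{Imp}_{X}(A) = \frac{X_{\text{TAFFA}} - X_{A}}{X_{A}} \times 100\%, \qquad X \in \{AV, SR, TE\}
    % availability, success rate, turnaround efficiency: higher is better
```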
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
