Article

Flexible Offloading and Task Scheduling for IoT Applications in Dynamic Multi-Access Edge Computing Environments

1
Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
2
CICT Mobile Communication Technology Co., Ltd., Beijing 100083, China
3
Key Laboratory of Cognitive Radio and Information Processing, Ministry of Education, Guilin University of Electronic Technology, Guilin 541004, China
4
School of Statistics and Data Science, Beijing Wuzi University, Beijing 101149, China
*
Author to whom correspondence should be addressed.
Symmetry 2023, 15(12), 2196; https://doi.org/10.3390/sym15122196
Submission received: 24 September 2023 / Revised: 1 December 2023 / Accepted: 12 December 2023 / Published: 14 December 2023
(This article belongs to the Special Issue Asymmetrical Network Control for Complex Dynamic Services)

Abstract: Nowadays, multi-access edge computing (MEC) has been widely recognized as a promising technology that can support a wide range of new applications for the Internet of Things (IoT). In dynamic MEC networks, both the heterogeneous computation capacities of the edge servers and the diversified requirements of the IoT applications are asymmetric, so where and when to offload and schedule the time-dependent tasks of IoT applications remains a challenge. In this paper, we propose a flexible offloading and task scheduling scheme (FLOATS) to adaptively optimize the computation offloading decisions and scheduling priority sequences for time-dependent tasks in dynamic networks. We model the dynamic optimization problem as a multi-objective combinatorial optimization problem over an infinite time horizon, which is intractable to solve. To address this, a rolling-horizon-based optimization mechanism is designed to decompose the dynamic optimization problem into a series of static sub-problems. A genetic algorithm (GA)-based computation offloading and task scheduling algorithm is proposed for each static sub-problem. This algorithm encodes feasible solutions into two-layer chromosomes, and the optimal solution is obtained through chromosome selection, crossover and mutation operations. The simulation results demonstrate that the proposed scheme can effectively reduce network costs in comparison to other reference schemes.

1. Introduction

Nowadays, the Internet of Things (IoT) has entered a new era of “Internet of Everything” [1]. Explosively growing numbers of smartphones, wearable devices, autonomous cars and intelligent robots are being connected to the network. The rapid proliferation of IoT mobile devices (MDs) has spurred the development of numerous intelligent applications, such as virtual reality and autonomous driving [2], and has raised higher demands on the computation capacities of IoT networks [3]. In comparison to conventional data transmission services, intelligent application services exhibit more complex and diversified characteristics and demands in both task composition and quality of service (QoS). On the one hand, these application services are typically composed of multiple time-dependent tasks that employ diverse information technologies to collaboratively accomplish functions such as identification, detection and comprehensive decision-making. On the other hand, the computing-intensive and delay-sensitive tasks of intelligent applications put forward higher performance requirements for real-time and high-speed processing. Handling all application services on the resource-constrained MD side is infeasible due to the limited computing resources and battery power of the MDs.
Multi-access edge computing (MEC) has emerged as one promising technology that overcomes the above limitations [4]. Compared to mobile cloud computing with high transmission latency and local processing with low processing efficiency, MEC can effectively support a wide range of intelligent application services by bringing computing resources closer to the MDs. Currently, the research on computation offloading in MEC has emerged as a hotspot, drawing considerable attention from both academia and industry [5].
Currently, the existing computation offloading strategies typically concentrate on the independent task offloading or time-dependent task offloading of a single MD, which mostly resolves the issue of “where” to offload. The offloading problems are often modeled as combinatorial optimization or mixed-integer optimization problems to minimize the delay [6,7,8,9], the energy consumption [10,11] or a trade-off among multiple performance metrics [12,13,14,15,16]. A variety of optimization methods, such as the Lyapunov algorithm [8,12], heuristic algorithms [11,17,18,19], swarm intelligence algorithms [13,20,21] and deep reinforcement learning [7,22,23,24], are widely adopted to solve task offloading problems. However, for applications that contain multiple time-dependent tasks, the computation offloading strategies should focus not only on “where” to offload but also on “when” to offload.
To comprehensively address this challenging problem, several aspects should be considered. To meet the application requirements of massive IoT MDs, there is an urgent need for network collaboration to sufficiently explore the potential wireless and computing capacities of heterogeneous MEC networks. To improve the service quality of applications, computation offloading decisions and task scheduling sequences should be jointly optimized under time-dependent constraints to determine where and when to offload and schedule the tasks. Obviously, this is a non-convex combinatorial optimization problem that is NP-hard and challenging for general mathematical methods to solve. The feasible solution space is so large that exhaustive search methods cannot acquire the optimal solution in a short time. Metaheuristic algorithms, including the genetic algorithm (GA) [25,26], hybrid algorithms [27] and the red deer algorithm [28], can effectively address complex combinatorial optimization problems by simulating natural biological processes, and are widely applied to solve job scheduling and resource allocation problems in industry [29]. Furthermore, a more flexible and fine-grained online offloading and scheduling mechanism should be designed to adaptively resolve resource competition and balance the multiple performance metrics of dynamic scenarios.
In this paper, we propose a flexible offloading and task scheduling scheme (FLOATS) to address the computation offloading and task scheduling optimization problem for time-dependent IoT applications in dynamic asymmetric MEC networks. Here, we use the multi-connectivity technology to more flexibly utilize the heterogeneous wireless and computing resources. The dynamic computation offloading and task scheduling problem can be modeled as a multi-objective optimization problem in an infinite time horizon, aiming to balance the serving latency, energy consumption and tardiness performances of continuously arriving application tasks. To solve the above problem, we adopt a rolling horizon approach to transform the dynamic optimization problem into a series of static sub-problems. A GA-based computation offloading and task scheduling algorithm has been developed for each static sub-problem. This algorithm utilizes a two-layer chromosome construction to describe the offloading and scheduling solutions for multiple application tasks and obtain the optimal solution through chromosome selection, crossover and mutation operations. The main contributions of this paper are summarized as follows:
  • Taking into account the diversified requirements and time-dependent characteristics of multiple applications, we utilize the computation offloading decision and task scheduling priority to describe the offloading and scheduling solutions of multiple application tasks based on the multi-connectivity technology. The problem of computation offloading and task scheduling for applications that continuously arrive can be modeled as a multi-objective combinatorial optimization problem in an infinite time horizon, making it challenging to solve.
  • To address this problem, we propose the FLOATS scheme to flexibly and adaptively coordinate and orchestrate the wireless and computing resources of dynamic networks among multiple application tasks. We divide the infinite time into numerous discrete time intervals named rolling horizon windows (RHW). By periodically updating the network and application information in each RHW, the rolling-horizon-based optimization and scheduling mechanism can decompose the dynamic intractable optimization problem into a series of static sub-problems that are easy to solve.
  • For the static sub-problem in each RHW, a GA-based computation offloading and task scheduling algorithm is proposed to find the optimal solution. We utilize a two-layer chromosome construction to describe the offloading and scheduling solutions of the sub-problems. In order to enhance the searching efficiency, we adopt multiple methods to initialize the population chromosomes to avoid the GA algorithm falling into the local optimal solution prematurely. Furthermore, various crossover and mutation operations are applied to each layer of genes in the chromosomes to ensure the feasibility and validity of the chromosomes in each generation.
  • Extensive simulation results validate the proposed optimization mechanism and algorithm performance. In comparison to other reference schemes, the proposed scheme demonstrates a significant reduction in network cost and enhances the efficiency in resource utilization.
The rest of this paper is organized as follows. Section 2 provides an overview of the works related to this paper. Then, we introduce the general network architecture and define some system models in Section 3. The problem formulation is given in Section 4. Section 5 provides a detailed description of the FLOATS scheme, and Section 6 details the GA-based computation offloading and task scheduling algorithm. The simulation results are presented and analyzed in Section 7. Finally, Section 8 concludes this paper.

2. Related Work

There has been a large amount of work dedicated to computation offloading problems in MEC networks, although most of these existing works concentrate on the computation offloading and resource allocation strategies of independent tasks [5]. According to the optimization objectives, the computation offloading schemes for independent tasks can generally be divided into several categories, such as single-objective optimization for delay minimization [6,7,8,9] or energy consumption minimization [10,11], and multi-objective optimization for balancing multiple performance metrics such as delay, energy consumption or load variance [12,13].
With the continuous emergence of intelligent applications, computation offloading for application-level services has gradually become a research focus. In practice, most computing-intensive and delay-sensitive applications can be decomposed into several time-dependent tasks, which makes fine-grained computation offloading and task scheduling possible. Meanwhile, flexible and efficient management and orchestration of multiple nearby MEC servers for time-dependent tasks can greatly increase the serving quantity of applications, reduce task processing and waiting delays, and improve resource utilization efficiency.
In the research of computation offloading for application-level services, the time-dependent characteristics of tasks within the same application can be modeled as a directed acyclic graph (DAG) [17,18,20,21,22,30,31,32,33], based on which the tasks can be sequentially offloaded to multiple MEC servers to improve service quality. Xu et al. in [33] proposed a novel offloading algorithm for time-dependent tasks, which minimized the makespan by finding the dynamic critical path based on the task graph. Chen et al. in [17] proposed a heuristic algorithm called Daas to jointly optimize the offloading and scheduling problem of DAG-type applications. Al-Habob et al. in [20] developed two heuristic algorithms that used a genetic algorithm and conflict graph model to reduce the latency and offloading failure probability for time-dependent tasks. Liu et al. in [21] designed an algorithm based on integer particle swarm optimization (IPSO) to collaboratively offload the time-dependent tasks to multiple edge nodes. In [22], Yan et al. considered an MEC system with a single access point and an MD, and proposed a deep reinforcement learning framework based on the actor–critic learning structure to jointly optimize offloading decisions and resource allocation.
However, the above literature concentrated on scenarios involving a single application and multiple MEC nodes. Shi et al., in [30], proposed a fuzzy-based mobile edge architecture with task partitioning to efficiently offload tasks of IoT applications in multi-layer MEC networks. Fu et al. in [18] considered the fog/edge collaborative system and developed a priority and dependency-based DAG tasks offloading algorithm (PDAGTO) to minimize the application delay while meeting energy consumption requirements. Liu et al. in [31] obtained the optimal task offloading and resource allocation policy by using the Lagrangian dual method with the objective of minimizing the task serving latency. While the dependent task offloading scenarios mentioned above are mostly static, it is imperative to take into account the dynamics of the networks when offloading the application tasks in reality. Mahmoodi et al. in [19] proposed an online heuristic strategy for multi-RAT-enabled mobile devices to achieve the optimal computation offloading decisions of multi-component applications. It should be noted that the research on computation offloading and task scheduling of time-dependent applications in dynamic and heterogeneous MEC networks is not sufficient at present. The fine-grained, flexible and adaptive computation offloading and task scheduling mechanisms for time-dependent applications in complex MEC network scenarios need to be further studied. The comparison of the existing computation offloading strategies is highlighted in Table 1.

3. System Model

In this section, we describe the general network architecture and define some system models that are used in our scheme.

3.1. Network Model

We consider a dynamic heterogeneous MEC network which is composed of multiple base stations (BSs) and smart MDs, as shown in Figure 1. Let $\mathcal{M} = \{1, 2, \ldots, m, \ldots, M\}$ denote the set of the BSs. To cope with computation-intensive tasks, each BS is equipped with an MEC server. In the heterogeneous MEC network, multiple BSs with diversified radio access technologies (RATs), transmission frequency bands and computation capacities coexist in the system. Let $\mathcal{U} = \{1, 2, \ldots, u, \ldots, U\}$ denote the set of the smart MDs with limited computation capacities. The applications of the MDs, which are composed of several time-dependent tasks with diversified requirements, arrive dynamically. To effectively utilize the wireless and computing resources in the heterogeneous MEC network, we assume that all smart MDs in the system support the multi-connectivity technology and can access several BSs simultaneously. Under this condition, the application tasks can be jointly and cooperatively completed by multiple BSs.
In dynamic scenarios, the MDs and network environment are both time-varying. Here, we propose an SDN-based network architecture to facilitate the adaptive orchestration and collaborative scheduling of multi-dimensional network resources in the heterogeneous MEC network. A centralized SDN controller is introduced in the system which is responsible for network and MD state information collection and aggregation, collaborative offloading and task scheduling optimization and execution.

3.2. Application Model

We consider a time-dependent, computation-intensive and delay-sensitive application model here. For each MD in $\mathcal{U}$, the dynamic arrival of its applications follows a homogeneous Poisson process with an average rate $\lambda$. Let $\mathcal{AP} = \mathcal{AP}_1 \cup \mathcal{AP}_2 \cup \cdots \cup \mathcal{AP}_u \cup \cdots \cup \mathcal{AP}_U$ denote the application set in the system, where $\mathcal{AP}_u = \{AP_{u1}, AP_{u2}, \ldots, AP_{uv}, \ldots, AP_{uV_u}\}$ is the application set of MD $u \in \mathcal{U}$ and $V_u$ is the total number of applications of MD $u$. Specifically, we use $t_{uv}^{a}$ to denote the arrival time of the application $AP_{uv}$.
We assume each application can be decomposed into multiple time-dependent tasks with diversified requirements; e.g., a generic augmented reality (AR) application is composed of tracking, rendering, interaction, calibration and registration. The time-dependency relationships between tasks in application $AP_{uv}$ can be described by a directed acyclic graph (DAG) $G_{uv} = (\mathcal{T}_{uv}, \mathcal{E}_{uv})$, where $\mathcal{T}_{uv} = \{T_{uv1}, T_{uv2}, \ldots, T_{uvn}, \ldots, T_{uvN_{uv}}\}$ is the set of tasks. Each task $T_{uvn}$ in $\mathcal{T}_{uv}$ is computation-intensive and delay-sensitive, and can be described by a triple $\{R_{uvn}, Z_{uvn}, D_{uvn}\}$, where $R_{uvn}$, $Z_{uvn}$ and $D_{uvn}$ are the input data size, the demanded central processing unit (CPU) cycles and the recommended maximum serving delay of task $T_{uvn}$, respectively. Here, we use $\mathcal{T} = \mathcal{T}_{11} \cup \mathcal{T}_{12} \cup \cdots \cup \mathcal{T}_{uv} \cup \cdots \cup \mathcal{T}_{UV_U}$ $(AP_{uv} \in \mathcal{AP}_u, u \in \mathcal{U})$ to denote all application tasks in the network.
We first define the predecessor task set to characterize the time dependency of the tasks within the same application as follows:
Definition 1
(Predecessor task set). We define the task set consisting of all tasks that must be executed before $T_{uvn}$ within the same application as the predecessor task set $\mathcal{T}_{uvn}^{p}$ of $T_{uvn}$. For example, the predecessor task set of $T_{113}$ is $\mathcal{T}_{113}^{p} = \{T_{111}, T_{112}\}$.
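As a minimal illustration of Definition 1, the sketch below (not the authors' code; task ids and edges are hypothetical) derives each task's predecessor set from a DAG by taking the transitive closure of the direct-precedence edges:

```python
# Sketch: derive predecessor task sets (Definition 1) from a DAG.
from collections import defaultdict

def predecessor_sets(tasks, edges):
    """Return, for each task, the set of ALL tasks that must finish before it
    (i.e., its transitive predecessors in the DAG)."""
    direct = defaultdict(set)
    for src, dst in edges:          # edge (src, dst): src must precede dst
        direct[dst].add(src)

    memo = {}
    def preds(t):
        if t not in memo:
            s = set()
            for p in direct[t]:
                s.add(p)            # direct predecessor
                s |= preds(p)       # plus everything before it
            memo[t] = s
        return memo[t]

    return {t: preds(t) for t in tasks}

# Chain mirroring the paper's example: T_113 is preceded by T_111 and T_112.
tasks = ["T111", "T112", "T113"]
edges = [("T111", "T112"), ("T112", "T113")]
```

For the chain above, `predecessor_sets(tasks, edges)["T113"]` yields `{"T111", "T112"}`, matching the example in Definition 1.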

3.3. Task Offloading and Scheduling Model

Here, we adopt the time division duplex (TDD) mode in the system. The characteristics of the offloading process can be summarized as follows.
  • The order of the time-dependent tasks for each application is predefined, and the MEC network cannot start transmitting and processing the task until all its predecessor tasks have been processed.
  • Each application can be coordinately processed by multiple BSs, but each task can be processed by only one of its candidate BSs.
  • Each BS and MEC server can transmit and process at most one task at a time.
  • The task is inseparable and non-preemptive; once it starts, the processing of the task cannot be stopped or paused until it is completed.
To effectively coordinate network resources and orchestrate task scheduling, both the computation offloading decision and the task scheduling sequence should be determined to figure out where and when to offload and schedule the time-dependent tasks. Here, we use the vector $A = [a_{111}, a_{112}, \ldots, a_{uvn}, \ldots]$ $(T_{uvn} \in \mathcal{T})$ to express the offloading decisions of all tasks, where the integer parameter $a_{uvn} \in \{0, 1, 2, \ldots, m, \ldots, M\}$ represents where the task $T_{uvn}$ should be executed. Specifically, $a_{uvn} = 0$ means the task $T_{uvn}$ is executed locally, and $a_{uvn} = m$ $(m \in \mathcal{M})$ indicates the task $T_{uvn}$ is offloaded to and executed on the $m$-th MEC server. The vector $O = [o_{111}, o_{112}, \ldots, o_{uvn}, \ldots]$ is used to represent the scheduling priority sequence of all the tasks in the network, where $o_{uvn}$ is a non-repetitive integer between 1 and $|\mathcal{T}|$; $o_{uvn} = 1$ indicates that the task $T_{uvn}$ has the highest scheduling priority, and $o_{uvn} = |\mathcal{T}|$ means that the task $T_{uvn}$ has the lowest scheduling priority. Obviously, $O$ is a vector with complex constraints. For $o_{uvn}$, we always have
$$o_{uvn} > o_{uvn'} \quad (\forall T_{uvn'} \in \mathcal{T}_{uvn}^{p}).$$
Definition 2
(High-priority task set). We define the task set consisting of all tasks that have higher priorities and should be processed earlier than $T_{uvn}$ $(AP_{uv} \in \mathcal{AP}_u, 1 \le n \le N_{uv})$ as $T_{uvn}$'s high-priority task set $\mathcal{H}_{uvn}^{p}$, which can be written as follows:
  • If the task $T_{uvn}$ is processed locally $(a_{uvn} = 0)$,
    $$\mathcal{H}_{uvn}^{p} \triangleq \{T_{uv'n'} \mid o_{uv'n'} < o_{uvn} \,\&\, a_{uv'n'} = 0,\; T_{uv'n'} \in \mathcal{T}\} \cup \mathcal{T}_{uvn}^{p}.$$
  • If the task $T_{uvn}$ is offloaded to the MEC server $(a_{uvn} = m,\ m \in \mathcal{M})$,
$$\mathcal{H}_{uvn}^{p} \triangleq \{T_{u'v'n'} \mid o_{u'v'n'} < o_{uvn} \,\&\, a_{u'v'n'} = a_{uvn} \,\&\, T_{u'v'n'} \notin \mathcal{T}_{uvn}^{p},\; T_{u'v'n'} \in \mathcal{T}\} \cup \mathcal{T}_{uvn}^{p}.$$
For the sake of clarity, we take $A = [a_{111}=0, a_{112}=2, a_{113}=0, a_{121}=2, a_{122}=0, a_{123}=2, a_{211}=1, a_{212}=2, a_{213}=0, a_{311}=0, a_{312}=1, a_{313}=1]$ and $O = [o_{111}=4, o_{112}=6, o_{113}=11, o_{121}=3, o_{122}=8, o_{123}=9, o_{211}=1, o_{212}=5, o_{213}=12, o_{311}=2, o_{312}=7, o_{313}=10]$ as an example. For $T_{113}$, which is locally processed, the high-priority task set is $\mathcal{H}_{113}^{p} = \{T_{122}\} \cup \mathcal{T}_{113}^{p} = \{T_{111}, T_{112}, T_{122}\}$. For $T_{123}$, which is offloaded to the 2nd MEC server, the high-priority task set is $\mathcal{H}_{123}^{p} = \{T_{112}, T_{212}\} \cup \mathcal{T}_{123}^{p} = \{T_{121}, T_{122}, T_{112}, T_{212}\}$.
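The worked example above can be checked mechanically. The sketch below is an assumed helper (not the authors' code): offloading decisions, priorities and the chain-shaped predecessor sets are copied from the text, and task ids are encoded as strings "uvn":

```python
# Sketch: compute the high-priority task set H^p of Definition 2.
def high_priority_set(t, a, o, preds, tasks):
    """Tasks scheduled earlier at the same location as t, plus t's
    predecessors."""
    earlier_same_loc = {
        s for s in tasks
        if s != t
        and o[s] < o[t]
        and a[s] == a[t]
        # in local mode (a == 0) only tasks of the same MD share the processor
        and (a[t] != 0 or s[0] == t[0])
    }
    return earlier_same_loc | preds[t]

# Offloading decisions and priorities from the example in the text.
a = {"111": 0, "112": 2, "113": 0, "121": 2, "122": 0, "123": 2,
     "211": 1, "212": 2, "213": 0, "311": 0, "312": 1, "313": 1}
o = {"111": 4, "112": 6, "113": 11, "121": 3, "122": 8, "123": 9,
     "211": 1, "212": 5, "213": 12, "311": 2, "312": 7, "313": 10}
tasks = list(a)
# within each application uv, task n is preceded by tasks 1..n-1
preds = {t: {t[:2] + str(n) for n in range(1, int(t[2]))} for t in tasks}
```

Calling `high_priority_set("113", a, o, preds, tasks)` reproduces $\{T_{111}, T_{112}, T_{122}\}$, and `high_priority_set("123", ...)` reproduces $\{T_{121}, T_{122}, T_{112}, T_{212}\}$, as in the example.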

3.4. Latency Model

3.4.1. Local Computing Mode

When the task $T_{uvn}$ is executed locally ($a_{uvn} = 0$), the starting time $s_{uvn}$ of task $T_{uvn}$ can be expressed by:
$$s_{uvn} = \begin{cases} \max\{s_u, t_{uv}^{a}\} & \text{if } \mathcal{H}_{uvn}^{p} = \emptyset \\ c_{uvn}^{p} & \text{otherwise,} \end{cases}$$
where $s_u$ is the available serving time of MD $u$, and $c_{uvn}^{p}$ is the maximum completion time of all high-priority tasks of $T_{uvn}$ in $\mathcal{H}_{uvn}^{p}$, which can be expressed by $c_{uvn}^{p} = \max\{c_{uv'n'} \mid T_{uv'n'} \in \mathcal{H}_{uvn}^{p}\}$. Thus, the completion time $c_{uvn}$ of task $T_{uvn}$ can be written as
$$c_{uvn} = s_{uvn} + \frac{Z_{uvn}}{F_u^{local}},$$
where $F_u^{local}$ is the local computation capability of MD $u$.
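The local-mode timing rules above can be written as a small helper; this is a minimal sketch (function name, argument order and units are assumptions, not from the paper):

```python
# Sketch of the local-computing timing recursion: start once both the MD is
# free and all higher-priority work has completed; finish Z/F seconds later.
def local_times(arrival, md_free, hp_completions, Z, F_local):
    """Return (start, completion) of a locally executed task.
    hp_completions: completion times of the tasks in H^p (may be empty)."""
    if not hp_completions:             # H^p is empty
        start = max(md_free, arrival)
    else:
        start = max(hp_completions)    # c^p: latest high-priority finish
    completion = start + Z / F_local   # execution takes Z / F^local seconds
    return start, completion
```

For instance, a task of $Z = 4\times10^9$ cycles on a $2\,$GHz MD that arrives at $t=2\,$s with no high-priority tasks starts at $2\,$s and completes at $4\,$s.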

3.4.2. MEC Offloading Mode

When the task is offloaded to the m-th MEC server ( a u v n = m ) for execution, the wireless transmission time and the execution time based on the task offloading and scheduling sequence should both be considered comprehensively.
According to the Shannon theorem [34], the uplink transmission rate from MD $u$ to BS $m$ is
$$\gamma_{um} = B \log\left(1 + \frac{p_u g_{um}}{\sigma^2}\right),$$
where $B$ is the bandwidth of each BS, $p_u$ is the uplink transmit power of MD $u$, and $g_{um}$ and $\sigma^2$ are the wireless channel gain between MD $u$ and BS $m$ and the noise power, respectively.
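For intuition, the rate formula is easy to evaluate numerically. This sketch assumes a base-2 logarithm (bits per second, as is conventional for the Shannon capacity) and uses illustrative parameter values that are not from the paper:

```python
# Sketch: numeric evaluation of the uplink-rate formula.
import math

def uplink_rate(B, p_u, g_um, sigma2):
    """Shannon rate of the MD-to-BS uplink: B * log2(1 + SNR)."""
    return B * math.log2(1 + p_u * g_um / sigma2)

# Example: 10 MHz bandwidth and an SNR of 3, i.e. log2(4) = 2 bit/s/Hz,
# giving 20 Mbit/s.
rate = uplink_rate(10e6, 0.3, 1e-6, 1e-7)
```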
Therefore, the starting time $s_{uvn}$ of $T_{uvn}$ is
$$s_{uvn} = \begin{cases} s_m^{tr} & \text{if } \mathcal{H}_{uvn}^{p} = \emptyset \\ \max\{s_{uvn}^{p}, c_{uvn}^{p}\} & \text{otherwise,} \end{cases}$$
where $s_m^{tr}$ is the available transmission time of BS $m$, and $s_{uvn}^{p} = \max\{s_{u'v'n'} + R_{u'v'n'}/\gamma_{u'm} \mid T_{u'v'n'} \in \mathcal{H}_{uvn}^{p}\}$ is the available time for MD $u$ to upload the input data of $T_{uvn}$ to BS $m$ through uplink transmission. $c_{uvn}^{p} = \max\{c_{uvn'} \mid T_{uvn'} \in \mathcal{T}_{uvn}^{p}\}$ is the maximum completion time of all predecessor tasks of $T_{uvn}$, which can also be regarded as the arrival time of $T_{uvn}$.
The completion time $c_{uvn}$ of task $T_{uvn}$ can be given by:
$$c_{uvn} = \max\left\{s_{uvn} + \frac{R_{uvn}}{\gamma_{um}},\; \hat{c}_{uvn}^{p},\; s_m^{com}\right\} + \frac{Z_{uvn}}{F_m^{mec}},$$
where $s_m^{com}$ is the available serving time of the MEC server in BS $m$, $F_m^{mec}$ is the computation capability of the $m$-th MEC server, and $\hat{c}_{uvn}^{p} = \max\{c_{u'v'n'} \mid T_{u'v'n'} \in \mathcal{H}_{uvn}^{p}\}$ is the available time for $T_{uvn}$ to be executed at the $m$-th MEC server.
Thus, the serving latency $l_{uvn}$ of $T_{uvn}$ can be defined as
$$l_{uvn} = \begin{cases} c_{uvn} - t_{uv}^{a} & \text{if } \mathcal{T}_{uvn}^{p} = \emptyset \\ c_{uvn} - c_{uvn}^{p} & \text{otherwise.} \end{cases}$$
For delay-sensitive tasks, long serving latency can have an adverse impact on the service quality of the application. Therefore, we need to reduce the number of tardy tasks as much as possible. We use a binary parameter $d_{uvn}$ to indicate whether the serving latency $l_{uvn}$ of $T_{uvn}$ exceeds the recommended maximum serving delay $D_{uvn}$ or not, which can be written as
$$d_{uvn} = \begin{cases} 1 & \text{if } l_{uvn} - D_{uvn} > 0 \\ 0 & \text{otherwise.} \end{cases}$$

3.5. Energy Model

3.5.1. Local Computing Mode

When task $T_{uvn}$ is executed locally, the energy consumption of $T_{uvn}$ on MD $u$ can be written as [35]
$$E_{uvn}^{l} = \varepsilon (F_u^{local})^3 \cdot \frac{Z_{uvn}}{F_u^{local}} = \varepsilon (F_u^{local})^2 Z_{uvn},$$
where $\varepsilon$ is the switched capacitance related to the MD's chip architecture.

3.5.2. MEC Offloading Mode

When $T_{uvn}$ is offloaded to BS $m$ $(m \in \mathcal{M})$, the energy consumption of $T_{uvn}$ is mainly caused by the wireless transmission on the MD side, which can be written as
$$E_{uvn}^{m} = p_u \frac{R_{uvn}}{\gamma_{um}}.$$
Thus, the energy consumption $E_{uvn}$ can be defined as
$$E_{uvn} = \begin{cases} E_{uvn}^{l}, & \text{if } a_{uvn} = 0 \\ E_{uvn}^{m}, & \text{if } a_{uvn} = m\ (m \in \mathcal{M}). \end{cases}$$

4. Problem Formulation

In this section, we adopt a weighted sum of the serving latency, energy consumption and number of tardy tasks of the time-dependent application tasks to represent the network cost, and the dynamic computation offloading and task scheduling optimization problem can be formulated as follows:
$$\min_{\{A,O\}} J(\mathcal{T}, A, O) = \min_{\{A,O\}} \left( \omega_1 \sum_{T_{uvn} \in \mathcal{T}} l_{uvn} + \omega_2 \sum_{T_{uvn} \in \mathcal{T}} E_{uvn} + \omega_3 \sum_{T_{uvn} \in \mathcal{T}} d_{uvn} \right)$$
$$\begin{aligned}
\text{s.t.}\quad
& C1: a_{uvn} \in \{0, 1, 2, \ldots, m, \ldots, M\}, \; \forall a_{uvn} \in A. \\
& C2: o_{uvn} \in \{1, 2, \ldots, |\mathcal{T}|\}, \; \forall o_{uvn} \in O. \\
& C3: o_{uvn} \ne o_{u'v'n'}, \; \forall o_{uvn}, o_{u'v'n'} \in O \text{ and } T_{uvn} \ne T_{u'v'n'}. \\
& C4: s_{uvn} \ge t_{uv}^{a}, \; \forall T_{uvn} \in \mathcal{T}. \\
& C5: c_{uvn} \ge s_{uvn}, \; \forall T_{uvn} \in \mathcal{T}. \\
& C6: s_{uvn} \ge c_{uvn'}, \; \forall T_{uvn} \in \mathcal{T}, \; \forall T_{uvn'} \in \mathcal{T}_{uvn}^{p}.
\end{aligned}$$
where $\omega_1$, $\omega_2$ and $\omega_3$ are the weights assigned to the serving latency, energy consumption and tardiness, respectively. Constraints C1, C2 and C3 normalize the feasible solutions of the problem. Specifically, constraint C1 indicates that the computation offloading decisions are non-negative integers between 0 and $M$. Constraints C2 and C3 indicate that the scheduling prioritization decisions are non-repetitive integers between 1 and $|\mathcal{T}|$. Constraints C4, C5 and C6 impose restrictions on the starting and completion times of the time-dependent tasks. Constraints C4 and C5 guarantee that the starting time of each task is later than the arrival time of the application it belongs to, and earlier than its completion time. Constraint C6 states that the starting time of each task should not be earlier than the completion time of its predecessor tasks.
Clearly, the problem (14) is a combinatorial optimization problem in an infinite time domain with complex time-dependent constraints between the tasks within each application, which is NP-hard and difficult to solve by traditional optimization methods. Meanwhile, with the continuously arriving application tasks, we cannot obtain accurate and perfect information about the future networks and application tasks in T in reality. Therefore, it is difficult to find the global optimal solution to the original dynamic programming problem.
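Although problem (14) is hard to optimize, its objective is cheap to evaluate for a candidate schedule; the sketch below (per-task latency, energy and deadline values are made up for illustration) computes the weighted network cost:

```python
# Sketch: evaluate the weighted network cost J for a given schedule outcome.
def network_cost(task_outcomes, w1, w2, w3):
    """task_outcomes: iterable of (latency, energy, deadline) per task."""
    J = 0.0
    for l, E, D in task_outcomes:
        d = 1 if l - D > 0 else 0          # tardiness indicator d_uvn
        J += w1 * l + w2 * E + w3 * d
    return J

# Two tasks: the first meets its deadline, the second is tardy.
outcomes = [(2.0, 0.5, 3.0), (5.0, 1.0, 4.0)]
```

With unit weights, the first task contributes $2.5$ and the tardy second task $7.0$, giving $J = 9.5$.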

5. Flexible Offloading and Task Scheduling Optimization

In this section, we design a flexible offloading and task scheduling scheme, named FLOATS, for time-dependent applications that arrive dynamically over an infinite time horizon. Firstly, we adopt a rolling-horizon-based optimization and scheduling mechanism that divides the infinite time into multiple optimization intervals and further transforms the dynamic programming problem into a number of static sub-problems. In order to solve the sub-problem in each optimization interval, we employ a two-layer chromosome structure to describe the offloading decision and task scheduling priority and propose a GA-based computation offloading and task scheduling algorithm to effectively obtain the optimal solution.

Rolling-Horizon-Based Optimization and Scheduling Mechanism

In order to solve the dynamic programming problem in an infinite time horizon, we propose an online periodic optimization and scheduling mechanism based on the rolling horizon approach [36].
By dividing the infinite time into numerous discrete time intervals of the same length $\tau$, we initially transform the continuous-time optimization into the sum of a series of static discrete-time optimization problems. Here, we define the equal-length time interval as the rolling horizon window (RHW), and describe the $k$-th RHW's time frame as $[k\tau, (k+1)\tau)$ $(k \ge 0)$. We assume that the network condition and application information remain unchanged in each RHW. To solve the dynamic optimization problem, we adopt the rolling-horizon-based optimization and scheduling mechanism, which periodically solves and executes the optimization problem of each RHW according to the updated network and application conditions. Specifically, in the proposed mechanism, each RHW is composed of three parts: the information update part, the problem optimization part and the solution execution part.
In the information update part of each RHW, the centralized SDN controller is responsible for collecting and updating the network conditions, such as the channel gains $\{g_{um} \mid u \in \mathcal{U}, m \in \mathcal{M}\}$ and the available times $\{s_u, s_m^{tr}, s_m^{com} \mid u \in \mathcal{U}, m \in \mathcal{M}\}$ of all the MDs, BSs and MEC servers in the next RHW, and the application information, such as the current task status and the newly arrived tasks. Let $\mathcal{T}^{k\tau}$ denote the task set waiting to be offloaded and scheduled at the beginning of the $k$-th RHW. All tasks within the $k$-th RHW can be divided into four kinds of task sets: the completed task set $\mathcal{T}_{(com)}^{k\tau}$, the ongoing task set $\mathcal{T}_{(on)}^{k\tau}$, the unprocessed task set $\mathcal{T}_{(unpro)}^{k\tau}$ and the newly arrived task set $\mathcal{T}_{(new)}^{k\tau}$, which can be defined as follows:
$$\begin{aligned}
\mathcal{T}_{(com)}^{k\tau} &\triangleq \{T_{uvn} \mid c_{uvn} \le (k+1)\tau,\; T_{uvn} \in \mathcal{T}^{k\tau}\}, \\
\mathcal{T}_{(on)}^{k\tau} &\triangleq \{T_{uvn} \mid s_{uvn} < (k+1)\tau \,\&\, c_{uvn} > (k+1)\tau,\; T_{uvn} \in \mathcal{T}^{k\tau}\}, \\
\mathcal{T}_{(unpro)}^{k\tau} &\triangleq \{T_{uvn} \mid s_{uvn} \ge (k+1)\tau,\; T_{uvn} \in \mathcal{T}^{k\tau}\}, \\
\mathcal{T}_{(new)}^{k\tau} &\triangleq \{T_{uvn} \mid k\tau < t_{uv}^{a} \le (k+1)\tau\}.
\end{aligned}$$
Thus, T k τ can be derived by:
$$\mathcal{T}^{k\tau} = \left(\mathcal{T}^{(k-1)\tau} \setminus \mathcal{T}_{(com)}^{(k-1)\tau} \setminus \mathcal{T}_{(on)}^{(k-1)\tau}\right) \cup \mathcal{T}_{(new)}^{(k-1)\tau} = \mathcal{T}_{(unpro)}^{(k-1)\tau} \cup \mathcal{T}_{(new)}^{(k-1)\tau}.$$
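One bookkeeping step of this task-set update can be sketched as follows; the dictionary-based task records and the treatment of never-scheduled tasks (start time `None`) are assumptions for illustration:

```python
# Sketch: classify the tasks of window k and form the set carried into the
# next window (unprocessed tasks plus new arrivals).
def next_window_tasks(current, new_arrivals, k, tau):
    """current: tasks {'id', 'start', 'completion'} scheduled in window k;
    'start' may be None if the task was never scheduled."""
    end = (k + 1) * tau
    unprocessed = [
        t for t in current
        if t['start'] is None or t['start'] >= end
    ]
    # completed tasks (completion <= end) and ongoing tasks drop out of the
    # optimization; unprocessed ones are re-optimized with the new arrivals
    return unprocessed + list(new_arrivals)

current = [
    {'id': 'a', 'start': 0.0, 'completion': 0.5},   # completed in window 0
    {'id': 'b', 'start': 0.8, 'completion': 1.2},   # ongoing at the boundary
    {'id': 'c', 'start': 1.5, 'completion': 2.0},   # not yet started
]
```

Here, with $\tau = 1$, only task `c` is carried over and re-optimized together with whatever arrives during window 0.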
Thus, we can heuristically solve the original problem (14) by solving the following discrete-time static optimization problem periodically:  
$$\min_{\{A^{k\tau}, O^{k\tau}\}} J(\mathcal{T}^{k\tau}, A^{k\tau}, O^{k\tau}) = \min_{\{A^{k\tau}, O^{k\tau}\}} \left( \omega_1 \sum_{T_{uvn} \in \mathcal{T}^{k\tau}} l_{uvn} + \omega_2 \sum_{T_{uvn} \in \mathcal{T}^{k\tau}} E_{uvn} + \omega_3 \sum_{T_{uvn} \in \mathcal{T}^{k\tau}} d_{uvn} \right)$$
$$\begin{aligned}
\text{s.t.}\quad
& C1: a_{uvn} \in \{0, 1, 2, \ldots, m, \ldots, M\}, \; \forall a_{uvn} \in A^{k\tau}. \\
& C2: o_{uvn} \in \{1, 2, \ldots, |\mathcal{T}^{k\tau}|\}, \; \forall o_{uvn} \in O^{k\tau}. \\
& C3: o_{uvn} \ne o_{u'v'n'}, \; \forall o_{uvn}, o_{u'v'n'} \in O^{k\tau} \text{ and } T_{uvn} \ne T_{u'v'n'}. \\
& C4: s_{uvn} \ge t_{uv}^{a}, \; \forall T_{uvn} \in \mathcal{T}^{k\tau}. \\
& C5: c_{uvn} \ge s_{uvn}, \; \forall T_{uvn} \in \mathcal{T}^{k\tau}. \\
& C6: s_{uvn} \ge c_{uvn'}, \; \forall T_{uvn} \in \mathcal{T}^{k\tau}, \; \forall T_{uvn'} \in \mathcal{T}_{uvn}^{p}.
\end{aligned}$$
where $A^{k\tau}$ and $O^{k\tau}$ are the computation offloading decision and scheduling priority order vectors of $\mathcal{T}^{k\tau}$ in the $k$-th RHW. Clearly, (17) is a combinatorial optimization problem with multiple time constraints. To solve this problem, we propose a GA-based computation offloading and task scheduling algorithm in Section 6.

6. GA-Based Computation Offloading and Task Scheduling Algorithm

The genetic algorithm is a stochastic searching algorithm that seeks the global optimal solution of the problem by simulating the natural evolution process in biological evolution [25,37]. GA converts the solutions into a set of chromosomes and obtains the optimal solution through chromosome selection, crossover and mutation.
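The generic GA loop described above can be sketched independently of the paper's chromosome design; the representation-specific operators are passed in as callables (names, probabilities and the elitism strategy are assumptions, not the authors' exact algorithm):

```python
# Sketch: generic GA skeleton (selection, crossover, mutation, elitism).
import random

def genetic_algorithm(init_pop, fitness, select, crossover, mutate,
                      generations=100, p_c=0.8, p_m=0.1):
    """Minimize `fitness` over chromosomes produced by the given operators."""
    population = init_pop()
    best = min(population, key=fitness)          # keep the best ever seen
    for _ in range(generations):
        nxt = []
        while len(nxt) < len(population):
            p1, p2 = select(population), select(population)
            child = crossover(p1, p2) if random.random() < p_c else p1
            if random.random() < p_m:
                child = mutate(child)
            nxt.append(child)
        population = nxt
        best = min(population + [best], key=fitness)
    return best
```

As a toy usage, minimizing $x^2$ over integers with tournament selection and averaging crossover drives the best chromosome toward 0 within a few dozen generations.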

6.1. Two-Layer Modified Chromosome Construction and Encoding

Chromosome construction and encoding express the solution of the optimization problem in the form of a chromosome, for which the legitimacy, effectiveness and integrity of the solution-space representation must be considered. In this section, we utilize the task offloading decision vector $A^{k\tau}$ and the scheduling priority vector $O^{k\tau}$ to describe a solution $S^{k\tau}$, and further adopt a two-layer segmented coding method to transfer $\{A^{k\tau}, O^{k\tau}\}$ into the chromosome $I^{k\tau}$, which can be expressed by:
$$I^{k\tau} = [A^{k\tau}; O^{k\tau}]$$
where $A^{k\tau}$ and $O^{k\tau}$ are the gene segments of the computation offloading decisions part and the scheduling prioritization decisions part, each of length $|\mathcal{T}^{k\tau}|$.
Each gene in the chromosome is represented by an integer. In the computation offloading decisions part, each gene represents the offloading decision of the corresponding task according to the sequence of task numbers. In the scheduling prioritization decisions part, each gene is directly encoded with the application number, and the order of the application numbers indicates the scheduling sequence of the application tasks. In detail, when the genes in the scheduling prioritization decisions part are read from left to right, the application number $uv$ appearing for the $n$-th time represents the task $T_{uvn}$, and the number of occurrences of the application number $uv$ equals the total number of tasks of the application $AP_{uv}$. A chromosome example and the corresponding encoding solution are illustrated in Figure 2, and the Gantt chart corresponding to the solution decoded from the given chromosome example is shown in Figure 3.
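The occurrence-counting decoding rule for the scheduling gene segment can be sketched as follows (the gene values are illustrative, not taken from Figure 2):

```python
# Sketch: decode the scheduling-priority gene segment. Reading application
# numbers left to right, the n-th occurrence of application uv denotes task
# T_uvn.
from collections import defaultdict

def decode_priority_genes(genes):
    """Map a list of application ids to the ordered list of (app, task) ids."""
    seen = defaultdict(int)
    order = []
    for app in genes:
        seen[app] += 1                 # n-th occurrence -> task n of this app
        order.append((app, seen[app]))
    return order
```

For example, the gene string `[11, 12, 11, 12]` decodes to the task sequence $T_{11,1}, T_{12,1}, T_{11,2}, T_{12,2}$. This repetition-based encoding guarantees that every decoded schedule respects the within-application task order, so crossover on this segment can never produce a dependency-violating sequence.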

6.2. Population Initialization

Population initialization is a key step in evolutionary algorithms, and the quality of the initial solutions has a great impact on the convergence speed and solution quality of GA. Here, we adopt two heuristic population generation methods, namely the global heuristic generation method and the local heuristic generation method, combined with the random generation method, to initialize the population with N c chromosomes.
  • Global heuristic generation method: We randomly generate the scheduling prioritization decisions O k τ for all tasks, and heuristically select the global optimal offloading decision for each task sequentially. The detailed steps are as follows:
    • Randomly select an application number from the application set and fill it into O k τ , decode the selected task number according to Section 6.1.
    • Calculate the network cost of the selected task based on the current network state, select the offloading decision with the lowest network cost and fill it into A k τ .
    • Update and record the available time of the MDs and BSs in the current network. If the selected task is the last task of the application, remove the application from the application set.
    • Repeat the first step until the application set is empty.
  • Local heuristic generation method: We randomly generate the scheduling prioritization decisions O k τ for all tasks and heuristically select the offloading decisions with the lowest network cost for all tasks based on the initial network status. The detailed steps are as follows:
    • Randomly select an application number from the application set and fill it into O k τ , decode the selected task number according to Section 6.1.
    • Calculate the network cost of the selected task based on the initial network state, select the offloading decision with the lowest network cost and fill it into A k τ .
    • If the selected task is the last task of the application, remove the application from the application set. Repeat the first step until the application set is empty.
  • Random generation method: We randomly generate the scheduling prioritization decisions O k τ and the offloading decision vector A k τ for all tasks. The detailed steps are as follows:
    • Randomly select an application number from the application set and fill it into O k τ , decode the selected task number according to Section 6.1.
    • Randomly select an offloading decision for the selected task and fill it into A k τ .
    • If the selected task is the last task of the application, remove the application from the application set. Repeat the first step until the application set is empty.
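The three generation methods above can be sketched as follows. The sketch assumes a hypothetical cost oracle `cost_fn(task, target)`; the global and local heuristics differ only in whether that oracle reflects the updated or the initial network state, a distinction collapsed here into a single greedy per-task choice.

```python
import random

def random_priority(app_tasks):
    """Random scheduling prioritization: each application number appears as
    many times as the application has tasks, in random order."""
    genes = [app for app, n in app_tasks.items() for _ in range(n)]
    random.shuffle(genes)
    return genes

def init_population(app_tasks, n_targets, size, props=(0.6, 0.3, 0.1),
                    cost_fn=None):
    """Build the initial population. `n_targets` counts the offloading
    options (local execution plus the MEC servers); `cost_fn` is a
    hypothetical network-cost oracle used by the heuristic chromosomes."""
    population = []
    n_heuristic = int(size * (props[0] + props[1]))
    for i in range(size):
        priority = random_priority(app_tasks)
        if cost_fn is not None and i < n_heuristic:
            # heuristic chromosomes: lowest-cost target per decoded task
            offload = [min(range(n_targets), key=lambda m: cost_fn(t, m))
                       for t in priority]
        else:
            # random chromosomes: uniformly drawn offloading decisions
            offload = [random.randrange(n_targets) for _ in priority]
        population.append((offload, priority))
    return population
```

The `props` default mirrors the 0.6/0.3/0.1 split between the global heuristic, local heuristic and random methods used in the simulations of Section 7.1.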

6.3. Fitness Evaluation and Chromosome Selection

We evaluate the quality of each solution by calculating the fitness value of each chromosome based on the objective function of the problem (17). The fitness function is defined as
F i t n e s s = ξ − J ( T k τ , A k τ , O k τ )
where ξ is a constant value that is large enough to ensure non-negative fitness.
The classical tournament method is adopted in the selection operation to pick out a certain quantity of parent chromosomes for generating the new population through the crossover and mutation operations. In the tournament method, we randomly take several chromosomes from the original population each time and select the chromosome with the best fitness to enter the offspring population. This operation is repeated until the new population reaches the size of the original population.
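The fitness transformation and tournament selection can be sketched as follows (a minimal sketch; the tournament size k = 3 is an assumption, since the paper does not specify it):

```python
import random

def tournament_select(population, fitness_fn, k=3):
    """Tournament selection: draw k chromosomes at random, keep the
    fittest, and repeat until the mating pool matches the population
    size."""
    return [max(random.sample(population, k), key=fitness_fn)
            for _ in range(len(population))]

# Fitness = xi - J(.): subtracting the network cost from a large constant
# xi turns cost minimization into non-negative fitness maximization.
xi = 100.0
population = [12.0, 7.5, 20.3]       # here a "chromosome" is just its cost
selected = tournament_select(population, lambda c: xi - c)
```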

6.4. Crossover and Mutation

The crossover operation determines the global search ability of the genetic algorithm. In the crossover operation, we recombine the gene fragments of A k τ and O k τ of the selected new-population chromosomes with the crossover probability ρ c to generate new individuals. To ensure the legitimacy of the newly generated chromosomes, we use two crossover operations, named uniform crossover and precedence preserving order-based crossover, for the computation offloading decisions part and the scheduling prioritization decisions part, respectively [38].
  • Computation offloading decisions part: uniform crossover. We randomly select multiple crossover positions on A k τ and exchange the corresponding genes of the two parent chromosomes. The detailed steps are as follows:
    • Randomly generate | A k τ | / 2 distinct integers between 1 and | A k τ | .
    • According to the generated integers, exchange corresponding position genes from two parent chromosomes to generate the offspring chromosomes.
    As shown in Figure 4, we randomly generate 6 positions, and exchange the genes at the 2nd, 3rd, 5th, 6th, 8th and 10th positions of the computation offloading decisions parts of the parent chromosomes  [ 0 , 2 , 0 , 2 , 0 , 2 , 1 , 2 , 0 , 0 , 1 , 1 ] , [ 1 , 1 , 0 , 2 , 1 , 2 , 0 , 2 , 2 , 1 , 2 , 1 ] to obtain the computation offloading decisions parts of the offspring chromosomes [ 0 , 1 , 0 , 2 , 1 , 2 , 1 , 2 , 0 , 1 , 1 , 1 ] , [ 1 , 2 , 0 , 2 , 0 , 2 , 0 , 2 , 2 , 0 , 2 , 1 ] .
  • Scheduling prioritization decisions part: precedence preserving order-based crossover. We randomly divide the application set into two subsets. For the tasks of all applications in one subset, we extract the priority gene sequences on O k τ from the two parent chromosomes, exchange the corresponding genes and sequentially fill them back into the gene positions of the two chromosomes. The detailed steps are as follows:
    • Randomly divide the application set into two subsets AP 1 and AP 2 .
    • Search for the gene positions G P 1 , G P 2 and gene sequences G S 1 , G S 2 of tasks in AP 1 from the scheduling prioritization decisions parts of the parent chromosomes I 1 and I 2 .
    • Replace G S 1 with G S 2 and fill it into the gene positions G P 1 in parent chromosome I 1 to obtain the offspring chromosome I 1 , and replace G S 2 with G S 1 and fill it into the gene positions G P 2 in parent chromosome I 2 to obtain the offspring chromosome I 2 .
    As shown in Figure 4, we randomly divide the application set into { 11 , 31 } and { 12 , 21 } . For the scheduling prioritization decisions parts of the parent chromosomes [ 21 , 31 , 12 , 11 , 21 , 11 , 31 , 12 , 12 , 31 , 11 , 21 ] , [ 11 , 12 , 31 , 21 , 21 , 12 , 31 , 12 , 11 , 11 , 21 , 31 ] , we exchange the gene sequences [ 31 , 11 , 11 , 31 , 31 , 11 ] and [ 11 , 31 , 31 , 11 , 11 , 31 ] and get the scheduling prioritization decisions parts of the offspring chromosomes [ 21 , 11 , 12 , 31 , 21 , 31 , 11 , 12 , 12 , 11 , 31 , 21 ] , [ 31 , 12 , 11 , 21 , 21 , 12 , 11 , 12 , 31 , 31 , 21 , 11 ] .
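Both crossover operators can be sketched compactly in Python (chromosome representation and helper names are illustrative, not the authors' code); applied to the parent priority parts and the subset { 11 , 31 } from Figure 4, `ppo_crossover` reproduces exactly the two offspring listed above.

```python
import random

def uniform_crossover(parent1, parent2, n_points=None):
    """Uniform crossover for the offloading part: swap genes at randomly
    chosen positions between the two parents."""
    length = len(parent1)
    n_points = n_points or length // 2
    positions = random.sample(range(length), n_points)
    child1, child2 = list(parent1), list(parent2)
    for p in positions:
        child1[p], child2[p] = parent2[p], parent1[p]
    return child1, child2

def ppo_crossover(parent1, parent2, subset):
    """Precedence preserving order-based crossover for the priority part:
    genes of applications in `subset` keep their positions in each host
    chromosome but take their sequence from the other parent, so every
    offspring remains a valid permutation of task occurrences."""
    def recombine(host, donor):
        donor_seq = iter(g for g in donor if g in subset)
        return [next(donor_seq) if g in subset else g for g in host]
    return recombine(parent1, parent2), recombine(parent2, parent1)
```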
Mutation operation can increase the diversity of chromosomes by randomly changing some genes of chromosomes, which can improve the local search ability of the genetic algorithm to a certain extent. As shown in Figure 5, two kinds of mutation methods are adopted for A k τ and O k τ , respectively.
  • Computation offloading decisions part: random mutation. For A k τ , we randomly mutate each gene within { 0 } ∪ M with the mutation probability ρ m .
  • Scheduling prioritization decisions part: exchange mutation. For O k τ , we randomly choose two genes for exchange with the mutation probability of ρ m to maintain the feasibility and validity of O k τ .
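The two mutation operators can be sketched as follows (a sketch; function names are illustrative). Note that the exchange mutation only swaps genes, so the multiset of application numbers, and hence feasibility, is preserved.

```python
import random

def mutate_offloading(offload_part, n_options, rho_m=0.01):
    """Random mutation: each offloading gene is independently re-drawn
    from the options {0 (local), 1..M (servers)} with probability rho_m."""
    return [random.randrange(n_options) if random.random() < rho_m else g
            for g in offload_part]

def mutate_priority(priority_part, rho_m=0.01):
    """Exchange mutation: with probability rho_m, swap two randomly chosen
    genes, keeping the gene multiset (and thus validity) intact."""
    genes = list(priority_part)
    if random.random() < rho_m:
        i, j = random.sample(range(len(genes)), 2)
        genes[i], genes[j] = genes[j], genes[i]
    return genes
```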

6.5. Generating New Population

We produce the offspring population by employing the aforementioned selection, crossover, and mutation operations. We combine the parent and offspring populations to form a new population, and then sort all the chromosomes in descending order according to their calculated fitness. In accordance with the principles of natural evolution, we discard chromosomes that exhibit poor fitness, while only retaining a specific number of chromosomes that demonstrate good fitness in the new population. The population evolves continuously until the optimal fitness converges or the generation surpasses the maximum generation G c . The detailed pseudocode is provided in Algorithm 1.
Algorithm 1 GA-based computation offloading and task scheduling algorithm
1: Initialize: the population size N c , maximum generation G c , crossover probability ρ c , mutation probability ρ m .
2: Generate the initial population based on the global heuristic generation method, local heuristic generation method and random generation method.
3: G = 1 .
4: repeat
5:    Evaluate the fitness of each chromosome.
6:    Execute the tournament selection operation to select the next-generation population.
7:    Randomly pair the chromosomes in the population and perform the crossover operation with the crossover probability ρ c .
8:    Perform the mutation operation on each chromosome with the mutation probability ρ m .
9:    Generate the new population.
10:   G = G + 1 .
11: until the optimal fitness converges or G > G c
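Algorithm 1 can be condensed into the following generic GA skeleton (a sketch: the operator and fitness callables are supplied by the caller, the tournament size of 3 is assumed, and the convergence test is replaced by a fixed generation budget for brevity). The merge-and-truncate step implements the elitist survivor selection of Section 6.5.

```python
import random

def ga_optimize(init_pop, fitness_fn, crossover, mutate,
                rho_c=0.8, rho_m=0.01, max_gen=100):
    """Generic GA loop mirroring Algorithm 1: tournament selection,
    probabilistic crossover and mutation, then elitist truncation of the
    merged parent and offspring populations."""
    population = list(init_pop)
    size = len(population)
    for _ in range(max_gen):
        # tournament selection of the mating pool (tournament size 3)
        parents = [max(random.sample(population, 3), key=fitness_fn)
                   for _ in range(size)]
        random.shuffle(parents)
        # pairwise crossover with probability rho_c, then mutation
        offspring = []
        for a, b in zip(parents[::2], parents[1::2]):
            if random.random() < rho_c:
                a, b = crossover(a, b)
            offspring += [mutate(a, rho_m), mutate(b, rho_m)]
        # elitist survivor selection: merge and keep the fittest `size`
        population = sorted(population + offspring,
                            key=fitness_fn, reverse=True)[:size]
    return max(population, key=fitness_fn)
```

Because the parent population survives into the merge, the best fitness found is non-decreasing across generations.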

6.6. Computation Time Complexity

In the proposed GA algorithm, the time complexity of generating the initial chromosome population using the global and local heuristic generation methods is O ( ( | M | + 2 ) × | T k τ | ) , and the time complexity of initializing the chromosome population using the random generation method is O ( 2 | T k τ | ) . The time complexity of the fitness calculation is O ( | T k τ | × N c ) , the time complexity of the crossover operation is O ( 2 N c 2 ) , and the time complexity of the mutation operation is O ( 2 | T k τ | × N c ) . Since N c > ( | M | + 2 ) × | T k τ | , the overall time complexity of the proposed algorithm is O ( 2 G c × N c 2 ) .

7. Simulation and Performance Evaluation

7.1. Simulation Setup

Here, we consider a heterogeneous MEC network that consists of 4 BSs and 20 MDs that are randomly distributed within a 500 m × 500 m square area. For each BS, the bandwidth is 10 MHz. The MEC servers have diversified computation capacities which are randomly generated from [ 2 , 8 ] GHz. For each MD, the maximal uplink transmit power is 23 dBm, the local computation capacity is 1 GHz, and the switched capacitance ε = 10 − 27 [35]. The white noise power spectral density is σ 2 = − 174 dBm/Hz. The path loss model is 128.1 + 37.6 log ( d ) , where d (km) is the distance between the BS and MD [39]. The application arrival of each MD follows the Poisson process with the mean value of λ app/s. Each application consists of 3 time-dependent tasks. The input data size R u v n of each task follows a uniform distribution of [ R ¯ − 500 , R ¯ + 500 ] KB, where R ¯ is the mean input data size of tasks. The demanded CPU cycles per bit Z u v n / R u v n of each task randomly varies in the range of [ 250 , 750 ] cycles/bit. The recommended maximum serving delay D u v n is randomly generated between [ 250 , 750 ] ms. The weight parameters are set to be ω 1 = 1 , ω 2 = 1 , and ω 3 = 10 . The time length of RHW is 250 ms. The parameters of the GA algorithm are listed as follows: N c = 2000 , ρ c = 0.8 , ρ m = 0.01 . The proportions of chromosomes generated based on the global heuristic generation method, local heuristic generation method and random generation method are 0.6 , 0.3 , 0.1 [40]. The simulation parameters are modified from [41], and the detailed simulation parameters are listed in Table 2.
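As a quick sanity check on these parameters, the uplink rate implied by the setup can be computed from the Shannon formula (a single-user sketch assuming no interference and no multi-user bandwidth sharing):

```python
import math

def uplink_rate_mbps(distance_km, bandwidth_hz=10e6,
                     tx_power_dbm=23.0, noise_dbm_hz=-174.0):
    """Shannon uplink rate under the Section 7.1 parameters, using the
    path loss model 128.1 + 37.6 log10(d) with d in km."""
    path_loss_db = 128.1 + 37.6 * math.log10(distance_km)
    rx_power_dbm = tx_power_dbm - path_loss_db
    noise_dbm = noise_dbm_hz + 10 * math.log10(bandwidth_hz)
    snr = 10 ** ((rx_power_dbm - noise_dbm) / 10)
    return bandwidth_hz * math.log2(1 + snr) / 1e6

# e.g. an MD located 100 m from its serving BS
print(f"{uplink_rate_mbps(0.1):.1f} Mbps")
```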
Here, we compare our proposed FLOATS scheme with the following reference schemes:
(1)
FIFO-FLO: First-in-first-out prioritization-based flexible computation offloading algorithm, in which each task can be processed locally or flexibly offloaded to one of the multiple MEC servers and the computation offloading decisions are optimized by GA method.
(2)
FIFO-greedy: First-in-first-out prioritization-based computation offloading algorithm, in which each task can be processed locally or flexibly offloaded to one of the multiple MEC servers, and the computation offloading decisions are optimized by the greedy method.
(3)
COATS: Computation offloading and task scheduling algorithm with single-connectivity technology, in which each MD accesses the BS with the maximal received signal strength, and all tasks of each MD can be processed locally or by the MEC server equipped in the associated BS.
We evaluate the performances of the proposed scheme and the reference schemes for all tasks of dynamically arriving applications in the MEC network within 3 s. Using the Monte Carlo simulation method, all performance results are averaged over 50 runs with varied simulation parameters.

7.2. Performance Evaluation

To evaluate the performance of the proposed GA algorithm, we consider a small-scale problem and compare the performance of the proposed scheme with the optimal solution obtained through exhaustive search. Figure 6 illustrates the performance comparison of the proposed FLOATS and exhaustive search with application numbers ranging from 2 to 5 and BS numbers ranging from 2 to 3. It can be seen that the proposed scheme achieves near-optimal performance. When the problem size is relatively small, the proposed scheme obtains the same results as the optimal solution; as the problem size increases, only an extremely small gap appears between the proposed solution and the optimal one.
Figure 7 shows the performance comparison of the proposed scheme and the reference schemes under different application arrival rates λ . Figure 7a illustrates the impact of the application arrival rate on the total network cost. It can be seen from Figure 7a that, as the application arrival rate increases, the total network cost grows. Compared with COATS, which uses single-connectivity technology, the schemes using multi-connectivity technology can simultaneously utilize the wireless and computing resources of multiple BSs and thus significantly reduce the total network cost. Compared with COATS, which only optimizes task scheduling, and FIFO-FLO, which only optimizes computation offloading, the proposed FLOATS significantly reduces the total network cost of the application tasks. This demonstrates that the joint optimization of computation offloading and task scheduling improves the serving quality of the application tasks. The network cost reduction of the proposed FLOATS over FIFO-greedy confirms the effectiveness of the GA algorithm used in our scheme.
To better illustrate the trade-off between the three performance metrics, Figure 7b–d show the detailed total serving latency, energy consumption and number of tardy tasks of the different schemes, respectively. It can be observed that the total serving latency, energy consumption and number of tardy tasks of the applications gradually increase as the application arrival rate rises. Specifically, compared with COATS, the schemes FIFO-FLO, FIFO-greedy and FLOATS noticeably shorten the serving latency and improve the tardiness performance. This is achieved by allowing application tasks to be offloaded to multiple MEC servers for execution. Additionally, by retaining fewer tasks for local execution, they further reduce energy consumption. The gains of FLOATS in serving latency and number of tardy tasks over FIFO-FLO and FIFO-greedy prove that applying the GA algorithm to the joint optimization of computation offloading and task scheduling improves the service quality of the applications. Meanwhile, FLOATS reduces the probability of local computing and thus achieves better energy consumption performance.
Figure 8 shows the performance comparison between the proposed scheme and the reference schemes under different mean input data sizes of tasks. Figure 8a shows the total network cost comparison of the different schemes. It can be seen from Figure 8a that, with the increase in the input data size, COATS cannot effectively achieve load balance in the system and has the lowest utilization rate of network resources and the highest network cost of the application tasks. Compared with FIFO-greedy, the adoption of the GA algorithm in FLOATS brings a network cost reduction of over 40% and improves the resource utilization efficiency. Furthermore, compared with FIFO-FLO, the proposed FLOATS reduces the network cost by at least 20% by jointly optimizing computation offloading and task scheduling.
Figure 8b–d show the total serving latency, energy consumption and number of tardy tasks of the different schemes, respectively. Figure 8b,d demonstrate that as the input data size increases, both the wireless transmission and computational demands of the system increase, leading to a gradual rise in the serving latency and tardiness of the application tasks due to their long waiting times. The proposed FLOATS always outperforms the other schemes in both the serving latency and the tardiness performances. Meanwhile, from Figure 8c, we can see that as the input data size increases, the wireless transmission and local computing loads increase, which results in higher energy consumption on the MD side. Compared with COATS, FIFO-FLO and FIFO-greedy, the proposed FLOATS is more energy-efficient due to its reduced local computing load. These three figures indicate that the proposed FLOATS consistently achieves comparatively lower service delay and energy consumption than the other schemes, and significantly decreases the rate of tardy tasks in the network.
Figure 9 shows the performance comparison between the proposed scheme and the reference schemes under different RHW time lengths. As shown in Figure 9a, with the increase in the RHW time length, the optimization period is prolonged and the scheduling of tasks is gradually delayed, while the number of tasks that can be jointly optimized grows accordingly. Due to limited available resources and long service waiting latency, the performance of COATS is not significantly affected by shorter RHWs, while for the other schemes, which can flexibly utilize the wireless and computing resources of multiple BSs and MDs, the total network cost of tasks increases moderately when the RHW time length falls within the minimum delay tolerance range. When the RHW time length exceeds the minimum delay tolerance range, the waiting time of tasks for scheduling is extended, which results in a rapid increase in the total network cost. Under different RHW time lengths, the proposed FLOATS still maintains superior performance over the other reference schemes.
Figure 9b–d show the total serving latency, energy consumption and number of tardy tasks of the different schemes, respectively. Note that with shorter RHW time lengths, tasks can be optimized and scheduled frequently, which leads to shorter waiting times and reduces the possibility of tardiness. As the RHW time length increases, the serving latency and tardiness both rise rapidly, and more tasks remain waiting for scheduling in the network. From the three figures, it can be seen that the proposed FLOATS effectively balances the three objectives under different RHW time lengths. Specifically, when the RHW time length is less than 500 ms, the proposed FLOATS performs better than the other schemes in terms of the network cost and all three performance metrics. When the RHW time length is greater than 500 ms, the proposed FLOATS keeps more tasks executing locally to reduce the number of tardy tasks, and the serving latency and energy consumption of the MDs increase accordingly.

8. Conclusions

In this paper, we propose the FLOATS scheme to adaptively coordinate and orchestrate heterogeneous wireless and computing resources to meet the diversified requirements of multiple IoT applications in dynamic environments. We first adopt multi-connectivity technology and use the computation offloading decision and scheduling priority sequence to describe a more flexible offloading and scheduling solution for time-dependent tasks. The dynamic problem is formulated as a multi-objective combinatorial optimization problem over an infinite time horizon. To address this, a rolling-horizon-based optimization and scheduling mechanism is developed that decomposes the original problem into a series of static sub-problems. To solve the static sub-problems, we propose a GA-based computation offloading and task scheduling algorithm built on a two-layer chromosome construction, which searches for the optimal solution through selection, crossover and mutation operations. Simulation results show that the proposed scheme can considerably reduce the network cost and strike a good balance between multiple performance metrics compared with the reference schemes. Although the proposed GA algorithm effectively improves the service quality of the applications, its computational complexity remains a hindrance in large-scale network scenarios; it is therefore recommended for low- or medium-scale networks. In future work, we will consider distributed algorithms or deep reinforcement learning to further explore lower-complexity computation offloading and task scheduling algorithms for time-dependent tasks in large-scale networks.

Author Contributions

Conceptualization, Y.S. and H.L.; methodology, Y.S. and L.L.; software, Y.S. and H.L.; validation, Y.S., Y.B. and H.L.; formal analysis, Y.S. and Y.B.; investigation, L.L.; resources, Y.S. and H.L.; data curation, Y.S. and Y.B.; writing—original draft preparation, Y.S.; writing—review and editing, Y.S., Y.B., H.L., F.T. and L.L.; visualization, Y.S. and H.L.; supervision, Y.S., H.L. and F.T.; project administration, H.L.; funding acquisition, Y.S. and F.T. All authors have read and agreed to the published version of the manuscript.

Funding

This work was funded by the National Natural Science Foundation of China under Grant 62001011, the Natural Science Foundation of Beijing Municipality under Grant L212003, Open Fund Project of Key Laboratory of Cognitive Radio and Information Processing, Ministry of Education (Guilin University of Electronic Technology) under Grant CRKL220205, the Natural Science Foundation of Beijing Municipality under Grant L211002, the National Natural Science Foundation of China under Grant 62371014, and the Beijing Natural Science Foundation under Grant 4222002.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

Huixin Li was employed by CICT Mobile Communication Technology Co., Ltd. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
IoT	Internet of Things
QoS	Quality of service
MEC	Multi-access edge computing
RHW	Rolling-horizon window
DAG	Directed acyclic graph
BS	Base station
SDN	Software-defined network
GA	Genetic algorithm

References

  1. Al-Fuqaha, A.; Guizani, M.; Mohammadi, M.; Aledhari, M.; Ayyash, M. Internet of Things: A Survey on Enabling Technologies, Protocols, and Applications. IEEE Commun. Surv. Tutor. 2015, 17, 2347–2376. [Google Scholar] [CrossRef]
  2. Zhong, A.; Li, Z.; Wu, D.; Tang, T.; Wang, R. Stochastic Peak Age of Information Guarantee for Cooperative Sensing in Internet of Everything. IEEE Internet Things J. 2023, 10, 15186–15196. [Google Scholar] [CrossRef]
  3. Tang, M.; Gao, L.; Huang, J. Communication, Computation, and Caching Resource Sharing for the Internet of Things. IEEE Commun. Mag. 2020, 58, 75–80. [Google Scholar] [CrossRef]
  4. Taleb, T.; Samdanis, K.; Mada, B.; Flinck, H.; Dutta, S.; Sabella, D. On Multi-Access Edge Computing: A Survey of the Emerging 5G Network Edge Cloud Architecture and Orchestration. IEEE Commun. Surv. Tutor. 2017, 19, 1657–1681. [Google Scholar] [CrossRef]
  5. Mach, P.; Becvar, Z. Mobile Edge Computing: A Survey on Architecture and Computation Offloading. IEEE Commun. Surv. Tutor. 2017, 19, 1628–1656. [Google Scholar] [CrossRef]
  6. Wu, Y.; Ni, K.; Zhang, C.; Qian, L.P.; Tsang, D.H.K. NOMA-Assisted Multi-Access Mobile Edge Computing: A Joint Optimization of Computation Offloading and Time Allocation. IEEE Trans. Veh. Technol. 2018, 67, 12244–12258. [Google Scholar] [CrossRef]
  7. Jeong, J.; Kim, I.M.; Hong, D. Deep Reinforcement Learning-based Task Offloading Decision in the Time Varying Channel. In Proceedings of the 2021 International Conference on Electronics, Information, and Communication (ICEIC), Jeju, Republic of Korea, 31 January–3 February 2021; pp. 1–4. [Google Scholar]
  8. Liu, C.F.; Bennis, M.; Debbah, M.; Poor, H.V. Dynamic Task Offloading and Resource Allocation for Ultra-Reliable Low-Latency Edge Computing. IEEE Trans. Commun. 2019, 67, 4132–4150. [Google Scholar] [CrossRef]
  9. Zhang, K.; Yang, J.; Lin, Z. Computation Offloading and Resource Allocation Based on Game Theory in Symmetric MEC-Enabled Vehicular Networks. Symmetry 2023, 15, 1241. [Google Scholar] [CrossRef]
  10. Tao, X.; Ota, K.; Dong, M.; Qi, H.; Li, K. Performance Guaranteed Computation Offloading for Mobile-Edge Cloud Computing. IEEE Wirel. Commun. Lett. 2017, 6, 774–777. [Google Scholar] [CrossRef]
  11. Huang, C.; Yan, Y.; Zhang, Y.; Sun, J. A Metaheuristic Algorithm for Mobility-Aware Task Offloading for Edge Computing Using Device-to-Device Cooperation. In Proceedings of the 2022 IEEE Smartworld, Ubiquitous Intelligence & Computing, Scalable Computing & Communications, Digital Twin, Privacy Computing, Metaverse, Autonomous & Trusted Vehicles, Haikou, China, 15–18 December 2022; pp. 656–663. [Google Scholar]
  12. Zhang, G.; Zhang, W.; Cao, Y.; Li, D.; Wang, L. Energy-Delay Tradeoff for Dynamic Offloading in Mobile-Edge Computing System With Energy Harvesting Devices. IEEE Trans. Ind. Inform. 2018, 14, 4642–4655. [Google Scholar] [CrossRef]
  13. Long, S.; Zhang, Y.; Deng, Q.; Pei, T.; Ouyang, J.; Xia, Z. An Efficient Task Offloading Approach Based on Multi-Objective Evolutionary Algorithm in Cloud-Edge Collaborative Environment. IEEE Trans. Netw. Sci. Eng. 2023, 10, 645–657. [Google Scholar] [CrossRef]
  14. Bai, W.; Wang, Y. Jointly Optimize Partial Computation Offloading and Resource Allocation in Cloud-Fog Cooperative Networks. Electronics 2023, 12, 3224. [Google Scholar] [CrossRef]
  15. Li, D.; Jin, Y.; Liu, H. Resource Allocation Strategy of Edge Systems Based on Task Priority and an Optimal Integer Linear Programming Algorithm. Symmetry 2020, 12, 972. [Google Scholar] [CrossRef]
  16. Cai, W.; Duan, F. Task Scheduling for Federated Learning in Edge Cloud Computing Environments by Using Adaptive-Greedy Dingo Optimization Algorithm and Binary Salp Swarm Algorithm. Future Internet 2023, 15, 357. [Google Scholar] [CrossRef]
  17. Chen, L.; Wu, J.; Zhang, J.; Dai, H.N.; Long, X.; Yao, M. Dependency-Aware Computation Offloading for Mobile Edge Computing with Edge-Cloud Cooperation. IEEE Trans. Cloud Comput. 2020, 10, 2451–2468. [Google Scholar] [CrossRef]
  18. Fu, X.; Tang, B.; Guo, F.; Kang, L. Priority and Dependency-Based DAG Tasks Offloading in Fog/Edge Collaborative Environment. In Proceedings of the 2021 IEEE 24th International Conference on Computer Supported Cooperative Work in Design (CSCWD), Dalian, China, 5–7 May 2021; pp. 440–445. [Google Scholar]
  19. Mahmoodi, S.E.; Subbalakshmi, K.P.S. A Time-Adaptive Heuristic for Cognitive Cloud Offloading in Multi-RAT Enabled Wireless Devices. IEEE Trans. Cogn. Commun. Netw. 2016, 2, 194–207. [Google Scholar] [CrossRef]
  20. Al-Habob, A.A.; Dobre, O.A.; Armada, A.G.; Muhaidat, S. Task Scheduling for Mobile Edge Computing Using Genetic Algorithm and Conflict Graphs. IEEE Trans. Veh. Technol. 2020, 69, 8805–8819. [Google Scholar] [CrossRef]
  21. Liu, J.; Zhang, Q. Code-Partitioning Offloading Schemes in Mobile Edge Computing for Augmented Reality. IEEE Access 2019, 7, 11222–11236. [Google Scholar] [CrossRef]
  22. Yan, J.; Bi, S.; Zhang, Y.J.A. Offloading and Resource Allocation with General Task Graph in Mobile Edge Computing: A Deep Reinforcement Learning Approach. IEEE Trans. Wirel. Commun. 2020, 19, 5404–5419. [Google Scholar] [CrossRef]
  23. Fang, J.; Qu, D.; Chen, H.; Liu, Y. Dependency-Aware Dynamic Task Offloading Based on Deep Reinforcement Learning in Mobile Edge Computing. IEEE Trans. Netw. Serv. Manag. 2023. [Google Scholar] [CrossRef]
  24. Fang, C.; Zhang, T.; Huang, J.; Xu, H.; Hu, Z.; Yang, Y.; Wang, Z.; Zhou, Z.; Luo, X. A DRL-Driven Intelligent Optimization Strategy for Resource Allocation in Cloud-Edge-End Cooperation Environments. Symmetry 2022, 14, 2120. [Google Scholar] [CrossRef]
  25. Sastry, K.; Goldberg, D.E.; Kendall, G. Genetic Algorithms. In Search Methodologies; Springer: Boston, MA, USA, 2014. [Google Scholar]
  26. Fu, Y.; Wang, H.; Huang, M.; Ding, J.; Tian, G. Multiobjective flow shop deteriorating scheduling problem via an adaptive multipopulation genetic algorithm. Proc. Inst. Mech. Eng. Part B J. Eng. Manuf. 2018, 232, 2641–2650. [Google Scholar] [CrossRef]
  27. Fu, Y.; Jiang, G.; Tian, G.; Wang, Z. Job scheduling and resource allocation in parallel-machine system via a hybrid nested partition method. IEEJ Trans. Electr. Electron. Eng. 2019, 14, 597–604. [Google Scholar] [CrossRef]
  28. Fazli, M.; Fathollahi-Fard, A.M.; Tian, G. Addressing a Coordinated Quay Crane Scheduling and Assignment Problem by Red Deer Algorithm. Mater. Energy Res. Cent. 2019, 32, 1186–1191. [Google Scholar]
  29. Fathollahi-Fard, A.M.; Woodward, L.; Akhrif, O. Sustainable distributed permutation flow-shop scheduling model based on a triple bottom line concept. J. Ind. Inf. Integr. 2021, 24, 100233. [Google Scholar] [CrossRef]
  30. Shi, Y.; Chu, J.; Ji, C.; Li, J.; Ning, S. A Fuzzy-Based Mobile Edge Architecture for Latency-Sensitive and Heavy-Task Applications. Symmetry 2022, 14, 1667. [Google Scholar] [CrossRef]
  31. Liu, F.; Huang, J.; Wang, X. Joint Task Offloading and Resource Allocation for Device-Edge-Cloud Collaboration with Subtask Dependencies. IEEE Trans. Cloud Comput. 2023, 11, 3027–3039. [Google Scholar] [CrossRef]
  32. Peng, B.; Li, T.; Chen, Y. DRL-Based Dependent Task Offloading Strategies with Multi-Server Collaboration in Multi-Access Edge Computing. Appl. Sci. 2023, 13, 191. [Google Scholar] [CrossRef]
  33. Xu, B.; Hu, Y.; Hu, M.; Liu, F.; Peng, K.; Liu, L. Iterative Dynamic Critical Path Scheduling: An Efficient Technique for Offloading Task Graphs in Mobile Edge Computing. Appl. Sci. 2022, 12, 3189. [Google Scholar] [CrossRef]
  34. Cover, T.M.; Thomas, J.A. Elements of Information Theory; Wiley: New York, NY, USA, 1991. [Google Scholar]
  35. Li, Z.; Zhu, N.; Wu, D.; Wang, H.; Wang, R. Energy-Efficient Mobile Edge Computing under Delay Constraints. IEEE Trans. Green Commun. Netw. 2022, 6, 776–786. [Google Scholar] [CrossRef]
  36. Le, K.D.; Day, J.T. Rolling Horizon Method: A New Optimization Technique for Generation Expansion Studies. IEEE Trans. Power Appar. Syst. 1982, PAS-101, 3112–3116. [Google Scholar] [CrossRef]
  37. Ren, Y.; Zhang, C.; Zhao, F.; Xiao, H.; Tian, G. An asynchronous parallel disassembly planning based on genetic algorithm. Eur. J. Oper. Res. 2018, 269, 647–660. [Google Scholar] [CrossRef]
  38. Moghadam, A.M.; Wong, K.Y.; Piroozfard, H. An efficient genetic algorithm for flexible job-shop scheduling problem. In Proceedings of the 2014 IEEE International Conference on Industrial Engineering and Engineering Management, Selangor, Malaysia, 9–12 December 2014; pp. 1409–1413. [Google Scholar]
  39. Sesia, S.; Toufik, I.; Baker, M. LTE, The UMTS Long Term Evolution: From Theory to Practice; Wiley Publishing: Hoboken, NJ, USA, 2009. [Google Scholar]
  40. Yang, S.; Guohui, Z.; Liang, G.; Kun, Y. A novel initialization method for solving Flexible Job-shop Scheduling Problem. In Proceedings of the 2009 International Conference on Computers & Industrial Engineering, Troyes, France, 6–9 July 2009; pp. 68–73. [Google Scholar]
  41. Yi, C.; Cai, J.; Su, Z. A Multi-User Mobile Computation Offloading and Transmission Scheduling Mechanism for Delay-Sensitive Applications. IEEE Trans. Mob. Comput. 2020, 19, 29–43. [Google Scholar] [CrossRef]
Figure 1. System model.
Figure 2. Chromosome structure and encoding.
Figure 3. The Gantt chart corresponding to the given chromosome example.
Figure 4. Example of crossover operation.
Figure 5. Example of mutation operation.
Figure 6. Performance comparison between the proposed scheme and exhaustive search.
Figure 7. Performance under different application arrival rates λ (app/s) (R̄ = 1000 KB, τ = 250 ms). (a) Total network cost. (b) Total serving latency. (c) Total energy consumption. (d) Total number of tardy tasks.
Figure 8. Performance under different mean input data sizes of tasks R̄ (KB) (λ = 0.6 app/s, τ = 250 ms). (a) Total network cost. (b) Total serving latency. (c) Total energy consumption. (d) Total number of tardy tasks.
Figure 9. Performance under different RHW time lengths τ (ms) (R̄ = 1000 KB, λ = 0.6 app/s). (a) Total network cost. (b) Total serving latency. (c) Total energy consumption. (d) Total number of tardy tasks.
Table 1. Comparative study of the existing computation offloading strategies.
| Paper | Task | MD | Server | Scenario | Objective | Algorithm |
| --- | --- | --- | --- | --- | --- | --- |
| [6] | Independent | Multiple | Multiple | Static | Delay | Linear-search-based algorithm |
| [7] | Independent | Single | Single | Dynamic | Delay | Deep reinforcement learning algorithm |
| [8] | Independent | Multiple | Multiple | Dynamic | Delay | Lyapunov optimization and matching theory |
| [10] | Independent | Multiple | Multiple | Static | Energy | Convex algorithm |
| [11] | Independent | Multiple | Single | Dynamic | Energy | Metaheuristic algorithm |
| [24] | Independent | Multiple | Multiple | Dynamic | Offloaded traffic | Deep reinforcement learning algorithm |
| [12] | Independent | Single | Single | Dynamic | Delay and energy | Lyapunov algorithm |
| [13] | Independent | Multiple | Multiple | Static | Delay, energy and load variance | Multi-objective evolutionary algorithm |
| [14] | Independent | Multiple | Multiple | Static | Delay, energy and cloud rental cost | Convex algorithm |
| [33] | Dependent | Single | Multiple | Static | Delay | Heuristic algorithm |
| [17] | Dependent | Single | Multiple | Static | Delay and offloading failure probability | Heuristic algorithm |
| [20] | Dependent | Single | Multiple | Static | Service failure probability | Genetic algorithm |
| [21] | Dependent | Single | Multiple | Static | Delay and energy | Integer particle swarm algorithm |
| [22] | Dependent | Single | Single | Static | Delay and energy | Deep reinforcement learning algorithm |
| [30] | Dependent | Multiple | Multiple | Static | Delay | Fuzzy logic |
| [18] | Dependent | Multiple | Multiple | Static | Delay and energy | Heuristic algorithm |
| [31] | Dependent | Multiple | Multiple | Static | Delay and energy | Convex algorithm |
| [19] | Dependent | Single | Multiple | Dynamic | Delay and energy | Heuristic algorithm |
Table 2. Main simulation parameters.
| System parameter | Value |
| --- | --- |
| MD number U | 20 |
| BS number M | 4 |
| Channel bandwidth B | 10 MHz |
| Local computation capacity F_u^local | 1 GHz |
| Computation capacity of MEC server F_m^mec | [2, 8] GHz |
| Input data size of task R_uv^n | [R̄ − 500, R̄ + 500] KB |
| Demanded CPU cycles per bit Z_uv^n / R_uv^n | [250, 750] cycles/bit |
| Recommended maximum serving delay D_uv^n | [250, 750] ms |
| Maximal uplink transmit power p_u | 23 dBm |
| Pathloss | 128.1 + 37.6 log(d) dB |
| White noise power density σ² | −174 dBm/Hz |
| Time length of RHW τ | 250 ms |
| Chromosome number N_c | 2000 |
| Crossover probability ρ_c | 0.8 |
| Mutation probability ρ_m | 0.01 |
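The radio parameters in Table 2 determine the uplink rate available for offloading. A minimal sketch of how the achievable rate could be estimated from them, assuming the Shannon capacity formula and the 3GPP-style pathloss model with d in kilometres (the function name and the single-user, interference-free setting are illustrative assumptions, not the paper's exact channel model):

```python
import math

def uplink_rate_bps(d_km, p_tx_dbm=23.0, bandwidth_hz=10e6,
                    noise_dbm_hz=-174.0):
    """Shannon-capacity estimate of the uplink rate for one MD,
    using the pathloss and noise figures from Table 2."""
    pathloss_db = 128.1 + 37.6 * math.log10(d_km)   # 3GPP-style model, d in km
    p_rx_dbm = p_tx_dbm - pathloss_db                # received signal power
    noise_dbm = noise_dbm_hz + 10 * math.log10(bandwidth_hz)  # noise over B
    snr_linear = 10 ** ((p_rx_dbm - noise_dbm) / 10)
    return bandwidth_hz * math.log2(1 + snr_linear)

print(f"{uplink_rate_bps(0.1) / 1e6:.1f} Mbit/s")  # ≈ 121 Mbit/s at 100 m
```

At a 100 m MD-to-BS distance this gives an uplink on the order of 100 Mbit/s, which falls off quickly with distance as the 37.6 log(d) term dominates.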
Sun, Y.; Bian, Y.; Li, H.; Tan, F.; Liu, L. Flexible Offloading and Task Scheduling for IoT Applications in Dynamic Multi-Access Edge Computing Environments. Symmetry 2023, 15, 2196. https://doi.org/10.3390/sym15122196