In this section, we first discuss the limitation of vanilla EDF in meeting nonpreemptive tasks' job deadlines when the future job release patterns of tasks are unknown. We then explain the design principles of LCEDF, and finally develop the LCEDF scheduling algorithm.
4.1. Limitation of Vanilla EDF
In this subsection, we discuss the limitation of vanilla EDF. To this end, we first explain the scheduling algorithm of vanilla (global nonpreemptive) EDF. Vanilla EDF manages a ready queue, in which jobs are sorted such that a job with an earlier deadline has a higher priority. Whenever there exists an unoccupied processor, the highest-priority job in the ready queue starts its execution on that processor; once a job starts its execution, it cannot be preempted by any other job until it finishes.
As many studies have pointed out [5], nonpreemptive EDF is not effective in meeting job deadlines if the information about future job release patterns is not available (i.e., the system is non-clairvoyant). The following example demonstrates this ineffectiveness of EDF in meeting job deadlines.
Example 1. Consider a task set τ with the following two tasks executed on a uniprocessor platform: ${\tau}_{1}({T}_{1}=102$, ${C}_{1}=24$, ${D}_{1}=102)$ and ${\tau}_{2}({T}_{2}=33,{C}_{2}=17,{D}_{2}=33)$. Consider the following scenario: (i) the interval of interest is $[0,47)$; and (ii) ${J}_{1}^{1}$ is released at $t=0$ and ${J}_{2}^{1}$ is released at $t=x>0$. We show the schedule under vanilla EDF. At $t=0$, vanilla EDF schedules ${J}_{1}^{1}$, because no job of ${\tau}_{2}$ is released at $t=0$; then, ${J}_{1}^{1}$ occupies the processor in $[0,24)$. Suppose that ${J}_{2}^{1}$ is released at $t=6$; then, ${J}_{2}^{1}$ misses its deadline without having any chance to compete for an unoccupied processor until $({d}_{2}^{1}-{c}_{2}^{1})=22$, as shown in Figure 1a. What if we know at $t=0$ that ${J}_{2}^{1}$ will be released at $t=6$? Then, by idling the processor in $[0,6)$, we can make both ${J}_{2}^{1}$ and ${J}_{1}^{1}$ schedulable, as shown in Figure 1b. However, without the information of the future release time of ${J}_{2}^{1}$, we cannot idle the processor: if ${J}_{2}^{1}$ is not released until $t=78$, ${J}_{1}^{1}$ eventually misses its deadline. Note that this task set is schedulable by LCEDF, as presented in Example 3 in Section 4.3. A similar phenomenon occurs on a symmetric multiprocessor platform, as demonstrated in the following example.
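The vanilla nonpreemptive EDF behavior in Example 1 can be reproduced with a small simulation. The sketch below (function name and job encoding are ours, not the paper's) runs the uniprocessor scenario above:

```python
def edf_nonpreemptive_uniproc(jobs):
    """jobs: list of (name, release, wcet, abs_deadline) tuples.
    Returns {name: finish_time} under nonpreemptive EDF on one processor."""
    pending = sorted(jobs, key=lambda j: j[1])   # order by release time
    t = 0
    finish = {}
    while pending:
        ready = [j for j in pending if j[1] <= t]
        if not ready:
            t = min(j[1] for j in pending)       # idle until next release
            continue
        job = min(ready, key=lambda j: j[3])     # earliest absolute deadline
        pending.remove(job)
        t += job[2]                              # runs to completion (nonpreemptive)
        finish[job[0]] = t
    return finish

# Example 1: J1^1 released at 0; J2^1 released at 6 with d_2^1 = 6 + 33 = 39
fin = edf_nonpreemptive_uniproc([("J1", 0, 24, 102), ("J2", 6, 17, 39)])
# J1 occupies [0, 24), so J2 cannot start before t = 24 and misses its deadline
```

Since the scheduler is non-clairvoyant, it cannot know at $t=0$ that idling would have been the better choice.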
Example 2. Consider a task set τ with the following three tasks on a two-processor platform: ${\tau}_{1}({T}_{1}=202,{C}_{1}=22,{D}_{1}=202)$, ${\tau}_{2}({T}_{2}=312,{C}_{2}=17,{D}_{2}=312)$ and ${\tau}_{3}({T}_{3}=81,{C}_{3}=74,{D}_{3}=81)$. Consider the following scenario: (i) the interval of interest is $[0,93)$; and (ii) ${J}_{1}^{1}$ is released at $t=0$, while ${J}_{2}^{1}$ and ${J}_{3}^{1}$ are released at $t=x>0$ and $t=y>0$, respectively. We show the schedule under vanilla EDF. At $t=0$, vanilla EDF schedules ${J}_{1}^{1}$, because ${J}_{2}^{1}$ and ${J}_{3}^{1}$ are not released at $t=0$; then, ${J}_{1}^{1}$ occupies a processor in $[0,22)$. Suppose that ${J}_{2}^{1}$ and ${J}_{3}^{1}$ are released at $t=6$ and $t=12$, respectively; then, ${J}_{3}^{1}$ misses its deadline without having any chance to compete for unoccupied processors until $({d}_{3}^{1}-{c}_{3}^{1})=19$, as shown in Figure 2a. If we know that ${J}_{2}^{1}$ and ${J}_{3}^{1}$ will be released at $t=6$ and $t=12$, respectively, we can make ${J}_{3}^{1}$ schedulable by idling a processor in $[6,12)$, as shown in Figure 2b. However, without the information of the future release time of ${J}_{3}^{1}$, we cannot idle the processor: if ${J}_{3}^{1}$ is not released until $t=295$, ${J}_{2}^{1}$ eventually misses its deadline. Note that this task set is schedulable by LCEDF, as presented in Example 4 in Section 4.3.

As shown in Examples 1 and 2, there may exist a release pattern that yields a deadline miss of a job of ${\tau}_{k}$ of interest under vanilla EDF, if $({D}_{k}-{C}_{k}+1)$ of ${\tau}_{k}$ is smaller than the worst-case execution time of some other tasks (i.e., ${C}_{i}$). The following observation records this property formally. (Observation 1 is already implicitly incorporated into the existing schedulability analysis for vanilla EDF [11,12].)
Observation 1. Suppose that we do not know any future job release pattern. Then, there exists a release pattern that yields a deadline miss of a job of ${\tau}_{k}$ of interest, if there exist at least $m$ tasks ${\tau}_{i}\in \tau \setminus \left\{{\tau}_{k}\right\}$ whose worst-case execution time (i.e., ${C}_{i}$) is larger than $({D}_{k}-{C}_{k}+1)$.
The observation holds as follows. Suppose that jobs of $m$ tasks ${\tau}_{i}\in \tau \setminus \left\{{\tau}_{k}\right\}$, each with worst-case execution time larger than $({D}_{k}-{C}_{k}+1)$, are released at $t=0$, and that these $m$ jobs start their execution at $t=x\ge 0$. Since each job occupies its processor nonpreemptively for more than $({D}_{k}-{C}_{k}+1)$ time units, no processor becomes free until after $t=x+1+({D}_{k}-{C}_{k})$. Hence, if a job of ${\tau}_{k}$ is released at $t=x+1$, it cannot start by its latest feasible start time and misses its deadline. On the other hand, if any of the $m$ jobs never starts its execution, that job eventually misses its own deadline. Therefore, the observation holds.
The observation is important because, if there exists such a task ${\tau}_{k}$, the task set including ${\tau}_{k}$ is unschedulable by vanilla EDF. Therefore, we design LCEDF so as to carefully handle such a task ${\tau}_{k}$ in Observation 1, as detailed in the next subsection.
4.2. Design Principle for LCEDF
Motivated by Observation 1, we would like to avoid job deadline misses of tasks that match ${\tau}_{k}$ in Observation 1, by using limited information about the future job release patterns. To this end, we classify tasks offline as follows:
${\tau}^{\mathsf{A}}$, the set of tasks ${\tau}_{k}$ for which there exist at least $m$ other tasks ${\tau}_{i}\in \tau \setminus \left\{{\tau}_{k}\right\}$ whose execution time (i.e., ${C}_{i}$) is larger than $({D}_{k}-{C}_{k}+1)$; and
${\tau}^{\mathsf{B}}$, a set of tasks which do not belong to ${\tau}^{\mathsf{A}}$.
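This offline classification amounts to one counting pass over the task set. A minimal sketch (the tuple layout and function name are illustrative, not from the paper), checked against the task set of Example 1 on a uniprocessor:

```python
def classify_tasks(tasks, m):
    """tasks: list of (name, T, C, D); m: number of processors.
    Returns (tau_A, tau_B): tau_A holds every task tau_k for which at
    least m other tasks have C_i > D_k - C_k + 1 (Observation 1)."""
    tau_A, tau_B = [], []
    for name_k, _, C_k, D_k in tasks:
        bound = D_k - C_k + 1
        heavier = sum(1 for name_i, _, C_i, _ in tasks
                      if name_i != name_k and C_i > bound)
        (tau_A if heavier >= m else tau_B).append(name_k)
    return tau_A, tau_B

# Task set of Example 1 on a uniprocessor (m = 1):
A, B = classify_tasks([("tau1", 102, 24, 102), ("tau2", 33, 17, 33)], m=1)
# tau2 has D_2 - C_2 + 1 = 17 < C_1 = 24, so tau2 falls into tau^A
```

The same function reproduces the classification of Example 2 with $m=2$.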
We would like to avoid the deadline miss situations shown in Figure 1a and Figure 2a, in which a job ${J}_{i}^{j}$ of a task in ${\tau}^{\mathsf{A}}$ misses its deadline without having any chance to compete for an unoccupied processor until $({d}_{i}^{j}-{c}_{i}^{j})$, the last time instant at which the job can start its execution without missing its deadline. As we explained with Figure 1b and Figure 2b, these situations can be avoided using knowledge of future job release patterns. In this paper, we aim at developing a systematic way to avoid such deadline miss situations with only limited information about future job release patterns (explained in Section 3). To this end, we manage a critical queue CQ, in which the jobs $\left\{{J}_{i}^{j}\right\}$ of tasks in ${\tau}^{\mathsf{A}}$ are sorted by their $({d}_{i}^{j}-{c}_{i}^{j})$. Whenever a job of a task in ${\tau}^{\mathsf{B}}$ is able to start its execution, we check whether executing that job would jeopardize the timely execution of jobs in CQ by inducing deadline miss situations such as those in Figure 1a and Figure 2a. If so, the job of the task in ${\tau}^{\mathsf{B}}$ is postponed by idling the processor on which it would execute under vanilla EDF. Note that although we assume in Section 3 that the next job's release time of every task is known, we actually need to know the next job's release times of tasks in ${\tau}^{\mathsf{A}}$ only.
As we mentioned, the LCEDF algorithm avoids the deadline miss situations of jobs of tasks in ${\tau}^{\mathsf{A}}$ shown in Figure 1a and Figure 2a by postponing jobs of tasks in ${\tau}^{\mathsf{B}}$. The main problem, then, is when jobs of tasks in ${\tau}^{\mathsf{B}}$ should postpone their executions for the timely execution of jobs of tasks in ${\tau}^{\mathsf{A}}$. More postponing yields a higher chance of timely execution for jobs of tasks in ${\tau}^{\mathsf{A}}$ but a lower chance for those in ${\tau}^{\mathsf{B}}$; conversely, less postponing results in a lower chance for ${\tau}^{\mathsf{A}}$ and a higher chance for ${\tau}^{\mathsf{B}}$. Therefore, we need to minimize the postponement of jobs of tasks in ${\tau}^{\mathsf{B}}$ while guaranteeing the timely execution of jobs of tasks in ${\tau}^{\mathsf{A}}$. We classify the situations in which a job ${J}_{A1}^{1}$ of ${\tau}_{A1}$ in ${\tau}^{\mathsf{A}}$ does or does not have a chance to compete for unoccupied processors until $({d}_{A1}^{1}-{c}_{A1}^{1})$ into the four situations shown in Figure 3.
We now discuss the four situations at $t$ in Figure 3. Suppose that at $t$, three of the four processors in the system are unoccupied, and three jobs (${J}_{B1}^{1}$, ${J}_{B2}^{1}$ and ${J}_{B3}^{1}$) of tasks belonging to ${\tau}^{\mathsf{B}}$ start their executions at $t$, while one job (${J}_{B4}^{1}$) of a task belonging to ${\tau}^{\mathsf{B}}$ continues an execution started before $t$. We are interested in the timely execution of ${J}_{A1}^{1}$ of a task in ${\tau}^{\mathsf{A}}$. If all four jobs (i.e., ${J}_{B1}^{1}$, ${J}_{B2}^{1}$, ${J}_{B3}^{1}$ and ${J}_{B4}^{1}$) finish their execution after $({d}_{A1}^{1}-{c}_{A1}^{1})$, ${J}_{A1}^{1}$ misses its deadline without having any chance to compete for unoccupied processors, as shown in Figure 3a. However, if at least one of the three jobs (${J}_{B1}^{1}$, ${J}_{B2}^{1}$ and ${J}_{B3}^{1}$) that start their execution at $t$ finishes no later than $({d}_{A1}^{1}-{c}_{A1}^{1})$, then ${J}_{A1}^{1}$ does not miss its deadline, as shown in Figure 3b. Similarly, even if there is another job ${J}_{A2}^{1}$ of a task in ${\tau}^{\mathsf{A}}$, ${J}_{A1}^{1}$ does not miss its deadline as long as ${J}_{A2}^{1}$ finishes its execution before $({d}_{A1}^{1}-{c}_{A1}^{1})$, as shown in Figure 3c. There is one more case in which ${J}_{A1}^{1}$ does not miss its deadline: when ${J}_{B4}^{1}$, whose execution started before $t$, finishes before $({d}_{A1}^{1}-{c}_{A1}^{1})$, as shown in Figure 3d.
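The case analysis above maps directly onto a predicate. The following sketch (argument names and the list-based encoding are ours) returns which situation of Figure 3 applies to a critical job whose latest feasible start time is $({d}_{A1}^{1}-{c}_{A1}^{1})$:

```python
def classify_situation(t, latest_start, ready_B_wcets, other_A_jobs, running_remaining):
    """Decide which situation of Figure 3 applies at time t.
    latest_start: the critical job's d - c;
    ready_B_wcets: execution times c of tau^B jobs starting at t (Fig. 3b);
    other_A_jobs: (release r, execution time c) of other tau^A jobs (Fig. 3c);
    running_remaining: remaining times c(t) of already-running jobs (Fig. 3d)."""
    if any(t + c <= latest_start for c in ready_B_wcets):
        return "Case1"  # Fig. 3b: a starting tau^B job frees a processor in time
    if any(r + c <= latest_start for r, c in other_A_jobs):
        return "Case2"  # Fig. 3c: another tau^A job finishes in time
    if any(t + c_rem <= latest_start for c_rem in running_remaining):
        return "Case3"  # Fig. 3d: an already-running job frees a processor in time
    return "Case0"      # Fig. 3a: no guarantee; a tau^B job must be postponed
```

With the numbers of Example 1 ($t=0$, latest start $22$, one ready ${\tau}^{\mathsf{B}}$ job with $c=24$), the predicate returns Case0, i.e., the postponement case.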
Therefore, if the current situation does not belong to one of the situations illustrated in Figure 3b–d, we judge that there will be a job deadline miss. Once we judge such a deadline-miss situation, we choose the lowest-priority job among the jobs of tasks in ${\tau}^{\mathsf{B}}$ that are supposed to start their execution at $t$, and prevent that job's execution by intentionally idling a processor.
4.3. Algorithm Details and Examples
Based on the design principle explained in Section 4.2, we detail the LCEDF algorithm in this subsection using pseudocode and examples.
As shown in Algorithm 1, the input components of LCEDF at $t$ are the ready queue RQ, the critical queue CQ, the number of unoccupied processors ${m}^{\prime}$, and the running job set RJ. Here, RQ is the set of ready jobs at $t$, and CQ is the set of jobs of tasks in ${\tau}^{\mathsf{A}}$ that will be released after $t$.
Algorithm 1 The LCEDF algorithm

Input: the ready queue RQ, the critical queue CQ, the number of unoccupied processors ${m}^{\prime}$, and the running job set RJ, at $t$

1: // Step 1: Check the priority of jobs of ${\tau}_{x}\in {\tau}^{\mathsf{A}}$ in RQ
2: for every job ${J}_{x}^{i}$ of ${\tau}_{x}\in {\tau}^{\mathsf{A}}$ in RQ do
3:   if priority of ${J}_{x}^{i}\le {m}^{\prime}$ then
4:     Remove the job ${J}_{x}^{i}$ from RQ; start its execution; ${m}^{\prime}\leftarrow {m}^{\prime}-1$
5:     Update the job release information of ${\tau}_{x}$ (from ${J}_{x}^{i}$ to ${J}_{x}^{i+1}$) in CQ
6:   end if
7: end for
8: // Step 2: Check whether every job of ${\tau}^{\mathsf{A}}$ in CQ does not miss its deadline
9: for every job ${J}_{x}^{i}$ of ${\tau}_{x}\in {\tau}^{\mathsf{A}}$ in CQ do
10:   if the number of jobs in RQ $<{m}^{\prime}$ then
11:     ${m}^{\prime}\leftarrow {m}^{\prime}-1$; continue the for statement
12:   end if
13:   if ${m}^{\prime}\le 0$ then
14:     exit the for statement
15:   end if
16:   IsFeasible ← Case0
17:   for the ${m}^{\prime}$ highest-priority jobs ${J}_{y}^{j}$ of tasks ${\tau}_{y}\in {\tau}^{\mathsf{B}}$ in RQ do
18:     if $t+{c}_{y}^{j}\le {d}_{x}^{i}-{c}_{x}^{i}$ then
19:       IsFeasible ← Case1
20:       exit the for statement
21:     end if
22:   end for
23:   if IsFeasible = Case0 then
24:     for every job ${J}_{y}^{j}$ of ${\tau}_{y}\in {\tau}^{\mathsf{A}}$ in CQ with ${\tau}_{y}\ne {\tau}_{x}$ do
25:       if ${r}_{y}^{j}+{c}_{y}^{j}\le {d}_{x}^{i}-{c}_{x}^{i}$ then
26:         IsFeasible ← Case2
27:         exit the for statement
28:       end if
29:     end for
30:     for every job ${J}_{y}^{j}$ in RJ do
31:       if $t+{c}_{y}^{j}(t)\le {d}_{x}^{i}-{c}_{x}^{i}$ then
32:         IsFeasible ← Case3
33:         exit the for statement
34:       end if
35:     end for
36:   end if
37:   if IsFeasible = Case1 then
38:     Remove the highest-priority job ${J}_{y}^{j}$ that satisfies $t+{c}_{y}^{j}\le {d}_{x}^{i}-{c}_{x}^{i}$ from RQ; start its execution; ${m}^{\prime}\leftarrow {m}^{\prime}-1$
39:   else if IsFeasible = Case2 or Case3 then
40:     Remove the highest-priority job ${J}_{y}^{j}$ from RQ; start its execution; ${m}^{\prime}\leftarrow {m}^{\prime}-1$
41:   else // IsFeasible = Case0
42:     ${m}^{\prime}\leftarrow {m}^{\prime}-1$
43:   end if
44: end for
45: // Step 3: Execute the ${m}^{\prime}$ remaining jobs ${J}_{x}^{i}$ of ${\tau}_{x}\in {\tau}^{\mathsf{B}}$ in RQ
46: for the ${m}^{\prime}$ highest-priority jobs ${J}_{x}^{i}$ of ${\tau}_{x}\in {\tau}^{\mathsf{B}}$ in RQ do
47:   Remove it from RQ; start its execution
48: end for

Step 1 in Algorithm 1 assigns jobs of tasks in ${\tau}^{\mathsf{A}}$ belonging to RQ to unoccupied processors. Since we postpone the execution of jobs of tasks in ${\tau}^{\mathsf{B}}$ only for the timely execution of jobs of tasks in ${\tau}^{\mathsf{A}}$, the ${m}^{\prime}$ highest-priority jobs of tasks in ${\tau}^{\mathsf{A}}$ belonging to RQ can be executed, the same as under vanilla EDF. To this end, we first find the jobs of tasks in ${\tau}^{\mathsf{A}}$ belonging to RQ whose execution starts at $t$ on unoccupied processors (Lines 2–3). Each such job starts its execution and is removed from RQ (Line 4). Also, whenever a job starts its execution on an unoccupied processor, we decrease the number of unoccupied processors by 1 (Line 4). Then, we update the release information of the task that invokes the job starting its execution (Line 5).
Since Step 1 starts the execution of the higher-priority jobs of tasks in ${\tau}^{\mathsf{A}}$ belonging to RQ, we are ready to start the execution of higher-priority jobs of tasks in ${\tau}^{\mathsf{B}}$ belonging to RQ on the remaining unoccupied processors. In Step 2, we decide whether each job of a task in ${\tau}^{\mathsf{B}}$ belonging to RQ starts or postpones its execution, according to whether it is possible to guarantee the timely execution of jobs of tasks in ${\tau}^{\mathsf{A}}$ belonging to CQ. First, we investigate whether it is possible to guarantee the schedulability of each job ${J}_{x}^{i}$ of tasks in ${\tau}^{\mathsf{A}}$ belonging to CQ (Line 9). If the number of jobs in RQ is strictly smaller than ${m}^{\prime}$, we can assign an unoccupied processor to ${J}_{x}^{i}$ even if all jobs in RQ start their execution on unoccupied processors. Since we reserve an unoccupied processor for ${J}_{x}^{i}$, we decrease the number of unoccupied processors by 1 and continue the for statement (Lines 10–11). If there is no remaining unoccupied processor, we stop this process because we cannot start any job execution (Lines 13–15). We set IsFeasible to Case0 (Line 16), and investigate whether the current situation belongs to one of the three cases in Figure 3b–d in which the timely execution of a job of a task in ${\tau}^{\mathsf{A}}$ is guaranteed; based on the cases, we change IsFeasible to Case1, Case2 or Case3 (Lines 17–36).
The for statement in Lines 17–22 checks whether executing the ${m}^{\prime}$ highest-priority jobs in RQ compromises the schedulability of any job of tasks in ${\tau}^{\mathsf{A}}$ in CQ. We tentatively assign the highest-priority job ${J}_{y}^{j}$ of a task in ${\tau}^{\mathsf{B}}$ to an unoccupied processor (Line 17). If the finishing time of the job's execution (i.e., $t+{c}_{y}^{j}$) is no later than $({d}_{x}^{i}-{c}_{x}^{i})$, which is the last instant at which ${J}_{x}^{i}$ in CQ can start its execution without missing its deadline, we set IsFeasible to Case1 (Line 19), which corresponds to Figure 3b.
The for statements in Lines 24–29 and Lines 30–35 are performed only when IsFeasible equals Case0, which means the timely execution of the job ${J}_{x}^{i}$ in CQ is not yet guaranteed. In the for statement in Lines 24–29, we check whether the finishing time of a job in CQ (i.e., ${r}_{y}^{j}+{c}_{y}^{j}$) is no later than $({d}_{x}^{i}-{c}_{x}^{i})$, which corresponds to Figure 3c; since ${J}_{y}^{j}$ cannot start its execution until $t$ (because ${r}_{y}^{j}$ is later than $t$), we calculate the earliest finishing time of the job as $({r}_{y}^{j}+{c}_{y}^{j})$ (Line 25). If so, we set IsFeasible to Case2 (Line 26). In the for statement in Lines 30–35, we check whether the finishing time of a job in RJ that started its execution before $t$ (i.e., $t+{c}_{y}^{j}(t)$, where ${c}_{y}^{j}(t)$ denotes the remaining execution time of ${J}_{y}^{j}$ at $t$) is no later than $({d}_{x}^{i}-{c}_{x}^{i})$, which corresponds to Figure 3d (Line 31). If so, we set IsFeasible to Case3 (Line 32).
In Lines 37–43, if IsFeasible is set to Case1, we remove from RQ the highest-priority job that satisfies $t+{c}_{y}^{j}\le {d}_{x}^{i}-{c}_{x}^{i}$ and start to execute it; we also decrease the number of unoccupied processors by 1 (Line 38). If IsFeasible is set to Case2 or Case3, we remove the highest-priority job from RQ and start to execute it; we also decrease the number of unoccupied processors by 1 (Line 40). Otherwise (i.e., IsFeasible equals Case0), we just decrease the number of unoccupied processors by 1, meaning that we postpone a job of a task in ${\tau}^{\mathsf{B}}$ belonging to RQ (Line 42).
Since Steps 1 and 2 already guarantee the timely execution of jobs of tasks belonging to ${\tau}^{\mathsf{A}}$ in CQ, the remaining unoccupied processors can serve jobs of tasks in ${\tau}^{\mathsf{B}}$ in RQ. Therefore, in Step 3, we start to execute the ${m}^{\prime}$ highest-priority such jobs in RQ (Lines 46–48).
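The three steps can be condensed into a single scheduler invocation. The sketch below is our simplified reading of Algorithm 1 (the dictionary-based job layout and names are ours, and Case-1 tie-breaking is abbreviated), replayed on the Example 3 scenario at $t=0$:

```python
def lcedf_step(t, RQ, CQ, m_free, RJ):
    """One simplified LCEDF invocation at time t.
    RQ: ready jobs in EDF order, dicts {'task','cls','c','d'}, cls 'A'/'B';
    CQ: critical tau^A jobs not yet released, dicts {'r','c','d'};
    m_free: unoccupied processors; RJ: remaining times of running jobs.
    Returns the jobs chosen to start at t."""
    started = []
    # Step 1: tau^A jobs in RQ start on free processors, as in vanilla EDF
    for job in [j for j in RQ if j['cls'] == 'A']:
        if m_free > 0:
            RQ.remove(job); started.append(job); m_free -= 1
    # Step 2: start a tau^B job only if some processor is still freed by
    # the critical job's latest start time d - c (Cases 1-3 of Figure 3)
    for crit in CQ:
        if len(RQ) < m_free:
            m_free -= 1          # a processor stays reserved for crit
            continue
        if m_free <= 0:
            break
        latest = crit['d'] - crit['c']
        ready_B = [j for j in RQ if j['cls'] == 'B'][:m_free]
        case1 = [j for j in ready_B if t + j['c'] <= latest]
        case2or3 = (any(o is not crit and o['r'] + o['c'] <= latest for o in CQ)
                    or any(t + c_rem <= latest for c_rem in RJ))
        if case1:
            job = case1[0]                       # highest-priority Case-1 job
        elif case2or3 and ready_B:
            job = ready_B[0]
        else:
            m_free -= 1                          # postpone: idle a processor
            continue
        RQ.remove(job); started.append(job); m_free -= 1
    # Step 3: remaining free processors serve tau^B jobs in EDF order
    for job in [j for j in RQ if j['cls'] == 'B'][:m_free]:
        RQ.remove(job); started.append(job)
    return started

# Example 3 at t = 0: the only processor is reserved for J2^1 (in CQ),
# so J1^1 of tau^B is postponed even though a processor is free.
rq = [{'task': 'J1', 'cls': 'B', 'c': 24, 'd': 102}]
started = lcedf_step(0, rq, [{'r': 6, 'c': 17, 'd': 39}], 1, [])
```

The postponement at $t=0$ is exactly what vanilla EDF cannot do without knowing the release time of ${J}_{2}^{1}$.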
In the following examples, we show that the task sets with the processor platforms in Examples 1 and 2 are schedulable by the LCEDF algorithm.
Example 3. Consider the task set with the processor platform shown in Example 1; that is, ${\tau}_{1}({T}_{1}=102,{C}_{1}=24,{D}_{1}=102)$ and ${\tau}_{2}({T}_{2}=33,{C}_{2}=17,{D}_{2}=33)$ are scheduled on a uniprocessor platform. We first categorize each task into ${\tau}^{\mathsf{A}}$ or ${\tau}^{\mathsf{B}}$. When we calculate $({D}_{2}-{C}_{2}+1)$ of ${\tau}_{2}$, we get 17; since there is one task whose execution time is larger than 17 (namely, ${\tau}_{1}$), ${\tau}_{2}$ belongs to ${\tau}^{\mathsf{A}}$. Similarly, $({D}_{1}-{C}_{1}+1)$ of ${\tau}_{1}$ is 79, which is no smaller than ${C}_{2}=17$; therefore, ${\tau}_{1}$ belongs to ${\tau}^{\mathsf{B}}$.
Consider the following scenario, the same as in Example 1: (i) the interval of interest is $[0,47)$; and (ii) ${J}_{1}^{1}$ is released at $t=0$ and ${J}_{2}^{1}$ is released at $t=6$. Since we categorized ${\tau}_{2}$ as ${\tau}^{\mathsf{A}}$, we know at $t=0$ that ${J}_{2}^{1}$ will be released at $t=6$. At $t=0$, only ${J}_{1}^{1}$ is in the ready queue; according to Step 2, we examine whether an unoccupied processor will remain for ${J}_{2}^{1}$ in the critical queue. This is done by checking $t+{c}_{1}^{1}\le {d}_{2}^{1}-{c}_{2}^{1}$, i.e., $0+24\le 22$, which does not hold. We conclude that no processor would be unoccupied for ${J}_{2}^{1}$ if ${J}_{1}^{1}$ were executed. Hence, we postpone ${J}_{1}^{1}$ and execute ${J}_{2}^{1}$ at $t=6$; thus the task set is schedulable by LCEDF.
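The decisive comparison in this example is a single arithmetic check; written out with the example's numbers (variable names are ours):

```python
# Step-2 check of Example 3 at t = 0: can J1^1 finish by the latest
# start time of J2^1?  The test is  t + c_1^1 <= d_2^1 - c_2^1.
t, c1 = 0, 24            # J1^1's worst-case execution time
d2, c2 = 6 + 33, 17      # J2^1 released at 6, so d_2^1 = 39
case1_holds = (t + c1) <= (d2 - c2)
# 24 <= 22 is False, so LCEDF postpones J1^1, keeping the processor
# free for J2^1 at t = 6.
```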
Example 4. Consider the task set with the processor platform shown in Example 2; that is, ${\tau}_{1}({T}_{1}=202,{C}_{1}=22,{D}_{1}=202)$, ${\tau}_{2}({T}_{2}=312,{C}_{2}=17,{D}_{2}=312)$, and ${\tau}_{3}({T}_{3}=81,{C}_{3}=74,{D}_{3}=81)$ are scheduled on a two-processor platform. We first categorize each task into ${\tau}^{\mathsf{A}}$ or ${\tau}^{\mathsf{B}}$. If we calculate $({D}_{3}-{C}_{3}+1)$ of ${\tau}_{3}$, we get 8; since there are two tasks whose execution times are larger than 8 (namely, ${\tau}_{1}$ and ${\tau}_{2}$), ${\tau}_{3}$ belongs to ${\tau}^{\mathsf{A}}$. Similarly, ${\tau}_{1}$ and ${\tau}_{2}$ belong to ${\tau}^{\mathsf{B}}$.
Consider the following scenario, the same as in Example 2: (i) the interval of interest is $[0,93)$; and (ii) ${J}_{1}^{1}$ is released at $t=0$, while ${J}_{2}^{1}$ and ${J}_{3}^{1}$ are released at $t=6$ and $t=12$, respectively. Since we categorized ${\tau}_{3}$ as ${\tau}^{\mathsf{A}}$, we know at $t=0$ that ${J}_{3}^{1}$ will be released at $t=12$. At $t=0$, only ${J}_{1}^{1}$ is in the ready queue; according to Step 2, we examine whether an unoccupied processor will remain for ${J}_{3}^{1}$ in the critical queue. Since there are two unoccupied processors and only one job in the ready queue, we do not postpone the execution of ${J}_{1}^{1}$. At $t=6$, only ${J}_{2}^{1}$ is in the ready queue; according to Step 2, we again examine whether an unoccupied processor will remain for ${J}_{3}^{1}$. This is done by checking $t+{c}_{2}^{1}\le {d}_{3}^{1}-{c}_{3}^{1}$, i.e., $6+17\le 19$, which does not hold. We conclude that no processor would be unoccupied for ${J}_{3}^{1}$ if ${J}_{2}^{1}$ were executed. Hence, we postpone ${J}_{2}^{1}$ and execute ${J}_{3}^{1}$ at $t=12$; thus the task set is schedulable by LCEDF.
We now discuss the time complexity of the LCEDF algorithm itself. Algorithm 1 takes $O({n}^{2})$ time, where $n$ is the number of tasks in $\tau$. That is, the number of jobs in ${\tau}^{\mathsf{A}}$ is upper-bounded by $n$ (see the number of iterations of the loop in Line 9), and the numbers of iterations of the loops in Lines 17, 24 and 30 are also upper-bounded by $n$. In addition, it takes $O(n\cdot \log n)$ to sort jobs according to EDF. Therefore, the total is $O({n}^{2})+O(n\cdot \log n)=O({n}^{2})$. Note that LCEDF performs Algorithm 1 only upon a job release, a job completion, or processor idling.