Article

Scheduling Large-Size Identical Parallel Machines with Single Server Using a Novel Heuristic-Guided Genetic Algorithm (DAS/GA) Approach

1 Industrial Engineering Department, School of Applied Technical Sciences, German Jordanian University, Amman 11180, Jordan
2 Mechanical, Industrial & Manufacturing Engineering, Youngstown State University, Youngstown, OH 44555, USA
3 Mechanical and Maintenance Engineering Department, School of Applied Technical Sciences, German Jordanian University, Amman 11180, Jordan
* Author to whom correspondence should be addressed.
Processes 2022, 10(10), 2071; https://doi.org/10.3390/pr10102071
Submission received: 13 September 2022 / Revised: 6 October 2022 / Accepted: 11 October 2022 / Published: 13 October 2022

Abstract

Parallel Machine Scheduling (PMS) is a well-known problem in modern manufacturing. It is an optimization problem aiming to schedule n jobs on m machines while fulfilling certain practical requirements, such as minimizing total tardiness. Traditional approaches, e.g., mixed integer programming and the Genetic Algorithm (GA), usually fail, particularly on large-size PMS problems, due to computational time and/or memory burden and the large search space required, respectively. This work aims to overcome such challenges by proposing a heuristic-based GA (DAS/GA). Specifically, a large-scale PMS problem with n independent jobs and m identical machines with a single server is studied. Individual heuristic algorithms (DAS) and a GA are used as benchmarks to verify the performance of the proposed combined DAS/GA on 18 benchmark problems established to cover small, medium, and large PMS problems, with respect to standard performance metrics from the literature and a new metric proposed in this work (standardized overall total tardiness). Computational experiments showed that the heuristic part (DAS-h) of the proposed algorithm significantly enhanced the performance of the GA for large-size problems. The results indicated that the proposed algorithm should only be used for large-scale PMS problems, because DAS-h trapped the GA in a region of local optima, limiting its capabilities on small- and especially medium-sized problems.

1. Introduction

In modern manufacturing, the Parallel Machine Scheduling (PMS) problem amounts to scheduling several jobs using various identical machines while fulfilling specific practical requirements, such as minimum total tardiness while executing the jobs [1,2,3,4,5]. Thus, the PMS problem can be formulated as an NP-hard optimization problem that requires sophisticated optimization techniques for scheduling the jobs using the available machines while satisfying some practical constraints [6,7,8,9,10].
Many algorithms have been proposed in the literature to deal with the PMS problem. According to refs. [11,12,13,14,15], the algorithms used in PMS can be globally divided into two main groups: construction and improvement or interchange algorithms. The construction algorithms choose one job at a time and fix it in the available position using dispatching rules. One of the well-known construction rules is the Apparent Tardiness Cost (ATC) rule [16]. Several scholars have used modified versions of ATC to minimize the total tardiness in PMS [17,18,19,20,21,22,23].
On the other hand, the improvement or interchange algorithms work on an initial solution and use local interchanges to improve the solution. Several scholars have used meta-heuristics as improvement algorithms, such as Genetic Algorithms (GAs). GAs have been heavily used as improvement algorithms for PMS problems [24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40]. Simulated Annealing (SA) [41,42], Ant Colony (AC) [43,44,45], Non-dominated Sorting Genetic Algorithm-II (NSGA-II) [7,46,47,48], NSGA-III [1,49,50], and Non-dominated Ranking GA (NRGA) [51] have also been used to optimize PMS as improving algorithms.
For example, Wang et al. [31] proposed modified versions of two meta-heuristics algorithms (GA and SA) to obtain an approximate solution to large-scale PMS problems with unrelated parallel machines. Sharma et al. [52] proposed and evaluated the effectiveness of a multi-step crossover GA on randomly generated PMS problems of different sizes using identical parallel machines. Laha et al. [42] proposed a SA meta-heuristics algorithm for minimizing makespan in identical PMS problems. Jia et al. [45] proposed a fuzzy AC optimization algorithm to obtain better solutions within a convenient time for fuzzy scheduling problems (i.e., jobs characterized by fuzzy processing time) on parallel batch machines with different capacities. Farmand et al. [8] investigated the effectiveness of two meta-heuristic algorithms (Multi-Objective Particle Swarm Optimization (MOPSO) and NSGA-II) on small, medium, and large-scale PMS problems with identical parallel machines.
In this paper, we consider a large-size identical parallel machine scheduling problem P_m with n independent jobs and m identical machines with a single server. Traditional computational algorithms, such as Mixed Integer Programming (MIP) and GA, usually fail or perform badly on such large-size problems due to computational time and/or memory limitations and the large search space required, respectively. A meta-heuristic algorithm, DAS/GA, is developed in this paper to overcome these limitations while further boosting the performance of the GA. The problem can be described as follows: given a set of n independent jobs J = {J_1, J_2, …, J_n} and a set of m identical machines M = {M_1, M_2, …, M_m}, the objective is to sequence the jobs on the machines so as to minimize the total tardiness Σ_i T_i of the schedule. Each job J_i is associated with a processing time PT_i and a due date DD_i, with an independent setup time ST_si that is included in PT_i. One server, S_1, dispatches the jobs to the available machines according to the dispatching rules in the algorithm. The standard notation for this problem is P_m, S_1 | ST_si | Σ_i T_i according to ref. [53], and it is known to be an NP-hard problem [54]. Individual heuristic algorithms (DAS) and a GA are used as benchmarks to assess the effectiveness of the proposed combined DAS/GA algorithm on 18 benchmark problems selected to represent small, medium, and large PMS problems.
The assumptions made in this article are as follows: the jobs are independent; each job has a single operation; the processing times include the corresponding setup times, and the setup times are independent of the sequence; machines are identical and have 100% availability and utilization while jobs are waiting; no preemption, no cancelation, and no priority for jobs are allowed; all of the jobs are available at time zero, and the problem is static and deterministic.
The rest of the article is organized as follows: Section 2 presents the framework of the proposed heuristic-guided GA; Section 3 discusses the performance measures used to evaluate the effectiveness of the proposed algorithm to the benchmarks; Section 4 explains the data generation method for the experimentation problems setup; Section 5 discusses the results of the application; and finally, Section 6 concludes the article and highlights future work.

2. The Framework of the Proposed Heuristic-Guided Genetic Algorithm (DAS/GA)

The meta-heuristic algorithm DAS/GA proposed in this article consists of the heuristic DAS-h followed by a GA. DAS-h itself consists of three heuristics working sequentially: the Due Date Tightness heuristic (DDT-h), the Apparent Tardiness Cost heuristic (ATC-h), and the Swap heuristic (S-h). Figure 1 shows the flowchart of the proposed DAS/GA.
DAS/GA starts with the construction heuristic (DDT-h), which assigns jobs to different machines based on their due dates’ tightness index values. Then, the produced schedule is fed into the modified version of the ATC heuristic (ATC-h) to further improve the schedule. The produced schedule is fed into a third improvement heuristic (S-h), which will fine-adjust the schedule. The output schedule of S-h is used as one chromosome (i.e., schedule) to seed the initial population in the GA. The heuristics and the GA comprising DAS/GA meta-heuristic are discussed in detail in the following sections.

2.1. Due Date Tightness Heuristic (DDT-h)

In DDT-h, jobs are assigned to different machines based on their Due Date Tightness index (I_i^DD) values. This index is calculated by Equation (1), in which DD_i and PT_i are the due date and the processing time of job i, respectively. DDT-h sorts the jobs in ascending order of their I_i^DD values, such that a smaller I_i^DD value indicates a higher-priority job.
I_i^DD = (DD_i − PT_i) / PT_i (1)
To illustrate how the DDT-h works, consider 2 machines and 10 jobs along with their processing times and due dates, as shown in Table 1. Their I i D D values were calculated according to Equation (1) and are recorded in Table 1.
The 10 jobs are sorted in ascending order based on their   I i D D values, as reported in Table 2.
After the jobs are sorted, the job with the smallest I_i^DD value is dispatched as the first job on the first machine, the job with the second-smallest I_i^DD value as the first job on the second machine, the job with the third-smallest I_i^DD value as the second job on the first machine, and so on. The resulting schedule is summarized in Table 3.
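The dispatch procedure above can be sketched in a few lines of Python (a hypothetical helper, not the authors' implementation; the job-tuple layout and the round-robin dealing of sorted jobs across machines are assumptions based on the description above):

```python
def ddt_schedule(jobs, m):
    """Sketch of DDT-h: rank jobs by due-date tightness and deal them
    round-robin across m machines.

    jobs: list of (job_id, processing_time, due_date) tuples.
    Returns a list of m job-id lists, one schedule per machine.
    """
    # I_i^DD = (DD_i - PT_i) / PT_i  (Equation (1)); smaller = higher priority
    ranked = sorted(jobs, key=lambda j: (j[2] - j[1]) / j[1])
    machines = [[] for _ in range(m)]
    for pos, (jid, pt, dd) in enumerate(ranked):
        machines[pos % m].append(jid)  # round-robin dispatch over machines
    return machines
```

For instance, three jobs with indices 0.1, 0.5, and 5.0 on two machines would be dealt out as two jobs on machine 1 and one on machine 2, mirroring the Table 3 construction.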

2.2. Apparent Tardiness Cost Heuristic (ATC-h)

The Apparent Tardiness Cost heuristic (ATC-h) is an iterative heuristic that schedules jobs one at a time from the set of remaining available jobs. The heuristic uses a priority index I_i(t) that changes dynamically with time t. Variations of ATC-h are explained, among other references, in refs. [17,20]. We highlight the main aspects of ATC-h in this article for convenience and modify it to suit the P_m, S_1 | ST_si | Σ_i T_i problem at hand. The index I_i(t) is calculated by multiplying two terms, the Weighted Shortest Processing Time (WSPT) and the Least Slack (LS), as in Equation (2):
I_i(t) = WSPT × LS (2)
The Weighted Shortest Processing Time (WSPT) is given by Equation (3):
WSPT = 1 / PT_i, (3)
and the LS is given by Equation (4):
LS = exp( −max(DD_i − PT_i − CT_k, 0) / (ζ × μ_PT) ), (4)
where CT_k is the cumulative time for the predecessor job k, μ_PT is the mean of the processing times of the remaining jobs, and ζ is a look-ahead parameter determined through experimentation. According to ref. [17], ζ can be calculated as in Equation (5):
ζ = 1.2 × ln(n/m) − (max_i DD_i − min_i DD_i) / ((n/m) × μ_PT) (5)
Substituting Equations (3) and (4) into Equation (2) gives the final expression for I_i(t), as in Equation (6):
I_i(t) = (1/PT_i) × exp( −max(DD_i − PT_i − CT_k, 0) / (ζ × μ_PT) ), (6)
where ζ is given by Equation (5). It should be noted that the processing time of a job includes its setup time and that the setup time is independent of the sequence that contains the job, as stated in assumption 3 in this article. Hence, the equations reported in ref. [17], which assume sequence-dependent setup times in deriving I_i(t), are modified from their original form to Equations (4)–(6).
As seen in Equation (3), WSPT is the inverse of the processing time. This means that jobs with lower processing times have higher priority (provided everything else remains the same in LS). Moreover, Equation (4) shows that LS takes two kinds of values: the value 1 when the slack of the job is negative, which indicates that the job is tardy, or a value between 0 and 1 when the slack is non-negative, which indicates that the job is not tardy. This means that a tardy job has the highest possible LS value and consequently the highest possible I_i(t) value among jobs with equal processing times.
Every time a machine becomes available, the priority index values of all the remaining jobs are re-calculated, and the job with the highest I_i(t) value is chosen to be processed next. The decision is dynamic, as the I_i(t) value of a job changes over time because it depends on CT_k, the cumulative time of its predecessor.
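Under the equations above, the ATC-h dispatching loop might look as follows. This is a minimal Python sketch, not the authors' code: the fixed `zeta` default, always serving the least-loaded machine next, and arbitrary tie-breaking are all simplifying assumptions (in the article, ζ comes from Equation (5) and the heuristic works on an initial schedule):

```python
import math

def atc_schedule(jobs, m, zeta=2.0):
    """Sketch of ATC-h for identical machines with setups folded into PT_i.

    jobs: dict job_id -> (PT, DD). Returns per-machine job sequences.
    """
    remaining = dict(jobs)
    machines = [[] for _ in range(m)]
    loads = [0.0] * m  # CT_k per machine (cumulative time of predecessors)
    while remaining:
        k = loads.index(min(loads))  # next machine to become available
        mu_pt = sum(pt for pt, _ in remaining.values()) / len(remaining)
        # I_i(t) = (1/PT_i) * exp(-max(DD_i - PT_i - CT_k, 0)/(zeta*mu_PT))
        best = max(
            remaining,
            key=lambda i: (1.0 / remaining[i][0])
            * math.exp(-max(remaining[i][1] - remaining[i][0] - loads[k], 0)
                       / (zeta * mu_pt)))
        pt, _ = remaining.pop(best)
        machines[k].append(best)
        loads[k] += pt
    return machines
```

Note how a tardy job (zero or negative slack) gets LS = 1 and is therefore preferred over non-tardy jobs with the same processing time, exactly as discussed above.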
The ATC-h needs an initial schedule to work on. This matter makes the final schedule for ATC-h dependent on the initial schedule provided. Table 4 shows a possible initial schedule for ATC-h.
Table 5 shows the I i t values generated for the data in Table 1 based on the initial schedule provided in Table 4.
Table 6 shows the schedule produced using ATC-h.
Figure 2 shows the standardized values of I_i(t) and I_i^DD for the 10 jobs described in Table 1. In ATC-h, as the DD_i − PT_i value, the Due Date Tightness (DDT), increases, the I_i(t) value decreases and consequently the priority of the job decreases, while in DDT-h, as the DDT value increases, the I_i^DD value increases and the priority of the job likewise decreases.
The figure shows that the two heuristics have the same basic behavior for jobs with small DDT values, like jobs 2, 3, and 7, but fundamentally different behavior for jobs with large DDT values, like jobs 4 and 8. This difference arises because ATC-h takes into account the cumulative processing time of the predecessor, CT_k, in calculating LS, while DDT-h does not. The effect of CT_k on the I_i(t) values in ATC-h is evident in jobs 2 and 8, which have very close I_i(t) values even though their DDT values differ greatly.

2.3. Swap Heuristic (S-h)

The Swap Heuristic (S-h) fine-adjusts the schedule by removing one job (with positive tardiness) at a time from a tardy machine and assigning it to the machine with the lowest total tardiness, thereby leveling the load on the machines. As a fine-adjusting heuristic, it is meant to be applied after ATC-h, which is a coarse-adjusting heuristic. S-h is similar to ATC-h in that it needs an initial schedule and uses cumulative times in its calculations.
The pseudo-code for this heuristic is as follows (Algorithm 1):
Algorithm 1: The pseudo-code of the Swap Heuristic (S-h)
1:  Input the schedule from ATC-h
2:  Calculate the tardiness T_j for each machine in the machine set M
3:  Determine the machine M_L with the lowest T_j
4:  Determine the set of the remaining machines M_o = {x ∈ M | x ≠ M_L}
5:  Determine the set of jobs J_L on M_L
6:  Calculate C_tl = Σ_{i ∈ J_L} PT_i
7:  DO WHILE the overall total tardiness TT is improving and the termination condition is not reached
8:      FOR each machine indexed j in M_o
9:          FOR each job indexed i on M_oj
10:             Calculate the cumulative processing time C_tji for job i on M_oj
11:             IF C_tl + PT_i < C_tji
12:                 Remove job i from M_oj and assign it to the end of the schedule for M_L
13:                 Update T_j, M_L, J_L, M_o, and C_tl
14:                 Go to the DO WHILE loop
15:             END IF
16:         END FOR
17:     END FOR
18: END DO WHILE
19: Report the new schedule
The heuristic chooses the job to be moved according to Inequality (7):
C_tl + PT_i < C_tji (7)
Inequality (7) gives the condition under which moving a tardy job from a machine in M_o to the end of the schedule for M_L improves the overall tardiness of the schedule. The left-hand side of the inequality is a quasi-representation of the new tardiness value of the moved job i on the new machine, while the right-hand side is a quasi-representation of its tardiness value on its original machine. If the left-hand side is less than the right-hand side, then removing job i from its current machine M_oj and assigning it to M_L is guaranteed to reduce the overall total tardiness of the schedule by the difference between the two sides.
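A minimal Python rendering of Algorithm 1 under these rules might look as follows. This is an illustrative sketch, not the authors' code: tardiness bookkeeping is simplified to the cumulative-time comparison of Inequality (7), the target machine is chosen by lowest load rather than lowest tardiness, and termination is simply "no improving move found":

```python
def swap_heuristic(schedule, pt):
    """Sketch of S-h: repeatedly move one job to the least-loaded machine
    whenever Inequality (7), C_tl + PT_i < C_tji, holds.

    schedule: list of per-machine job-id lists; pt: dict job_id -> PT.
    Mutates and returns schedule.
    """
    def completion(machine):
        # Cumulative completion time C_tji of each job on one machine
        t, out = 0, {}
        for j in machine:
            t += pt[j]
            out[j] = t
        return out

    improved = True
    while improved:
        improved = False
        loads = [sum(pt[j] for j in mch) for mch in schedule]
        low = loads.index(min(loads))  # stands in for M_L
        ctl = loads[low]               # C_tl
        for k, mch in enumerate(schedule):
            if k == low:
                continue
            ct = completion(mch)
            for j in list(mch):
                if ctl + pt[j] < ct[j]:  # Inequality (7)
                    mch.remove(j)
                    schedule[low].append(j)  # append to end of M_L's schedule
                    improved = True
                    break  # restart the DO WHILE loop, as in Algorithm 1
            if improved:
                break
    return schedule
```

Each accepted move strictly lowers the moved job's completion time, so with integer processing times the loop terminates.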
To illustrate the effect of this heuristic, consider the data in Table 7 and the corresponding proposed schedule in Table 8.
The total tardiness of machine 1 is 7802 and that of machine 2 is 1934, so the overall total tardiness of the schedule is 9736. The cumulative processing time of machine 2, which is M_L in this case, is 2434; hence, feeding the data of job 10 into Inequality (7) gives a true inequality, as shown below:
(2434 + 224) − 373 < 2674 − 373 ⟹ 2285 < 2301
Thus, removing job 10 from machine 1 and assigning it at the end of the schedule for machine 2 should improve the overall total tardiness for the schedule by 2301 − 2285 = 16. In fact, removing job 10 from machine 1 and assigning it at the end of the schedule for machine 2 gives an overall total tardiness of 9720, which is 16 units less than the total tardiness of the original schedule, with total tardiness for machine 1 of 5501 and total tardiness for machine 2 of 4219. It should be noticed here how S-h levels the load on the machines and enhances the overall total tardiness of the schedule.

2.4. Genetic Algorithm (GA)

The GA has been used intensively in PMS with the aim of minimizing the overall total tardiness of the schedule [25,33,34,35]. GA has two kinds of operations: the genetic operation and the evolution operation. Crossover and mutation in GA belong to the genetic operation, while the selection mechanism belongs to the evolution operation [55]. The crossover operator in GA aims to roughly search the solution space while the mutation operator aims to finely search the solution space by exploiting the promising areas found by the crossover operator [56].
Chromosome representation, mutation, and selection strategies should be tailored to fit the problem at hand. These strategies are explained in the following sections.

2.4.1. Chromosome Representation

In the proposed GA, the chromosome consists of m rows, one row per machine, and n − m + 1 columns (positions) per row, since each machine must have at least one job. In this representation, gene ξ_ij = k means that the i-th position on the j-th machine is occupied by job k. Each gene therefore carries three pieces of information: the value of the gene represents the job, while the position of the gene, determined by the i and j indices, identifies the machine and the position on that machine. This representation ensures the feasibility of the chromosomes and offspring and thus avoids the need for any repair actions.
Consider the data used in Table 1. Table 9 shows one possible chromosome for these data. The phenotype can be easily retrieved from the genotype in this representation: gene ξ_22 = 5 means that job 5 is the second job to be processed on machine 2, while ξ_71 = 0 means that there are no jobs processed in position 7 or beyond on machine 1.
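A small helper makes this genotype-to-phenotype mapping concrete (an illustrative sketch; the row-per-machine list-of-lists encoding and the use of 0 as the empty-position marker follow the description above):

```python
def decode(chromosome):
    """Decode the m x (n-m+1) gene grid into per-machine job sequences.

    chromosome[j][i] = k means job k occupies the (i+1)-th position on
    machine j+1; 0 marks an empty position. Trailing zeros simply drop out,
    so every decoded schedule is feasible by construction.
    """
    return [[gene for gene in row if gene != 0] for row in chromosome]
```

Because any arrangement of job values over the grid decodes to a valid schedule, mutated offspring never need repair, as noted above.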

2.4.2. Fitness Function

In this GA, the fitness function used is the overall total tardiness of the schedule, as expressed in Equation (8):
TT = Σ_{j=1}^{m} Σ_{i = 1, …, n−m+1 : DD_ξij < S_ξij + PT_ξij} ( S_ξij + PT_ξij − DD_ξij ), (8)
where S_ξij, PT_ξij, and DD_ξij are the start time, processing time, and due date of gene ξ_ij, respectively.
The fitness function value for the chromosome in Table 9 is as follows:
Σ_{j=1}^{2} Σ_{i = 1, …, 5 : DD_ξij < S_ξij + PT_ξij} ( S_ξij + PT_ξij − DD_ξij ) = 7618 + 1011 = 8629.
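Equation (8) amounts to summing the positive lateness of every job, machine by machine. A Python sketch of this fitness evaluation (hypothetical helper, operating on decoded per-machine sequences rather than the raw gene grid) is:

```python
def total_tardiness(schedule, pt, dd):
    """Fitness of Equation (8): sum, over every machine, the lateness
    S + PT - DD of each tardy job, where S is the sum of the processing
    times of the job's predecessors on its machine.

    schedule: list of per-machine job-id lists; pt, dd: dicts job_id -> value.
    """
    tt = 0
    for machine in schedule:
        start = 0  # S of the first job on this machine is 0
        for j in machine:
            finish = start + pt[j]
            if finish > dd[j]:      # the DD < S + PT condition in Eq. (8)
                tt += finish - dd[j]
            start = finish          # jobs run back to back, no idle time
    return tt
```

This is the quantity the elitist selection in Section 2.4.4 minimizes.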

2.4.3. Mutation

This GA is a crossover-free GA in which only mutation is used to produce the offspring from a single parent. Four different mutation types are used to mutate the selected chromosome. The Two Genes Exchange mutation (TGEm) affects only two jobs on the same machine: it chooses two random jobs from a randomly selected machine and switches their positions. The Number of Jobs mutation (NoJm) changes the number of jobs on two randomly selected machines by moving randomly selected jobs from one randomly selected machine to another, provided that the minimum of one job per machine is conserved. NoJm imitates S-h but differs from it in two aspects: first, S-h moves one job each time it is applied, while NoJm may move more than one job; second, unlike S-h, NoJm does not guarantee that the change is beneficial. The Flip Ends mutation (FEm) flips the ends of the schedule for a randomly selected machine, so that the first job becomes the last job and the last job becomes the first. The Flip Middle mutation (FMm) flips the sequence of jobs between two randomly selected positions near the middle of the machine's schedule.
TGEm plays the role of the traditional mutation in a GA, generating a limited disturbance in the chromosome; hence, it performs a fine search. The other three mutation types play the role of crossover, as they introduce large disturbances in the chromosome; hence, they perform a coarse search. The offspring produced by these four mutation types are always feasible thanks to the chromosome representation discussed earlier; consequently, there is no need for any repair actions on the offspring. Moreover, this GA adopts a 25% mutation rate. This means that the number of offspring generated by mutation equals the population size, as each chromosome selected for mutation produces four offspring.
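Two of the four operators can be sketched as follows (illustrative Python, not the authors' code; the `rng` parameter and the copy-then-mutate behavior are implementation choices, and chromosomes are shown as decoded per-machine job lists rather than the fixed-width gene grid):

```python
import random

def tge_mutation(chrom, rng=random):
    """Sketch of TGEm: swap two random jobs on one random machine."""
    child = [row[:] for row in chrom]          # never mutate the parent
    j = rng.randrange(len(child))              # pick a machine
    if len(child[j]) >= 2:
        a, b = rng.sample(range(len(child[j])), 2)
        child[j][a], child[j][b] = child[j][b], child[j][a]
    return child

def fe_mutation(chrom, rng=random):
    """Sketch of FEm: swap the first and last job on one random machine."""
    child = [row[:] for row in chrom]
    j = rng.randrange(len(child))
    if len(child[j]) >= 2:
        child[j][0], child[j][-1] = child[j][-1], child[j][0]
    return child
```

NoJm and FMm follow the same copy-select-perturb pattern; since every operator only rearranges job values, the offspring remain feasible without repair.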

2.4.4. Selection

An elitist selection strategy is adopted in this GA. Under this strategy, all the parents and the offspring form a pool in which they have to compete for their survival. Those who have better fitness values will be selected as the parents of the next generation.

2.5. Mathematical Model

Binary programming for P_m, S_1 | ST_si | Σ_i T_i is formulated in studies such as refs. [57,58]. The objective function of this model is
TT = Σ_{k=1}^{m} Σ_{j=1}^{p} T_jk (9)
The constraints are
Σ_{j=1}^{p} Σ_{k=1}^{m} X_ijk = 1,  i = 1, …, n (10)
Σ_{i=1}^{n} X_ijk ≤ 1,  j = 1, …, p and k = 1, …, m (11)
C_jk = C_(j−1)k + Σ_{i=1}^{n} PT_i X_ijk,  j = 1, …, p, k = 1, …, m, with C_0k = 0 (12)
C_jk − Σ_{i=1}^{n} DD_i X_ijk − T_jk ≤ 0,  j = 1, …, p, k = 1, …, m (13)
T_jk ≥ 0 (14)
X_ijk binary (15)
In this mathematical model, the indices i, j, and k refer to the job, the position, and the machine, respectively, and p is the number of positions on a machine. Moreover, X_ijk = 1 if job i is processed in position j on machine k and 0 otherwise; C_jk is the completion time of position j on machine k; and T_jk is a nonnegative value representing the tardiness of position j on machine k.
Equation (9) calculates the overall total tardiness of the schedule by summing the individual tardiness values for the positions on the machines. Equation (10) guarantees that each job is assigned only once, while Equation (11) guarantees that each position on each machine, if occupied, will be occupied only once. Equation (12) calculates the cumulative processing time for the job, and Equations (13) and (14) together dictate that if the position on the machine is occupied by a tardy job, then the tardiness of the position equals the tardiness of that job; otherwise, the tardiness of the position and consequently the tardiness of the job is zero.
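For tiny instances, the optimum that this binary program computes can be cross-checked by brute force: given a fixed assignment of jobs to machines, the machines decouple, so it suffices to enumerate all assignments and, on each machine, all job orders. The sketch below is illustrative only (the article itself solves the model with CPLEX), and it is exponential in n by design:

```python
from itertools import permutations, product

def machine_tt(seq, pt, dd):
    """Total tardiness of one fixed job sequence on a single machine."""
    t, tt = 0, 0
    for j in seq:
        t += pt[j]
        tt += max(t - dd[j], 0)
    return tt

def optimal_tt(jobs, pt, dd, m):
    """Brute-force reference value of Equations (9)-(15) for tiny instances.

    Enumerates every job-to-machine assignment (m^n of them) and, for each
    machine, the best ordering of its assigned jobs.
    """
    best = float("inf")
    for assign in product(range(m), repeat=len(jobs)):
        tt = 0
        for k in range(m):
            group = [j for j, a in zip(jobs, assign) if a == k]
            tt += min(machine_tt(p, pt, dd) for p in permutations(group))
        best = min(best, tt)
    return best
```

Such a reference is only practical up to a handful of jobs, which is precisely why the article needs CPLEX for small problems and heuristics beyond that.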

3. Performance Measures

The performance of DAS-h, DAS/GA, and GA was measured in this article using two performance measures suggested by ref. [17], the Relative Error (RE) and the Average Relative Improvement (ARI), and a third performance measure suggested in this article, the standardized overall total tardiness (StdrdTT).
RE is the difference between the tardiness of the method used, TT_h, and the tardiness of the optimal schedule, TT_op, found by the binary programming model discussed in Section 2.5 using CPLEX software, relative to TT_op. Mathematically, RE is given by Equation (16). It should be noted that this measure can only be used when TT_op > 0. If TT_op could not be found due to memory limitations, or if TT_op = 0, TT_op is substituted by TT_Best, which is the best TT found among the different methods used to solve the problem.
RE = (TT_h − TT_op) / TT_op (16)
ARI is used only with GA and DAS/GA to measure their tardiness, TT_GA|DAS/GA, with respect to the tardiness of DAS-h, TT_DAS-h. Mathematically, ARI is given by Equation (17):
ARI = TT_GA|DAS/GA / TT_DAS-h (17)
StdrdTT is the deviation between the overall total tardiness of a method and the minimum overall total tardiness among the methods used, divided by that minimum. Mathematically, StdrdTT is given by Equation (18):
StdrdTT_H = ( TT_H − min{TT_DAS-h, TT_DAS/GA, TT_GA} ) / min{TT_DAS-h, TT_DAS/GA, TT_GA},  H ∈ {DAS-h, DAS/GA, GA} (18)
It should be noted that RE and StdrdTT are the same for cases where TT_op = 0.
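The three measures can be computed together in a few lines (a sketch under the definitions above; the dictionary keys, the ratio form of ARI, and the fallback to the best TT when no positive optimum is available are how this sketch reads Equations (16)–(18)):

```python
def performance_measures(tt, tt_op=None):
    """Sketch of Equations (16)-(18).

    tt: dict {"DAS-h": ..., "DAS/GA": ..., "GA": ...} of total tardiness.
    tt_op: the CPLEX optimum when available and > 0; otherwise the best TT
    found among the methods stands in for it (as stated for RE above).
    Returns (RE, ARI, StdrdTT) dicts.
    """
    ref = tt_op if tt_op else min(tt.values())           # TT_op or TT_Best
    re = {h: (v - ref) / ref for h, v in tt.items()}     # Equation (16)
    ari = {h: tt[h] / tt["DAS-h"] for h in ("GA", "DAS/GA")}  # Equation (17)
    best = min(tt.values())
    stdrd = {h: (v - best) / best for h, v in tt.items()}     # Equation (18)
    return re, ari, stdrd
```

With this reading, an ARI near 1 for DAS/GA means its tardiness is essentially that of DAS-h, which is exactly the diagnostic used in Section 5.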

4. Data Generation for Experiments

In this section, 18 benchmark problems are considered for experimentation. The problem instances were generated to cover small, medium, and large problems so as to study the performance of the different methods, ATC-h, DAS, DAS/GA, and GA, in each case. The average of 20 replicates was used in calculating the performance of GA, DAS/GA, and ATC-h, as they require a random start-up schedule. The running time for DAS/GA and GA is only 1 min per replicate. The instances were generated according to Fisher's standard method as discussed in refs. [57,59]. The method starts by generating n integer processing times PT_i from a uniform distribution, PT_i ~ U(1, 100). The corresponding due dates DD_i are generated from another uniform distribution, DD_i ~ U( P(1 − τ − R/2)/m, P(1 − τ + R/2)/m ), where P = Σ_{i=1}^{n} PT_i, τ ∈ {0.2, 0.4, 0.6, 0.8, 1}, and R ∈ {0.2, 0.4, 0.6, 0.8, 1}. The template used for naming the instances is J_M_τ_R. For example, 2000_10_04_04 means scheduling 2000 jobs on 10 identical machines with due dates generated using τ = 0.4 and R = 0.4.
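This generation scheme can be sketched as follows (illustrative only; whether due dates are rounded to integers is not specified here, so the sketch leaves them real-valued, and the `rng` parameter is an implementation convenience):

```python
import random

def generate_instance(n, m, tau, R, rng=random):
    """Sketch of Fisher's instance generator as described above.

    PT_i ~ U(1, 100) (integer), then
    DD_i ~ U( P(1 - tau - R/2)/m, P(1 - tau + R/2)/m ), P = sum of PT_i.
    Returns (processing_times, due_dates) lists of length n.
    """
    pts = [rng.randint(1, 100) for _ in range(n)]
    P = sum(pts)
    lo = P * (1 - tau - R / 2) / m   # tighter end of the due-date window
    hi = P * (1 - tau + R / 2) / m   # looser end of the due-date window
    dds = [rng.uniform(lo, hi) for _ in range(n)]
    return pts, dds
```

Larger τ tightens all due dates (more tardiness), while larger R widens their spread, which is what the τ and R factors in the instance names control.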

5. Results and Experiments

Table 10 shows the performance measures for 18 problems generated according to Fisher’s standard method as discussed earlier. It should be noted that CPLEX software did not find the optimal solution, Opt.   T T , for problems beyond problem 11 due to memory limitations. This shows the importance of this work and other related works in solving big NP-hard PMS problems where binary programming fails due to technical issues.
Figure 3a shows a comparison between the performances of the different methods using the RE measure. Figure 3b shows that GA slightly outperformed DAS/GA for small problems and significantly for medium problems. On the other hand, for large problems, DAS/GA significantly outperformed GA. This means that DAS-h helped GA improve its performance significantly on large problems but deteriorated its performance on medium problems. Figure 3c shows that the trend between the performances of GA and DAS-h is the same as the trend captured in Figure 3b between GA and DAS/GA: GA slightly outperformed DAS-h for small problems and significantly for medium problems, while for large problems DAS-h significantly outperformed GA. From Figure 3b,c, one can see that the behavior of DAS-h masks the behavior of DAS/GA for large problems, as DAS/GA behaves the same as DAS-h relative to GA in this range of problem sizes. The comparison between DAS-h and DAS/GA shown in Figure 3d supports this argument: DAS/GA slightly outperformed DAS-h for small problems and significantly for medium problems, while the two methods had a negligible difference in performance for large problems.
The negligible difference in performance between DAS-h and DAS/GA shown in Figure 3d for large problems suggests that when combining DAS-h with GA to form the DAS/GA meta-heuristic, DAS/GA will be trapped in a region of good local optima created by DAS-h; consequently, the effect of GA will be negligible as it is trapped in this region.
In Figure 4, the standardized total tardiness values S t d r d T T H for the different methods were compared. Figure 4a shows the exact same trends as found in Figure 3a between the performances of the different methods but using the S t d r d T T performance measure. Figure 4b shows that GA slightly outperformed DAS/GA for small problems and significantly for medium problems. For large problems, DAS/GA significantly outperformed GA, which is the same as the trend in Figure 3b. Figure 4c shows the same trend found in Figure 3c between the performances of GA and DAS-h.
Figure 4d shows that DAS/GA outperformed DAS-h for small problems and significantly for medium problems; however, the methods had a negligible difference in their performance for large problems, which is the same trend as found in Figure 3d.
Figure 5 shows a comparison between DAS/GA and GA using the ARI measure. The figure reveals the same trend found earlier using the RE and StdrdTT measures in Figure 3b and Figure 4b, respectively: GA slightly outperformed DAS/GA for small problems and significantly for medium problems, while for large problems DAS/GA significantly outperformed GA. DAS-h helped GA improve its performance significantly on large problems but deteriorated its performance on small and especially medium problems, as the ARI values for DAS/GA for large problems are almost constant at approximately 1, while for small and especially medium problems the values are less than 1.
Moreover, looking at the A R I values for DAS/GA for large problems supports the argument made earlier: DAS-h creates a region of good local optima for large size problems that trap GA in it and renders the effect of GA negligible, and hence in this region DAS-h dictates the behavior of DAS/GA. The values of A R I support this argument, as the A R I values for large problems are almost 1, which indicates that the behavior of DAS/GA is very close to the behavior of DAS-h, and hence the effect of GA is almost negligible in DAS/GA compared to the effect of DAS-h.
Figure 6 compares the evolution line of DAS/GA and GA for problem 500_10_05_05. The Figure shows the same basic behavior for their evolution lines, except that DAS/GA’s evolution line, shown in Figure 6b, started from a better point and reached a better fitness value than GA. Moreover, the figure shows that GA had a higher number of enhancement points in its evolution line than DAS/GA. This observation agrees with what was noted earlier about the effect of the region of local optima that DAS-h introduces in DAS/GA.
To explore this observation more, Figure 7 shows a comparison between Average Number of Enhancements Per Replicate (ANEPR) for both meta-heuristics DAS/GA and GA. It is noted that as the problem size increased, the ANEPR for the GA method increased significantly, while for the DAS/GA method, the ANEPR was independent of the problem size, as the DAS/GA method showed a random relation between ANEPR and the problem size.
This shows that the strong initial solution DAS-h generates for large-size problems traps DAS/GA in a region of local optima and hence limits the number of enhancements made to that solution. As a result, the difference between the performances of DAS-h and DAS/GA becomes negligible, as revealed in Figure 3d and Figure 4d and by the DAS/GA curve in Figure 5, which supports the earlier argument that DAS-h dictates the behavior of DAS/GA on large-size problems.
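ANEPR can be made concrete: for each replicate, count the generations at which the best-so-far total tardiness strictly improves, then average over replicates. A minimal sketch with made-up evolution lines (the traces are illustrative, not the paper's data):

```python
def enhancements(best_fitness_trace):
    """Count the points at which the best-so-far fitness strictly
    improved along one replicate's evolution line (minimization)."""
    count, best = 0, best_fitness_trace[0]
    for f in best_fitness_trace[1:]:
        if f < best:
            best, count = f, count + 1
    return count

def anepr(traces):
    """Average Number of Enhancements Per Replicate across GA runs."""
    return sum(enhancements(t) for t in traces) / len(traces)

# Illustrative traces: plain GA keeps finding improvements, while
# DAS/GA starts near a DAS-h local optimum and barely improves.
ga_trace     = [900, 850, 850, 790, 760, 760, 720, 700]
das_ga_trace = [510, 505, 505, 505, 503, 503, 503, 503]
print(anepr([ga_trace]), anepr([das_ga_trace]))  # 5.0 2.0
```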
Figure 8 compares the performance of ATC-h and DAS-h. The figure shows that DAS-h outperformed ATC-h for most problems, whether small, medium, or large. Moreover, the largest difference between the two heuristics occurs for the medium-size problems, where DAS-h significantly outperformed ATC-h.
This superior performance of DAS-h can be linked to its structure: DDT-h first constructs a good initial feasible schedule; this schedule is then coarse-tuned by ATC-h, after which S-h fine-tunes it.
This superior performance of DAS-h on medium-size problems explains the strong deterioration of DAS/GA relative to GA in that range. DAS-h generates a region of local optima that traps DAS/GA, so DAS/GA's performance is limited to that of DAS-h. Unlike in large-size problems, in medium-size problems GA alone can reach better regions than the one DAS-h provides; therefore, GA alone outperforms DAS/GA, whose GA component is prevented by the DAS-h region from reaching those better regions.
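The three-stage structure described above amounts to a simple chaining of schedule transformations. A minimal sketch in which the stage internals are hypothetical placeholders and only the ordering, DDT-h then ATC-h then S-h, is taken from the text:

```python
def das_h(jobs, ddt_h, atc_h, s_h):
    """Sketch of the DAS-h stage ordering described in the text: DDT-h
    constructs an initial feasible schedule, ATC-h coarse-tunes it, and
    S-h fine-tunes the result. Stage internals here are placeholders."""
    schedule = ddt_h(jobs)             # initial feasible schedule
    schedule = atc_h(jobs, schedule)   # coarse-tuning pass
    schedule = s_h(jobs, schedule)     # fine-tuning pass
    return schedule

# Placeholder stages that just record the order in which they run:
trace = []
def make_stage(name):
    def stage(jobs, schedule=None):
        trace.append(name)
        return schedule if schedule is not None else []
    return stage

das_h([], make_stage("DDT-h"), make_stage("ATC-h"), make_stage("S-h"))
print(trace)  # ['DDT-h', 'ATC-h', 'S-h']
```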

6. Conclusions

This article proposed a heuristic-based Genetic Algorithm (DAS/GA) to minimize the overall total tardiness when scheduling large-size identical parallel machines with a single server. The results showed that DAS-h significantly enhanced the performance of GA for large-size problems, where MIP failed due to high execution time and/or memory limitations and GA alone performed poorly due to the large search space. The results also showed that the use of the proposed DAS/GA meta-heuristic should be limited to large-size problems: DAS-h restricted the capabilities of GA in small-size and especially medium-size problems, and MIP already performs well on small-size problems, rendering DAS/GA unnecessary in that range.
Future work will investigate existing meta-heuristic algorithms for scheduling large identical parallel machines with a single server.

Author Contributions

Conceptualization, M.A.-S., S.R., S.A.-D. and A.A.; methodology, M.A.-S. and S.R.; software, S.R. and S.A.-D.; validation, M.A.-S., S.R., S.A.-D. and A.A.; formal analysis, M.A.-S., S.R., S.A.-D. and A.A.; investigation, M.A.-S., S.R., S.A.-D. and A.A.; resources, M.A.-S., S.R., S.A.-D. and A.A.; data curation, S.R.; writing—original draft preparation, M.A.-S., S.R., S.A.-D. and A.A.; writing—review and editing, M.A.-S., S.R., S.A.-D. and A.A.; visualization, S.R.; supervision, M.A.-S.; project administration, M.A.-S.; funding acquisition, M.A.-S., S.R., S.A.-D. and A.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The procedure for generating the data presented in this work is stated in the manuscript; the data can also be requested from the corresponding author.

Acknowledgments

The authors are grateful for the reviewers' valuable comments, which helped enhance the quality of the manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

Figure 1. The flowchart of the proposed DAS/GA.
Figure 2. The standardized values of I_i^t and I_i^DD for data in Table 1.
Figure 3. Comparison between the performances of the different methods using RE measure: (a) DAS/GA vs. DAS and GA, (b) DAS/GA vs. GA, (c) DAS vs. GA, and (d) DAS/GA vs. DAS.
Figure 4. Comparison between the performances of the different methods using StdrdTT measure: (a) DAS/GA vs. DAS and GA, (b) DAS/GA vs. GA, (c) DAS vs. GA, and (d) DAS/GA vs. DAS.
Figure 5. Comparison between DAS/GA and GA using ARI measure.
Figure 6. Evolution lines for (a) DAS/GA and (b) GA for problem 500_10_05_05.
Figure 7. Bar graph for the number of enhancements for DAS/GA and GA.
Figure 8. Comparison between ATC-h and DAS-h performance.
Table 1. Data and I_i^DD values for the illustrative example.

Job i    1      2      3      4      5      6      7      8      9      10
PT_i     45     894    275    840    357    506    539    122    57     224
DD_i     65     906    321    1528   378    696    921    490    342    373
I_i^DD   0.444  0.013  0.167  0.819  0.059  0.380  0.060  0.634  0.331  0.665
Table 2. Sorted jobs based on I_i^DD values.

Job i    2      5      7      3      9      6      1      8      10     4
I_i^DD   0.013  0.059  0.060  0.167  0.331  0.380  0.444  0.634  0.665  0.819
Table 3. DDT-h schedule.

Position    1   2   3   4   5
Machine 1   2   7   9   1   10
Machine 2   5   3   6   8   4
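Tables 2 and 3 are consistent with a simple dispatch reading: dealing the I_i^DD-sorted jobs of Table 2 to the two machines in round-robin fashion reproduces the DDT-h schedule of Table 3. A sketch under that assumption (the actual DDT-h rule is defined earlier in the article):

```python
# Jobs sorted by ascending I_i^DD, as in Table 2:
sorted_jobs = [2, 5, 7, 3, 9, 6, 1, 8, 10, 4]

def round_robin(jobs, m):
    """Deal sorted jobs to m machines in turn (assumed dispatch rule)."""
    machines = [[] for _ in range(m)]
    for pos, job in enumerate(jobs):
        machines[pos % m].append(job)
    return machines

m1, m2 = round_robin(sorted_jobs, 2)
print(m1)  # [2, 7, 9, 1, 10] -> Machine 1 row of Table 3
print(m2)  # [5, 3, 6, 8, 4]  -> Machine 2 row of Table 3
```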
Table 4. Initial schedule for ATC-h.

Position    1    2   3   4   5
Machine 1   4    5   6   1   2
Machine 2   10   9   7   8   3
Table 5. I_i^t values for data in Table 1 and schedule in Table 4.

Machine 1
Iteration   Job 4    Job 5    Job 6    Job 1    Job 2    Remarks
1           0.0003   0.0027   0.0193   0.0214   0.0011   Job 1 first
2           0.0004   0.0028   0.0200   −1       0.0011   Job 6 second
3           0.0004   0.0028   −1       −1       0.0011   Job 5 third
4           0.0004   −1       −1       −1       0.0011   Job 2 fourth
5           0.0004   −1       −1       −1       −1       Job 4 fifth

Machine 2
Iteration   Job 10   Job 9    Job 7    Job 8    Job 3    Remarks
1           0.0036   0.0034   0.0014   0.0005   0.0034   Job 10 first
2           −1       0.0039   0.0015   0.0006   0.0036   Job 9 second
3           −1       −1       0.0015   0.0009   0.0036   Job 3 third
4           −1       −1       0.0015   0.0011   −1       Job 7 fourth
5           −1       −1       −1       0.001    −1       Job 8 fifth
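The Remarks column of Table 5 can be reproduced mechanically: at each iteration ATC-h schedules the job whose I_i^t index is largest, and already-scheduled jobs are marked with −1. A small sketch over the Machine 1 values (the selection rule is our reading of the table's pattern):

```python
# Machine 1 of Table 5: columns are jobs [4, 5, 6, 1, 2]; each row holds
# the I_i^t values at one iteration, with -1 marking scheduled jobs.
jobs = [4, 5, 6, 1, 2]
iterations = [
    [0.0003, 0.0027, 0.0193, 0.0214, 0.0011],
    [0.0004, 0.0028, 0.0200, -1,     0.0011],
    [0.0004, 0.0028, -1,     -1,     0.0011],
    [0.0004, -1,     -1,     -1,     0.0011],
    [0.0004, -1,     -1,     -1,     -1    ],
]

# At each iteration, schedule the job with the largest index value:
order = [jobs[row.index(max(row))] for row in iterations]
print(order)  # [1, 6, 5, 2, 4] -- matches the Remarks column and Table 6
```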
Table 6. ATC-h schedule.

Position    1    2   3   4   5
Machine 1   1    6   5   2   4
Machine 2   10   9   3   7   8
Table 7. Process times and due dates for Table 8.

Job i   1     2      3     4      5     6     7     8     9     10
PT_i    100   1150   275   840    357   507   509   124   50    224
DD_i    165   1200   321   1528   378   696   921   490   342   373
Table 8. The proposed schedule.

Position    1   2   3   4   5
Machine 1   2   7   9   1   10
Machine 2   5   3   6   8   4
Table 9. One possible chromosome for data in Table 1.

Position    1    2   3   4   5   6   7   8   9
Machine 1   2    7   6   3   1   9   0   0   0
Machine 2   10   5   8   4   0   0   0   0   0
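Table 9 suggests a fixed-length encoding: each machine row is padded with zeros to n − m + 1 = 9 positions so that any feasible split of the 10 jobs over the 2 machines fits the same chromosome shape. A sketch of decoding such a chromosome (the padding interpretation is our reading of the table, not a statement from this excerpt):

```python
# One possible chromosome for the 10-job, 2-machine example (Table 9).
# Each machine row is padded with zeros to n - m + 1 = 9 positions.
n, m = 10, 2
chromosome = [
    [2, 7, 6, 3, 1, 9, 0, 0, 0],   # Machine 1
    [10, 5, 8, 4, 0, 0, 0, 0, 0],  # Machine 2
]

def decode(chromosome):
    """Drop the zero padding to recover each machine's job sequence."""
    return [[job for job in row if job != 0] for row in chromosome]

sequences = decode(chromosome)
print(sequences)  # [[2, 7, 6, 3, 1, 9], [10, 5, 8, 4]]
# Sanity check: every job appears exactly once across both machines.
assert sorted(j for row in sequences for j in row) == list(range(1, n + 1))
```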
Table 10. Results for the 18 problems used in the experimentation.

                              DAS              DAS/GA                      GA
#    Problem        Opt. TT   TT       RE      TT        RE      ARI      TT         RE      ARI
1    20_02_02_02    147       162      0.1020  151.8     0.0327  0.9370   151.7      0.0320  0.9364
2    20_05_02_02    149       195      0.3087  161.7     0.0852  0.8292   160.1      0.0745  0.8210
3    20_10_10_10    195       246      0.2615  226.0     0.1590  0.9187   218.1      0.1185  0.8866
4    30_02_02_02    83        188      1.2651  83.0      0.0000  0.4415   83.0       0.0000  0.4415
5    30_10_02_02    101       787      6.7921  171.4     0.6970  0.2178   170.7      0.6901  0.2169
6    40_02_02_02    13        26       1.0000  13.0      0.0000  0.5000   13.0       0.0000  0.5000
7    40_05_02_02    68        166      1.4412  102.8     0.5118  0.6193   85.6       0.2588  0.5157
8    40_10_02_02    0         0        NA      0.0       NA      NA       0.0        NA      NA
9    50_05_02_02    49        188      2.8367  104       1.1224  0.5532   102.2      1.0857  0.5436
10   100_02_02_02   25        141      4.6400  25        0.0000  0.1773   25.0       0.0000  0.1773
11   100_5_02_02    86        240      1.7907  140.7     0.6360  0.5863   104.4      0.2140  0.4350
12   100_10_02_02   NA        195      0.0285  189.6     0.0000  0.9723   190.0      0.0021  0.9744
13   300_15_06_06   NA        28164    0.0084  27929     0.0000  0.9917   41182.0    0.4745  1.4622
14   500_10_05_05   NA        50428    0.0041  50220     0.0000  0.9959   74571.0    0.4849  1.4788
15   750_15_04_04   NA        47727    0.0004  47707     0.0000  0.9996   73344.0    0.5374  1.5367
16   1000_10_06_06  NA        355500   0.0003  355396    0.0000  0.9997   590010.1   0.6601  1.6597
17   1500_15_04_04  NA        168559   0.0011  168378.7  0.0000  0.9989   274604.3   0.6309  1.6291
18   2000_10_04_04  NA        399815   0.0004  399673    0.0000  0.9996   649413.5   0.6249  1.6243
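For the problems where MIP found the optimum, the RE columns of Table 10 are consistent with a relative-excess definition, RE = (TT_method − TT_opt) / TT_opt; for the rows with Opt. TT = NA, RE is presumably taken against the best value found instead. A sketch under that assumption, using problem 2 from the table:

```python
# Sketch of the RE column in Table 10, assuming relative excess tardiness
# RE = (TT_method - TT_opt) / TT_opt for problems with a known optimum.
def relative_error(tt_method: float, tt_opt: float) -> float:
    return (tt_method - tt_opt) / tt_opt

# Problem 2 (20_05_02_02): Opt. TT = 149, DAS TT = 195, DAS/GA TT = 161.7.
print(round(relative_error(195, 149), 4))    # 0.3087
print(round(relative_error(161.7, 149), 4))  # 0.0852
```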
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

MDPI and ACS Style

Abu-Shams, M.; Ramadan, S.; Al-Dahidi, S.; Abdallah, A. Scheduling Large-Size Identical Parallel Machines with Single Server Using a Novel Heuristic-Guided Genetic Algorithm (DAS/GA) Approach. Processes 2022, 10, 2071. https://doi.org/10.3390/pr10102071
