Article

A Genetic Hyper-Heuristic for an Order Scheduling Problem with Two Scenario-Dependent Parameters in a Parallel-Machine Environment

1 Department of Computer Science and Information Engineering, Cheng Shiu University, Kaohsiung City 83347, Taiwan
2 College of Information Science and Engineering, Northeastern University, Shenyang 110819, China
3 Department of E-Sport Technology Management, Cheng Shiu University, Kaohsiung City 83347, Taiwan
4 Key Lab for OCME, School of Mathematical Science, Chongqing Normal University, Chongqing 401331, China
5 Department of Statistics, Feng Chia University, Taichung 40724, Taiwan
* Author to whom correspondence should be addressed.
Mathematics 2022, 10(21), 4146; https://doi.org/10.3390/math10214146
Submission received: 4 October 2022 / Revised: 23 October 2022 / Accepted: 2 November 2022 / Published: 6 November 2022
(This article belongs to the Special Issue Combinatorial Optimization Problems in Planning and Decision Making)

Abstract: Studies on the customer order scheduling problem have been attracting increasing attention. Most current approaches assume either that the component processing times of customer orders on each machine are constant or that all customer orders are available at the outset of production planning. However, these assumptions do not hold in real-world applications. Uncertainty may be caused by multiple factors, including machine breakdowns, changes in the working environment, and worker instability. On the basis of these factors, we introduced a parallel-machine customer order scheduling problem with two scenario-dependent component processing times, due dates, and ready times. The objective was to identify an appropriate and robust schedule for minimizing the maximum of the sum of weighted numbers of tardy orders among the considered scenarios. To solve this difficult problem, we derived a few dominance properties and a lower bound for determining an optimal solution. Subsequently, we considered three variants of Moore's algorithm, a genetic algorithm, and a genetic-algorithm-based hyper-heuristic that incorporated the proposed seven low-level heuristics to solve this problem. Finally, the performances of all of the proposed algorithms were evaluated.

1. Introduction

In many service and manufacturing environments, the product development team independently develops modules for multiple products, and the product design is considered complete only after all modules are designed. This production setting is referred to as the customer order scheduling problem (COSP) in the literature. The COSP is encountered in diverse industries and applications; for instance, in the manufacture of semi-finished lenses [1], in determining the equilibrium of production capacity to solve a practical order rescheduling problem in the steel industry [2], and in a product–service system offering a mix of tangible products and intangible services to meet the personalized needs of customers [3]. For more applications, please refer to the review and classification of concurrent-type scheduling models by Framinan et al. [4].
COSP studies have employed different objective functions. For example, by taking the total completion time of a given set of orders as the criterion, Ahmadi et al. [1] developed constructive heuristics; Framinan et al. [5] applied both the aforementioned constructive heuristics and metaheuristics to solve the COSP.
While considering the total weighted completion time as the criterion, Sung and Yoon [6] proposed constructive heuristics to solve the COSP for a two-machine case, and Ahmadi et al. [1] and other researchers [7,8,9,10,11,12,13,14] developed constructive heuristics to solve the COSP for the m-machine case. More recently, Wu et al. [15] proposed an iterative greedy algorithm and various priority rules to solve the COSP of Leung et al. [11] with the total weighted completion time as the criterion. Riahi et al. [16] proposed a new constructive heuristic with eight initial priority lists and a perturbative search algorithm to solve a COSP for minimizing the total completion time. Li et al. [17] considered a problem involving customer orders on m unrelated parallel machines to minimize the total weighted completion time; they derived several optimality properties, a lower bound, and three heuristics, together with their worst-case analyses, to solve the problem.
Relevant studies on the COSP that consider due-date-related criteria are summarized in this section. By considering the total number of tardy orders as the criterion, Leung et al. [18] and Lin and Kononov [19] developed dynamic programming methods and constructive heuristics to solve the COSP. By taking the total tardiness of a given set of orders as the criterion, Lee [20] applied a branch-and-bound (B&B) method and constructive heuristics to solve the problem. For the problem considered in [20], Xu et al. [21] adopted Biskup's [22] position-based learning concept and proposed a solution involving simulated annealing (SA), particle swarm optimization (PSO), order-scheduling modified due-date, and B&B algorithms. Lin et al. [23] simultaneously introduced two agents and the concept of ready times into the COSP model and used the B&B, PSO, and opposite-based PSO algorithms to solve the problem. By following the model proposed in [19] and applying the learning concept of Koulamas and Kyparisis [24], Wu et al. [25] used a B&B method, as well as a memetic genetic algorithm (GA) and a PSO algorithm, to develop an order scheduling model for minimizing the number of tardy orders. By adopting the learning concept of Kuo and Yang [26], Wu et al. [27] applied SA, artificial bee colony, and PSO algorithms, as well as a B&B method, to solve a COSP with a learning effect based on the sum of processing times to minimize the total tardiness of orders. Lin et al. [28] employed a B&B method, four bee colony algorithms, and four hybrid bee colony algorithms to solve a COSP with release dates to minimize the weighted number of tardy orders. In a study on the COSP with a different form of objective function, Guo and Tang [2] applied a mixed-integer mathematical programming model considering the original objective, deviation from the initial schedule, and the equilibrium of production capacity to solve a practical order rescheduling problem in the steel industry.
In the aforementioned COSP studies, the component processing times of a customer's order on all machines were assumed to be fixed numbers. However, this assumption is unsuitable for several manufacturing scenarios due to numerous complex factors, including changes in the workspace, transportation delays, machine breakdowns, and worker performance instabilities [29,30]. For more real-life applications, refer to Sotskov and Werner's [31] comprehensive book on stability methods and models for sequencing and scheduling under uncertainty. Thus, the component processing times of a customer's order depend on the scenario at the time of order processing. Motivated by this observation, scholars have suggested that the worst-case system performance is typically more important than the average system performance. To address such worst cases [29,30], Kouvelis and Yu [32] and Yang and Yu [33] recommended the use of a robust (min–max regret) approach. More recently, inspired by [33], Wu et al. [34] developed a B&B method along with a few new dominance rules and a lower bound, proposed five constructive heuristics combining the two scenario-dependent processing times, and applied an SA hyper-heuristic to solve their problem. Hsu et al. [35] addressed a two-machine flow-shop problem with the scenario concept to minimize the maximum total completion time between the two scenarios; they utilized a B&B method along with a lower bound and two optimality properties and developed 12 constructive heuristics. Wu et al. [36] employed two scenarios to solve a two-stage assembly flow-shop problem in which the measure was the makespan; they used a B&B method, developed eight constructive heuristics, and proposed four variants of the cloud-theory-based simulated annealing (CSA) hyper-heuristic method. Wu et al. [37] applied the scenario concept to a single-machine scheduling problem with sequence-dependent setup times for minimizing the total completion time; they employed a B&B method and developed five variants of the CSA along with five new neighborhood schemes. Kämmerling and Kurtz [38] presented an algorithm to efficiently calculate lower bounds for the binary two-stage robust optimization problem. Furthermore, [1] described the scenario phenomenon in a real-world setting for producing plastic lenses: the plastic lens procedures can be conducted by either skilled or semiskilled employees, so the component processing times of an order differ depending on whether the order is executed by a skilled or a semiskilled employee. Additionally, issues pertaining to customers' due dates and release dates in COSPs have rarely been explored. Motivated by these factors, we formulated an m-parallel-machine COSP with two scenario-dependent component processing times, due dates, and ready times. The objective was to identify an appropriate and robust (min–max regret) schedule that minimizes the maximum of the sum of weighted numbers of tardy orders among the considered scenarios. More recently, Wu et al. [39] introduced a branch-and-bound algorithm and two variants of a simulated annealing hyper-heuristic for a two-agent customer order scheduling problem with scenario-dependent component processing times and release dates, and Xuan et al. [40] proposed an exact method, three scenario-dependent heuristics, and a population-based iterated greedy algorithm for a single-machine scheduling problem with scenario-dependent processing times and due dates. For an understanding of the importance of the criteria, due dates, and release dates in real applications, please refer to Yin et al. [41] for a few production examples involving due-date settings.
The contributions of this study can be summarized as follows. (1) This study models real COSPs in practical settings by addressing two scenario-dependent component processing times, ready times, and due dates; this is a new and unexplored problem. (2) The objective function minimizes the maximum total weighted number of tardy orders across the two possible scenarios over all possible permutations, instead of only the total weighted number of tardy orders. (3) Three properties and a lower bound were derived to accelerate the search of an effective B&B method. (4) Moore's algorithm (Moore [42]) was used as the basis for three constructive heuristics. (5) A hyper-heuristic based on a GA that incorporates seven low-level heuristics was proposed to solve this problem.
The remainder of this study is organized as follows. In the second section, the notation and the problem description are presented. In the third section, the derivation of a lower bound and several properties used in the B&B algorithm is described. In the fourth section, three modified variants of Moore's algorithm are introduced. In the fifth section, the GA and the GA-based hyper-heuristic that incorporates the proposed seven low-level heuristics are described. The sixth section outlines the parameter tuning and settings. In the seventh section, the performances of the five proposed algorithms are evaluated. In the final section, our conclusions and an outline for future studies on the topic are presented.

2. Problem Statement

The considered problem can be described as follows: There are n customer orders belonging to n different clients, and each customer order has m components to be processed on m machines. Each order has an importance weight $w_i$, and each machine produces only one component. Because factors causing substantial uncertainty are present, we considered that a customer order had a component processing time $t_{iv}^{s}$ on machine $M_v$, a ready time $r_i^{s}$, and a due date $d_i^{s}$ in scenario s, where s = 1, 2. Our objective was to formulate a robust policy that minimized the maximum of the weighted number of tardy orders over the two scenario-dependent environments. In other words, the aim was to identify a job sequence $\sigma^*$ such that $\sigma^* = \arg\min_{\sigma \in \Omega}\{\max_{s=1,2} \sum_{i=1}^{n} w_i NT_i^s(\sigma)\}$, where $\Omega$ is the set of all possible permutation schedules, and $NT_i^s(\sigma) = 1$ if customer order i is tardy in scenario s under $\sigma$ and 0 otherwise. When m = 1, the problem with a one-scenario environment is NP-hard, as demonstrated by Karp [43]; the same COSP with one scenario was addressed by Lin et al. [28]. Thus, the problem considered in the present study is NP-hard as well.
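To make the objective concrete, the following is a minimal Python sketch (illustrative only; the authors' implementation was written in FORTRAN, and all names here are ours) that evaluates $\max_{s=1,2}\sum_{i=1}^{n} w_i NT_i^s(\sigma)$ for a given order sequence. It assumes the order-based completion-time recursion used in Section 3, in which all m components of an order start together once the preceding order has finished and the order's ready time has passed.

```python
def robust_weighted_tardy(sequence, t, r, d, w):
    """Evaluate max over scenarios s of sum_i w_i * NT_i^s(sigma).

    sequence  : a permutation of the order indices 0..n-1
    t[s][i][v]: component processing time of order i on machine v in scenario s
    r[s][i]   : ready time of order i in scenario s
    d[s][i]   : due date of order i in scenario s
    w[i]      : weight of order i
    """
    worst = 0
    for s in range(len(t)):
        finish_prev = 0            # completion time of the previously scheduled order
        weighted_tardy = 0
        for i in sequence:
            start = max(finish_prev, r[s][i])   # wait for the previous order and the ready time
            finish = start + max(t[s][i])       # the m components are processed in parallel
            if finish > d[s][i]:                # order i is tardy in scenario s
                weighted_tardy += w[i]
            finish_prev = finish
        worst = max(worst, weighted_tardy)
    return worst


# Tiny hypothetical example: 3 orders, 2 machines, 2 scenarios (all numbers made up).
t = [[[4, 6], [3, 2], [5, 5]],     # scenario 1: t[0][i][v]
     [[8, 7], [5, 4], [9, 6]]]     # scenario 2: t[1][i][v]
r = [[0, 1, 2], [0, 2, 4]]
d = [[7, 10, 18], [12, 16, 20]]
w = [3, 1, 2]
print(robust_weighted_tardy([0, 1, 2], t, r, d, w))   # third order is tardy only in scenario 2 -> 2
```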

3. Branch-and-Bound Method

Lin et al. [28] addressed the same COSP but considered only one scenario. Following their ideas, we derived a lower bound to enhance the searching power of the B&B algorithm. Suppose $\sigma = (\delta, \delta^c)$ is a schedule in which $\delta$ denotes a determined partial schedule with q orders and $\delta^c$ denotes the remaining (n − q) unscheduled orders. The completion times of the orders placed after the kth position in $\sigma$ can be bounded as follows:
$C_{[k+1]}^{s}(\sigma) = \max_{v \in \Omega_M}\{\max\{z_k^s, r_{[k+1]}^s\} + t_{[k+1]v}^s\} \ge \max_{v \in \Omega_M}\{(z_k^s + r_{[k+1]}^s)/2 + t_{[k+1]v}^s\} \ge \frac{z_k^s}{2} + \frac{r_{[k+1]}^s}{2} + \bar{t}_{[k+1]}^{*s} \ge \frac{z_k^s}{2} + \frac{r_{[k+1]}^s}{2} + \bar{t}_{(k+1)*}^{s} = \tilde{C}_{[k+1]}^{s}(\sigma), \quad s = 1, 2;$
$C_{[k+n-1]}^{s}(\sigma) = \max_{v \in \Omega_M}\{\max\{z_q^s, r_i^s\} + t_{iv}^s\} \ge \frac{z_k^s}{2^{n-1}} + \sum_{i=1}^{n-1}\frac{r_{[k+i]}^s}{2^{n-i}} + \sum_{i=1}^{n-1}\frac{\bar{t}_{(k+i)*}^{s}}{2^{n-1-i}} = \tilde{C}_{[k+n-1]}^{s}(\sigma), \quad s = 1, 2,$
where $z_k^s$ denotes the completion time of the order scheduled at the kth position in the scheduled part, s = 1, 2; $r_{(k+1)}^s \le \cdots \le r_{(k+n-1)}^s$ denotes the nondecreasing arrangement of $\{r_i^s,\ i \in \delta^c\}$; and $\bar{t}_{(k+1)*}^{s} \le \cdots \le \bar{t}_{(k+n-1)*}^{s}$ denotes the nondecreasing arrangement of $\{\bar{t}_{i*}^{s} = \sum_{v=1}^{m} t_{iv}^{s}/m,\ i \in \delta^c\}$. Therefore, the following formula can be obtained:
$\sum_{i=1}^{n} w_i NT_i^s(\sigma) \ge \sum_{i=1}^{q} w_i NT_i^s(\sigma) + w_{(1)} \sum_{i=k+1}^{n} U\{\tilde{C}_{[k+1]}^s(\sigma) > d_{\max}^s\}^s, \quad s = 1, 2,$ (1)
where $w_{(1)} = \min\{w_i,\ i \in \delta^c\}$; $d_{\max}^s = \max\{d_i^s,\ i \in \delta^c\}$, s = 1, 2; $U\{x > a\}^s = 1$ and $U\{x \le a\}^s = 0$, s = 1, 2; and $\{t_{(q+i)*}^s,\ i \in \delta^c\}$ denotes the nonincreasing arrangement of $\{\sum_{v=1}^{m} t_{iv}^s,\ i \in \delta^c\}$. Therefore, the following inequality can be derived from Equation (1):
$\max_{s=1,2}\left\{\sum_{i=1}^{n} w_i NT_i^s(\sigma)\right\} \ge \left(\sum_{s=1}^{2}\sum_{i=1}^{q} w_i NT_i^s(\sigma) + w_{(1)} \sum_{i=q+1}^{n} U\{\tilde{C}_{[k+1]}^s(\sigma) > d_{\max}^s\}^s\right)\Big/ 2.$
Thus, a lower bound can be obtained as follows:
$\mathrm{lowerbdd} = \left(\sum_{s=1}^{2}\sum_{i=1}^{q} w_i NT_i^s(\sigma) + w_{(1)} \sum_{i=q+1}^{n} U\{C_{[q]}^s + t_{(q+i)*}^s > d_{\max}^s\}^s\right)\Big/ 2.$
To indicate that $\sigma = (\delta, i, j, \delta')$ is no worse than $\sigma' = (\delta, j, i, \delta')$, we will show that $w_i NT_i^s(\sigma) + w_j NT_j^s(\sigma) \le w_j NT_j^s(\sigma') + w_i NT_i^s(\sigma')$ and $C_j^s(\sigma) \le C_i^s(\sigma')$ hold for s = 1, 2.
Moreover, let (q − 1) be the number of orders in $\delta$, and let $z_v^s$ be the completion time of the (q − 1)th order in $\delta$ on machine $M_v$, v = 1, 2, …, m and s = 1, 2. According to these definitions, the completion times of order i and order j in $\sigma$ and $\sigma'$ are given as:
$C_i^s(\sigma) = \max_{v \in \Omega_M}\{\max_{v \in \Omega_M}\{z_v^s, r_i^s\} + t_{iv}^s\}$, s = 1, 2,
$C_j^s(\sigma) = \max_{v \in \Omega_M}\{\max\{C_i^s(\sigma), r_j^s\} + t_{jv}^s\}$, s = 1, 2,
$C_j^s(\sigma') = \max_{v \in \Omega_M}\{\max_{v \in \Omega_M}\{z_v^s, r_j^s\} + t_{jv}^s\}$, s = 1, 2, and
$C_i^s(\sigma') = \max_{v \in \Omega_M}\{\max\{C_j^s(\sigma'), r_i^s\} + t_{iv}^s\}$, s = 1, 2.
Based on the aforementioned expressions, the properties below can be obtained to increase the pruning power of the B&B algorithm for the problem under study. Only the proof of Case (i) of Property 1 is given; the other cases are omitted because they can be derived in the same manner.
Property 1.
For s = 1, 2, if $r_j^s > r_i^s \ge \max_{v \in \Omega_M}\{z_v^s\}$, $\max_{v \in \Omega_M}\{t_{iv}^s\} < \max_{v \in \Omega_M}\{t_{jv}^s\}$, $r_i^s + \max_{v \in \Omega_M}\{t_{iv}^s\} > r_j^s$, and one of the following cases holds:
Case (i)
$r_i^s + \max_{v \in \Omega_M}\{t_{iv}^s\} < d_i^s$, $r_j^s + \max_{v \in \Omega_M}\{t_{jv}^s\} + \max_{v \in \Omega_M}\{t_{iv}^s\} > d_i^s$, and $r_i^s + \max_{v \in \Omega_M}\{t_{iv}^s\} + \max_{v \in \Omega_M}\{t_{jv}^s\} < d_j^s$.
Case (ii)
$r_j^s + \max_{v \in \Omega_M}\{t_{jv}^s\} > d_j^s$, and $r_i^s + \max_{v \in \Omega_M}\{t_{iv}^s\} < d_i^s$.
Case (iii)
$d_j^s > r_j^s + \max_{v \in \Omega_M}\{t_{iv}^s\} + \max_{v \in \Omega_M}\{t_{jv}^s\}$, and $r_j^s + \max_{v \in \Omega_M}\{t_{jv}^s\} + \max_{v \in \Omega_M}\{t_{iv}^s\} < d_i^s$.
Then, $\sigma$ is no worse than $\sigma'$.
Proof: 
Details of the proof of Case (i) of Property 1 are as follows.
The completion times of order $O_j$ in sequence $\sigma$ and of order $O_i$ in sequence $\sigma'$ are, respectively:
$C_j^s(\sigma) = \max_{v \in \Omega_M}\{\max\{C_i^s(\sigma), r_j^s\} + t_{jv}^s\} = \max_{v \in \Omega_M}\{\max\{\max_{v \in \Omega_M}\{\max_{v \in \Omega_M}\{z_v^s, r_i^s\} + t_{iv}^s\}, r_j^s\} + t_{jv}^s\}$ and
$C_i^s(\sigma') = \max_{v \in \Omega_M}\{\max\{C_j^s(\sigma'), r_i^s\} + t_{iv}^s\} = \max_{v \in \Omega_M}\{\max\{\max_{v \in \Omega_M}\{\max_{v \in \Omega_M}\{z_v^s, r_j^s\} + t_{jv}^s\}, r_i^s\} + t_{iv}^s\}$.
By applying the condition $r_j^s > r_i^s \ge \max_{v \in \Omega_M}\{z_v^s\}$, we can simplify $C_j^s(\sigma)$ and $C_i^s(\sigma')$ as follows:
$C_j^s(\sigma) = \max_{v \in \Omega_M}\{\max\{r_i^s + \max_{v \in \Omega_M}\{t_{iv}^s\}, r_j^s\} + t_{jv}^s\}$ (3)
$C_i^s(\sigma') = \max_{v \in \Omega_M}\{\max\{r_j^s + \max_{v \in \Omega_M}\{t_{jv}^s\}, r_i^s\} + t_{iv}^s\} = r_j^s + \max_{v \in \Omega_M}\{t_{jv}^s\} + \max_{v \in \Omega_M}\{t_{iv}^s\}$
By applying $r_i^s + \max_{v \in \Omega_M}\{t_{iv}^s\} > r_j^s$ to (3), we have:
$C_j^s(\sigma) = r_i^s + \max_{v \in \Omega_M}\{t_{iv}^s\} + \max_{v \in \Omega_M}\{t_{jv}^s\}$
Hence, $C_i^s(\sigma') - C_j^s(\sigma) = r_j^s - r_i^s > 0$; that is, $C_j^s(\sigma) \le C_i^s(\sigma')$, for s = 1, 2.
Next, it can be claimed that $\max_s \sum_{i=1}^{n} w_i NT_i^s(\sigma) \le \max_s \sum_{i=1}^{n} w_i NT_i^s(\sigma')$; equivalently, $w_i NT_i^s(\sigma) + w_j NT_j^s(\sigma) \le w_j NT_j^s(\sigma') + w_i NT_i^s(\sigma')$.
It can be claimed that $NT_i^s(\sigma) = 0$. By applying $r_i^s \ge \max_{v \in \Omega_M}\{z_v^s\}$ and $r_i^s + \max_{v \in \Omega_M}\{t_{iv}^s\} < d_i^s$ in succession, this implies $NT_i^s(\sigma) = 0$.
It can be claimed that $NT_j^s(\sigma) = 0$. By applying $r_i^s \ge \max_{v \in \Omega_M}\{z_v^s\}$, $r_i^s + \max_{v \in \Omega_M}\{t_{iv}^s\} > r_j^s$, and $r_i^s + \max_{v \in \Omega_M}\{t_{iv}^s\} + \max_{v \in \Omega_M}\{t_{jv}^s\} < d_j^s$ in succession, this implies $NT_j^s(\sigma) = 0$.
Because the weights $w_i$ and $w_j$ are positive, the desired result is obtained:
$0 = w_i NT_i^s(\sigma) + w_j NT_j^s(\sigma) \le w_j NT_j^s(\sigma') + w_i NT_i^s(\sigma')$. □
Property 2.
For s = 1, 2, if $r_i^s \ge \max_{v \in \Omega_M}\{z_v^s\}$ and $r_i^s + \max_{v \in \Omega_M}\{z_v^s\} < r_j^s$, and one of the following cases holds:
Case (i):
$r_j^s + \max_{v \in \Omega_M}\{t_{jv}^s\} < d_j^s$, and $r_i^s + \max_{v \in \Omega_M}\{t_{iv}^s\} < d_i^s < r_j^s + \max_{v \in \Omega_M}\{t_{jv}^s\} + \max_{v \in \Omega_M}\{t_{iv}^s\}$.
Case (ii):
$r_j^s + \max_{v \in \Omega_M}\{t_{jv}^s\} < d_j^s < r_i^s + \max_{v \in \Omega_M}\{t_{iv}^s\} + \max_{v \in \Omega_M}\{t_{jv}^s\}$, and $w_i > w_j$.
Case (iii):
$r_j^s + \max_{v \in \Omega_M}\{t_{jv}^s\} > d_j^s$, and $r_i^s + \max_{v \in \Omega_M}\{t_{iv}^s\} < d_i^s$.
Case (iv):
$d_j^s > r_j^s + \max_{v \in \Omega_M}\{t_{iv}^s\} + \max_{v \in \Omega_M}\{t_{jv}^s\}$.
Then, $\sigma$ is no worse than $\sigma'$.
Property 3.
For s = 1, 2, if $\max_{v \in \Omega_M}\{z_v^s, r_j^s\} > \max_{v \in \Omega_M}\{z_v^s, r_i^s\}$ and $t_{iv}^s \le t_{jv}^s$ for all $v \in \Omega_M$, and one of the following cases holds:
Case (i):
$\max_{v \in \Omega_M}\{\max_{v \in \Omega_M}\{z_v^s, r_i^s\} + t_{iv}^s + t_{jv}^s\} < d_j^s$, and $\max_{v \in \Omega_M}\{\max_{v \in \Omega_M}\{z_v^s, r_j^s\} + t_{jv}^s + t_{iv}^s\} > d_i^s > \max_{v \in \Omega_M}\{\max_{v \in \Omega_M}\{z_v^s, r_i^s\} + t_{iv}^s\}$.
Case (ii):
$\max_{v \in \Omega_M}\{\max_{v \in \Omega_M}\{z_v^s, r_j^s\} + t_{jv}^s\} > d_j^s$, $\max_{v \in \Omega_M}\{\max_{v \in \Omega_M}\{z_v^s, r_i^s\} + t_{iv}^s\} < d_i^s$, and $\max_{v \in \Omega_M}\{\max_{v \in \Omega_M}\{z_v^s, r_j^s\} + t_{jv}^s + t_{iv}^s\} > d_i^s$.
Case (iii):
$w_i > w_j$, and $\max_{v \in \Omega_M}\{\max_{v \in \Omega_M}\{z_v^s, r_i^s\} + t_{iv}^s + t_{jv}^s\} > d_j^s > \max_{v \in \Omega_M}\{\max_{v \in \Omega_M}\{z_v^s, r_j^s\} + t_{jv}^s\}$.
Case (iv):
$\max_{v \in \Omega_M}\{\max_{v \in \Omega_M}\{z_v^s, r_j^s\} + t_{jv}^s\} > d_j^s$, and $\max_{v \in \Omega_M}\{\max_{v \in \Omega_M}\{z_v^s, r_i^s\} + t_{iv}^s\} < d_i^s$.
Case (v):
$\max_{v \in \Omega_M}\{\max_{v \in \Omega_M}\{z_v^s, r_i^s\} + t_{iv}^s + t_{jv}^s\} < d_j^s$, and $\max_{v \in \Omega_M}\{\max_{v \in \Omega_M}\{z_v^s, r_j^s\} + t_{jv}^s + t_{iv}^s\} < d_i^s$.
Then, $\sigma$ is no worse than $\sigma'$.

4. Three Modified Moore’s Heuristics

The literature [42] indicates that Moore's algorithm produces an optimal schedule for minimizing the total number of tardy jobs on a single machine. To find near-optimal robust order sequences for the proposed NP-hard problem, we followed the idea of Moore's algorithm and introduced three modified heuristics that combine it with the scenario-dependent processing times, ready times, and due dates across the two possible scenarios. Notably, Moore's algorithm cannot be applied directly to this model regardless of whether the scenario-dependent parameters are present. In light of the favorable performance of Moore's algorithm with the tardiness-related criterion in the classical single-machine setting, a pairwise interchange improvement was also applied to the schedules it produces. The process flow of Moore's algorithm is as follows: (1) Form a schedule σ by using the rule for all orders (jobs) $O_N$. (2) Set the current schedule σ* = σ and S* = $O_N$. (3) Compute the completion times of the orders in S* until a tardy order is found and erase it from σ* and S* to form a new current sequence σ* and a new S*. (4) Find the order O* with the longest processing time in the current sequence σ*. (5) Delete O* from σ* and S* and repeat steps (3)–(5) until no tardy order remains. The procedures of the three proposed modified heuristics (called the Moore_pi_M, Moore_pi_m, and Moore_pi_mean heuristics), derived from Moore's algorithm, are as follows:
• Moore_pi_M heuristic:
01: Let $t_i^* = \max_{1 \le v \le m}\{t_{iv}^1 + r_i^1,\ t_{iv}^2 + r_i^2\}$ and $d_i^* = \max\{d_i^1, d_i^2\}$ be the processing time and due date, respectively, for order $O_i$, i = 1, 2, …, n, and let $O_N$ = {$O_1$, $O_2$, …, $O_n$}.
02: Form a schedule σ by following the earliest due date (EDD) rule based on $O_N$.
03: Let the current schedule σ* = σ and the corresponding set of orders S* = $O_N$.
04–06: Compute the completion times of the orders in σ until a tardy order is found and delete it from σ* (and from S*) to form a new current schedule σ* and a new S*. Find an order O* with the largest processing time in S*.
07: Delete order O* from σ* and S*. Repeat Steps 04–07 until no tardy order remains.
08–10: Output the final schedule $\sigma_M = (\sigma^*, \sigma'')$, where σ* denotes the sequence of orders completed on time and scheduled using the EDD rule, and σ′′ denotes an arbitrary sequence of the orders that are tardy under $\sigma_M$.
11: Execute $\sigma_M$ by using the pairwise interchange improvement method and output the final solution.
• Moore_pi_m heuristic:
01: Let $t_i^* = \min_{1 \le v \le m}\{t_{iv}^1 + r_i^1,\ t_{iv}^2 + r_i^2\}$ and $d_i^* = \min\{d_i^1, d_i^2\}$ be the processing time and due date, respectively, for order $O_i$, i = 1, 2, …, n, and let $O_N$ = {$O_1$, $O_2$, …, $O_n$}.
02–11: These steps are the same as those in the Moore_pi_M heuristic.
• Moore_pi_mean heuristic:
01: Let $t_i^* = \max\{r_i^1 + \sum_{v=1}^{m} t_{iv}^1/m,\ r_i^2 + \sum_{v=1}^{m} t_{iv}^2/m\}$ and $d_i^* = (d_i^1 + d_i^2)/2$ be the processing time and due date, respectively, for order $O_i$, i = 1, 2, …, n, and let $O_N$ = {$O_1$, $O_2$, …, $O_n$}.
02–11: These steps are the same as those in the Moore_pi_M heuristic.
Note that the complexity of Moore’s algorithm can be seen in [42].
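The heuristics above follow the idea of Moore's algorithm; the sketch below applies the classic Moore–Hodgson removal rule (drop the order with the longest surrogate processing time from the prefix that ends at the first tardy order) to the surrogate times $t_i^*$ and due dates $d_i^*$ of Moore_pi_M. This is an illustrative Python reading of Steps 01–10, not the authors' FORTRAN code; the pairwise-interchange improvement of Step 11 is omitted.

```python
def moore_pi_M_core(t, r, d):
    """Moore-type construction with surrogate processing times and due dates (Steps 01-10).

    t[s][i][v], r[s][i], d[s][i] follow the notation of Section 2.
    Returns a full sequence: on-time orders in EDD order followed by the rejected orders.
    """
    n = len(t[0])
    # Step 01: surrogate processing time and due date of each order
    t_star = [max(max(t[0][i]) + r[0][i], max(t[1][i]) + r[1][i]) for i in range(n)]
    d_star = [max(d[0][i], d[1][i]) for i in range(n)]

    # Steps 02-03: EDD sequence on the surrogate due dates
    kept = sorted(range(n), key=lambda i: d_star[i])
    rejected = []

    # Steps 04-07: repeatedly drop the longest order in the prefix ending at the first tardy order
    while True:
        time, first_tardy = 0, None
        for pos, i in enumerate(kept):
            time += t_star[i]                        # surrogate completion time
            if time > d_star[i]:
                first_tardy = pos
                break
        if first_tardy is None:                      # no tardy order remains
            break
        prefix = kept[:first_tardy + 1]
        longest = max(prefix, key=lambda i: t_star[i])
        kept.remove(longest)
        rejected.append(longest)

    # Steps 08-10: on-time orders (EDD order) followed by the rejected (tardy) orders
    return kept + rejected
```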

5. A Genetic and a Genetic Hyper-Heuristic

Most researchers have observed that, in general, a heuristic is relatively simple and easy to construct, whereas a metaheuristic is more complex and more difficult to construct and to use for intelligently implementing random search strategies [44,45]. The GA has been successfully utilized to obtain high-quality approximate solutions for many combinatorial problems. The GA is an effective computerized search tool that identifies good or optimal solutions to complex problems on the basis of genetic and natural selection mechanisms such as mutation, crossover, and reproduction, and it has been used to successfully solve numerous NP-hard combinatorial problems. In the GA and the GAHH (a hyper-heuristic based on the GA framework), we used a group of continuous real numbers to encode the orders by means of a random-key (random-number) encoding method. For example, given a chromosome (0.73, 0.62, 0.14, 0.23, 0.81), we decoded it as the schedule (3, 4, 2, 1, 5) by ranking the genes (a code sketch of this decoding is given after the GA steps below). Specifically, in the reproduction stage, we selected the parents and recombined them by using a crossover operator to create offspring; in this study, we adopted a linear order crossover (see Iyer and Saxena [46]). Moreover, the notations Pop, P, and IT_GA denote the number of parents, the mutation rate, and the number of iterations (generations), respectively, used in executing the GA. The main structure of the proposed GA is summarized as follows:
Steps of genetic algorithm:
00: Input Pop, P, IT_GA.
01: Generate a series of Pop initial parents (schedules) and find their fitness values.
02: Do i = 1, IT_GA
03: Choose two parents from the Pop population by using the roulette wheel method and employ a linear order crossover to reproduce a set of Pop offspring.
04–05: For each offspring, generate a random number u (0 < u < 1); if u < P, then create a new offspring by applying a displacement mutation.
06: Record the best schedule found and replace the Pop parents with their offspring.
07: End do /* until the number of iterations (IT_GA) is fulfilled */
08: Output the final best schedule and its fitness value.
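The random-key decoding and roulette-wheel selection used in the GA steps above can be sketched as follows (illustrative Python; how raw objective values are converted into nonnegative fitness values is not specified in the text, so the sketch simply takes a fitness list as given).

```python
import random

def decode_random_keys(chromosome):
    """Rank-based decoding: (0.73, 0.62, 0.14, 0.23, 0.81) -> schedule (3, 4, 2, 1, 5)."""
    order = sorted(range(len(chromosome)), key=lambda i: chromosome[i])
    return [i + 1 for i in order]            # 1-based order indices, smallest key first

def roulette_select(population, fitness):
    """Pick one parent with probability proportional to its (nonnegative) fitness."""
    pick = random.uniform(0.0, sum(fitness))
    acc = 0.0
    for individual, f in zip(population, fitness):
        acc += f
        if acc >= pick:
            return individual
    return population[-1]                    # guard against floating-point round-off

print(decode_random_keys([0.73, 0.62, 0.14, 0.23, 0.81]))   # [3, 4, 2, 1, 5]
```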
In the following, a GA-based hyper-heuristic is applied to solve this problem by identifying problem-solving methods instead of directly finding solutions to the problem (refer to Cowling et al. [47] and Anagnostopoulos and Koulinas [48]). A hyper-heuristic couples a high-level strategy with a pool of low-level heuristics; the high level determines which low-level heuristic is applied to produce a new solution. Moreover, seven low-level heuristics are proposed on the basis of candidate variation operators, namely the two-order swap heuristic, the one-step (or two-step) to-the-right heuristic, the one-step (or two-step) to-the-left heuristic, the pulling-out and onward-moved reinsertion heuristic, and the pulling-out and backward-moved reinsertion heuristic. Many studies have indicated that the two-job swap is an effective improvement scheme. To diversify the search, randomly determined neighborhoods of the current solution must be explored as well. The seven heuristics are denoted LH1, LH2, …, and LH7, and their details are as follows:
LH1: Two-order swap heuristic: randomly select two orders (e.g., $O_2$ and $O_4$) in a schedule $\sigma$ and swap them, resulting in a new schedule $\sigma'$. For example, $\sigma = (O_1, O_2, O_3, O_4, O_5)$, $\sigma' = (O_1, O_4, O_3, O_2, O_5)$.
LH2: One step to the right heuristic: randomly select one order (e.g., $O_2$) in a schedule $\sigma$, extract it from its position, move it one position to the right, and reinsert it to obtain a new schedule $\sigma'$. For example, $\sigma = (O_1, O_2, O_3, O_4, O_5)$, $\sigma' = (O_1, O_3, O_2, O_4, O_5)$.
LH3: Two steps to the right heuristic: randomly select one order (e.g., $O_3$) in a schedule $\sigma$, extract it from its position, move it two positions to the right, and reinsert it, resulting in a new schedule $\sigma'$. For example, $\sigma = (O_1, O_2, O_3, O_4, O_5)$, $\sigma' = (O_1, O_2, O_4, O_5, O_3)$.
LH4: One step to the left heuristic: randomly select one order (e.g., $O_4$) in a schedule $\sigma$, extract it from its position, move it one position to the left, and reinsert it, resulting in a new schedule $\sigma'$. For example, $\sigma = (O_1, O_2, O_3, O_4, O_5)$, $\sigma' = (O_1, O_2, O_4, O_3, O_5)$.
LH5: Two steps to the left heuristic: randomly select one order (e.g., $O_5$) in a schedule $\sigma$, extract it from its position, move it two positions to the left, and reinsert it to obtain a new schedule $\sigma'$. For example, $\sigma = (O_1, O_2, O_3, O_4, O_5)$, $\sigma' = (O_1, O_2, O_5, O_3, O_4)$.
LH6: Pulling-out and onward-moved reinsertion heuristic: randomly select two orders (e.g., $O_2$ and $O_5$) in a schedule $\sigma$, extract the leftward of the two selected orders ($O_2$) from its position, and reinsert it just after $O_5$ to obtain a new schedule $\sigma'$. For example, $\sigma = (O_1, O_2, O_3, O_4, O_5)$, $\sigma' = (O_1, O_3, O_4, O_5, O_2)$.
LH7: Pulling-out and backward-moved reinsertion heuristic: randomly select two orders (e.g., $O_2$ and $O_5$) in a schedule $\sigma$, extract the rightward of the two selected orders ($O_5$) from its position, and reinsert it just before $O_2$ to obtain a new schedule $\sigma'$. For example, $\sigma = (O_1, O_2, O_3, O_4, O_5)$, $\sigma' = (O_1, O_5, O_2, O_3, O_4)$.
Notably, as the value of n increases (for example, when n > 10), the operators LH6 and LH7 generally differ from the other five heuristics, especially when n = 100 or 200.
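For illustration, two of the seven low-level heuristics listed above, LH1 and LH6, can be written as simple list operations (a hypothetical Python sketch; the remaining operators follow the same pattern).

```python
import random

def lh1_two_order_swap(sigma):
    """LH1: randomly pick two positions and swap the orders located there."""
    new = list(sigma)
    a, b = random.sample(range(len(new)), 2)
    new[a], new[b] = new[b], new[a]
    return new

def lh6_pull_out_onward(sigma):
    """LH6: pull out the leftward of two randomly chosen orders and reinsert it just after the rightward one."""
    new = list(sigma)
    a, b = sorted(random.sample(range(len(new)), 2))
    moved = new.pop(a)        # remove the leftward selected order
    new.insert(b, moved)      # after removal the rightward order sits at b-1, so index b is "just after" it
    return new

# Example matching LH6 above: (O1, ..., O5) with O2 and O5 selected -> (O1, O3, O4, O5, O2)
print(lh6_pull_out_onward(["O1", "O2", "O3", "O4", "O5"]))   # result depends on the random choice
```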
In what follows, the genetic algorithm hyper-heuristic, labeled GAHH, is introduced; it is also based on the GA framework. In the execution of the GAHH, we randomly selected a low-level heuristic based on its selection probability and applied it to each population member over several inner iterations (denoted L_no). The current solution was replaced with a newly generated solution if the new solution was superior to the current solution; otherwise, the new solution was accepted with a certain probability. Let $f_l = 1/7$ be the initial selection probability of each LH_l, l = 1, 2, …, 7. Assume that $\pi_l$ is the recorded total frequency with which LH_l yields a superior solution when it is executed. To ensure that all seven low-level heuristics in the pool remained selectable in the GAHH, we set $\pi_l = \max\{1, \pi_l\}$. The procedure of the GAHH is as follows:
Steps of genetic algorithm hyper-heuristic:
00: Input Pop, P, ITRN, L_no.
01: Generate a series of Pop initial parents and find their fitness values.
02: Do c = 1, ITRN
03: Set $f_l = 1/7$, l = 1, 2, …, 7.
04: Do i = 1, Pop /* for each parent $\sigma_i$ */
05: Do k = 1, L_no
06–07: Select an LH_l by using the roulette wheel method based on the value of $f_l$ and apply it to $\sigma_i$ to generate a new schedule $\sigma_t$.
08: If RC($\sigma_t$) < RC($\sigma_i$), set $\pi_l = \pi_l + 1$ for the selected LH_l.
09: Retain the superior of $\sigma_i$ and $\sigma_t$ as the current parent $\sigma_i$.
10: End do /* for the low-level heuristics */
11: End do /* i = 1, 2, …, Pop */
12–13: Update the probabilities {$f_l$, l = 1, 2, …, 7} of LH_1, LH_2, …, and LH_7 according to their past records as $f_l = \pi_l / \sum_{j=1}^{7} \pi_j$, l = 1, …, 7.
14–15: Select two parents from the Pop population by using the roulette wheel method and employ a linear order crossover to reproduce a set of Pop offspring.
16–17: For each offspring, generate a random number u (0 < u < 1); if u < P, then create a new offspring by applying a displacement mutation.
18: Retain the best offspring and replace the Pop parents with their offspring.
19: End do /* until the number of high-level cycles (ITRN) is reached */
20: Output the final best sequence and its fitness value.
The flowchart of the GAHH is depicted in Figure 1.
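The adaptive core of the GAHH, i.e., selecting a low-level heuristic by roulette wheel on the probabilities $f_l$ and renormalizing them from the success counters as $f_l = \pi_l / \sum_{j=1}^{7} \pi_j$, can be sketched as follows (illustrative Python based on our reading of Steps 03–13; RC(·) is the objective to be minimized, and heuristics is the pool LH1–LH7).

```python
import random

def gahh_cycle(parents, rc, heuristics, pi, l_no):
    """One high-level GAHH cycle over the low-level heuristic pool.

    parents    : list of schedules (one per population member)
    rc         : objective function RC(.) to be minimized
    heuristics : the seven low-level operators LH1..LH7
    pi         : success counters, each kept >= 1 so every operator stays selectable
    l_no       : number of low-level applications per parent (L_no)
    """
    f = [1.0 / len(heuristics)] * len(heuristics)       # Step 03: start the cycle from uniform f_l
    for idx, sigma in enumerate(parents):               # Step 04: for each parent
        for _ in range(l_no):                           # Step 05: L_no low-level moves
            l = random.choices(range(len(heuristics)), weights=f, k=1)[0]
            candidate = heuristics[l](sigma)            # Steps 06-07: apply the selected LH_l
            if rc(candidate) < rc(sigma):               # Steps 08-09: credit LH_l and keep the better schedule
                sigma = candidate
                pi[l] += 1
        parents[idx] = sigma
    total = sum(pi)
    f = [p / total for p in pi]                         # Steps 12-13: f_l = pi_l / sum_j pi_j
    return parents, f
```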
To find an exact solution for small-sized orders, the best of the schedules found by the three proposed heuristics, the GA, and the GAHH was used as the initial upper bound in a depth-first B&B method. To help prune the branching tree, the proposed properties and the lower bound were used in the method. The orders were scheduled in a forward manner, and a systematic depth-first search was used to branch down the tree [25,28,34,36].

6. Tuning Genetic Algorithm Hyper-Heuristic Parameters

With reference to the scheme of Montgomery [49], in this section, we present an approach that varies one factor at a time to tune the relevant GAHH parameters. The GAHH proposed in Section 5 has four parameters; namely, the population size (Pop), the number of low-level heuristic applications (L_no), the mutation probability (P), and the number of high-level cycles (ITRN). To reduce the computation time and obtain superior solutions, the values of these parameters must be tuned before conducting a simulation study. To obtain suitable parameter settings, we computed the average error percentage (AEP) by averaging, over the test instances, the error percentage $100 \times (H_i - B_i^*)/B_i^*$ [%], where $H_i$ is the objective value obtained using the GAHH and $B_i^*$ is the optimal objective value obtained using the B&B method for instance i. With reference to the designs of Leung et al. [10,11,12,13,14,16], Lee [20], Lin et al. [28], and Yang and Yu [33], the weights $w_i$ were generated from the uniform distribution U(1, 100). In Scenario 1, the component processing time $t_{iv}^{(1)}$ and ready time $r_i^{(1)}$ of an order were generated from the uniform distributions U(1, 100) and U(1, 100 × n × λ), and the due dates were generated from the uniform distribution U(TPTbar(1)(1 − τ − ρ/2), TPTbar(1)(1 − τ + ρ/2)). In Scenario 2, the component processing time $t_{iv}^{(2)}$ and ready time $r_i^{(2)}$ of an order were generated from the uniform distributions U(1, 200) and U(1, 200 × n × λ), and the due dates were generated from the uniform distribution U(TPTbar(2)(1 − τ − ρ/2), TPTbar(2)(1 − τ + ρ/2)), where $\mathrm{TPTbar}(s) = \sum_{v=1}^{m}\sum_{i=1}^{n} t_{iv}^{(s)}/m$; τ and ρ are the tardiness factor and the range of due dates, respectively; and 0 < λ < 1 is a controllable parameter. For simplification, we set n = 10, m = 3, τ = 0.5, ρ = 0.5, and λ = 0.3 and generated 100 problem instances for testing.
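The instance generator described above can be written compactly as follows (illustrative Python following the stated distributions; whether the uniform draws are integer- or real-valued is our assumption, since the text does not specify).

```python
import random

def generate_instance(n, m, lam, tau, rho, seed=None):
    """Generate one two-scenario instance following the distributions described in Section 6."""
    rng = random.Random(seed)
    w = [rng.randint(1, 100) for _ in range(n)]                  # weights ~ U(1, 100)
    t, r, d = [], [], []
    for upper in (100, 200):                                     # scenario 1, then scenario 2
        t_s = [[rng.randint(1, upper) for _ in range(m)] for _ in range(n)]
        r_s = [rng.uniform(1, upper * n * lam) for _ in range(n)]
        tpt_bar = sum(sum(row) for row in t_s) / m               # TPTbar(s)
        lo, hi = tpt_bar * (1 - tau - rho / 2), tpt_bar * (1 - tau + rho / 2)
        d_s = [rng.uniform(lo, hi) for _ in range(n)]
        t.append(t_s)
        r.append(r_s)
        d.append(d_s)
    return w, t, r, d

# Settings used for the tuning experiments in this section
w, t, r, d = generate_instance(n=10, m=3, lam=0.3, tau=0.5, rho=0.5, seed=1)
```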
With Pop = 10, P = 0.05, and L_no = 20, a simulation was conducted in which ITRN was varied from 1 to 10 in incremental steps of 1. It can be seen in Figure 2a that the lowest AEP was located at ITRN = 6. With Pop = 10, ITRN = 6, and L_no = 20, a simulation was conducted in which P was varied from 0.01 to 0.1 in incremental steps of 0.01. It can be seen in Figure 2b that the AEP was small (below 0.59%) when P was 0.04, which indicated an effective reduction in the AEP. With ITRN = 6, P = 0.04, and L_no = 20, a simulation was conducted in which Pop was varied from 10 to 20 in incremental steps of 2. It can be seen in Figure 2c that the AEP was the lowest (~0.29%) when Pop = 20, which indicated an effective reduction in the AEP as Pop increased. With ITRN = 6, P = 0.04, and Pop = 20, a simulation was conducted in which L_no was varied from 10 to 50 in incremental steps of 4. It can be seen in Figure 2d that the AEP was the lowest (~0.03%) when L_no equaled 46.
Finally, the parameter values (Pop, P, ITRN, L_no) were set to (20, 0.04, 6, 46) for small-sized orders. However, for large-sized orders, where n = 100 and 200, a greater number of low-level heuristic applications (L_no) was required to obtain superior solutions; following the same approach as above, we eventually set L_no to 560 and 1000 for n = 100 and n = 200, respectively, for use in the subsequent simulation studies.
Notably, three stopping conditions are commonly used in metaheuristics; namely, the number of generations, the difference between the current best solution and the previous best solution, and a CPU time limit. To fairly compare the proposed GAHH and GA, the same population size, crossover scheme, and mutation rate were used in both approaches. We only set the number of generations (IT_GA) of the GA to approximate the number of high-level cycles (ITRN) multiplied by the number of low-level heuristic applications per cycle (L_no) in the GAHH, that is, IT_GA = ITRN × L_no. After our preliminary tests, the parameters (Pop, P, IT_GA) of the GA were set to (20, 0.04, 276) for small-sized orders (n = 9, 11), (20, 0.04, 3360) for n = 100, and (20, 0.04, 6000) for n = 200. Owing to the integration of a high-level strategy with a group of low-level heuristics, a small GAHH population was adequate.

7. Simulation Study

In this section, the performances of the B&B method, the three heuristics, the GA, and the GAHH were evaluated through simulation studies. All of the algorithms were coded in FORTRAN and executed on a personal computer equipped with an Intel Core i7 CPU (2.66 GHz) and 4 GB of RAM and running the Windows XP operating system. With reference to the designs of Leung et al. [10,12,13,14], Lee [20], and Lin et al. [28], we generated the component processing times $t_{iv}^{(1)}$ from U(1, 100) for the order instances in Scenario 1. The $t_{iv}^{(2)}$ values of an order i on the m machines were independently obtained from U(1, 200) for m = 2, 3, and 4 (small-sized orders) and m = 5, 10, and 20 (large-sized orders). The weight $w_i$ of each order was randomly and independently generated from another U(1, 100). In addition, we generated the due dates of the orders from U(TPTbar(s)(1 − τ − ρ/2), TPTbar(s)(1 − τ + ρ/2)), where $\mathrm{TPTbar}(s) = \sum_{v=1}^{m}\sum_{i=1}^{n} t_{iv}^{(s)}/m$, τ denotes the tardiness factor, and ρ denotes the range of due dates. The combinations of (τ, ρ) included (0.25, 0.25), (0.25, 0.5), (0.5, 0.75), (0.5, 0.5), (0.5, 0.25), and (0.25, 0.75). With reference to the design of Reeves [50], we generated the ready times from U(1, 100 × n × λ) for Scenario 1 and U(1, 200 × n × λ) for Scenario 2, where λ is the control variable; the value of λ was set to 0.1, 0.3, and 0.5. The two parts of the simulation study were designed to address small- and large-sized orders, respectively.

7.1. Results Obtained for Small-Sized Orders

For the small-sized orders, the number of orders n was set as 9 and 11 and the number of machines m as 2, 3, and 4. A total of 100 problem instances were examined for each combination of (n, m, λ, τ, ρ). Thus, in total, 10,800 (= 100·2·3·3·2·3) instances were tested in this experiment. The B&B method was set to stop and run the next instance when the number of trimmed nodes exceeded $10^8$.
The average and maximum number of nodes and the average and maximum execution times (in seconds) were recorded to determine the performance of the B&B method. To assess the performance of the three modified Moore’s heuristics, GA, and GAHH, the AEP and maximum error percentage were recorded. The performance of the B&B method is summarized in Table 1. The performances of the proposed heuristics and algorithms are summarized in Table 2.
The effects of the parameters on the execution of the B&B method are summarized in Table 1. The average number of nodes and the average CPU time increased dramatically as n was increased from 9 to 11, regardless of the other parameters; this phenomenon illustrates one characteristic of an NP-hard problem. The number of machines and ρ had little effect on the performance of the B&B method. In terms of the effects of the parameters m, λ, and τ on the mean number of nodes in the B&B method, the means of both the number of nodes and the CPU time tended to increase for both n = 9 and n = 11 as the value of one of these parameters was increased with the other parameters fixed. The number of nodes and the CPU time decreased as the value of ρ increased for both values of n. This result implies that the optimal solution was more easily obtained using the B&B method for instances with a larger value of ρ.
In terms of the effects of the aforementioned parameters on the performance of the three Moore’s-type heuristics, the mean AEPs of the three heuristics, GA, and GAHH tended to decrease as the value of m, λ, or τ increased (Table 2). In contrast, the mean AEP of the three heuristics and both GAs increased as the value of ρ increased. Moreover, the results in Table 2 confirmed that on average, the GAHH outperformed the other algorithms.
To compare the solution quality among the Moore_pi_M, Moore_pi_m, Moore_pi_mean, GA, and GAHH approaches (with mean AEPs of 62.09, 59.44, 63.13, 19.77, and 1.22, respectively), the AEPs obtained under variations in the different factors, including the algorithm; the numbers of jobs and machines; and the parameters λ, τ, and ρ, were fitted to a linear model. The analysis of variance for the AEP revealed significant differences for all factors at a significance level of 0.05, but the normality of the error term was significantly violated, as indicated by a Shapiro–Wilk test (statistic = 0.8535, p < 0.0001). The boxplots in Figure 3 display the AEP distributions of all of the proposed heuristics and algorithms. Accordingly, a nonparametric statistical method was used to perform multiple comparisons between these methods. A Friedman test (p < 0.0001) showed that the AEP samples did not follow the same distribution at a significance level of 0.05, on the basis of the AEP ranks on the 108 (n·m·λ·τ·ρ = 2·3·3·2·3) blocks of test problem instances.
Furthermore, to find the pairwise differences among the three heuristics, the GA, and the GAHH, we conducted a Wilcoxon–Nemenyi–McDonald–Thompson (WNMT) test (see Chapter 7 in Hollander et al. [51]). Table 3 lists the sums of ranks of the AEPs for the three heuristics, the GA, and the GAHH. The behaviors of all of the proposed algorithms could be grouped into four groups at a significance level of 0.05. As can be inferred from columns 2 and 3 of Table 3, the performance of the GAHH was the best (rank-sum = 110.0), followed by the GA (rank-sum = 223.0), whereas Moore_pi_m and Moore_pi_mean (rank-sums = 414.0 and 472.0, respectively) exhibited the poorest performances.
As for the usage of the seven low-level heuristics, the variations in the probabilities of their being called in the GAHH are displayed in Figure 4. LH3 was called most frequently and was typically followed by LH1, whereas LH4, LH5, LH6, and LH7 were rarely called.

7.2. Results for Large-Sized Orders

For a large number of jobs, we set the order size n to 100 and 200 and the number of machines m to 5, 10, and 20. One hundred problem instances were randomly generated for each parameter combination. Consequently, a total of 10,800 (= 100·2·3·3·2·3) problem instances were examined in this simulation. To evaluate the performance of the three heuristics, the GA, and the GAHH, we reported the mean and maximum relative percentage deviation (RPD), defined for each instance as $\mathrm{RPD} = 100 \times (H_i - H^*)/H^*$ [%], where $H_i$ is the objective value obtained by one of the three heuristics, the GA, or the GAHH, and $H^*$ is the minimum objective value among these five methods for that instance. The RPDs obtained with variations in n, m, λ, τ, and ρ are summarized in Table 4.
As summarized in Table 4, the average RPDs of the three modified Moore’s-type heuristics and the GA strictly decreased as the number of machines and values of the parameters λ and τ increased for n = 100 and 200. In contrast, the average RPDs of the three Moore’s-type heuristics decreased as the value of ρ increased. Moreover, with an average RPD of close to 0 for both values of n, the GAHH outperformed the other methods.
The boxplots of the RPDs of the five algorithms are depicted in Figure 5. The GAHH outperformed the other four algorithms when n was large. Overall, the average RPDs of the Moore_pi_M, Moore_pi_m, Moore_pi_mean, GA, and GAHH approaches were 106.68, 125.22, 105.45, 76.79, and 0.00, respectively. The experimental results further corroborated the superiority of the GAHH approach.
In view of the differences in the average RPDs of the three heuristics, the GA, and the GAHH, we performed an analysis to confirm the differences and make direct pairwise comparisons statistically. Another linear model, run in the SAS 9.4 environment, was fitted to the RPDs obtained under variations in the different factors, including the algorithm; the numbers of jobs and machines; and the parameters λ, τ, and ρ. The normality of the error term was rejected by the Shapiro–Wilk test (statistic = 0.9114, p < 0.0001) at a significance level of 0.05. Hence, a nonparametric statistical method, the WNMT test, was used to make multiple comparisons of the performance of the five algorithms. Table 3 lists the sums of ranks of the RPDs of the three proposed heuristics, the GA, and the GAHH. The performance levels of all of the proposed algorithms could be grouped into four groups at the 0.05 significance level. As can be inferred from columns 4 and 5 of Table 3, the GAHH (rank-sum = 108.0) yielded the best performance, followed by the GA (rank-sum = 255.0), whereas Moore_pi_m (rank-sum = 496.0) yielded the poorest performance.
The CPU times of all of the proposed algorithms were extremely short in the small-order-size case; thus, they are omitted here. Table 5 lists the CPU times of all of the proposed heuristics and metaheuristics in the large-order-size case. As summarized and illustrated in Table 5 and Figure 6, respectively, the average CPU times of the Moore_pi_M, Moore_pi_m, Moore_pi_mean, GA, and GAHH approaches were 0.32, 0.30, 0.30, 0.02, and 2.14 s, respectively, in the large-order-size case regardless of the parameter values, except for the number of orders (n). Furthermore, in terms of the differences between the GA and GAHH, under the same design with the same population, the same mutation, and IT_GA = ITRN × L_no, the computation time of the GAHH was about 2 s longer than that of the GA because the GAHH required additional CPU time to select the low-level heuristics. In the worst case, the GAHH required 8.02 s to solve an instance; however, the mean of the maximum solving times of the GAHH approach was 2.51 s. This result confirmed that the GAHH approach maintained a small population and limited the number of generations required to obtain a high-quality solution.
On the basis of the test results, we concluded that the GAHH, on average, outperformed the other heuristics and the GA in the simulation tests regardless of the order size.

8. Conclusions and Future Work

Customer order scheduling models have emerged as a major challenge in manufacturing environments and practical applications (Framinan et al. [4]). Diverging from the common assumption that component processing times, ready times, and due dates are fixed, this study investigated cases involving scenario-dependent component processing times, ready times, and due dates; the objective was to find an appropriate and robust sequence of orders to minimize the maximum of the sum of weighted numbers of tardy orders across the considered scenarios. To solve this problem, first, dominance properties and a lower bound were derived and a B&B method was applied to search for the optimal solutions for small-sized orders. Second, three variants (heuristics) of Moore's algorithm and the GAHH were developed along with a conventional GA to obtain approximate solutions for large-sized orders. Simulations were performed to evaluate the capability of the B&B method and the effects of parameter values on its performance. The experimental results obtained by considering the dominance properties and the derived lower bound indicated that the B&B method could solve problem instances with n values up to 11 within a reasonable CPU time (Table 1). The experimental tests further demonstrated that the GA and GAHH performed satisfactorily in terms of efficacy and efficiency for problem instances involving both small- and large-sized orders. In the case of the GAHH, with its scheme for randomly selecting from among seven operators, we required only a small population and a small number of iterative cycles (ITRN) to obtain a high-quality solution compared with the GA without a neighborhood heuristic. Overall, on the basis of the simulation results, we can recommend using the GAHH approach to solve the problem considered herein due to its superior performance and speed in attaining high-quality solutions in terms of the AEP and RPD. In the future, researchers can investigate the COSP with three or more scenario-dependent processing times as well as order rejection based on different criteria, for example, the total completion time, the makespan, or even a multiobjective case. Another direction for future study could involve using the GAHH and its seven low-level operators to evaluate the impact of having no operator, a single operator, or multiple operators.

Author Contributions

Conceptualization, L.-Y.L., J.-Y.X., X.Z. and C.-C.W.; Data curation, C.-C.W.; Formal analysis, W.-C.L.; Investigation, J.-Y.X.; Methodology, L.-Y.L., J.-Y.X., S.-R.C., X.Z. and C.-C.W.; Project administration, S.-R.C.; Resources, L.-Y.L. and S.-R.C.; Software, W.-C.L., J.-C.L. and Z.-L.W.; Visualization, J.-C.L. and Z.-L.W.; Writing—original draft, W.-C.L.; Writing—review and editing, C.-C.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Acknowledgments

We thank the guest editors and two referees for their positive comments and useful suggestions. This paper was supported in part by the Ministry of Science and Technology of Taiwan (MOST 110-2221-E-035-082-MY2) and in part by the National Natural Science Foundation of China (grant number 72271048).

Conflicts of Interest

The authors declare that they have no conflict of interest.

References

  1. Ahmadi, R.; Bagchi, U.; Roemer, T.A. Coordinated scheduling of customer orders for quick response. Nav. Res. Logist. 2005, 52, 493–512. [Google Scholar] [CrossRef]
  2. Guo, Q.; Tang, L. Modelling and discrete differential evolution algorithm for order rescheduling problem in steel industry. Comput. Ind. Eng. 2019, 130, 586–595. [Google Scholar] [CrossRef]
  3. Zhang, Y.; Dan, Y.; Dan, B.; Gao, H. The order scheduling problem of product-service system with time windows. Comput. Ind. Eng. 2019, 133, 253–266. [Google Scholar] [CrossRef]
  4. Framinan, J.M.; Perez-Gonzalez, P.; Fernandez-Viagas, V. Deterministic assembly scheduling problems: A review and classification of concurrent-type scheduling models and solution procedures. Eur. J. Oper. Res. 2019, 273, 401–417. [Google Scholar] [CrossRef] [Green Version]
  5. Framinan, J.; Perez-Gonzalez, P. New approximate algorithms for the customer order scheduling problem with total completion time objective. Comput. Oper. Res. 2017, 78, 181–192. [Google Scholar] [CrossRef]
  6. Sung, C.S.; Yoon, S.H. Minimizing total weighted completion time at a pre-assembly stage composed of two feeding machines. Int. J. Prod. Econ. 1998, 54, 247–255. [Google Scholar] [CrossRef]
  7. Yoon, S.H.; Sung, C.S. Fixed pre-assembly scheduling on multiple fabrication machines. Int. J. Prod. Econ. 2005, 96, 109–118. [Google Scholar] [CrossRef]
  8. Wang, G.; Cheng, T.C.E. Customer order scheduling to minimize total weighted completion time. Omega 2007, 35, 623–626. [Google Scholar] [CrossRef] [Green Version]
  9. Leung, J.Y.T.; Li, H.; Pinedo, M.; Sriskandarajah, C. Open shops with jobs overlap-revisited. Eur. J. Oper. Res. 2005, 163, 569–571. [Google Scholar] [CrossRef]
  10. Leung, J.Y.T.; Li, H.; Pinedo, M. Approximation algorithms for minimizing total weighted completion time of orders on identical machines in parallel. Nav. Res. Logist. 2006, 53, 243–260. [Google Scholar] [CrossRef]
  11. Leung, J.Y.T.; Li, H.; Pinedo, M. Scheduling orders for multiple product types to minimize total weighted completion time. Discret. Appl. Math. 2007, 155, 945–970. [Google Scholar] [CrossRef] [Green Version]
  12. Leung, J.Y.T.; Li, H.; Pinedo, M.; Zhang, J. Minimizing total weighted completion time when scheduling orders in a flexible environment with uniform machines. Inf. Process. Lett. 2007, 103, 119–129. [Google Scholar] [CrossRef]
  13. Leung, J.Y.T.; Li, H.; Pinedo, M. Scheduling orders on either dedicated or flexible machines in parallel to minimize total weighted completion time. Ann. Oper. Res. 2008, 159, 107–123. [Google Scholar] [CrossRef]
  14. Leung, J.Y.T.; Lee, C.Y.; Ng, C.W.; Young, G.H. Preemptive multiprocessor order scheduling to minimize total weighted flowtime. Eur. J. Oper. Res. 2008, 190, 40–51. [Google Scholar] [CrossRef]
  15. Wu, C.-C.; Yang, T.-H.; Zhang, X.; Kang, C.C.; Chung, I.H.; Lin, W.-C. Using heuristic and iterative greedy algorithms for the total weighted completion time order scheduling with release times. Swarm Evol. Comput. 2019, 44, 913–926. [Google Scholar] [CrossRef]
  16. Riahi, V.; Hakim Newton, M.A.; Polash, M.M.A.; Sattar, A. Tailoring customer order scheduling search algorithms. Comput. Oper. Res. 2019, 108, 155–165. [Google Scholar] [CrossRef]
  17. Li, H.; Li, Z.; Zhao, Y.; Xu, X. Scheduling customer orders on unrelated parallel machines to minimize total weighted completion time. J. Oper. Res. 2020, 72, 1726–1736. [Google Scholar] [CrossRef]
  18. Leung, J.Y.T.; Li, H.; Pinedo, M. Scheduling orders for multiple product types with due date related objectives. Eur. J. Oper. Res. 2006, 168, 370–389. [Google Scholar] [CrossRef]
  19. Lin, B.M.T.; Kononov, A.V. Customer order scheduling to minimize the number of late jobs. Eur. J. Oper. Res. 2007, 183, 944–948. [Google Scholar] [CrossRef]
  20. Lee, I.-S. Minimizing total tardiness for the order scheduling problem. Int. J. Prod. Econ. 2013, 144, 128–134. [Google Scholar] [CrossRef]
  21. Xu, J.; Wu, C.-C.; Yin, Y.; Zhao, C.L.; Chiou, Y.-T.; Lin, W.-C. An order scheduling problem with position-based learning effect. Comput. Oper. Res. 2016, 74, 175–186. [Google Scholar] [CrossRef]
  22. Biskup, D. Single-machine scheduling with learning considerations. Eur. J. Oper. Res. 1999, 115, 173–178. [Google Scholar] [CrossRef]
  23. Lin, W.-C.; Yin, Y.; Cheng, S.R.; Cheng, T.C.E.; Wu, C.H.; Wu, C.-C. Particle swarm optimization and opposite-based particle swarm optimization for two-agent multi-facility customer order scheduling with ready times. Appl. Soft Comput. 2017, 52, 877–884. [Google Scholar] [CrossRef]
  24. Koulamas, C.; Kyparisis, G.J. Single-machine and two-machine flowshop scheduling with general learning functions. Eur. J. Oper. Res. 2007, 178, 402–407. [Google Scholar] [CrossRef]
  25. Wu, C.-C.; Liu, S.C.; Zhao, C.L.; Wang, S.Z.; Lin, W.-C. A multi-machine order scheduling with learning using the memetic genetic algorithm and particle swarm optimization. Comput. J. 2018, 61, 14–31. [Google Scholar] [CrossRef]
  26. Kuo, W.H.; Yang, D.L. Minimizing the total completion time in a single-machine scheduling problem with a time-dependent learning effect. Eur. J. Oper. Res. 2006, 174, 1184–1190. [Google Scholar] [CrossRef]
  27. Wu, C.-C.; Lin, W.C.; Zhang, X.; Chung, I.H.; Yang, T.H.; Lai, K. Tardiness minimization for a customer order scheduling problem with sum-of processing-time-based learning effect. J. Oper. Res. Soc. 2019, 70, 487–501. [Google Scholar] [CrossRef]
  28. Lin, W.-C.; Xu, J.; Bai, D.; Chung, I.H.; Liu, S.C.; Wu, C.-C. Artificial bee colony algorithms for the order scheduling with release dates. Soft Comput. 2019, 23, 8677–8688. [Google Scholar] [CrossRef]
  29. Daniels, R.L.; Kouvelis, P. Robust scheduling to hedge against processing time uncertainty in single-stage production. Manag. Sci. 1995, 41, 363–376. [Google Scholar] [CrossRef]
  30. Sabuncuoglu, I.; Goren, S. Hedging production schedules against uncertainty in manufacturing environment with a review of robustness and stability research. Int. J. Comput. Integr. Manuf. 2009, 22, 138–157. [Google Scholar] [CrossRef]
  31. Sotskov, Y.N.; Sotskova, N.Y.; Lai, T.-C.; Werner, F. Scheduling with Uncertainty: Theory and Algorithms; Belorusskaya Nauka: Minsk, Belarus, 2010. [Google Scholar]
  32. Kouvelis, P.; Yu, G. Robust Discrete Optimization and Its Applications; Kluwer Academic Publishers: Amsterdam, The Netherlands, 1997. [Google Scholar]
  33. Yang, J.; Yu, G. On the robust single machine scheduling problem. J. Comb. Optim. 2002, 6, 17–33. [Google Scholar] [CrossRef]
  34. Wu, C.-C.; Bai, D.Y.; Chen, J.-H.; Lin, W.-C.; Xing, L.; Lin, J.C.; Cheng, S.R. Several variants of simulated annealing hyper-heuristic for a single-machine scheduling with two-scenario-based dependent processing times. Swarm Evol. Comput. 2021, 60, 100765. [Google Scholar] [CrossRef]
  35. Hsu, C.L.; Lin, W.C.; Duan, L.; Liao, J.R.; Wu, C.-C.; Chen, J.H. A robust two-machine flow-shop scheduling model with scenario-dependent processing times. Discret. Dyn. Nat. Soc. 2020, 2020, 3530701. [Google Scholar] [CrossRef]
  36. Wu, C.-C.; Gupta, J.N.D.; Cheng, S.R.; Lin, B.M.T.; Yip, S.H.; Lin, W.-C. Robust scheduling of a two-stage assembly shop with scenario-dependent processing times. Int. J. Prod. Res. 2021, 59, 5372–5387. [Google Scholar] [CrossRef]
  37. Wu, C.-C.; Lin, W.-C.; Zhang, X.G.; Bai, D.Y.; Tsai, Y.W.; Ren, T.; Cheng, S.R. Cloud theory-based simulated annealing for a single-machine past sequence setup scheduling with scenario-dependent processing times. Complex Intell. Syst. 2021, 7, 345–357. [Google Scholar] [CrossRef]
  38. Kämmerling, N.; Kurtz, J. Oracle-based algorithms for binary two-stage robust optimization. Comput. Optim. Appl. 2020, 77, 539–569. [Google Scholar] [CrossRef]
  39. Xuan, G.; Lin, W.C.; Cheng, S.R.; Shen, W.L.; Pan, P.A.; Kuo, C.L.; Wu, C.-C. A robust single-machine scheduling problem with two scenarios of job parameters. Mathematics 2022, 10, 2176. [Google Scholar] [CrossRef]
  40. Wu, C.-C.; Gupta, J.N.D.; Lin, W.C.; Cheng, S.R.; Chiu, Y.L.; Chen, C.J.; Lee, L.Y. Robust scheduling of two-agent customer orders with scenario-dependent component processing times and release dates. Mathematics 2022, 10, 1545. [Google Scholar] [CrossRef]
  41. Yin, Y.; Wang, D.; Cheng, T.C.E. Due Date-Related Scheduling with Two Agents: Models and Algorithms; Uncertainty and Operations Research book series; Springer Nature Singapore: Singapore, 2020. [Google Scholar]
  42. Karp, R.M. Reducibility among Combinatorial Problems. In Complexity of Computer Computations; Miller, R.E., Thatcher, J.W., Eds.; Springer: Berlin/Heidelberg, Germany, 1972; pp. 85–103. [Google Scholar]
  43. Moore, J.M. An n job one machine sequencing algorithm for minimizing the number of late jobs. Manag. Sci. 1968, 14, 102–109. [Google Scholar] [CrossRef]
  44. Holland, J. Adaptation in Natural and Artificial Systems; University of Michigan Press: Ann Arbor, MI, USA, 1975. [Google Scholar]
  45. Essafi, I.; Mati, Y.; Dauzere-Peres, S. A genetic local search algorithm for minimizing total weighted tardiness in the job-shop scheduling problem. Comput. Oper. Res. 2008, 35, 2599–2616. [Google Scholar] [CrossRef]
  46. Iyer, S.K.; Saxena, B.S. Improved memetic genetic algorithm for the permutation flowshop scheduling problem. Comput. Oper. Res. 2004, 31, 593–606. [Google Scholar] [CrossRef]
  47. Cowling, P.; Kendall, G.; Han, L. An investigation of a hyperheuristic memetic genetic algorithm applied to a trainer scheduling problem. In Proceedings of the 2002 Congress on Evolutionary Computation (CEC 2002), Honolulu, HI, USA, 12–17 May 2002; IEEE Computer Society Press: Honolulu, HI, USA, 2002; pp. 1185–1190. [Google Scholar]
  48. Anagnostopoulos, K.P.; Koulinas, G.K. A simulated annealing hyperheuristic for construction resource levelling. Constr. Manag. Econ. 2010, 28, 163–175. [Google Scholar] [CrossRef]
  49. Montgomery, D.C. Design and Analysis of Experiments, 5th ed.; John Wiley & Sons: New York, NY, USA, 2001. [Google Scholar]
  50. Reeves, C. Heuristics for scheduling a single machine subject to unequal job release times. Eur. J. Oper. Res. 1995, 80, 397–403. [Google Scholar] [CrossRef]
  51. Hollander, M.; Wolfe, D.A.; Chicken, E. Nonparametric Statistical Methods, 3rd ed.; John Wiley & Sons: Hoboken, NJ, USA, 2014. [Google Scholar]
Figure 1. Flowchart of GAHH.
Figure 2. Tuning the GAHH parameters.
Figure 3. Boxplots of AEPs of heuristics and algorithms (small n).
Figure 4. Variations in probabilities of low-level heuristics.
Figure 5. Boxplots of RPDs of heuristics and algorithms (large n).
Figure 6. Boxplots of CPU times of the heuristics and algorithms (large n).
Table 1. Number of B&B nodes and CPU time for small values of n.

                        Node                        CPU time
n     m           Mean          Max            Mean       Max
9     2        102,150      356,594            0.56       1.25
      3        108,842      359,043            0.68       1.48
      4        116,378      386,330            0.82       1.79
11    2      6,065,047   25,997,813           57.42     193.89
      3      6,701,240   27,681,786           74.84     235.82
      4      6,952,756   27,613,651           90.24     272.42
Mean         3,341,069   13,732,536           37.43     117.78

n     λ           Mean          Max            Mean       Max
9     0.1       63,883       158,272           0.52       1.11
      0.3       84,005       374,265           0.61       1.56
      0.5      179,482       569,430           0.93       1.86
11    0.1    3,169,517     9,901,037          47.60     131.62
      0.3    4,378,853    21,526,663          61.14     236.57
      0.5   12,170,673    49,865,551         113.76     333.93
Mean         3,341,069    13,732,536          37.43     117.78

n     τ           Mean          Max            Mean       Max
9     0.25      73,284       287,313           0.55       1.40
      0.50     144,963       447,331           0.82       1.62
11    0.25   3,734,194    17,472,356          53.12     200.44
      0.50   9,411,835    36,723,144          95.21     267.64
Mean         3,341,069    13,732,536          37.43     117.78

n     ρ           Mean          Max            Mean       Max
9     0.25     120,476       374,104           0.71       1.49
      0.50     105,302       379,570           0.67       1.51
      0.75     101,593       348,294           0.68       1.53
11    0.25   7,668,642    29,579,795          80.48     255.49
      0.50   6,353,625    26,905,933          71.67     225.25
      0.75   5,696,776    24,807,523          70.35     221.39
Mean         3,341,069    13,732,536          37.43     117.78
Table 2. AEPs of the proposed heuristics and algorithms for small n.

              Moores_pi_M         Moores_pi_m         Moores_pi_Mean      GA                  GAHH
n     m       Mean      Max       Mean      Max       Mean      Max       Mean      Max       Mean    Max
9     2       63.19     684.27    63.39     851.01    65.54     650.55    18.47     278.18    0.77    26.61
      3       47.03     335.87    45.22     355.35    47.88     284.18    15.21     130.37    0.35    14.42
      4       45.80     319.15    43.15     237.90    45.31     307.29    13.79      92.95    0.32    14.15
11    2       81.46     478.10    78.19     448.14    83.70     448.73    26.99     226.09    2.45    69.87
      3       68.64     467.13    64.53     460.21    70.29     473.65    22.55     121.51    1.89    33.21
      4       66.44     432.34    62.17     326.86    66.07     340.21    21.62     125.73    1.55    27.67
Mean          62.09     452.81    59.44     446.58    63.13     417.44    19.77     162.47    1.22    30.99

n     λ       Mean      Max       Mean      Max       Mean      Max       Mean      Max       Mean    Max
9     0.1    107.49    1009.37   103.09    1121.00   108.48     927.90    27.76     348.97    1.10    34.63
      0.3     38.61     220.82    37.83     248.44    39.25     234.59    14.14     106.10    0.26    16.50
      0.5      9.92      75.27    10.84      74.82    11.01      79.53     5.57      46.44    0.07     4.04
11    0.1    148.71     990.73   134.73     822.99   148.81     891.54    41.34     303.50    4.46    98.83
      0.3     53.71     301.37    54.70     323.03    55.24     282.37    21.32     116.75    1.12    18.92
      0.5     14.12      85.47    15.46      89.19    16.01      88.67     8.50      53.08    0.30    13.01
Mean          62.09     447.17    59.44     446.58    63.13     417.43    19.77     162.47    1.22    30.99

n     τ       Mean      Max       Mean      Max       Mean      Max       Mean      Max       Mean    Max
9     0.25    84.43     796.07    82.39     871.91    85.42     727.47    24.21     271.87    0.75    26.91
      0.50    19.58     100.94    18.78      90.93    20.40     100.54     7.44      62.47    0.20     9.87
11    0.25   116.61     813.22   110.02     722.44   117.08     735.28    35.40     260.65    3.16    70.70
      0.50    27.75     105.17    26.57     101.04    29.63     106.44    12.04      54.90    0.76    16.47
Mean          62.09     453.85    59.44     446.58    63.13     417.43    19.77     162.47    1.22    30.99

n     ρ       Mean      Max       Mean      Max       Mean      Max       Mean      Max       Mean    Max
9     0.25    38.27     233.89    37.68     247.73    38.07     212.74    11.73      75.19    0.47    19.79
      0.50    53.90     296.54    51.15     289.09    54.89     294.14    16.97     122.25    0.51    18.54
      0.75    63.85     816.19    62.93     907.44    65.77     735.13    18.77     304.06    0.45    16.85
11    0.25    51.33     272.43    52.18     292.91    52.07     280.87    17.25      97.82    1.01    21.54
      0.50    75.80     463.10    72.77     441.39    77.43     416.31    25.34     169.34    1.81    31.39
      0.75    89.40     642.04    79.93     500.92    90.56     565.40    28.57     206.17    3.07    77.82
Mean          62.09     454.03    59.44     446.58    63.13     417.43    19.77     162.47    1.22    30.99
Table 3. Nonparametric multiple comparisons of AEP and relative percentage deviation.

                                           Small job size n                     Large job size n
Pairwise comparison between algorithms     |Rank-sum difference|   >64.4 *      |Rank-sum difference|   >64.4 *
Moores_pi_M vs. Moores_pi_m                |401.0–414.0|           NO           |359.0–496.0|           YES
Moores_pi_M vs. Moores_pi_mean             |401.0–472.0|           YES          |359.0–402.0|           NO
Moores_pi_M vs. GA                         |401.0–223.0|           YES          |359.0–255.0|           YES
Moores_pi_M vs. GAHH                       |401.0–110.0|           YES          |359.0–108.0|           YES
Moores_pi_m vs. Moores_pi_mean             |414.0–472.0|           NO           |496.0–402.0|           YES
Moores_pi_m vs. GA                         |414.0–223.0|           YES          |496.0–255.0|           YES
Moores_pi_m vs. GAHH                       |414.0–110.0|           YES          |496.0–108.0|           YES
Moores_pi_mean vs. GA                      |472.0–223.0|           YES          |402.0–255.0|           YES
Moores_pi_mean vs. GAHH                    |472.0–110.0|           YES          |402.0–108.0|           YES
GA vs. GAHH                                |223.0–110.0|           YES          |255.0–108.0|           YES
* Approximated using a formula reported in Hollander et al. [51], page 316.
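To indicate how the entries of Table 3 can be reproduced, the following minimal sketch (not the authors' code) ranks the compared methods within each test instance and checks each pairwise absolute rank-sum difference against the critical difference 64.4 reported above; the data layout and the function name pairwise_rank_sum_comparison are assumptions made for illustration only.

```python
# Sketch of a within-instance (Friedman-style) rank-sum comparison, consistent
# with Table 3, where rank sums over all instances are compared pairwise.
# Assumed input: `data` has one row per test instance and one column per method.
from itertools import combinations
import numpy as np
from scipy.stats import rankdata

def pairwise_rank_sum_comparison(data, names, critical_diff=64.4):
    # Rank the methods within every instance (smaller AEP/RPD -> lower rank).
    ranks = np.apply_along_axis(rankdata, 1, np.asarray(data, dtype=float))
    rank_sums = dict(zip(names, ranks.sum(axis=0)))   # one rank sum per method
    for a, b in combinations(names, 2):
        diff = abs(rank_sums[a] - rank_sums[b])
        verdict = "YES" if diff > critical_diff else "NO"
        print(f"{a} vs. {b}: |{rank_sums[a]:.1f} - {rank_sums[b]:.1f}| "
              f"= {diff:.1f} -> {verdict}")
```

Read against Table 3, for example, |401.0 - 414.0| = 13.0 does not exceed 64.4, so Moores_pi_M and Moores_pi_m cannot be declared different for small n, whereas every comparison involving GAHH exceeds the threshold.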
Table 4. RPDs obtained using proposed heuristics and algorithms for large n.

              Moores_pi_M         Moores_pi_m         Moores_pi_Mean      GA                  GAHH
n     m       Mean      Max       Mean      Max       Mean      Max       Mean      Max       Mean    Max
100   5      117.98     190.76   134.45     225.48   115.52     194.47    79.90     143.19    0.00    0.00
      10     111.11     175.04   127.34     213.33   108.83     174.50    74.67     126.33    0.00    0.00
      15     107.61     196.72   123.20     207.02   105.71     201.19    72.78     130.91    0.00    0.00
200   5      105.19     154.86   127.30     180.75   105.31     156.38    81.17     123.84    0.00    0.00
      10     100.35     146.16   120.94     172.32    99.54     151.17    77.28     114.04    0.00    0.00
      15      97.85     144.41   118.07     166.99    97.77     139.84    74.94     109.50    0.00    0.00
Mean         106.68     167.99   125.22     194.32   105.45     169.59    76.79     124.64    0.00    0.00

n     λ      Mean       Max      Mean       Max      Mean       Max      Mean       Max      Mean    Max
100   0.1    184.26     323.65   235.51     406.75   174.74     320.61   114.44     212.68    0.00    0.00
      0.3    109.43     168.40   103.81     162.00   109.91     175.22    71.13     118.47    0.00    0.00
      0.5     43.02      70.46    45.67      77.09    45.42      74.33    41.78      69.28    0.00    0.00
200   0.1    161.27     245.04   210.16     302.04   158.99     243.33   124.74     189.48    0.00    0.00
      0.3    101.68     139.58   110.93     150.51   101.66     141.29    68.45      98.37    0.00    0.00
      0.5     40.45      60.81    45.22      67.52    41.97      62.76    40.21      59.54    0.00    0.00
Mean         106.68     167.99   125.22     194.32   105.45     169.59    76.79     124.64    0.00    0.00

n     τ      Mean       Max      Mean       Max      Mean       Max      Mean       Max      Mean    Max
100   0.25   158.29     274.79   190.88     332.02   155.80     284.64   106.99     195.69    0.00    0.00
      0.50    66.18     100.22    65.78      98.53    64.24      95.47    44.58      71.27    0.00    0.00
200   0.25   139.15     209.37   170.33     245.76   139.91     212.30   110.48     167.20    0.00    0.00
      0.50    63.12      87.58    73.87     100.95    61.83      85.96    45.12      64.39    0.00    0.00
Mean         106.68     167.99   125.22     194.32   105.45     169.59    76.79     124.64    0.00    0.00

n     ρ      Mean       Max      Mean       Max      Mean       Max      Mean       Max      Mean    Max
100   0.25    93.79     156.60   108.21     173.59    94.62     160.03    70.15     119.17    0.00    0.00
      0.50   117.33     211.28   134.50     232.41   115.43     217.54    79.01     145.33    0.00    0.00
      0.75   125.60     194.63   142.28     239.83   120.02     192.59    78.19     135.93    0.00    0.00
200   0.25    86.77     128.64   106.01     146.45    89.63     129.30    73.60     107.26    0.00    0.00
      0.50   105.66     155.60   126.86     181.39   105.13     156.28    81.10     122.02    0.00    0.00
      0.75   110.97     161.18   133.44     192.22   107.86     161.80    78.70     118.10    0.00    0.00
Mean         106.68     167.99   125.22     194.32   105.45     169.59    76.79     124.64    0.00    0.00
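A note on reading Table 4: the GAHH columns are identically zero, which indicates that GAHH attained the best objective value on every large-n instance and therefore serves as the reference point. The short sketch below computes relative percentage deviations under the common definition RPD = 100 × (value − best)/best; this is a plausible reconstruction for the reader's convenience only, and the paper's exact formula may differ.

```python
# Hedged sketch of the usual RPD computation for one instance; `objective_values`
# maps each method name to its objective value. The best (smallest) value gets
# RPD = 0, which matches the all-zero GAHH column in Table 4.
def relative_percentage_deviations(objective_values):
    best = min(objective_values.values())            # reference (best) value
    assert best > 0, "assumes a strictly positive best objective value"
    return {name: 100.0 * (val - best) / best
            for name, val in objective_values.items()}

# Example with hypothetical numbers:
# relative_percentage_deviations({"GAHH": 40, "GA": 72, "Moores_pi_M": 88})
# -> {"GAHH": 0.0, "GA": 80.0, "Moores_pi_M": 120.0}
```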
Table 5. CPU times obtained with proposed heuristics and algorithms for large n.

              Moores_pi_M      Moores_pi_m      Moores_pi_Mean   GA               GAHH
n     m       Mean    Max      Mean    Max      Mean    Max      Mean    Max      Mean    Max
100   5       0.04    0.05     0.04    0.05     0.04    0.05     0.01    0.02     0.51    0.58
      10      0.07    0.09     0.07    0.09     0.07    0.09     0.01    0.03     0.91    1.01
      15      0.10    0.13     0.10    0.13     0.10    0.13     0.01    0.04     1.33    1.52
200   5       0.28    0.35     0.28    0.33     0.29    0.35     0.01    0.03     1.78    1.98
      10      0.50    0.67     0.49    0.57     0.49    0.56     0.02    0.05     3.11    3.47
      15      0.90    1.13     0.83    1.05     0.83    1.07     0.03    0.08     5.21    6.46
Mean          0.32    0.40     0.30    0.37     0.30    0.38     0.02    0.04     2.14    2.50

n     λ       Mean    Max      Mean    Max      Mean    Max      Mean    Max      Mean    Max
100   0.1     0.07    0.09     0.07    0.09     0.07    0.09     0.01    0.03     0.90    1.01
      0.3     0.07    0.09     0.07    0.09     0.07    0.09     0.01    0.03     0.89    1.01
      0.5     0.07    0.09     0.07    0.09     0.07    0.09     0.01    0.02     0.91    1.02
200   0.1     0.51    0.62     0.49    0.59     0.51    0.61     0.03    0.06     3.22    3.65
      0.3     0.54    0.69     0.52    0.64     0.53    0.66     0.03    0.07     3.42    4.26
      0.5     0.61    0.78     0.56    0.68     0.55    0.67     0.01    0.03     3.28    3.80
Mean          0.31    0.39     0.30    0.36     0.30    0.37     0.02    0.04     2.10    2.46

n     τ       Mean    Max      Mean    Max      Mean    Max      Mean    Max      Mean    Max
100   0.25    0.07    0.09     0.07    0.09     0.07    0.09     0.01    0.03     0.93    1.05
      0.50    0.07    0.09     0.07    0.09     0.07    0.09     0.01    0.03     0.91    1.02
200   0.25    0.55    0.69     0.53    0.64     0.53    0.64     0.03    0.06     3.39    4.00
      0.50    0.58    0.75     0.54    0.66     0.54    0.68     0.02    0.05     3.35    3.94
Mean          0.32    0.41     0.30    0.37     0.30    0.38     0.02    0.04     2.15    2.50

n     ρ       Mean    Max      Mean    Max      Mean    Max      Mean    Max      Mean    Max
100   0.25    0.07    0.09     0.07    0.09     0.07    0.09     0.01    0.03     0.92    1.05
      0.50    0.07    0.10     0.07    0.09     0.07    0.09     0.01    0.03     0.91    1.02
      0.75    0.07    0.09     0.07    0.09     0.07    0.09     0.01    0.03     0.93    1.05
200   0.25    0.56    0.70     0.52    0.63     0.53    0.65     0.02    0.05     3.34    3.97
      0.50    0.55    0.70     0.53    0.63     0.53    0.66     0.02    0.06     3.38    3.99
      0.75    0.58    0.75     0.54    0.68     0.55    0.67     0.03    0.06     3.38    3.96
Mean          0.32    0.41     0.30    0.37     0.30    0.38     0.02    0.04     2.14    2.51
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
