Article

A Hybrid Imperialist Competitive Algorithm for the Distributed Unrelated Parallel Machines Scheduling Problem

Youlian Zheng, Yue Yuan, Qiaoxian Zheng and Deming Lei
1 Faculty of Computer Science and Information Engineering, Hubei University, Wuhan 430061, China
2 School of Automation, Wuhan University of Technology, Wuhan 430062, China
* Authors to whom correspondence should be addressed.
Symmetry 2022, 14(2), 204; https://doi.org/10.3390/sym14020204
Submission received: 17 December 2021 / Revised: 12 January 2022 / Accepted: 14 January 2022 / Published: 21 January 2022
(This article belongs to the Special Issue Meta-Heuristics for Manufacturing Systems Optimization)

Abstract

In this paper, the distributed unrelated parallel machines scheduling problem (DUPMSP) is studied, and a hybrid imperialist competitive algorithm (HICA) is proposed to minimize total tardiness. All empires are categorized into three types: the strongest empire, the weakest empire, and the other empires. Diversified assimilation is implemented by using different search operators in the different types of empires, and a novel imperialist competition is carried out among all empires except the strongest one. A knowledge-based local search is embedded. Extensive experiments were conducted to compare the HICA with other algorithms from the literature. The computational results demonstrate that the new strategies are effective and that the HICA is a promising approach to solving the DUPMSP.

1. Introduction

In recent years, single-factory or centralized production has been continuously replaced by multi-factory production or distributed manufacturing with the further development of globalization. Distributed manufacturing enables manufacturers to be closer to their customers and suppliers, to produce and market their products more effectively, to respond to market changes more quickly, and to achieve better product quality, lower production costs, and reduced management risk. As an important part of manufacturing systems, scheduling has shifted from single-factory scheduling to distributed scheduling with this change of production mode. Distributed scheduling has received considerable attention in the past decade [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30].
As a common distributed scheduling problem, the distributed parallel machine scheduling problem (DPMSP) has attracted attention since the work of Hooker [1]. Chen and Pundoor [2] analyzed the computational complexity of scheduling in supply chains and proposed some fast heuristics. Terrazas-Moreno and Grossmann [3] gave a hybrid bi-level and spatial Lagrangian decomposition method for the scheduling and planning problem in multiple factories at different sites. Behnamian and Fatemi Ghomi [4] applied a heuristic and a genetic algorithm (GA) with a new encoding scheme to minimize the makespan in heterogeneous factories. Behnamian [5] provided a decomposition-based hybrid variable neighborhood search/tabu search algorithm to deal with a DPMSP considering the total cost of some factories and the profit of the others. Behnamian and Fatemi Ghomi [6] developed a Monte-Carlo-based heuristic with seven local search algorithms to minimize a cost-related objective. Behnamian [7] designed a two-phase algorithm with particle swarm optimization for a DPMSP with parallel jobs.
Lei et al. [8] dealt with the distributed unrelated parallel machine scheduling problem (DUPMSP) in heterogeneous factories and proposed a novel imperialist competitive algorithm (ICA) with memory, new revolution, and imperialist competition to minimize the makespan. Lei and Liu [9] studied the DUPMSP with preventive maintenance and makespan minimization and presented an artificial bee colony (ABC) algorithm in which the whole swarm is divided into one employed bee colony and three onlooker bee colonies, the four colonies differing from each other in their search strategies. Lei et al. [10] considered a multi-objective DUPMSP and developed an improved ABC algorithm to minimize the makespan and total tardiness simultaneously, in which problem-related knowledge was proven and a knowledge-based neighborhood search was proposed. Pan et al. [11] solved an energy-efficient DUPMSP by using a knowledge-based two-population optimization algorithm to minimize total energy consumption and total tardiness simultaneously, in which the nondominated sorting genetic algorithm-II and differential evolution performed cooperatively and two knowledge-based local search operators were proposed.
The ICA is constructed by simulating sociopolitical behaviors. It is made up of initialization, the construction of initial empires, assimilation, revolution, and imperialist competition. Unlike other meta-heuristics [31,32,33], the ICA has some characteristic features, namely a good neighborhood search ability, an effective global search property, and a good convergence rate [34].
As an effective method for solving production scheduling problems [35], the ICA has been extensively applied to many scheduling problems. Banisadr et al. [36] gave a hybrid ICA for single-machine scheduling. For assembly flow shop scheduling, Shokrollahpour et al. [37] developed a new ICA to optimize two conflicting objectives, and Seidgar et al. [38] proposed an ICA to solve a two-stage problem. With respect to the flexible job shop scheduling problem (FJSP), Zandieh et al. [39] applied an improved ICA to solve the problem with maintenance. Karimi et al. [40] developed a hybrid algorithm with the ICA and simulated annealing for a problem with transportation times. Lei et al. [41] presented a two-phase algorithm based on the ICA and variable neighborhood search for a problem with an energy consumption threshold. Abedi et al. [42] gave a multi-objective ICA for identical parallel batch-processing machines' scheduling. Yazdani et al. [43] proposed a hybrid algorithm with the ICA and local search for a two-agent parallel machine scheduling problem. An improved ICA was proposed by Zhang et al. [44] to solve photolithography machines' scheduling. Li et al. [45] presented an ICA with empire cooperation for solving a fuzzy distributed assembly scheduling problem. Li et al. [46] proposed an ICA with feedback to solve an energy-efficient FJSP with transportation and sequence-dependent setup times.
As a meta-heuristic with significant features, the ICA has great potential to generate high-quality solutions for the DPMSP; on the other hand, in the existing ICA [34,35,36], each empire uses the same search operator in assimilation, all empires compete in the imperialist competition, and problem-related properties are not fully used. In general, assimilation and revolution should be performed according to the features of each empire; in particular, assimilation in the weakest empire should be intensified and implemented by moving colonies toward the imperialists of other empires or other good solutions. Moreover, excluding the strongest empire from the imperialist competition is useful to avoid the premature removal of other empires, and the effective usage of problem-related properties or knowledge can improve the search efficiency. Thus, it is beneficial to investigate these improvements of the ICA to solve the DUPMSP efficiently.
In this study, the DUPMSP with total tardiness is considered and a hybrid imperialist competitive algorithm (HICA) is proposed. In the HICA, all empires are divided into three types: the strongest empire, the weakest empire, and the other empires. Diversified assimilation is implemented, that is, each type of empire is provided with an assimilation different from that of the other types. A novel imperialist competition is implemented among all empires except the first type. Problem-related knowledge is included in the local search, and an effective way is applied to integrate this knowledge with the ICA. Extensive computational experiments were conducted, and the results validate the effectiveness of the new strategies of the HICA and its promising advantages for the DUPMSP.
The paper is organized as follows. The problem description is introduced in Section 2. The HICA for the DUPMSP is described in Section 3. Numerical test experiments are given in Section 4, and the conclusions and future topics are provided in the final section.

2. Problem Description

The DUPMSP is described below. There exist $n$ jobs $J_1, J_2, \dots, J_n$ processed among $F$ factories at different sites. Factory $f = 1, 2, \dots, F$ possesses $m_f$ unrelated parallel machines $M_{s_f+1}, \dots, M_{s_f+m_f}$, which are available at all times, where $s_f = \sum_{l=1}^{f-1} m_l$ for $f > 1$ and $s_1 = 0$. The total number of machines in all factories is $W = \sum_{f=1}^{F} m_f$. Each job $J_i$ is available at Time 0 and has a due date $d_i$; $p_{ik}$ denotes the processing time of job $J_i$ on machine $M_k$, $k = 1, 2, \dots, W$.
There are some constraints on jobs and machines.
Each machine can deal with at most one job at a time.
No jobs can be processed on more than one machine at a time.
Operations are not allowed to be interrupted.
The goal of the problem is to minimize the total tardiness.
$Tard_{tot} = \sum_{i=1}^{n} T_i$
where $C_i$ and $T_i$ are the completion time and tardiness of job $J_i$, $T_i = \max\{C_i - d_i, 0\}$, and $Tard_{tot}$ denotes the total tardiness.
The DUPMSP is an extended version of the unrelated parallel machine scheduling problem (PMSP). The PMSP with total tardiness is NP-hard, so the DUPMSP with total tardiness is also NP-hard. When all parallel machines are identical and the objective is the makespan, any two machines are symmetrical to each other; for example, for $M_{s_f+1}$ and $M_{s_f+2}$, a job is finished at the same time whether it is processed on one machine or the other. In the DUPMSP, this symmetry is partly destroyed because the machines are unrelated; however, some symmetry still exists.
The DUPMSP with total tardiness consists of three sub-problems: factory assignment, machine assignment, and scheduling. Factory assignment decides which jobs are allocated to each factory, and machine assignment selects an appropriate machine for each job in a factory. Lei et al. [8] analyzed the strongly coupled relation among the sub-problems and pointed out that the two assignment sub-problems can be integrated into an extended machine assignment, in which each job is allocated to one of the $W$ machines rather than to a machine within a single factory.
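To make the objective concrete, the following minimal Python sketch evaluates the total tardiness of a candidate schedule once every job has been assigned to one of the $W$ machines and sequenced on it; the data structures and names are illustrative and not taken from the paper.

```python
def total_tardiness(sequences, proc_time, due_date):
    """Evaluate Tard_tot for a schedule.

    sequences : dict mapping machine index k to the ordered list of jobs on it
    proc_time : proc_time[i][k] = p_ik, processing time of job i on machine k
    due_date  : due_date[i] = d_i
    """
    tard = 0.0
    for k, jobs in sequences.items():
        t = 0.0                                   # every machine is available at time 0
        for i in jobs:
            t += proc_time[i][k]                  # completion time C_i
            tard += max(t - due_date[i], 0.0)     # T_i = max(C_i - d_i, 0)
    return tard


# tiny example: 3 jobs, W = 2 machines; jobs 0 and 2 run on machine 0, job 1 on machine 1
p = [[4, 6], [3, 2], [5, 5]]
d = [5, 4, 9]
print(total_tardiness({0: [0, 2], 1: [1]}, p, d))  # 0.0: no job finishes after its due date
```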

3. HICA for the DUPMSP

In the ICA, a country represents a solution of the problem, and the solutions in population $P$ are categorized into two parts: imperialists and colonies; the former are the best solutions in $P$, and the latter are all remaining solutions of $P$. The main steps of the ICA consist of the construction of the initial empires, assimilation, revolution, and imperialist competition.
In the existing ICA [34,36,41], improvements are made by using new strategies for the initial empires, assimilation, revolution, and imperialist competition; however, assimilation and revolution are frequently executed in the same way for all empires and are seldom implemented in terms of the characteristics of the empires. On the other hand, if the strongest empire has the biggest normalized total cost $\overline{TC}_k$, then this empire is likely to win the imperialist competition, and colonies are easily reallocated into it; as a result, the premature removal of other empires may occur. A novel algorithm named the HICA is designed based on the above analyses.

3.1. Initialization and Initial Empires

As stated above, the DUPMSP is composed of an extended machine assignment and scheduling. Lei et al. [10] proposed a two-string representation. For the DUPMSP, a solution is represented as a machine assignment string $[M_{h_1}, M_{h_2}, \dots, M_{h_n}]$ and a scheduling string $[q_1, q_2, \dots, q_n]$, where machine $M_{h_i}$ is allocated to job $J_i$ and $q_l$ is a real number corresponding to $J_l$. The decoding procedure of [10] was directly used in this study.
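The exact decoding procedure of [10] is not reproduced in this text; the sketch below assumes the common random-key interpretation, in which the jobs assigned to a machine are processed in ascending order of their $q$ values. All names are illustrative.

```python
def decode(machine_string, q_string):
    """Turn the two-string representation into per-machine job sequences.

    machine_string[i] : index of the machine assigned to job i
    q_string[i]       : real-valued scheduling key of job i
    Assumption: jobs sharing a machine are sequenced in ascending order of q.
    """
    sequences = {}
    for job, machine in enumerate(machine_string):
        sequences.setdefault(machine, []).append(job)
    for jobs in sequences.values():
        jobs.sort(key=lambda j: q_string[j])
    return sequences


print(decode([0, 1, 0], [0.7, 0.5, 0.2]))  # {0: [2, 0], 1: [1]}: job 2 precedes job 0 on machine 0
```

The resulting sequences can then be evaluated with a total-tardiness routine such as the sketch given after Section 2.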
An initial population $P$ with $N$ solutions is randomly generated, and all solutions are sorted according to their cost. The $N_{im}$ solutions with the smallest cost are chosen as imperialists, the remaining countries are used as colonies, and the initial empires are then constructed. Algorithm 1 shows the construction of the initial empires, where $Tard_{tot,i}$ is the total tardiness of the $i$-th imperialist and $F_k$ and $NC_k$ are defined by:
$F_k = \bar{c}_k / \sum_{l \in S_{im}} \bar{c}_l$
$NC_k = \mathrm{round}(F_k \times N_{col})$
where $NC_k$ is the number of colonies possessed by imperialist $k$, $S_{im}$ is the set of all imperialists, $N_{col} = N - N_{im}$ is the number of colonies, and $\mathrm{round}(x)$ denotes the nearest integer to $x$.
Algorithm 1 Initial empires.
1: Determine $N_{im}$ imperialists from population $P$, and sort them in ascending order of total tardiness
2: Compute the normalized cost $\bar{c}_k = Tard_{tot, N_{im}-k+1}$ for imperialist $k = 1, 2, \dots, N_{im}$
3: Calculate the power $F_k$ and $NC_k$, and randomly allocate $NC_k$ colonies to each imperialist
In the HICA, the cost $c_i$ of a solution $x_i$ is defined as its objective value. When all imperialists are sorted, Imperialist 1 has the smallest total tardiness $Tard_{1,tot}$, Imperialist 2 possesses the second smallest total tardiness $Tard_{2,tot}$, and so on; obviously, $Tard_{1,tot} \le Tard_{2,tot} \le \dots \le Tard_{N_{im},tot}$.
For each initial empire $k$, its total cost $TC_k$ is computed. Suppose that $TC_1 \le TC_2 \le \dots \le TC_{N_{im}}$, that is, Empire 1 is the strongest one and Empire $N_{im}$ is the weakest one. All empires are divided into three types: the first type contains only Empire 1, the third type contains only Empire $N_{im}$, and the second type is made up of Empires $2, \dots, N_{im}-1$.
$TC_k = c_k + \zeta \cdot \dfrac{\sum_{\lambda \in Q_k} c_\lambda}{NC_k}$
where $Q_k$ is the set of colonies possessed by imperialist $k$ and $\zeta$ is a real number, set to 0.1.
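A minimal sketch of Algorithm 1 together with the empire cost $TC_k$ is given below; solutions are represented only by their cost values, and rounding leftovers in the colony allocation are simply ignored, so this is an illustration rather than the authors' implementation.

```python
import random

def build_empires(costs, n_imp, zeta=0.1):
    """Sketch of Algorithm 1 plus the empire cost TC_k.

    costs : list with the total tardiness of every solution in the population;
    solutions are identified by their index in this list.
    """
    order = sorted(range(len(costs)), key=lambda i: costs[i])
    imperialists, colonies = order[:n_imp], order[n_imp:]
    n_col = len(colonies)

    # normalized cost c_bar_k = Tard_{tot, N_im - k + 1} (mirror-ranked imperialist)
    c_bar = [costs[imperialists[n_imp - 1 - k]] for k in range(n_imp)]
    F = [c / sum(c_bar) for c in c_bar]            # power F_k
    NC = [round(f * n_col) for f in F]             # colonies owned by imperialist k

    random.shuffle(colonies)
    empires, start = [], 0
    for k in range(n_imp):
        owned = colonies[start:start + NC[k]]      # rounding leftovers are ignored here
        start += NC[k]
        mean_colony_cost = sum(costs[i] for i in owned) / max(len(owned), 1)
        tc = costs[imperialists[k]] + zeta * mean_colony_cost        # TC_k
        empires.append({"imperialist": imperialists[k], "colonies": owned, "TC": tc})
    return empires


empires = build_empires([random.uniform(100, 200) for _ in range(20)], n_imp=4)
print([round(e["TC"], 1) for e in empires])
```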

3.2. Diversified Assimilation and New Revolution

Assimilation and revolution are the main ways to produce new solutions. In general, the process of assimilation is identical for all empires. In the HICA, diversified assimilation is implemented, that is, assimilation is performed differently in the different types of empires. Algorithm 2 shows the steps of diversified assimilation and new revolution, where $\alpha$ is a real number, $\beta$ is an integer, the retained set $\Omega$ is used to store the solutions generated in Empire 1 (a new solution produced in Empire 1 is added into $\Omega$ directly), and $UA_i$ and $UR_i$ are the probabilities of assimilation and revolution, respectively. In this study, we set $UA_2 = 0.8$, $UR_2 = 0.1$, $UR_3 = 0.2$, $\alpha = 0.1$, and $\beta = 5$ based on experiments.
Assimilation is executed by a global search between the colony and its learning object. In all empires except the worst one, the learning object of a colony is its imperialist. For colony $x_i$ and its learning object $y$, the global search works as follows. Two positions $k_1, k_2$ are stochastically decided, and the machines between these positions on the first string of $x_i$ are directly replaced with those on the first string of $y$ to generate a new solution $z$; if $z$ has a smaller or identical $Tard_{tot}$ compared with $x_i$, then $x_i$ is replaced with $z$; otherwise, the genes of $z$ between positions $k_1$ and $k_2$ of the scheduling string are replaced by those of $y$ at the same positions, and if the newly obtained solution $z$ meets the above condition with respect to $x_i$, then $z$ substitutes for $x_i$.
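The two-stage segment copy can be sketched as follows; evaluate stands for the total-tardiness decoder, and the tuple representation of a solution is an assumption made for illustration.

```python
import random

def global_search(colony, target, evaluate):
    """Two-stage segment copy used in assimilation.

    colony, target : solutions as (machine_string, q_string) pairs of lists
    evaluate       : function returning the total tardiness of a solution
    """
    m_col, q_col = colony
    m_tgt, q_tgt = target
    k1, k2 = sorted(random.sample(range(len(m_col)), 2))

    # stage 1: copy the machine-assignment segment [k1, k2] from the learning object
    z = (m_col[:k1] + m_tgt[k1:k2 + 1] + m_col[k2 + 1:], q_col[:])
    if evaluate(z) <= evaluate(colony):
        return z
    # stage 2: otherwise also copy the scheduling-string segment and test again
    z2 = (z[0], q_col[:k1] + q_tgt[k1:k2 + 1] + q_col[k2 + 1:])
    return z2 if evaluate(z2) <= evaluate(colony) else colony
```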
Algorithm 2 Diversified assimilation and new revolution.
1: Add all solutions of Empire 1 into the retained set $\Omega$
2: Choose colonies from Empire 1 with probability $UA_1$, and execute assimilation between each chosen colony and its imperialist
3: Select colonies of Empire 1 with revolution probability $UR_2$, and apply the multiple-neighborhood search to each chosen colony
4: Choose the best $NC_1 + 1$ solutions from the retained set $\Omega$, replace all solutions in Empire 1, and decide on the new imperialist
5: for $k = 2$ to $N_{im} - 1$ do
6:     calculate $\tilde{c}_k = \sum_{x_i \in Q_k}(c_i - c_k) / NC_k$
7:     if $\tilde{c}_k < \alpha \times c_k$ then
8:         implement assimilation with $UA_1$ and revolution with $UR_1$ as done in Empire 1
9:     else
10:        execute assimilation with $1 - UA_1$ and revolution with $1 - UR_1$ as done in Empire 1
11:    end if
12: end for
13: For Empire $N_{im}$, choose colonies with probability $UA_2$; execute assimilation between each chosen colony and one of the best $\beta$ solutions in $\Omega$; perform revolution with $UR_3$ in the same way as for Empire 1
In Algorithm 2, the assimilation probability or the learning object differs between the different types of empires. For example, in the worst empire, the imperialist may have low quality and cannot guide its colonies, so each colony learns from one of the best solutions of $P$ to obtain high-quality solutions.
Algorithm 2 also shows the revolution in each type of empire. A multiple-neighborhood search is used in revolution, with seven neighborhood structures $N_1, N_2, \dots, N_7$.
$N_1$ is described as follows: move the job with the biggest tardiness from its current machine $M_k$ to a randomly chosen machine $M_w$ ($w \ne k$). $N_2$ is performed on two randomly chosen machines $M_{k_1}$ and $M_{k_2}$: shift the job with the biggest tardiness from $M_{k_1}$ to $M_{k_2}$. $N_3$ is also applied to two randomly chosen machines $M_{k_1}$ and $M_{k_2}$ by exchanging the job with the biggest tardiness on $M_{k_1}$ and the job with the smallest tardiness on $M_{k_2}$.
$N_4$ acts on the scheduling string by exchanging two randomly chosen genes $q_{i_1}$ and $q_{i_2}$. $N_5$ produces a new solution by inserting a randomly chosen gene $q_i$ into a new position $l$, $l \ne i$. $N_6$ works as follows: two positions $k_1$ and $k_2$ are randomly chosen, and the genes between the two positions are reversed.
$N_7$ is depicted below. Start with the machine $M_k$ with the biggest total tardiness of its jobs, sort all jobs on $M_k$ in ascending order of their due dates, and then reassign the $q_l$ values of these jobs so that their sequence follows this order of due dates. For example, for jobs $J_1$, $J_{17}$, $J_{11}$ on machine $M_1$, the ascending order of their due dates is $J_{11}, J_{17}, J_1$, so the new $q_{11} = 0.23$, the new $q_{17} = 0.32$, and the new $q_1 = 0.33$.
The revolution of colony $x_i$ proceeds as follows. Let $g = 1$ and $t = 1$, and repeat the following steps until $t = R$: a new solution $z \in N_g(x_i)$ is produced; if $x_i$ can be replaced with $z$ according to the acceptance condition used in assimilation, then $z$ substitutes for $x_i$; otherwise, $g = g + 1$ (and $g$ is reset to 1 if $g = 8$); finally, $t = t + 1$.
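A compact sketch of this revolution loop is shown below; the seven neighborhood structures are passed in as callables, since their internals depend on the schedule representation.

```python
def revolution(solution, neighborhoods, evaluate, R):
    """Multiple-neighborhood search used in revolution.

    neighborhoods : list of the seven callables N1..N7, each returning a neighbor
    evaluate      : total-tardiness function; R : number of iterations
    """
    g = 0                                         # current neighborhood (0-based N1)
    for _ in range(R):
        z = neighborhoods[g](solution)
        if evaluate(z) <= evaluate(solution):     # same acceptance rule as assimilation
            solution = z
        else:
            g = (g + 1) % len(neighborhoods)      # switch to the next neighborhood
    return solution
```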
When there are only two empires, colonies are chosen with probability $UA_1$ for assimilation between each colony and its imperialist, and revolution is implemented with $UR_1$ and the above multiple-neighborhood search.
When assimilation and revolution have been performed in all empires, the exchange step is executed: in each empire $k$, if the total tardiness of imperialist $k$ is bigger than that of one of its colonies, then imperialist $k$ is exchanged with that colony.
The total cost $TC_k$ is then calculated for each empire $k$, and all empires are again categorized into three types, as done in Section 3.1.

3.3. Imperialist Competition

In the standard ICA, all empires compete with each other, and the weakest colony of the weakest empire is directly added into the winning empire; in this study, a new imperialist competition was applied, shown in Algorithm 3, where $\overline{TC}_k$ is defined by:
$\overline{TC}_k = TC_{N_{im}-k+1}$
When $N_{im} \le 2$, all empires compete with each other, as in the standard procedure.
Algorithm 3 Imperialist competition.
1: Determine the normalized total cost $\overline{TC}_k$ for empire $k = 2, 3, \dots, N_{im}$
2: Compute the power $POW_k$ for empire $k = 2, \dots, N_{im}$ and construct the vector $[POW_2 - r_2, \dots, POW_{N_{im}} - r_{N_{im}}]$
3: Decide on the empire $g$ with the biggest $POW_g - r_g$
4: Randomly choose a solution $x$ from Empire 1; directly substitute it for the weakest colony and add it into empire $g$; then execute the multiple-neighborhood search on $x$, as done in revolution
Unlike the existing imperialist competition, this competition process directly eliminates the possibility of including the weakest colony in empire $g$, because the weakest colony is a poor solution that is difficult to improve even if it is included in a strong empire.
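The sketch below illustrates one reading of Algorithm 3. The definitions of $POW_k$ and $r_k$ are not spelled out in the text, so the standard ICA convention is assumed ($POW_k$ as the share of the normalized total cost and $r_k$ drawn uniformly from [0, 1]); likewise, the weakest colony is taken to be that of the weakest empire.

```python
import random

def imperialist_competition(empires, cost, multi_neighborhood_search):
    """One reading of Algorithm 3 (illustrative sketch).

    empires : list of dicts {"imperialist", "colonies", "TC"}, sorted so that
              empires[0] is Empire 1 (strongest) and empires[-1] the weakest.
    cost    : callable returning the total tardiness of a solution.
    """
    n_imp = len(empires)
    # normalized total cost TC_bar_k = TC_{N_im - k + 1} for empires k = 2..N_im
    tc_bar = {i: empires[n_imp - 1 - i]["TC"] for i in range(1, n_imp)}
    total = sum(tc_bar.values())
    # assumption: POW_k = TC_bar_k / total and r_k ~ U(0, 1); the winner maximizes POW_k - r_k
    score = {i: tc_bar[i] / total - random.random() for i in tc_bar}
    g = max(score, key=score.get)

    # drop the weakest colony (of the weakest empire) instead of transferring it,
    # and give empire g a refined copy of a randomly chosen solution from Empire 1
    if empires[-1]["colonies"]:
        empires[-1]["colonies"].remove(max(empires[-1]["colonies"], key=cost))
    x = random.choice([empires[0]["imperialist"]] + empires[0]["colonies"])
    empires[g]["colonies"].append(multi_neighborhood_search(x))
    return empires
```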

3.4. Local Search

Problem-related knowledge has been incorporated into the search procedures of scheduling algorithms [10,11,47,48,49]. The inclusion of knowledge can avoid useless searching and make the algorithm search in the promising regions of the optimal solutions. There are some papers on knowledge-based scheduling [47,48,49], but the knowledge used is mainly about the makespan. In this study, the knowledge-based local search acts on the imperialist of each empire to intensify the local search ability of the HICA.
Theorem [10]: For two adjacent jobs $J_i$ and $J_j$ on a machine $M_k$, where $J_i$ precedes $J_j$, suppose $t$ is the beginning time of $J_i$:
(1) If $T_i > 0$ and $T_j > 0$:
a. If $t + p_{jk} - d_j \le 0$ and $t + p_{ik} - d_j > 0$, then the sum of their tardiness will diminish after the two jobs are exchanged;
b. If $t + p_{jk} - d_j > 0$ and $p_{ik} - p_{jk} > 0$, then the sum of their tardiness will diminish after the two jobs are exchanged;
(2) If $T_i \le 0$, $T_j > 0$, and one of the following conditions is met:
a. $t + p_{ik} + p_{jk} - d_i \le 0$;
b. $t + p_{ik} + p_{jk} - d_i > 0$, $t + p_{jk} - d_j \le 0$, and $d_i > d_j$;
c. $t + p_{ik} + p_{jk} - d_i > 0$, $t + p_{jk} - d_j > 0$, and $d_i - t - p_{jk} > 0$;
then the sum of their tardiness will diminish after the two jobs are exchanged.
Here, $T_i^*$ and $T_j^*$ denote the tardiness of $J_i$ and $J_j$ after the exchange.
The local search is shown below. For each empire $k = 1, 2, \dots, N_{im}$, the following steps are executed on the imperialist of the empire: on each machine $M_l$, $l = 1, 2, \dots, W$, start with the first job on $M_l$ and repeat the following until all adjacent jobs have been checked: for job $J_i$ and its adjacent job $J_j$, exchange them if they meet the conditions of the theorem.
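A sketch of the knowledge-based exchange is given below; the schedule representation follows the earlier sketches, and the condition checks mirror the theorem (the early return on condition (2a) stands in for its repeated premise in (2b) and (2c)).

```python
def exchange_gain(t, i, j, p, d, k):
    """True if swapping adjacent jobs J_i (first) and J_j on machine k reduces
    their combined tardiness, following the theorem; t is the start time of J_i."""
    Ti = max(t + p[i][k] - d[i], 0)
    Tj = max(t + p[i][k] + p[j][k] - d[j], 0)
    if Ti > 0 and Tj > 0:                                    # case (1): both jobs tardy
        if t + p[j][k] - d[j] <= 0 and t + p[i][k] - d[j] > 0:
            return True                                      # condition (1a)
        if t + p[j][k] - d[j] > 0 and p[i][k] - p[j][k] > 0:
            return True                                      # condition (1b)
    if Ti <= 0 and Tj > 0:                                   # case (2): only J_j tardy
        if t + p[i][k] + p[j][k] - d[i] <= 0:
            return True                                      # condition (2a)
        if t + p[j][k] - d[j] <= 0 and d[i] > d[j]:
            return True                                      # condition (2b), (2a) failed
        if t + p[j][k] - d[j] > 0 and d[i] - t - p[j][k] > 0:
            return True                                      # condition (2c), (2a) failed
    return False


def knowledge_local_search(sequences, p, d):
    """Apply the adjacent-job exchange to every machine of a schedule (in place)."""
    for k, jobs in sequences.items():
        t, pos = 0.0, 0
        while pos < len(jobs) - 1:
            i, j = jobs[pos], jobs[pos + 1]
            if exchange_gain(t, i, j, p, d, k):
                jobs[pos], jobs[pos + 1] = j, i              # profitable swap
            t += p[jobs[pos]][k]                             # advance to the next pair
            pos += 1
    return sequences
```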

3.5. Algorithm Description

The HICA is shown in Algorithm 4, and its flow chart is described in Figure 1.
Algorithm 4 HICA.
1: Randomly produce an initial population $P$
2: while the stopping condition is not met do
3:     Construct the initial empires
4:     if $N_{im} > 2$ then
5:         Sort the empires, and categorize them into three types
6:         Execute Algorithm 2, and perform the exchange step for each empire
7:         Implement the new imperialist competition for all empires except Empire 1
8:     else
9:         Perform Algorithm 2, and perform the exchange step
10:        Execute imperialist competition
11:    end if
12:    Apply the local search to the imperialist of each empire
13: end while
The HICA has a time complexity of $O(N \times R \times max\_g)$, where $max\_g$ indicates the number of repetitions of Lines 3–12 of Algorithm 4. The HICA has the following features: (1) Three types of empires are constructed, and diversified assimilation is performed in these types of empires. (2) A new imperialist competition is implemented: not all empires compete with each other, and the weakest colony is not added to the winning empire but is replaced with a solution refined by the multiple-neighborhood search. (3) The knowledge-based local search is used to improve the imperialists of all empires for high search efficiency. These features provide a good balance between exploration and exploitation for the HICA and result in good performance.
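The overall flow of Algorithm 4 can be summarized by the skeleton below; every argument is a placeholder for the corresponding procedure from Sections 3.1, 3.2, 3.3 and 3.4, and the two-empire special case is folded into the same calls for brevity.

```python
def hica(init_population, construct_empires, assimilate_and_revolve,
         compete, local_search, cost, max_it):
    """Skeleton of Algorithm 4; every helper is an illustrative placeholder."""
    population = init_population()
    best = min(population, key=cost)
    for _ in range(max_it):                       # stopping condition
        empires = construct_empires(population)   # Section 3.1: three empire types
        assimilate_and_revolve(empires)           # Algorithm 2 plus the exchange step
        compete(empires)                          # Algorithm 3: Empire 1 excluded
        for e in empires:
            local_search(e)                       # knowledge-based local search on the imperialist
        population = [e["imperialist"] for e in empires] + \
                     [c for e in empires for c in e["colonies"]]
        best = min(population + [best], key=cost)
    return best
```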

4. Computational Experiments

Extensive experiments were conducted to test the performance of the HICA on the DUPMSP with total tardiness. All experiments were implemented in Microsoft Visual C++ 2015 and run on a PC with a 2.00 GHz CPU and 4.0 GB of RAM.

4.1. Test Instances, Metrics, and Comparative Algorithms

Ninety-six combinations were randomly generated, and their basic descriptions are given in Table 1. Five instances were stochastically produced for each combination, giving four hundred eighty instances in total. The due date of $J_i$ is decided by:
$d_i = \dfrac{n \times \sum_{j=1}^{W} p_{ij}}{2W^2}$
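For reference, a small generator following this due-date rule might look as follows; drawing $p_{ik}$ as uniform integers from [30, 50] (Table 1) is an assumption, since the distribution within that interval is not stated.

```python
import random

def generate_instance(n, W, p_low=30, p_high=50):
    """Generate one test instance: p[i][k] in [30, 50], d_i = n * sum_k p[i][k] / (2 W^2)."""
    p = [[random.randint(p_low, p_high) for _ in range(W)] for _ in range(n)]
    d = [n * sum(p[i]) / (2 * W * W) for i in range(n)]
    return p, d


p, d = generate_instance(n=80, W=9)   # e.g., Combination 1: F = 2, m_f = 4 and 5, n = 80
```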
Behnamian and Fatemi Ghomi [4] proposed a GA for the DPMSP, and Hulett et al. [50] presented a particle swarm optimization (PSO) algorithm for a non-identical PMSP with total weighted tardiness. The GA can be directly applied to the DUPMSP, and the PSO algorithm can also be used for our DUPMSP after its decoding part is replaced with the decoding procedure of the HICA, so they were chosen as comparative algorithms.
Each algorithm was run 20 times for each instance. For a combination, $mn_i$ and $mx_i$ are the best and the worst solutions obtained in the 20 runs for its instance $i = 1, 2, 3, 4, 5$, respectively, and $ag_i$ is the average total tardiness of the 20 elite solutions for instance $i$. Three indices $Min$, $Max$, and $Avg$ are defined by $Min = \sum_{i=1}^{5} mn_i / 5$, $Max = \sum_{i=1}^{5} mx_i / 5$, and $Avg = \sum_{i=1}^{5} ag_i / 5$.
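The three indices can be computed per combination as in the sketch below (illustrative names; the results are assumed to be grouped per instance).

```python
def combination_metrics(results):
    """Compute Min, Max, and Avg for one combination.

    results[i] holds the 20 objective values obtained for instance i (i = 0..4)."""
    mn = [min(r) for r in results]            # best value per instance  (mn_i)
    mx = [max(r) for r in results]            # worst value per instance (mx_i)
    ag = [sum(r) / len(r) for r in results]   # average per instance     (ag_i)
    k = len(results)
    return sum(mn) / k, sum(mx) / k, sum(ag) / k


# example with 5 instances x 3 dummy runs each
print(combination_metrics([[10, 12, 11], [9, 9, 10], [14, 13, 15], [8, 8, 9], [11, 10, 12]]))
```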

4.2. Parameter Settings

The HICA has the following parameters: $N$, $N_{im}$, $max\_it$, $UA_1$, and $UR_1$. The Taguchi method is often applied to decide parameter settings for optimization algorithms and was also adopted in this study. Table 2 describes the levels of all parameters.
The HICA was run 20 times for an instance of Problem Combination 14, and the orthogonal array $L_{27}(3^5)$ was executed. The results for $Min$ and the S/N ratio are shown in Figure 2. The S/N ratio is defined as $10\log_{10}(0.1 \times Min^2)$, where $Min$ denotes the best solution obtained in the 20 runs. As shown in Figure 2, the best settings were $N = 100$, $N_{im} = 6$, $max\_it = 10^5$, $UA_1 = 0.7$, and $UR_1 = 0.3$.

4.3. Results and Discussion

Two variants named HICA1 and HICA2 were constructed. HICA1 was obtained by deleting the knowledge-based local search from the HICA; the comparison between the HICA and HICA1 was used to validate the impact of the local search on the performance of the HICA. In HICA2, each colony moves toward its imperialist during assimilation; HICA2 was used to show whether the diversified assimilation improves the performance of the HICA.
Table 3, Table 4, Table 5 and Table 6 describe the computational results of the HICA, the two variants, and the two comparative algorithms, in which data highlighted in bold indicate results where the HICA was better than all other algorithms. The Wilcoxon test was performed, and the corresponding results are shown in Table 7. The box plots of all algorithms are given in Figure 3. The convergence curves of all algorithms on Combinations 24 and 76 are shown in Figure 4 and Figure 5.
It can be seen from Table 3, Table 4, Table 5 and Table 6 that the HICA performed better than its two variants on the three metrics. The HICA produced a better $Min$ than HICA1 on 65 combinations; the $Max$ of HICA1 was less than that of the HICA on only 37 combinations; the HICA had a smaller $Avg$ than HICA1 on 69 combinations. Obviously, the HICA had better convergence performance than HICA1, and the inclusion of the local search really improved the performance of the HICA. HICA2 could not generate better results than the HICA on most of the instances and was superior to the HICA on only a limited number of instances. In the HICA, assimilation is implemented differently in the different types of empires, and the comparison between the HICA and HICA2 proved the effectiveness of the diversified assimilation. Table 7 reveals that the HICA had a better $Min$, $Max$, and $Avg$ than HICA1 and HICA2 in the statistical sense and produced a similar $Std$ to its two variants; this conclusion can also be seen from Figure 3, Figure 4 and Figure 5. Thus, it is necessary to add the local search and the diversified assimilation to the HICA.
The parameters of the GA and the PSO algorithm were directly adopted from Behnamian and Fatemi Ghomi [4] and Hulett et al. [50], respectively, except for the stopping condition; the three algorithms had the same termination condition of $max\_it = 10^5$. The computational results and times are shown in Table 3, Table 4, Table 5 and Table 6 and Table 8, respectively.
As shown in Table 3, Table 4, Table 5, Table 6 and Table 7, the HICA generated a smaller $Min$ than the two comparative algorithms on all instances, that is, the HICA converged significantly better than the PSO algorithm and the GA. The $Max$ and $Avg$ of the HICA were less than those of the PSO algorithm on 95 of the 96 combinations and better than those of the GA on all combinations. The HICA obtained a smaller $Std$ than the GA on most of the instances. The performance differences on $Max$, $Min$, and $Avg$ among the HICA, the PSO algorithm, and the GA can also be seen in Figure 3, Figure 4 and Figure 5. Although the HICA performed worse than the PSO algorithm on $Std$, the HICA possessed better convergence, smaller average results, and a smaller $Max$ than its two comparative algorithms.
In the HICA, the strongest empire is excluded from imperialist competition to avoid the premature removal of other empires, the diversified assimilation and new revolution are implemented differently in the different types of empires, and the local search effectively improves the search efficiency. By contrast, although the PSO algorithm and the GA have a strong global search ability, their local search ability is not intensified, which is the main reason for their lower performance. Thus, the HICA can effectively solve the DUPMSP.

5. Conclusions

In this study, a new algorithm obtained by hybridizing the ICA with a knowledge-based local search was proposed to solve the DUPMSP with total tardiness minimization, in which the empires were divided into three types. To obtain high-quality solutions, diversified assimilation and a new revolution were designed; imperialist competition was implemented among $N_{im} - 1$ empires, excluding the strongest one; problem-related properties were proven; and the knowledge-based local search was applied to improve the quality of the imperialists. Extensive experiments were conducted on 480 instances. The computational results demonstrated that the new strategies are effective and that the HICA has promising advantages for the considered DUPMSP.
The DPMSP is an important scheduling topic. We will focus on the DPMSP with various constraints such as additional resources and machine eligibility and try to solve the problem by using meta-heuristics with new optimization mechanisms such as reinforcement learning. Other distributed scheduling problems including distributed assembly hybrid flow shop scheduling are also our future topics. We will solve distributed scheduling problems with factory eligibility or energy-related constraints by using reinforcement-learning-based meta-heuristics.

Author Contributions

Methodology, Y.Z.; computational experiments, Y.Z.; writing, Y.Z.; methodology, Y.Y.; computational experiments, Q.Z.; writing, D.L. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China (61803149).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Hooker, J.N. A hybrid method for the planning and scheduling. Constraints 2005, 10, 385–401. [Google Scholar] [CrossRef]
  2. Chen, Z.L.; Pundoor, G. Order assignment and scheduling in a supply chain. Oper. Res. 2006, 54, 555–572. [Google Scholar] [CrossRef]
  3. Terrazas, M.; Grossmann, I.E. A multiscale decomposition method for the optimal planning and scheduling of multi-site continuous multiproduct plants. Chem. Eng. Sci. 2011, 66, 4307–4318. [Google Scholar] [CrossRef]
  4. Behnamian, J.; Ghomi, S.F. The heterogeneous multi-factory production network scheduling with adaptive communication policy and parallel machine. Inform. Sci. 2013, 219, 181–196. [Google Scholar] [CrossRef]
  5. Behnamian, J. Decomposition-based hybrid VNS-TS algorithm for distributed parallel factories scheduling with virtual corporation. Comput. Oper. Res. 2014, 52, 181–191. [Google Scholar] [CrossRef]
  6. Behnamian, J.; Ghomi, S.F. Minimizing cost-related objective in synchronous scheduling of parallel factories in the virtual production network. Appl. Soft Comput. 2015, 29, 221–232. [Google Scholar] [CrossRef]
  7. Behnamian, J. Graph colouring-based algorithm to parallel jobs scheduling on parallel factories. Int. J. Prod. Res. 2016, 29, 622–635. [Google Scholar] [CrossRef]
  8. Lei, D.M.; Yuan, Y.; Cai, J.C.; Bai, D.Y. An imperialist competitive algorithm with memory for distributed unrelated parallel machines scheduling. Int. J. Prod. Res. 2020, 58, 597–614. [Google Scholar] [CrossRef]
  9. Lei, D.M.; Liu, M.Y. An artificial bee colony with division for distributed unrelated parallel machine scheduling with preventive maintenance. Comput. Ind. Eng. 2020, 141, 106320. [Google Scholar] [CrossRef]
  10. Lei, D.M.; Yuan, Y.; Cai, J.C. An improved artificial bee colony for multi-objective distributed unrelated parallel machine scheduling. Int. J. Prod. Res. 2020, 59, 5259–5271. [Google Scholar] [CrossRef]
  11. Pan, Z.X.; Lei, D.M.; Wang, L. A knowledge-based two-population optimization algorithm for distributed energy-efficient parallel machines scheduling. IEEE Trans. Cyber. 2021; in press. [Google Scholar]
  12. Zhao, F.; Zhao, L.; Wang, L.; Song, H. An ensemble discrete differential evolution for the distributed blocking flowshop scheduling with minimizing makespan criterion. Exp. Syst. Appl. 2020, 160, 113678. [Google Scholar] [CrossRef]
  13. Shao, Z.; Pi, D.; Shao, W. Hybrid enhanced discrete fruit fly optimization algorithm for scheduling blocking flow-shop in distributed environment. Exp. Syst. Appl. 2020, 145, 113147. [Google Scholar] [CrossRef]
  14. Chen, S.; Pan, Q.-K.; Gao, L.; Sang, H.-Y. A population-based iterated greedy algorithm to minimize total flowtime for the distributed blocking flowshop scheduling problem. Eng. Appl. Artif. Intel. 2021, 104, 104375. [Google Scholar] [CrossRef]
  15. Ribas, I.; Companys, R.; Tort-Martorell, X. An iterated greedy algorithm for the parallel blocking flow shop scheduling problem and sequence-dependent setup times. Exp. Syst. Appl. 2021, 184, 115535. [Google Scholar] [CrossRef]
  16. Li, Y.-Z.; Pan, Q.-K.; Li, J.-Q.; Gao, L.; Tasgetiren, M.F. An adaptive iterated greedy algorithm for distributed mixed no-idle permutation flowshop scheduling problems. Swarm Evol. Comput. 2021, 63, 100874. [Google Scholar] [CrossRef]
  17. Lu, C.; Gao, L.; Gong, W.; Hu, C.; Yan, X.; Li, X. Sustainable scheduling of distributed permutation flow-shop with non-identical factory using a knowledge-based multi-objective memetic optimization algorithm. Swarm Evol. Comput. 2021, 60, 100803. [Google Scholar] [CrossRef]
  18. Cai, J.C.; Zhou, R.; Lei, D.M. Dynamic shuffled frog-leaping algorithm for distributed hybrid flow shop scheduling with multiprocessor tasks. Eng. Appl. Artif. Intel. 2020, 90, 103540. [Google Scholar] [CrossRef]
  19. Jiang, E.D.; Wang, L.; Wang, J.J. Decomposition-based multi-objective optimization for energy-aware distributed hybrid flow shop scheduling with multiprocessor tasks. Tsinghua Sci. Technol. 2021, 26, 646–663. [Google Scholar] [CrossRef]
  20. Cai, J.C.; Lei, D.M.; Li, M. A shuffled frog-leaping algorithm with memeplex quality for bi-objective distributed scheduling in hybrid flow shop. Int. J. Prod. Res. 2021, 59, 5404–5421. [Google Scholar] [CrossRef]
  21. Zheng, J.; Wang, L.; Wang, J.J. A cooperative coevolution algorithm for multi-objective fuzzy distributed hybrid flow shop. Know-Based Syst. 2020, 194, 105536. [Google Scholar] [CrossRef]
  22. Wang, L.; Li, D.D. Fuzzy distributed hybrid flow shop scheduling problem with heterogeneous factory and unrelated parallel machine: A shuffled frog leaping algorithm with collaboration of multiple search strategies. IEEE Access 2020, 8, 214209–214223. [Google Scholar] [CrossRef]
  23. Cai, J.C.; Lei, D.M. A cooperated shuffled frog-leaping algorithm for distributed energy-efficient hybrid flow shop scheduling with fuzzy processing time. Complex Intel. Syst. 2021, 7, 2235–2253. [Google Scholar] [CrossRef]
  24. Lei, D.M.; Wang, T. Solving distributed two-stage hybrid flowshop scheduling using a shuffled frog-leaping algorithm with memeplex grouping. Eng. Optim. 2020, 52, 1461–1474. [Google Scholar] [CrossRef]
  25. Cai, J.C.; Zhou, R.; Lei, D.M. Fuzzy distributed two-stage hybrid flow shop scheduling problem with setup time: Collaborative variable search. J. Intel. Fuzzy Syst. 2020, 38, 3189–3199. [Google Scholar] [CrossRef]
  26. Shao, Z.S.; Shao, W.S.; Pi, D.C. Effective constructive heuristic and metaheuristic for the distributed assembly blocking flow-shop scheduling problem. Appl. Intel. 2020, 50, 4647–4649. [Google Scholar] [CrossRef]
  27. Zhao, F.Q.; Zhao, J.L.; Wang, L.; Tang, J.X. An optimal block knowledge driven backtracking search algorithm for distributed assembly No-wait flow shop scheduling problem. Appl. Soft Comput. 2021, 112, 107750. [Google Scholar] [CrossRef]
  28. Zhang, G.; Xing, K.; Cao, F. Scheduling distributed flowshops with flexible assembly and set-up time to minimize makespan. Int. J. Prod. Res. 2018, 56, 3226–3244. [Google Scholar] [CrossRef]
  29. Lin, J.; Zhang, S. An effective hybrid biogeography-based optimization algorithm for the distributed assembly permutation flow-shop scheduling problem. Comput. Ind. Eng. 2016, 97, 128–136. [Google Scholar] [CrossRef]
  30. Lin, J.; Wang, Z.J.; Li, X.D. A backtracking search hyper-heuristic for the distributed assembly flow-shop scheduling problem. Swarm Evol. Comput. 2017, 36, 124–135. [Google Scholar] [CrossRef]
  31. Farshi, T.R. Battle royale optimization algorithm. Neural Comput. Appl. 2020; in press. [Google Scholar]
  32. Seyyedabbasi, A.; Kiani, F. I-GWO and Ex-GWO: Improved algorithms of the grey wolf optimizer to solve global optimization problems. Eng. Comput. 2021, 37, 509–532. [Google Scholar] [CrossRef]
  33. Nadimi-Shahraki, M.H.; Fatahi, A.; Zamani, H. Migration-based moth-flame optimization algorithm. Processes 2021, 9, 2276. [Google Scholar] [CrossRef]
  34. Hosseini, S.; Khaled, A.A. A survey on the imperialist competitive algorithm metaheuristic: Implementation in engineering domain and directions for future research. Appl. Soft Comput. 2014, 24, 1078–1094. [Google Scholar] [CrossRef]
  35. Lei, D.M.; Cai, J.C. Multi-population meta-heuristics for production scheduling: A survey. Swarm Evol. Comput. 2020, 58, 100739. [Google Scholar] [CrossRef]
  36. Banisadr, A.H.; Zandieh, M.; Mahdavi, I. A hybrid imperialist competitive algorithm for single-machine scheduling problem with linear earliness and quadratic tardiness penalties. Int. J. Adv. Manuf. Technol. 2013, 65, 981–989. [Google Scholar] [CrossRef]
  37. Shokrollahpour, E.; Zandieh, M.; Dorri, B. A novel imperialist competitive algorithm for bi-criteria scheduling of the assembly flow shop problem. Int. J. Prod. Res. 2011, 49, 3087–3103. [Google Scholar] [CrossRef]
  38. Seidgar, H.; Kiani, M.; Abedi, M.; Fazlollahtabar, H. An efficient imperialist competitive algorithm for scheduling in the two-stage assembly flow shop problem. Int. J. Prod. Res. 2014, 52, 1240–1256. [Google Scholar] [CrossRef]
  39. Zandieh, M.; Kahmati, A.R.; Rahmati, S.H.A. Flexible job shop scheduling under condition-based maintenance: Improved version of imperialist competitive algorithm. Appl. Soft Comput. 2017, 58, 449–464. [Google Scholar] [CrossRef]
  40. Karimi, S.; Ardalan, Z.; Naderi, B.; Mohammadi, M. Scheduling flexible job-shops with transportation times: Mathematical models and a hybrid imperialist competitive algorithm. Appl. Math. Model. 2017, 41, 667–682. [Google Scholar] [CrossRef]
  41. Lei, D.M.; Li, M.; Wang, L. A two-phase meta-heuristic for multi-objective flexible job shop scheduling problem with total energy consumption threshold. IEEE Trans. Cyber. 2019, 49, 1097–1109. [Google Scholar] [CrossRef]
  42. Abedi, M.; Seidgar, H.; Fazlollahtabar, H.; Bijani, R. Bi-objective optimisation for scheduling the identical parallel batch-processing machines with arbitrary job sizes, unequal job release times and capacity limits. Int. J. Prod. Res. 2015, 53, 1680–1711. [Google Scholar] [CrossRef]
  43. Yazdani, M.; Khalili, S.M.; Jolai, F. A parallel machine scheduling problem with two-agent and tool change activities: An efficient hybrid metaheuristic algorithm. Int. J. Comput. Int. Manuf. 2016, 29, 1075–1088. [Google Scholar] [CrossRef]
  44. Zhang, P.; Lv, Y.L.; Zhang, J. An improved imperialist competitive algorithm based photolithography machines scheduling. Int. J. Prod. Res. 2018, 56, 1017–1029. [Google Scholar] [CrossRef]
  45. Li, M.; Su, B.; Lei, D.M. A novel imperialist competitive algorithm for fuzzy distributed assembly flow shop scheduling. J. Intel. Fuzzy Syst. 2021, 40, 4545–4561. [Google Scholar] [CrossRef]
  46. Li, M.; Lei, D.M. An imperialist competitive algorithm with feedback for energy-efficient flexible job shop scheduling with transportation and sequence-dependent setup times. Eng. Appl. Artif. Intel. 2021, 103, 104307. [Google Scholar] [CrossRef]
  47. Zheng, X.L.; Wang, L. A collaborative multiobjective fruit fly optimization algorithm for the resource-constrained unrelated parallel machine green scheduling problem. IEEE Trans. Syst. Man Cyber. Syst. 2018, 48, 790–800. [Google Scholar] [CrossRef]
  48. Wang, J.J.; Wang, L. A knowledge-Based cooperative algorithm for energy-efficient scheduling of distributed flow-shop. IEEE Trans. Syst. Man Cyber. Syst. 2020, 50, 1805–1819. [Google Scholar] [CrossRef]
  49. Wang, L.; Zheng, X.L. A knowledge-guided multi-objective fruit fly optimization algorithm for the multi-skill resource constrained project scheduling problem. Swarm Evol. Comput. 2018, 38, 54–63. [Google Scholar] [CrossRef]
  50. Hulett, M.; Damodaran, P.; Amouie, M. Scheduling non-identical parallel batch processing machines to minimize total weighted tardiness using particle swarm optimization. Comput. Ind. Eng. 2017, 113, 425–436. [Google Scholar] [CrossRef]
Figure 1. Flowchart of the HICA.
Figure 2. The mean $Min$ and the mean S/N ratio of the $Min$.
Figure 3. Box plot of the five algorithms.
Figure 4. Convergence curves on an instance of Combination 24.
Figure 5. Convergence curves on an instance of Combination 76.
Table 1. Information on combinations.
Instance | F | m_f
1–9 | 2 | 4, 5
10–18 | 3 | 4, 5, 6
19–27 | 4 | 4, 5, 6, 3
28–36 | 5 | 4, 5, 6, 3, 7
37–45 | 2 | 3, 4
46–54 | 3 | 3, 4, 4
55–63 | 4 | 3, 4, 4, 2
64–72 | 5 | 3, 4, 4, 2, 5
73–78 | 2 | 2, 3
79–84 | 3 | 2, 3, 3
85–90 | 4 | 2, 3, 3, 2
91–96 | 5 | 2, 3, 3, 2, 2

Instance | n
1, 10, 19, 28, 37, 46, 55, 64, 78, 84, 90, 96 | 80
2, 11, 20, 29, 38, 47, 56, 65 | 100
3, 12, 21, 30, 39, 48, 57, 66 | 120
4, 13, 22, 31, 40, 49, 58, 67 | 140
5, 14, 23, 32, 41, 50, 59, 68 | 160
6, 15, 24, 33, 42, 51, 60, 69 | 180
7, 16, 25, 34, 43, 52, 61, 70 | 200
8, 17, 26, 35, 44, 53, 62, 71 | 220
9, 18, 27, 36, 45, 54, 63, 72 | 240
73, 79, 85, 91 | 30
74, 80, 86, 92 | 40
75, 81, 87, 93 | 50
76, 82, 88, 94 | 60
77, 83, 89, 95 | 70

$p_{ik} \in [30, 50]$
Table 2. Parameters and their levels.
Parameter | Level 1 | Level 2 | Level 3
$N$ | 80 | 100 | 120
$N_{im}$ | 5 | 6 | 7
$max\_it$ | 80,000 | 100,000 | 120,000
$UA_1$ | 0.6 | 0.7 | 0.8
$UR_1$ | 0.2 | 0.3 | 0.4
Table 3. Computational results of all algorithms on Min.
Ins | HICA | HICA1 | HICA2 | PSO | GA | Ins | HICA | HICA1 | HICA2 | PSO | GA
11947.91951.21995.22636.93712.0494446.04590.64621.06549.29700.8
22899.32939.52942.23709.96153.5505821.75953.05914.48411.913,050
34056.73955.34143.95777.08730.9517214.57490.57518.910,61716,315
45446.65526.25449.88083.311,836528863.09336.38862.413,16319,163
56797.97193.96974.09818.114,5185310,81511,06110,86316,15924,050
68620.18621.38806.113,10819,9765413,07812,91613,16418,19428,495
710,62811,14010,58115,76324,182551457.21485.11563.72063.03294.1
812,73513,34013,48420,57228,028562200.62175.52272.03131.24824.8
915,00315,60915,59522,00233,668572971.23022.63118.64401.66599.4
101441.91389.31447.31753.62843.6584036.74011.74127.66000.78789.3
111995.11961.72026.72584.63997.1595111.95209.35269.27187.711,781
122768.92808.32930.43660.36098.7606440.06590.36297.79362.414,327
133637.53676.83662.65195.97896.9618063.47895.17825.111,78617,114
144690.84651.64666.46505.510,745629562.69868.59389.914,11021,170
155794.65947.05905.08133.613,2156311,01011,71810,96816,09424,475
167187.87240.06912.810,21716,849641218.71244.61251.21649.02486.9
178560.68704.48446.912,47019,056651811.41799.01861.62415.03744.6
1810,19310,56410,44314,67621,849662377.22256.42414.33352.65008.1
191258.31271.71300.71649.02463.2673201.13026.83163.64429.46823.8
201849.51761.01864.62415.03804.5683965.64109.14126.65557.09059.4
212407.22349.02306.03352.64799.2695229.45077.05242.36796.811,469
223078.03206.23152.14429.47010.2706045.56140.06142.08721.313,997
234053.54078.94204.05557.08623.0717116.57490.07446.110,60316,531
245115.25206.75061.56796.811,079728699.28938.08669.712,51719,977
256118.86176.36192.28721.314,14473610.00616.00629.00658.60830.40
267376.17518.17381.810,60316,65274959.40942.00982.601039.61426.4
278615.08787.08576.612,51719,640751368.01379.01363.01645.02367.0
281040.11042.51039.41203.82223.8761849.61850.81918.82117.63408.6
291587.21555.81629.81867.53365.1772451.62453.02506.23205.84639.0
301903.21907.31920.92539.14515.7782950.83054.03165.63832.06242.6
312616.52471.72442.03382.95927.779432.42428.30433.23474.95654.86
323230.03270.33160.04384.67567.180673.75669.31661.44761.631064.3
334035.44145.34083.95429.39204.381934.42926.27958.021047.51662.9
344795.04944.64820.86626.010,775821251.81212.11238.41610.22465.2
355640.95786.45716.07713.613,871831657.81676.91687.02108.73216.3
366665.76771.76563.89498.114,968842060.92085.82144.02635.54429.1
372314.22305.02279.12931.84815.885385.15393.15398.00399.70618.50
383336.93445.03498.34413.27375.886592.20588.80600.20649.80997.20
394813.14814.15000.36990.111,02687826.00823.25804.25939.501438.5
406533.66674.36567.79515.314,633881090.61074.11066.31421.12135.7
418428.28635.58512.712,28017,861891411.21460.71399.41866.92630.2
4210,47610,90311,01316,21624,908901792.41753.41807.62458.23392.8
4312,96313,60613,15020,76429,99391382.17385.97385.96402.63623.90
4415,96016,43116,17526,08836,90992521.75528.17517.67571.06872.31
4518,72920,15119,52129,48640,08393744.85709.73740.42863.571364.2
461725.91719.21684.92248.03454.0941015.1999.421003.11061.71937.4
472490.62471.62529.53218.75230.0951276.41249.31252.81662.12434.6
483384.23486.13590.95029.97563.4961523.21547.11518.31950.43048.9
Table 4. Computational results of all algorithms on Max.
Ins | HICA | HICA1 | HICA2 | PSO | GA | Ins | HICA | HICA1 | HICA2 | PSO | GA
12144.42231.12135.22741.24489.2494907.35028.74960.07012.811,949
23195.63145.43135.54217.97010.6506135.96388.76339.78585.715,309
34487.24591.44399.26018.210,320517709.87954.78009.611,28719,728
45844.46022.86114.98785.613,535529615.99732.59676.813,24924,731
57485.37592.87492.810,93518,1225311,49712,12411,92717,39629,209
69212.99670.09238.613,47524,5905414,46114,11413,90018,33233,131
711,61011,69911,66016,97830,082551623.11664.81696.02171.43453.6
813,67914,85313,81121,05733,401562345.72389.52418.53224.05472.3
916,44917,28916,90624,13041,408573278.53282.83397.84425.67303.3
101592.01578.11606.21767.63259.5584423.04603.74315.96170.110,419
112237.92183.72301.82768.14922.6595414.55712.15606.87863.813,143
123153.23166.33176.63733.17027.6606699.37011.97050.59603.016,804
133990.54126.53996.55370.49961.5618487.08826.48475.112,11721,839
145078.65056.45153.86908.912,3286210,17310,62210,12715,03525,253
156458.66469.66439.08504.016,3486311,80912,65611,88316,72327,980
167709.67829.97780.910,60019,084641409.31332.31390.31737.73044.7
179359.89616.19325.913,02923,261652016.01949.41967.32490.04267.4
1811,38611,34210,99415,01325,834662649.02596.22618.43378.66094.5
191412.81366.81427.71737.73059.2673519.93425.83506.44585.68016.1
202006.51985.12002.22490.04815.9684522.94468.44555.95931.49955.4
212650.02787.62582.03378.66454.4695451.75628.25587.47235.013,841
223606.43463.73619.54585.67786.0706604.26632.26637.18975.315,733
234584.14514.84564.15931.411,277718074.48222.17910.210,62320,741
245422.75559.95716.47235.013,419729606.89802.59444.312,89925,562
256795.26810.96718.28975.316,08373646.60667.40653.80693.601025.0
267933.98438.38176.410,62321,03174999.401046.01092.21138.21806.8
279674.39684.29715.612,89923,756751604.01479.01491.01721.02717.0
281147.11161.51256.31282.42521.3762028.62078.62119.42291.04023.0
291777.01752.31788.11915.53895.7772790.42734.42747.23320.65453.2
302131.12120.42113.62722.65342.7783349.83379.43386.04188.27193.8
312841.22818.72764.83451.96667.579491.83458.91527.95500.66814.91
323537.33529.63565.94495.09292.280732.81762.75752.81768.501400.9
334375.84602.74703.65640.111,199811071.01006.11051.71234.71940.1
345473.25220.85325.46843.213,188821331.91368.91373.81729.33107.8
356285.06339.36253.38071.216,488831830.61788.11854.82365.83732.5
367337.07576.37223.69712.718,942842240.32229.42497.92856.45045.9
372550.92516.72578.23180.05511.485438.30434.15458.80416.30780.65
383800.33735.23850.54939.98454.086643.60688.80690.20682.801243.8
395187.85174.75408.98057.213,23987915.00921.00927.751025.31810.5
406857.67199.77158.410,68317,267881241.31173.51214.11518.32545.8
419538.59458.19844.512,76521,890891604.51608.91555.22022.13274.0
4211,10812,03211,71916,86227,020901980.01896.61980.22532.24188.0
4314,07314,80814,85921,38633,38791402.38410.88404.50448.21808.35
4417,68019,13718,33726,70843,54992550.11548.50562.47595.031139.2
4520,78921,74921,48030,06751,56993789.12798.17842.88963.721619.8
461876.71863.61828.82327.63892.5941090.81086.61101.01164.42220.4
472679.12761.52709.03659.16012.3951366.21377.71393.51717.52843.5
483804.73762.83795.25147.78219.2961632.91662.91655.82036.93787.4
Table 5. Computational results of all algorithms on Avg.
Ins | HICA | HICA1 | HICA2 | PSO | GA | Ins | HICA | HICA1 | HICA2 | PSO | GA
12035.62064.82057.82691.44243.9494698.84831.44797.16704.510,896
23002.23053.53057.63890.56473.0506022.46155.76097.48524.414,070
34223.84215.44250.35914.69605.7517510.67698.67701.110,90317,516
45613.95708.75718.78243.412,941529201.39481.69272.113,19522,054
57156.87366.37289.210,04616,7765311,04811,53611,24116,92426,378
68880.19150.48999.213,35221,4205413,39313,66513,41218,28330,927
711,03811,44811,14716,22126,089551561.51590.01617.92073.93356.0
813,31313,90713,71420,68430,579562276.52299.92329.63181.05105.2
915,65116,80016,29022,97036,951573164.93172.73239.14406.06951.7
101522.51472.01530.01755.03043.6584175.34227.64214.16030.89334.8
112111.42065.02166.22646.14490.8595297.25433.75417.47461.012,347
122972.93032.03019.63684.76490.1606574.76818.36715.39450.215,622
133843.13867.93878.25249.38493.0618213.38320.48141.011,82718,956
144878.44894.24949.56728.211,613629800.610,1269867.114,55123,048
156107.96211.76179.98303.114,4976311,41112,15911,62316,23426,497
167436.27618.57506.010,41818,180641300.31288.81312.91676.62809.3
178964.19100.88747.512,67721,782651912.51860.11907.42432.63957.0
1810,60410,93310,71314,78324,038662470.22488.72524.03366.45469.4
191329.51313.11351.91676.62834.0673388.83252.13304.74504.57534.8
201931.21899.81934.52432.64163.4684196.44248.44299.05696.79494.5
212530.22535.82510.03366.45683.4695319.45246.25348.86980.812,284
223327.83382.83399.04504.57498.1706295.26453.86364.98801.114,707
234302.84266.84399.15696.79992.6717568.27845.57618.210,60518,240
245301.45392.45404.86980.812,282729046.89268.38952.312,70921,217
256453.66542.36464.68801.115,04673629.66640.70639.68665.20904.20
267662.48003.47791.110,60518,63674978.68999.321029.11078.71633.1
279113.09250.69155.812,70921,583751437.01429.21441.51704.52509.7
281090.11098.81108.71243.22407.5761946.01963.71990.02176.83767.7
291683.81657.51731.31883.63552.8772554.42555.02585.63256.14988.9
301987.71984.82006.92586.74821.6783170.33175.83234.23957.76701.8
312721.02649.22641.33403.76351.579447.43444.50459.59483.07732.87
323320.73392.23361.74440.38187.980707.75703.47716.30765.961164.5
334250.94305.84354.55497.210,17581990.89956.92996.131111.81808.7
345097.65090.95083.66765.311,815821284.11290.11319.81647.02638.4
355963.46081.75996.07853.215,015831755.91729.61753.02196.73525.7
366958.37132.06872.49619.817,167842146.82160.82207.62782.24720.6
372398.82381.62430.13104.55079.185416.04416.14413.36403.54699.71
383602.83588.73640.24562.07939.386631.92642.00637.68665.321125.4
394982.15027.95154.47369.911,80787873.58872.88881.6972.381606.3
406692.56868.96792.89987.415,872881171.31144.81168.11469.82278.1
418871.38922.38868.912,60120,413891518.71520.01512.41923.23007.6
4210,80111,43111,36116,43225,801901852.11831.61877.92477.53879.1
4313,50114,02813,78920,97031,84491390.75394.54394.75414.03717.61
4416,89117,48816,87126,54439,89492538.26541.04543.13574.711029.5
4519,93020,91220,37329,59945,94093768.41759.94782.24879.571512.9
461773.21783.71772.72265.03703.7941045.91050.01037.71126.41998.2
472586.42614.82608.83321.45623.1951319.01302.51335.81669.52674.3
483649.53640.23689.05056.97902.6961596.71604.31581.41973.43432.8
Table 6. Computational results of the five algorithms on the metric Std.
Instance | HICA | HICA1 | HICA2 | PSO | GA | Instance | HICA | HICA1 | HICA2 | PSO | GA
164.59281.65961.21236.281218.8449123.48114.59123.40185.23580.19
280.08062.62159.686159.12246.465098.966122.97138.1357.304661.26
3124.15151.0279.15074.637498.9551132.39141.64143.98257.70886.46
4114.59171.28176.50253.56484.6252237.57139.02247.1039.8331537.1
5192.64123.85176.14341.271023.753210.31304.32308.27327.581502.1
6216.92304.43111.85103.131263.354403.67322.32220.0061.1931580.5
7293.76166.53340.71382.551584.75555.72664.03840.21232.52358.745
8236.31418.52101.54176.971451.95645.99665.20948.40932.398174.61
9461.58511.25385.46892.762096.45792.34169.44283.7769.0250237.24
1040.78950.18653.9774.2000132.8958124.74166.3366.11048.322477.68
1181.35069.73277.43862.690248.875990.399155.12125.42225.83418.85
1297.549117.3773.25320.745253.906086.427135.58224.1975.957856.96
13102.54125.5389.39063.110598.3861113.29286.39192.6496.8591177.7
14127.77143.34153.85112.01545.4362186.12240.55222.73358.681133.1
15175.88168.69180.15138.90971.1463245.48244.08254.60165.991161.0
16134.55164.77257.70156.08768.976452.72527.65247.18725.169192.51
17204.76246.05260.76184.941049.36558.27148.58332.90625.187158.90
18317.30203.60167.8096.7811246.06680.386110.7661.89012.278346.99
1940.54428.39435.94325.169171.7367113.83101.46102.7837.022372.04
2040.07772.50640.05925.187294.4168172.50107.70136.6898.365215.83
2170.752132.4276.23612.278430.306967.291160.17107.15129.48606.22
22147.4384.221131.1237.022267.8870196.40167.57129.1393.038546.25
23161.07126.98102.9598.365739.0171248.17235.40127.085.85241116.1
24108.45116.11181.21129.48740.4472292.83263.67221.5998.0951515.6
25214.52197.63179.3293.038643.467312.36015.4808.150710.89269.643
26164.00260.50253.465.85241313.17416.10134.46330.36626.850115.16
27294.44298.82307.0998.0951335.27563.88725.46337.02521.139105.22
2833.95631.16757.81523.557107.587650.01076.59257.76549.321208.33
2961.44455.09550.97913.568171.477797.41278.21266.04043.161226.59
3068.12168.67853.89954.716284.1678126.0993.54665.632114.13292.21
3177.914104.9689.01720.532224.807917.24310.74325.8348.848646.016
3289.02877.661122.2241.395528.608016.77733.58724.1063.166289.043
33107.50129.55168.5574.696596.148137.28526.05328.27960.18875.397
34200.7789.597165.5169.103606.588224.20349.11235.66646.434183.81
35184.90161.39170.09101.9743.408363.17436.07948.43079.620138.93
36207.98250.55209.0663.5641022.28460.59040.744103.7086.945180.85
3764.05358.11492.79489.894228.308515.61112.69716.7765.027954.738
38136.5191.828113.95170.77386.668614.51129.22924.37412.27364.962
39126.86117.41120.42379.27585.978734.73829.64236.06230.091115.75
40116.69155.74185.42319.56866.038845.45328.01442.43628.557115.37
41316.88240.44366.72153.211316.88950.91445.79841.66248.458209.48
42195.24302.56174.97276.65828.459051.04453.20661.19720.645225.61
43310.43404.96537.20184.911011.0916.09817.35175.839616.29446.835
44519.81713.56660.90252.452129.6929.19086.042312.6947.232592.698
45579.61572.59665.16181.973264.79312.68329.53532.93529.32075.813
4641.18848.87144.99528.589135.699422.69429.21627.80641.22581.823
4770.778103.0564.466145.58259.379531.51337.41740.74116.417129.29
48121.1481.98066.90030.808249.629632.52937.90446.59031.404237.22
Table 7. Results of the Wilcoxon test.
Wilcoxon Test | Min | Max | Avg | Std
Wilcoxon test (H, H1) | 0.000 | 0.001 | 0.000 | 0.262
Wilcoxon test (H, H2) | 0.000 | 0.000 | 0.000 | 0.764
Wilcoxon test (H, P) | 0.000 | 0.000 | 0.000 | 0.002
Wilcoxon test (H, G) | 0.000 | 0.000 | 0.000 | 0.000
Table 8. Computational times of the HICA and the two comparative algorithms (running times in s).
Ins | HICA | PSO | GA | Ins | HICA | PSO | GA | Ins | HICA | PSO | GA | Ins | HICA | PSO | GA
14.087.0447.32513.346.8146498.3415.961.1731.291.576.98
25.2210.158.02614.951.2162509.7119.869.8741.892.489.05
36.6112.770.82716.549.51795111.722.679.5752.342.9111.2
48.2614.882.7285.3220.980.65213.426.192.9763.063.5614.1
510.318.198.4296.5329.390.95315.528.8108773.604.3517.3
612.019.8111307.8435.91135417.436.0117784.365.2620.6
714.225.5126319.2838.6131554.1011.531.6791.392.177.79
815.926.41403211.050.1147565.3713.539.1801.682.899.40
918.230.71483312.456.1166576.4715.045.4812.173.5412.7
104.2712.461.03413.857.8179588.2018.155.3822.874.7614.8
115.4413.874.33515.869.4203599.7821.368.7833.325.4717.2
126.7019.986.53617.572.62266011.426.174.9843.876.2020.0
138.2822.0105374.135.8833.96113.129.084.8851.492.307.15
149.8128.2121385.397.8748.16214.532.895.4861.753.269.48
1511.228.5138396.9610.356.46317.039.8105872.364.1811.6
1613.232.6155408.8512.865.4644.4716.149.5882.715.4313.7
1715.040.71734110.815.072.9655.6717.156.1893.307.8016.3
1816.942.51864212.617.780.9666.9221.070.6903.799.1518.7
194.4515.156.04315.020.690.1678.3125.582.3911.492.916.67
205.7818.771.54416.824.399.16810.332.694.3921.933.958.95
217.0821.085.14520.026.91076911.434.399.1932.385.4010.8
228.3326.7103464.009.9334.47013.144.4124942.966.1313.4
239.9529.7117475.2910.442.47114.849.0140953.437.9315.7
2411.339.8133486.4313.151.37216.448.1144963.919.1218.3
