Article

A Variable Neighborhood Search Approach for the Dynamic Single Row Facility Layout Problem

Faculty of Informatics, Kaunas University of Technology, Studentu 50-408, 51368 Kaunas, Lithuania
* Author to whom correspondence should be addressed.
Mathematics 2022, 10(13), 2174; https://doi.org/10.3390/math10132174
Submission received: 9 May 2022 / Revised: 16 June 2022 / Accepted: 20 June 2022 / Published: 22 June 2022
(This article belongs to the Special Issue Advanced Optimization Methods and Applications)

Abstract:
The dynamic single row facility layout problem (DSRFLP) is defined as the problem of arranging facilities along a straight line during a multi-period planning horizon with the objective of minimizing the sum of the material handling and rearrangement costs. The material handling cost is the sum of the products of the flow costs and center-to-center distances between facilities. In this paper, we focus on metaheuristic algorithms for this problem. The main contributions of the paper are three-fold. First, a variable neighborhood search (VNS) algorithm for the DSRFLP is proposed. The main version of VNS uses an innovative strategy to start the search from a solution obtained by constructing an instance of the single row facility layout problem (SRFLP) from a given instance of the DSRFLP and applying a heuristic algorithm for the former problem. Second, a fast local search (LS) procedure is developed. The innovations of this procedure are two-fold: (i) the fast insertion and swap neighborhood exploration techniques are adapted for the case of the dynamic version of the SRFLP; and (ii) to reduce the computational time, the swap operation is restricted to pairs of facilities of equal lengths. Provided the number of planning periods is a constant, the neighborhood exploration procedures for $n$ facilities have only $O(n^2)$ time complexity. The superiority of these procedures over traditional LS techniques is also shown by performing numerical tests. Third, computational experiments on DSRFLP instances with up to 200 facilities and three or five planning periods are carried out to validate the effectiveness of the VNS approach. The proposed VNS heuristic is compared with the simulated annealing (SA) method, the state-of-the-art algorithm for the DSRFLP. Experiments show that VNS outperforms SA by a significant margin.

1. Introduction

Facility layout problems are concerned with finding an efficient arrangement of physical facilities (e.g., machines or manufacturing cells) on a planar site. An important member of this class of problems is the single row facility layout problem (SRFLP). It asks to arrange the facilities along a straight line so as to minimize the sum of the products of the flow costs and center-to-center distances between facilities [1]. The objective function of the SRFLP represents the material handling cost, which is a good measure of the efficiency of a layout. A feasible solution is a permutation of facilities. It can be noted that the SRFLP is a static problem, because the material flows between facilities are assumed to be constant. This assumption, however, may not always be valid in practice. In a dynamic layout, the flows of material between facilities can change during a planning horizon. There are many factors that play a role in this, such as the change in the design of existing products, removing an existing product or adding a new product to the product line, varying product demand, shortening life cycle of products, the change in the sequence of operations, and the introduction of new safety standards. Because of changes in the layout environment, a multi-period planning horizon is considered. The material flows between facilities can change from one planning period to the next. When minimizing the material handling cost, it may happen that the permutation of facilities in period $t$, $t > 1$, is different from the permutation of facilities in period $t-1$. In such a case, one or more facilities are moved (shifted) to new locations at the beginning of period $t$. The objective function of the dynamic facility layout problem consists of two parts: the material handling cost and the rearrangement costs of facilities.
A dynamic version of the SRFLP, called the dynamic single-row facility layout problem (DSRFLP) was introduced by Şahin et al. [2]. In their problem formulation, the rearrangement cost for a facility occurs when the center coordinate of this facility in one planning period is different from that in the preceding or succeeding periods. Figure 1 shows a layout plan for a DSRFLP instance with seven facilities and three planning periods. Costs are incurred for the relocation of facilities 3, 4, and 5 at the beginning of period 2 and for the relocation of facilities 2 and 4 at the beginning of period 3. Notice that facilities 2 and 4 are of equal size and therefore, facility 5 is not moved in period 3.
Let us denote the set of facilities by $H$, their number by $n$, and the number of planning periods by $m$. A feasible solution of the DSRFLP is an ordered collection $p = \{p_1, \ldots, p_m\}$ of $m$ $n$-element permutations $p_t = (p_t(1), \ldots, p_t(n))$, $t = 1, \ldots, m$, where $p_t(i) \in H$, $i \in \{1, \ldots, n\}$, is the facility in the $i$th position during period $t$. We denote the set of all feasible solutions of the DSRFLP by $\Pi_m$. Let $L_s$, $s \in \{1, \ldots, n\}$, stand for the length of facility $s$. For convenience, the main notations used in this paper are presented in Table 1. Mathematically, the DSRFLP can be expressed as follows:
$$\min_{p \in \Pi_m} F(p) = \sum_{t=1}^{m} \sum_{i=1}^{n-1} \sum_{j=i+1}^{n} w_{t p_t(i) p_t(j)}\, d_t(p_t(i), p_t(j)) + \sum_{t=2}^{m} \sum_{s \in I(p_{t-1}, p_t)} r_{ts}, \tag{1}$$

where

$$d_t(p_t(i), p_t(j)) = L_{p_t(i)}/2 + \sum_{i < k < j} L_{p_t(k)} + L_{p_t(j)}/2, \tag{2}$$

$$w_{t p_t(i) p_t(j)} = \phi_{t p_t(i) p_t(j)}\, \psi_{p_t(i) p_t(j)}, \tag{3}$$

$\phi_{t p_t(i) p_t(j)}$ represents the material flow between facilities $p_t(i)$ and $p_t(j)$ during period $t$, $\psi_{p_t(i) p_t(j)}$ is the cost of transferring a unit of material per distance unit between facilities $p_t(i)$ and $p_t(j)$, $r_{ts}$ is the rearrangement cost of facility $s$ at the beginning of period $t$, and $I(p_{t-1}, p_t)$ is the set of facilities whose location during period $t$ differs from that during period $t-1$ (in Figure 1, $I(p_1, p_2) = \{3, 4, 5\}$ and $I(p_2, p_3) = \{2, 4\}$). Equation (2) determines the distance between facilities, and Equation (3) gives the cost of material flow per distance unit between facilities.
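As a concrete check of the objective, the following sketch evaluates Equations (1)–(3) directly for a toy instance. All names and data here are illustrative (the weighted flows $w_{tsu}$ are assumed to be given already as the products $\phi\psi$):

```python
from itertools import combinations

def objective(perms, lengths, flows, rearr):
    """Total cost F(p) per Eqs. (1)-(3): material handling plus rearrangement.

    perms[t]       -- permutation (sequence of facility ids) for period t
    lengths[s]     -- length of facility s
    flows[t][s][u] -- weighted flow w_tsu between facilities s and u in period t
    rearr[t][s]    -- cost r_ts of moving facility s at the start of period t
    """
    m = len(perms)
    # center coordinate of every facility in every period (Eq. 2 distances)
    centers = []
    for t in range(m):
        x, pos = {}, 0.0
        for s in perms[t]:
            x[s] = pos + lengths[s] / 2.0
            pos += lengths[s]
        centers.append(x)
    cost = 0.0
    for t in range(m):
        for s, u in combinations(perms[t], 2):      # material handling term
            cost += flows[t][s][u] * abs(centers[t][s] - centers[t][u])
        if t > 0:                                   # rearrangement term
            for s in perms[t]:
                if centers[t][s] != centers[t - 1][s]:
                    cost += rearr[t][s]
    return cost
```

A facility contributes a rearrangement cost only when its center coordinate differs from the one it had in the preceding period, matching the definition of $I(p_{t-1}, p_t)$.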

1.1. Related Work

If $m = 1$, then the second term in (1) vanishes, and problem (1)–(3) becomes the SRFLP. Many algorithms for solving the SRFLP have been proposed. Several exact methods have been developed in recent years. They include mixed-integer linear programming [3], a cutting plane algorithm [4], branch-and-cut [5], and semidefinite programming approaches [6,7]. The largest SRFLP instance solved to optimality is of size 42 [7]. To deal with larger SRFLP instances, one common option is to use metaheuristic-based algorithms. The more recent metaheuristic approaches available in the literature are tabu search [8,9], genetic algorithms [10,11,12], a Lin–Kernighan heuristic [13], a hybrid estimation of distribution algorithm [14], scatter search [15], variable neighborhood search (VNS) [16], a greedy randomized adaptive search procedure (GRASP) with path relinking [17], a hybrid algorithm of VNS and ant colony optimization [18], simulated annealing [19], a cross-entropy approach [20], a population-based improvement heuristic with local search [21], GRASP [22], a simplified swarm optimization algorithm [23], differential evolution [24], and algebraic differential evolution for permutations [25]. Sun et al. [26] used graphics processing units to solve the SRFLP with a two-opt-based simulated annealing algorithm. A review of the mathematical models and solution techniques for the SRFLP can be found in [1,27].
In the literature, there are several other facility layout problems that are akin to the SRFLP. One of them is the constrained SRFLP in which some facilities need to be placed in certain locations or in specified orders. A permutation-based genetic algorithm [28] and a fireworks algorithm [29] were proposed for solving this problem. Keller [30] developed three construction heuristics for the SRFLP with machine-spanning clearances. Amaral [31] and Yang et al. [32] proposed mixed-integer programming models for the parallel row ordering problem. Several algorithms were presented to solve the corridor allocation problem, such as simulated annealing and tabu search [33], a permutation-based genetic algorithm hybridized with a local search technique [34], and a scatter search algorithm [35]. Recently, attention was attracted to the space-free multi-row facility layout problem. An exact method for this problem was presented in [36]. An efficient VNS algorithm for the same problem was developed in [37].
A mixed-integer linear programming model and solution approaches for the DSRFLP were proposed by Şahin et al. [2]. They used CPLEX to solve the linear model. However, the solver failed to produce a provably optimal solution for instances of size greater than 10. To find high quality solutions for larger DSRFLP instances, Şahin et al. [2] proposed two metaheuristic-based approaches: a genetic algorithm (GA) and a simulated annealing (SA) technique. Their GA is strengthened by integrating a restart strategy and applying the concept of acceptance probability. The proposed SA algorithm is also enhanced with a restart strategy. The authors reported computational results on 20 problem instances with up to 100 facilities and 3 or 5 planning periods. The empirical results demonstrated the superiority of the SA algorithm over the GA implementation.
There is a vast amount of literature related to the dynamic version of the facility layout problems. Gong et al. [38] proposed a hybrid algorithm of harmony search for the dynamic parallel row ordering problem. Their algorithm combines a harmony search technique with a tabu search heuristic. The authors presented the results of computational experiments on problem instances with up to 70 facilities. Guan et al. [39] proposed a hybrid evolution algorithm for solving a dynamic extended row facility layout problem. Both combinatorial and continuous aspects of the problem were taken into account. However, historically, most algorithms in the field were developed for the dynamic facility layout problem (DFLP) in which the layout of each timeframe is modeled as a quadratic assignment problem. The first such algorithms (dynamic programming-based optimal and heuristic procedures) were proposed by Rosenblatt [40]. Later, many metaheuristic-based algorithms for the DFLP were reported. Balakrishnan and Cheng [41] developed a genetic algorithm for solving this problem. A hybrid GA for the DFLP was proposed in [42]. A simulated annealing heuristic for this problem was presented in [43]. Ant colony optimization algorithms for the DFLP were developed in [44,45]. Hybrid ant system heuristics were proposed in [46]. Şahin and Türkbey [47] presented an algorithm hybridizing SA with tabu search. Three new tabu search heuristics for the DFLP were developed by McKendall and Liu [48]. Hosseini-Nasab and Emami [49] designed a hybrid particle swarm optimization algorithm to solve the DFLP. Turanoğlu and Akkaya [50] proposed a hybrid algorithm combining SA and a bacterial foraging optimization technique. The results of the experimental comparison of a number of different algorithms from the literature were provided in a recent study on the DFLP by Zouein and Kattan [45]. A review of the recent advances on the DFLP can be found in [51].
The extensive literature on both static and dynamic versions of the facility layout problems was reviewed in [52].

1.2. Our Contribution

The analysis of the literature shows that there has been considerable interest in developing algorithms for the dynamic facility layout problems. One such problem is the DSRFLP. However, research on the DSRFLP is in its early stages. Considering this observation, our motivation is to develop a reasonably fast heuristic algorithm that could perform well on large DSRFLP instances. Our algorithm is based on the VNS metaheuristic. One purpose of this paper is to design a fast local search (LS) procedure. It takes only $O(1)$ time to compute the gain of a swap or insertion operation. The procedure plays a central role within the VNS framework. Our specific goal is to compare the performance of the VNS technique with that of the SA algorithm proposed in [2].
The main contributions of this paper are summarized as follows:
  • A variable neighborhood search algorithm to solve the DSRFLP. It is one of the first heuristic approaches proposed to deal with this new problem. The approach uses an innovative strategy to start the search from a solution obtained by constructing an instance of the SRFLP from a given instance of the DSRFLP and applying a heuristic algorithm for the former problem. A simpler version of VNS uses a random permutation of facilities as a starting solution.
  • A fast LS procedure. The innovations of the proposed procedure are two-fold: (a) the fast insertion and swap neighborhood exploration techniques are adapted for the case of the dynamic version of the SRFLP; and (b) to reduce the computational time, the swap operation is restricted to pairs of facilities of equal lengths. The superiority of the fast neighborhood exploration procedures over traditional LS techniques is shown by performing numerical tests. The importance of the proposed LS method goes beyond VNS: it may be considered a useful ingredient for designing other metaheuristic algorithms for the DSRFLP.
  • Numerical experimentation on DSRFLP instances of size up to 200 to validate the effectiveness of the VNS approach. The two versions of VNS are compared with the SA algorithm of Şahin et al. [2]. Experiments show the excellent performance of the VNS version that starts with a heuristically constructed initial solution. This VNS implementation outperforms SA by a significant margin.
  • Preparation of a set of publicly available DSRFLP instances. Experiments show that larger instances in this set are very difficult for both VNS and SA algorithms. To find improved solutions, a large amount of CPU time may be required. These instances may be used for evaluating future approaches to dynamic single row facility layout.
The rest of the paper is organized as follows. In the next section, we present our VNS approach. In Section 3, we give a detailed description of our LS procedure. Section 4 is devoted to the experimental analysis and comparisons of algorithms. Section 5 provides an empirical analysis of LS variants. Concluding remarks are given in Section 6.

2. Variable Neighborhood Search

The variable neighborhood search method is a metaheuristic that has been shown to be very successful in solving many combinatorial optimization problems. The basic idea of VNS is the systematic change of a neighborhood combined with solution perturbation and local search procedures. During algorithm execution, the neighborhood of a solution is explored using a set of predefined neighborhood structures. Since its introduction in 1997 [53], VNS has undergone various modifications and enhancements. A discussion of the basic concepts and successful applications of VNS can be found in survey papers [54].
Before presenting our algorithm, we need to define the neighborhood structures used in the search process. Let $p = (p_1, \ldots, p_m) \in \Pi_m$ represent a solution of the DSRFLP. For $z \in \{1, \ldots, z_{\max}\}$, we denote by $\tilde{N}_z(p)$ the set of all solutions that can be obtained from $p$ by performing $z$ pairwise interchanges of facilities, subject to the condition that each facility changes its position in each permutation $p_t$, $t \in \{1, \ldots, m\}$, at most once. The neighborhood structures $\{\tilde{N}_z(p) \mid p \in \Pi_m\}$, $z = 1, \ldots, z_{\max}$, are appropriate in cases where a permutation-based combinatorial optimization problem is solved [55,56,57]. We use these structures in the shaking (solution perturbation) step of the algorithm.
The steps of our implementation of the VNS method are given in Algorithm 1. The algorithm starts by generating an initial solution, either randomly or by a heuristic procedure. This step will be discussed later in this section. The initial solution is improved by applying a local search procedure LS (Line 2). We defer the description of LS to the next section. The main part of VNS is the “while” loop (Lines 4–14), which executes until the time condition is satisfied. The search is terminated when the maximum time limit, $T_{\lim}$, is reached. The algorithm has three parameters that control the search process. The parameter $z_{\min}$ determines the size of the neighborhood the search begins from. The largest possible size of the neighborhood is defined by $z_{\max} = \rho n$, where $\rho$ is another parameter of the algorithm. The variable $z_{step}$ is used to move from the current neighborhood to the next one. We set $z_{step} = \max(z_{\max}/\theta, 1)$, where the scaling factor $\theta > 0$ is the third parameter of VNS. The best solution found by the algorithm is denoted by $p^*$ and its value by $f^*$. The inner “while” loop of the algorithm iterates over the following three steps: the perturbation of the best solution $p^*$ (procedure shake in Line 7), local search (Line 8), and the selection of the size of the next neighborhood (procedure neighborhood_change in Line 9). These steps are executed until $z$ becomes greater than $z_{\max}$. At the end of each iteration, the termination condition of VNS is checked (Line 10).
Algorithm 1 Variable neighborhood search
      VNS($z_{\min}$, $z_{\max}$, $z_{step}$, $T_{\lim}$)
Input: Instance of the DSRFLP, parameters $z_{\min}$, $z_{\max}$, $z_{step}$, and $T_{\lim}$.
Output: Best found solution $p^*$ and its value $f^*$.
// $z_{\min}$, $z_{\max}$, and $z_{step}$ control the size of the neighborhood
// $T_{\lim}$ is the time limit for VNS
1:   Generate an initial solution $p \in \Pi_m$
2:   $f :=$ LS($p$)
3:   Assign $p$ to $p^*$ and $f$ to $f^*$
4:   while time limit $T_{\lim}$ not reached do
5:      $z := z_{\min}$
6:      while $z \le z_{\max}$ do
7:         $p :=$ shake($p^*$, $z$)
8:         $f :=$ LS($p$)
9:         $z :=$ neighborhood_change($p$, $p^*$, $f$, $f^*$, $z$, $z_{\min}$, $z_{step}$)
10:        if elapsed time is more than $T_{\lim}$ then
11:           Break from the loop
12:        end if
13:     end while
14:  end while
15:  Stop with the solution $p^*$ of value $f^*$
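The control flow of Algorithm 1 can be sketched as a generic driver. This is a minimal illustration, not the authors' implementation: the local search `ls` and perturbation `shake` are injected as plain functions (the placeholder names, the toy objective in the usage note, and the deep-copy convention are all assumptions of this sketch):

```python
import time

def vns(initial, ls, shake, z_min, z_max, z_step, t_lim):
    """Generic VNS driver following Algorithm 1.

    initial() -- returns a starting solution (list of per-period permutations)
    ls(p)     -- improves p in place and returns its objective value
    shake(p_best, z) -- returns a perturbed copy of the incumbent
    """
    p = initial()
    f = ls(p)
    p_best, f_best = [list(x) for x in p], f      # incumbent and its value
    start = time.time()
    while time.time() - start < t_lim:
        z = z_min
        while z <= z_max:
            p = shake(p_best, z)                  # Line 7
            f = ls(p)                             # Line 8
            if f < f_best:                        # neighborhood_change, Line 9
                p_best, f_best = [list(x) for x in p], f
                z = z_min
            else:
                z += z_step
            if time.time() - start >= t_lim:      # Line 10
                break
    return p_best, f_best
```

With a trivial `ls` that only evaluates the objective, the driver degenerates to repeated shaking; the fast LS of Section 3 is what makes the scheme effective in the paper.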
The pseudo-code of the shaking procedure and the neighborhood change function is given in Algorithms 2 and 3, respectively. The aim of shake is to perturb the best solution $p^*$ (or, more precisely, its copy $p$). The parameter $z$ is the total number of pairwise interchange moves that are executed by the procedure. The number of moves performed on the permutation $p_t$, $t \in \{1, \ldots, m\}$, is denoted by $q_t$. The procedure uniformly and randomly chooses an integer in the interval $[1, m]$ $z$ times. The value of $q_t$ is equal to the number of times that the integer $t$ is selected. Further steps of the procedure (Lines 3–10) are executed for each planning period $t$ with $q_t > 0$. The inner loop (Lines 5–9) starts by randomly selecting two facilities from the set $\{p_t(i) \mid i \in I\}$. They are denoted as $p_t(k)$ and $p_t(l)$. Then, the permutation $p_t$ is updated by swapping the positions of the selected facilities. The role of the set $I$ in the algorithm is to guarantee that each facility is selected at most once. The solution returned by shake belongs to the neighborhood $\tilde{N}_z(p^*)$. The procedure neighborhood_change either increases the shaking parameter $z$ by $z_{step}$ or sets it to the minimum value $z_{\min}$ if an improved solution has been found. The procedure is responsible for memorizing this solution (Line 2).
Algorithm 2 Shake function
      shake($p^*$, $z$)
Input: Best-so-far solution $p^*$, parameter $z$.
Output: Solution $p$ in the neighborhood of $p^*$.
1:   Assign $p^*$ to $p$
2:   Randomly split $z$ into $m$ nonnegative numbers $q_t$, $t = 1, \ldots, m$
3:   for $t = 1, \ldots, m$ such that $q_t > 0$ do
4:      $I := H$
5:      for $q_t$ times do
6:         Randomly select $k$ and $l \ne k$ from $I$
7:         Swap positions of facilities $p_t(k)$ and $p_t(l)$ in $p_t$
8:         $I := I \setminus \{k, l\}$
9:      end for
10:  end for
11:  return $p = (p_1, \ldots, p_m)$
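A direct translation of Algorithm 2 might look as follows. This is a sketch under one extra assumption: if a period receives more swaps than its remaining untouched positions allow, the loop simply stops early for that period (the pseudocode implicitly relies on $z \le z_{\max} = \rho n$ keeping this from mattering):

```python
import random

def shake(p_star, z, n, m, rng=random):
    """Perturb the incumbent: z pairwise interchanges spread randomly over
    the m periods, each position swapped at most once per period."""
    p = [list(perm) for perm in p_star]      # work on a copy (Line 1)
    q = [0] * m
    for _ in range(z):                       # split z over the periods (Line 2)
        q[rng.randrange(m)] += 1
    for t in range(m):
        if q[t] == 0:
            continue
        free = list(range(n))                # positions not yet touched (I := H)
        for _ in range(q[t]):
            if len(free) < 2:
                break                        # assumption: stop if I runs out
            k, l = rng.sample(free, 2)       # Line 6
            p[t][k], p[t][l] = p[t][l], p[t][k]
            free.remove(k)
            free.remove(l)                   # I := I \ {k, l}
    return p
```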
Algorithm 3 Neighborhood change function
      neighborhood_change($p$, $p^*$, $f$, $f^*$, $z$, $z_{\min}$, $z_{step}$)
Input: Current solution $p$ of value $f$, best-so-far solution $p^*$ of value $f^*$,
           parameters $z$, $z_{\min}$, and $z_{step}$.
Output: Possibly updated $p^*$ and $f^*$, updated parameter $z$.
1:   if $f < f^*$ then
2:      Assign $p$ to $p^*$ and $f$ to $f^*$
3:      $z := z_{\min}$
4:   else
5:      $z := z + z_{step}$
6:   end if
7:   return $z$
Now, we return to the question of generating an initial solution to the DSRFLP. A simple way is to randomly generate a permutation of $n$ facilities and assign this permutation to $p_t$ for each $t \in \{1, \ldots, m\}$. We call this configuration of our algorithm VNS1. An alternative strategy is to apply a heuristic technique. Our choice in this work is to use a VNS algorithm proposed in [16] for solving the SRFLP. We obtain an instance of the SRFLP with the flow matrix $W = (w_{su})$ from an instance of the DSRFLP with a set of flow matrices $W_t = (w_{tsu})$, $t = 1, \ldots, m$, straightforwardly by summing the matrices $W_t$. Formally, $w_{su} = \sum_{t=1}^{m} w_{tsu}$, $s, u = 1, \ldots, n$. Like the VNS presented in this paper, the algorithm in [16] is equipped with a CPU time-based stopping criterion. Therefore, we have to share the time resources between the generation of an initial solution (Line 1 of Algorithm 1) and the remaining steps of the VNS. We set the cutoff time for the first step (Line 1) to $\beta T_{\lim}$. A suitable value of the parameter $\beta$ should be chosen experimentally. Intuitively, $\beta$ is expected to be less than 0.1. An experiment for selecting $\beta$ is described in Section 4. Assume that $\tilde{p}$ is the solution produced by the algorithm in [16] for the SRFLP instance with the flow matrix $W$. We construct an initial solution for the DSRFLP by setting $p_t = \tilde{p}$ for each $t = 1, \ldots, m$. We refer to this configuration of VNS as VNS2. It can be noticed that VNS1 is obtained from VNS2 by taking $\beta = 0$.
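The flow aggregation used by VNS2 is a one-liner in practice; the following sketch (with an illustrative matrix-of-lists representation) sums the period flow matrices entrywise:

```python
def aggregate_flows(flow_mats):
    """Collapse the period flow matrices W_t of a DSRFLP instance into a
    single SRFLP flow matrix W by entrywise summation: w_su = sum_t w_tsu."""
    n = len(flow_mats[0])
    return [[sum(W[s][u] for W in flow_mats) for u in range(n)]
            for s in range(n)]
```

The resulting matrix $W$ defines the static SRFLP instance handed to the heuristic of [16].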

3. Local Search

An important issue in the design of local search algorithms is the choice of a neighborhood structure or structures. Our first priority is to use the insertion neighborhood. Let us denote this neighborhood for $p = (p_1, \ldots, p_m) \in \Pi_m$ by $N(p)$. It consists of all solutions that can be obtained from $p$ by removing a facility from its current position in a permutation $p_t$, $t \in \{1, \ldots, m\}$, and inserting it at a different position in the same permutation. Given $p \in \Pi_m$, the construction of a solution in the neighborhood $N(p)$ is called an insertion move. As noted by Schiavinotto and Stützle [55], the insertion move is one of the basic operators in permutation-based optimization problems. Assume that $p' = (p'_1, \ldots, p'_m) \in N(p)$ is obtained from $p$ by removing facility $s = p_t(k)$ from position $k$ and inserting it at position $l$. The change in cost between $p$ and $p'$ is called the move gain. We denote it by $\delta(p, k, l) = F(p') - F(p)$. The insertion operation is illustrated in Figure 2 for $n = 6$ and $m = 3$. Facility 1 is moved at the beginning of planning period 2 from its current position 3 to position 5. As a result, facilities 2 and 6 are also relocated.
To describe methods for computing $\delta(p, k, l)$, we need some notations. We denote by $X = (x_{tu})$ an $m \times n$ matrix whose entry $x_{tu}$ is the center coordinate of facility $u$ during planning period $t$. We also define the following two functions:

$$g_1(u, t, x) = \begin{cases} r_{tu} & \text{if } x_{tu} = x_{t-1,u} \ne x \\ -r_{tu} & \text{if } x_{tu} \ne x_{t-1,u} = x \\ 0 & \text{otherwise,} \end{cases} \tag{4}$$

$$g_2(u, t, x) = \begin{cases} r_{t+1,u} & \text{if } x_{tu} = x_{t+1,u} \ne x \\ -r_{t+1,u} & \text{if } x_{tu} \ne x_{t+1,u} = x \\ 0 & \text{otherwise.} \end{cases} \tag{5}$$

We note that $x_{tu}$, $x_{t-1,u}$, and $x_{t+1,u}$ are the original positions of facility $u$ in periods $t$, $t-1$, and $t+1$, respectively, and $x$ is the position of facility $u$ after the movement during period $t$. Clearly, (4) is defined for $t \in \{2, \ldots, m\}$ and (5) is defined for $t \in \{1, \ldots, m-1\}$. Let us consider the general case of $t \in \{2, \ldots, m-1\}$. Assume that the center coordinate of facility $u$ changes from $x_{tu}$ to $x$ when $u$ is moved to a new location at the beginning of period $t$. Then, the sum $G(u, t, x) = g_1(u, t, x) + g_2(u, t, x)$ expresses the change in the rearrangement cost of $p$ incurred by this move operation. In Figure 2, this sum is $r_{2,1} - r_{3,1}$ for facility 1, $r_{3,2}$ for facility 2, and $r_{3,6}$ for facility 6. The above-defined sum reduces to a single term $G(u, t, x) = g_2(u, t, x)$ for $t = 1$ and $G(u, t, x) = g_1(u, t, x)$ for $t = m$.
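The functions $g_1$, $g_2$, and $G$ translate directly into code. This sketch follows Equations (4) and (5) as reconstructed here (a rearrangement cost appears when a move creates a mismatch between consecutive periods and is saved when it removes one); periods are 0-indexed:

```python
def g1(x, r, u, t, xnew):
    """Change in rearrangement cost at the boundary between periods t-1 and t
    when facility u moves to center xnew in period t (Eq. 4)."""
    if x[t][u] == x[t - 1][u] != xnew:
        return r[t][u]        # a new mismatch appears: cost r_tu is incurred
    if x[t][u] != x[t - 1][u] == xnew:
        return -r[t][u]       # an existing mismatch disappears: r_tu is saved
    return 0

def g2(x, r, u, t, xnew):
    """Same as g1 but for the boundary between periods t and t+1 (Eq. 5)."""
    if x[t][u] == x[t + 1][u] != xnew:
        return r[t + 1][u]
    if x[t][u] != x[t + 1][u] == xnew:
        return -r[t + 1][u]
    return 0

def G(x, r, u, t, xnew, m):
    """Total change in rearrangement cost when facility u moves to xnew in
    period t; the boundary terms exist only for interior period boundaries."""
    total = 0
    if t > 0:
        total += g1(x, r, u, t, xnew)
    if t < m - 1:
        total += g2(x, r, u, t, xnew)
    return total
```

The first test below mirrors the Figure 2 situation for facility 1, where the sum is $r_{2,1} - r_{3,1}$.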
Now we return to the computation of $\delta(p, k, l)$. Let $\delta'(p, k, l)$ denote the change in material flow cost incurred by the insertion move producing solution $p'$ from the solution $p$. For our purposes, it is convenient to write the move gain as

$$\delta(p, k, l) = \delta'(p, k, l) + G(s, t, x_{ts}(l)), \tag{6}$$

where $s = p_t(k)$ as before and $x_{ts}(l)$ stands for the center coordinate of facility $s$ in the layout during period $t$, which is obtained by inserting $s$ at position $l$ in the permutation $p_t$. The reason for using (6) is to compute both terms at the right-hand side of (6) (more precisely, $\delta'(p, k, l)$ and $x_{ts}(l)$) recursively.
To proceed, suppose that $l < k$. We note that the insertion move can be reduced to a sequence of elementary moves in which facility $s$ is interchanged with its left neighbor. The insertion process starts by interchanging $s$ with facility $p_t(k-1)$. Next, $s$ is interchanged with $p_t(k-2)$, then with $p_t(k-3)$, etc. Eventually, this process ends when facility $s$ reaches position $l$ in the permutation $p_t$. Let us focus on the last step in the described process: facility $s$ is interchanged with facility $v = p_t(l)$. At this point, $\delta'(p, k, l+1)$ and $x_{ts}(l+1)$ are assumed to be known. The last step of the insertion process is illustrated in Figure 3. We see that after swapping the positions of facilities $s$ and $v$, the distance between facility $p_t(i) = u$, $i < l$, and $s$ decreases by $L_v$ and that between $u$ and $v$ increases by $L_s$. Similarly, the distance between facility $p_t(i)$, $i > l$, $i \ne k$, and $s$ increases by $L_v$ and that between $p_t(i)$ and $v$ decreases by $L_s$. Based on these observations, we can write
$$\delta'(p, k, l) = \delta'(p, k, l+1) + \sum_{i=1}^{l-1} \left( w_{t p_t(i) v} L_s - w_{t p_t(i) s} L_v \right) + \sum_{i=l+1, i \ne k}^{n} \left( w_{t p_t(i) s} L_v - w_{t p_t(i) v} L_s \right) + G(v, t, x_{tv} + L_s). \tag{7}$$

The last term in (7) represents the change in the cost resulting from the rearrangement of facility $v$. The new center coordinate of facility $s$ is computed as follows:

$$x_{ts}(l) = x_{ts}(l+1) - L_v. \tag{8}$$

The initial conditions of the recurrence relations (7) and (8) are $\delta'(p, k, k) = 0$ and $x_{ts}(k) = x_{ts}$, respectively.
If $k < l$, then (7) and (8) are replaced by the following equations:

$$\delta'(p, k, l) = \delta'(p, k, l-1) + \sum_{i=1, i \ne k}^{l-1} \left( w_{t p_t(i) s} L_v - w_{t p_t(i) v} L_s \right) + \sum_{i=l+1}^{n} \left( w_{t p_t(i) v} L_s - w_{t p_t(i) s} L_v \right) + G(v, t, x_{tv} - L_s), \tag{9}$$

$$x_{ts}(l) = x_{ts}(l-1) + L_v. \tag{10}$$

The initial conditions of (9) and (10) are the same as in the case of $l < k$.
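The recurrence (7) can be checked numerically. The sketch below implements its material-flow part for a single period (so the $G$ terms vanish) with 0-indexed positions, and verifies the accumulated gains against a direct recomputation of the layout cost:

```python
def insertion_gains_left(perm, W, lengths, k):
    """Material-flow gains of moving facility s = perm[k] to each position
    l < k, accumulated via the recurrence (7); single period, 0-indexed."""
    n, s = len(perm), perm[k]
    Ls = lengths[perm[k]]
    gains, delta = {}, 0
    for l in range(k - 1, -1, -1):        # elementary left interchanges
        v, Lv = perm[l], lengths[perm[l]]
        for i in range(l):                # facilities to the left of v
            delta += W[perm[i]][v] * Ls - W[perm[i]][s] * Lv
        for i in range(l + 1, n):         # facilities to the right of v
            if i != k:
                delta += W[perm[i]][s] * Lv - W[perm[i]][v] * Ls
        gains[l] = delta
    return gains

def period_cost(perm, W, lengths):
    """Direct evaluation of the single-period material handling cost."""
    x, pos = {}, 0.0
    for s in perm:
        x[s] = pos + lengths[s] / 2.0
        pos += lengths[s]
    return sum(W[perm[i]][perm[j]] * abs(x[perm[i]] - x[perm[j]])
               for i in range(len(perm)) for j in range(i + 1, len(perm)))
```

Each call still costs $O(n)$ per target position; the point of Proposition 1 below in the text is to reduce the per-position work to $O(1)$.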
The approach we have just described, however, is not very efficient. It takes $O(n)$ time to compute $\delta'(p, k, l)$ using (7) or (9). We adopt a method that was proposed in [16] for solving the SRFLP. We use an $m \times (n-1)$ matrix $C = (c_{tq})$ with entries $c_{tq} = \sum_{i=1}^{q} \sum_{j=q+1}^{n} w_{t p_t(i) p_t(j)}$, $t = 1, \ldots, m$, $q = 1, \ldots, n-1$. Its entry $c_{tq}$ represents the sum of flow costs between the first $q$ facilities and the remaining $n - q$ facilities during period $t$. To compute $C$, we use two other matrices $E^1 = (e^1_{tu})$ and $E^2 = (e^2_{tu})$ of size $m \times n$, in which $e^1_{tu} = \sum_{i=1}^{q-1} w_{t p_t(i) u}$, $e^2_{tu} = \sum_{i=q+1}^{n} w_{t p_t(i) u}$, and where it is assumed that $u = p_t(q)$. The rows of the matrices $E^1$ and $E^2$ are indexed by periods and the columns by facilities. It is not difficult to see that

$$c_{tq} = c_{t,q-1} + e^2_{t p_t(q)} - e^1_{t p_t(q)}, \quad t = 1, \ldots, m, \ q = 1, \ldots, n-1, \tag{11}$$

where by convention $c_{t0} = 0$, $t = 1, \ldots, m$. From $C$, we can build the matrix $C^- = (c^-_{tq})$ with entries $c^-_{tq} = c_{tq} - c_{t,q-1}$ and the matrix $C^+ = (c^+_{tq})$ with entries $c^+_{tq} = c_{tq} + c_{t,q-1}$, where $t = 1, \ldots, m$ and $q = 1, \ldots, n$. In these definitions, it is assumed that $c_{tn} = 0$, $t = 1, \ldots, m$. An efficient way to compute $\delta'(p, k, l)$ is provided by the following formulas.
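For a single period, the auxiliary quantities $c_q$, $c^-_q$, and $c^+_q$ can be built as follows. This sketch stores $c$ with $c[0] = 0$ and, per the convention above, keeps $c[n] = 0$:

```python
def cumulative_flows(perm, W):
    """One period's row of C, C-, C+ via the recurrence (11).

    Returns c (indices 0..n, with c[0] = c[n] = 0 by convention) and the
    lists c_minus, c_plus whose entry q-1 holds c^-_q and c^+_q, q = 1..n.
    """
    n = len(perm)
    pos = {u: q for q, u in enumerate(perm)}                    # 0-indexed
    e1 = {u: sum(W[perm[i]][u] for i in range(pos[u])) for u in perm}
    e2 = {u: sum(W[perm[i]][u] for i in range(pos[u] + 1, n)) for u in perm}
    c = [0] * (n + 1)
    for q in range(1, n):                                       # Eq. (11)
        u = perm[q - 1]
        c[q] = c[q - 1] + e2[u] - e1[u]
    c_minus = [c[q] - c[q - 1] for q in range(1, n + 1)]
    c_plus = [c[q] + c[q - 1] for q in range(1, n + 1)]
    return c, c_minus, c_plus
```

In the test, $c_1$ is the flow between the first facility and the rest, and $c_2$ the flow across the cut after the second facility, matching the definition of $c_{tq}$.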
Proposition 1.
Let $p \in \Pi_m$, $t \in \{1, \ldots, m\}$, $k \in \{1, \ldots, n\}$, and $s = p_t(k)$. Then, for $l = k-1, k-2, \ldots, 1$,

$$\delta'(p, k, l) = \delta'(p, k, l+1) + L_s (w_{tsv} - c^-_{tl}) + L_v (\alpha_l + \alpha_{l+1} + c^-_{tk}) + G(v, t, x_{tv} + L_s), \tag{12}$$

where $v = p_t(l)$, $\alpha_l = \alpha_{l+1} + w_{tsv}$, and initially $\delta'(p, k, k) = 0$ and $\alpha_k = 0$.
For $l = k+1, k+2, \ldots, n$,

$$\delta'(p, k, l) = \delta'(p, k, l-1) + L_s (w_{tsv} + c^-_{tl}) + L_v (\alpha_{l-1} + \alpha_l - c^-_{tk}) + G(v, t, x_{tv} - L_s), \tag{13}$$

where $v = p_t(l)$, $\alpha_l = \alpha_{l-1} + w_{tsv}$, and the initial conditions are the same as in (12).
Proof. 
Consider the case of $l < k$. Suppose that the facility $s$ is interchanged with its current left neighbor $v = p_t(l)$. Let the resulting change in material handling cost be denoted by $\delta_1(p, k, l)$ and the cost of rearranging facility $v$ be denoted by $\delta_2(p, k, l)$. Thus, $\delta'(p, k, l) = \delta'(p, k, l+1) + \delta_1(p, k, l) + \delta_2(p, k, l)$. Based on Proposition 6 in [16], it can be concluded that $\delta_1(p, k, l)$ is equal to $L_s (w_{tsv} - c^-_{tl}) + L_v (\alpha_l + \alpha_{l+1} + c^-_{tk})$. Moreover, from the definition of the functions $g_1$, $g_2$, and $G$, it is obvious that $\delta_2(p, k, l) = G(v, t, x_{tv} + L_s)$. Taken together, these two observations prove (12). The same line of reasoning applies to Equation (13).    □
When facility $s$ is relocated from position $k$ to position $l$ in the permutation $p_t$, the row of the matrix $C$ corresponding to period $t$ needs to be updated. Suppose first that $l < k$. Then, for each facility $v = p_t(q)$, $q \in \{l, \ldots, k-1\}$, the material flow cost $w_{tsv}$ is added to $e^1_{tv}$ and $e^2_{ts}$ and subtracted from $e^2_{tv}$ and $e^1_{ts}$. If $l > k$, then, for each facility $v = p_t(q)$, $q \in \{k+1, \ldots, l\}$, $w_{tsv}$ is added to $e^2_{tv}$ and $e^1_{ts}$ and subtracted from $e^1_{tv}$ and $e^2_{ts}$. This step of the algorithm also includes updating the $t$th row of the matrix $X$ and the permutation $p_t$. Having updated $E^1$ and $E^2$, the new entries in the $t$th row of $C$ are computed using (11). This is only performed for $q = \min(k, l), \ldots, \max(k, l) - 1$. Finally, the $t$th row of the matrix $C^-$ is updated according to the definition of $C^-$.
To make the search more productive, our LS algorithm also applies the swap operator. It consists of swapping the positions of two facilities. The obtained solution belongs to the neighborhood N ˜ 1 defined in Section 2. To lessen the computational burden, the algorithm employs a reduced swap neighborhood N ˜ 1 . For p Π m , a solution p N ˜ 1 ( p ) belongs to N ˜ 1 ( p ) if and only if it is obtained by interchanging in p t , t { 1 , , m } , two facilities of equal length. Let these facilities be denoted by s = p t ( k ) and u = p t ( l ) . Assume w.l.o.g. that k < l . Since L s = L u , the center coordinates of facilities p t ( i ) , i = k + 1 , , l 1 , in  p are the same as in p. This fact significantly reduces the complexity of computing the difference between F ( p ) and F ( p ) . We call this difference the gain of the swap move, and denote it by Δ ( p , k , l ) = F ( p ) F ( p ) . One possible method to compute Δ ( p , k , l ) uses a distance matrix. Let us denote this n × n matrix by D t = ( d t ( p t ( i ) , p t ( j ) ) ) where d t ( p t ( i ) , p t ( j ) ) is the distance between the centers of facilities p t ( i ) and p t ( j ) during period t as defined by Equation (2). Assume that the positions of facilities s and u with L s = L u are swapped in the permutation p t . This operation changes the material flow cost between each facility v H { s , u } and facilities s and u. This change is equal to w t v s d t ( v , u ) w t v s d t ( v , s ) + w t v u d t ( v , s ) w t v u d t ( v , u ) = ( w t v s w t v u ) ( d t ( v , u ) d t ( v , s ) ) . Therefore, we can write
$$\Delta(p,k,l) = \sum_{\substack{v=1,\; v \ne s,\; v \ne u}}^{n} (w_{tvs} - w_{tvu})\left(d_t(v,u) - d_t(v,s)\right) + G(s,t,x_{tu}) + G(u,t,x_{ts}). \quad (14)$$
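As a concrete illustration, the following sketch (ours, in Python rather than the C++ used in our experiments; all identifiers are illustrative) checks the flow-cost part of this gain against a full recomputation of the single-period material handling cost. The rearrangement term $G$ is omitted.

```python
def centers(perm, lengths):
    # Center coordinate of each facility in a left-anchored single-row layout.
    x, out = 0.0, {}
    for f in perm:
        out[f] = x + lengths[f] / 2.0
        x += lengths[f]
    return out

def flow_cost(perm, lengths, w):
    # Material handling cost for one period: flow times center-to-center distance.
    c = centers(perm, lengths)
    n = len(perm)
    return sum(w[perm[i]][perm[j]] * abs(c[perm[i]] - c[perm[j]])
               for i in range(n) for j in range(i + 1, n))

def swap_gain(perm, lengths, w, k, l):
    # Flow-cost part of the gain for swapping the equal-length facilities at
    # positions k < l; the centers of the facilities in between do not move.
    s, u = perm[k], perm[l]
    assert lengths[s] == lengths[u]
    c = centers(perm, lengths)
    return sum((w[v][s] - w[v][u]) * (abs(c[v] - c[u]) - abs(c[v] - c[s]))
               for v in perm if v not in (s, u))
```

Swapping `perm[k]` and `perm[l]` and recomputing `flow_cost` from scratch gives exactly the value returned by `swap_gain`; the matrices introduced next reduce this sum to an $O(1)$ evaluation per move.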
The described method is simple but not very efficient. A good alternative is to adopt an approach proposed in [16]. To present an expression for $\Delta(p,k,l)$, we use the following matrices: $\lambda^+ = (\lambda^+_{uv} = (L_u + L_v)/2)$, $\lambda^- = (\lambda^-_{uv} = (L_u - L_v)/2)$, $u, v \in H$; $A_t = (a_{tvj})$, $v \in H$, $j = 1, \ldots, n$, $t \in \{1, \ldots, m\}$, with entries
$$a_{tvj} = \begin{cases} \sum_{i=j}^{q-1} w_{t v p_t(i)} & \text{if } j < q, \\ \sum_{i=q+1}^{j} w_{t v p_t(i)} & \text{if } j > q, \\ 0 & \text{if } j = q \end{cases}$$
for $v = p_t(q)$; and $B_t = (b_{tvj})$, $v \in H$, $j = 1, \ldots, n$, $t \in \{1, \ldots, m\}$, with entries
$$b_{tvj} = \begin{cases} a_{tvj}\lambda^+_{v p_t(j)} + \sum_{i=j+1}^{q-1} a_{tvi}\lambda^+_{p_t(i-1) p_t(i)} & \text{if } j < q, \\ a_{tvj}\lambda^+_{v p_t(j)} + \sum_{i=q+1}^{j-1} a_{tvi}\lambda^+_{p_t(i) p_t(i+1)} & \text{if } j > q, \\ 0 & \text{if } j = q \end{cases}$$
for $v = p_t(q)$. Matrices $C^-$ and $C^+$ were already defined earlier in this section. With these matrices, we can provide a formula for calculating the gain $\Delta(p,k,l)$.
Proposition 2.
For $p \in \Pi_m$, $t \in \{1, \ldots, m\}$, $k \in \{1, \ldots, n-1\}$, $l \in \{k+1, \ldots, n\}$, $s = p_t(k)$, and $u = p_t(l)$,
$$\Delta(p,k,l) = \left(c^-_{tl} - c^-_{tk} + 2w_{tsu}\right) d_t(s,u) + 2\left(b_{ts,l-1} + b_{tu,k+1}\right) + \lambda^-_{su}\left(c^+_{tl} - c^+_{tk}\right) + G(s,t,x_{tu}) + G(u,t,x_{ts}). \quad (15)$$
Equation (15) follows from Proposition 2 in [16] and the definition of the function $G$. If $l = k+1$, then (15) reduces to the following equation:
$$\Delta(p,k,l) = \left(c^-_{tl} - c^-_{tk} + 2w_{tsu}\right) d_t(s,u) + \lambda^-_{su}\left(c^-_{tl} - c^-_{t,k-1}\right) + G(s,t,x_{tu}) + G(u,t,x_{ts}). \quad (16)$$
Before starting to use (15) and (16), a number of matrices, including $B_t$, $t = 1, \ldots, m$, need to be initialized. To build $B_t$, first the matrix $A_t$ should be computed. Consider facility $v = p_t(q)$. By definition, $a_{tvq} = 0$. The other entries in the $v$th row of $A_t$ can be obtained iteratively by setting
$$a_{tvj} = a_{tv,j-1} + w_{t v p_t(j)}, \quad j > q, \quad (17)$$
$$a_{tvj} = a_{tv,j+1} + w_{t v p_t(j)}, \quad j < q. \quad (18)$$
The corresponding row of the matrix $B_t$ can be constructed by starting with $b_{tvq} = 0$ and applying the following equations:
$$b_{tvj} = b_{tv,j-1} - a_{tv,j-1}\lambda^-_{v p_t(j)} + a_{tvj}\lambda^+_{v p_t(j)}, \quad j > q, \quad (19)$$
$$b_{tvj} = b_{tv,j+1} - a_{tv,j+1}\lambda^-_{v p_t(j)} + a_{tvj}\lambda^+_{v p_t(j)}, \quad j < q. \quad (20)$$
The proof of (17)–(20) can be found in [16].
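The recurrences (17) and (18) can be checked against the direct definition of $A_t$. The sketch below (a single-period Python rendering of ours; the names are illustrative) builds one row of the matrix both ways:

```python
def a_direct(perm, w, q, j):
    # a_{tvj} from its definition: cumulative flow between v = perm[q] and the
    # facilities located between position j and v's own position q.
    v = perm[q]
    if j < q:
        return sum(w[v][perm[i]] for i in range(j, q))
    if j > q:
        return sum(w[v][perm[i]] for i in range(q + 1, j + 1))
    return 0

def a_row(perm, w, q):
    # The row of A_t for v = perm[q], built by the recurrences (17) and (18)
    # in O(n) time instead of O(n^2) for the direct sums.
    n, v = len(perm), perm[q]
    row = [0] * n
    for j in range(q + 1, n):       # (17): grow the sum to the right of q
        row[j] = row[j - 1] + w[v][perm[j]]
    for j in range(q - 1, -1, -1):  # (18): grow the sum to the left of q
        row[j] = row[j + 1] + w[v][perm[j]]
    return row
```

Both constructions agree entrywise, which is what allows the whole matrix $A_t$ to be initialized in $O(n^2)$ time.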
Suppose that facilities $s = p_t(k)$ and $u = p_t(l)$, $l > k$, are interchanged in the permutation $p_t$. Then, the matrices $A_t$ and $B_t$ need to be updated. This can easily be done using Equations (17)–(20). For simplicity, we assume that $p$ in these equations refers to the solution obtained after performing the swap move. Let $v = p_t(q)$. If $q < k$, then (17) is used for $j = k, \ldots, l-1$ and (19) for $j = k, \ldots, n$. If $q > l$, then (18) is used for $j = l, l-1, \ldots, k+1$ and (20) for $j = l, l-1, \ldots, 1$. If $k < q < l$, then $A_t$ is updated by Equation (17) for $j = l, \ldots, n$ and by (18) for $j = k, k-1, \ldots, 1$. Similarly, $B_t$ is updated by Equation (19) for $j = l, \ldots, n$ and by (20) for $j = k, k-1, \ldots, 1$. Finally, if $q = k$ (in this case, $v = u$) or $q = l$ (in this case, $v = s$), the $v$th row of $A_t$ and $B_t$ is created anew by first setting $a_{tvq} = b_{tvq} = 0$ and then applying Equations (17)–(20).
Our local search algorithm for the DSRFLP explores the reduced swap neighborhood $\tilde N_1'$ and the insertion neighborhood $N$ repeatedly. This also implies the need to update the matrices $A_t$ and $B_t$ after performing an insertion move. As in the case of a pairwise interchange of facilities, the matrices are updated using Equations (17)–(20). Assume that facility $s$ is moved from position $k$ to position $l$ in permutation $p_t$. Let $i' = \min(k,l)$, $i'' = \max(k,l)$, and consider facility $v = p_t(q)$ (here, $p_t$ stands for the updated permutation). If $q < i'$, then $a_{tvj}$, $j = i', \ldots, i''-1$, are calculated by Equation (17) and $b_{tvj}$, $j = i', \ldots, n$, are calculated by Equation (19). The other entries in the $v$th row of $A_t$ and $B_t$ remain unchanged. If $q > i''$, then (18) is used for $j = i'', i''-1, \ldots, i'+1$ and (20) for $j = i'', i''-1, \ldots, 1$. If $q = l$ (in this case, $v = s = p_t(l)$), then first $a_{tvl}$ as well as $b_{tvl}$ are set to 0 and then Equations (17)–(20) are applied. Consider now the remaining case where $i' \le q \le i''$ and $q \ne l$. If $k < l$, then $a_{tvj}$ is set to $a_{tv,j+1}$ and $b_{tvj}$ is set to $b_{tv,j+1}$ for $j = k, \ldots, l-1$. If $k > l$, then $a_{tvj}$ is set to $a_{tv,j-1}$ and $b_{tvj}$ is set to $b_{tv,j-1}$ for $j = k, k-1, \ldots, l+1$. Moreover, $a_{tvj}$ is calculated by (17) and $b_{tvj}$ by (19) for $j = j', \ldots, n$, where $j' = l$ if $k < l$ and $j' = k+1$ if $k > l$. Furthermore, $a_{tvj}$ is calculated by (18) and $b_{tvj}$ by (20) for $j = j'', j''-1, \ldots, 1$, where $j'' = k-1$ if $k < l$ and $j'' = l$ if $k > l$. The step-by-step routines to update the matrices $A_t$ and $B_t$ can be found in [16].
Algorithm 4 gives the pseudocode of the LS component of the approach. The input to LS includes the solution $p$ from which the search is started. This solution is possibly replaced by a better one during the search process. The returned solution $p$ is locally optimal with respect to two neighborhoods, the reduced swap neighborhood $\tilde N_1'(p)$ and the insertion neighborhood $N(p)$. The variable $f$ stores the objective function value of the solution $p$. The algorithm starts with the initialization of the matrices $C$, $C^-$, $C^+$, $X$, $D_t$, and $B_t$, $t = 1, \ldots, m$. Since $B_t$ depends on the matrix $A_t$, the latter, for each $t \in \{1, \ldots, m\}$, is initialized as well. Then, the algorithm proceeds iteratively. At each iteration, it first repeatedly explores the reduced swap neighborhood $\tilde N_1'(p)$ of the current solution $p$ until a locally optimal solution is reached (Lines 4–8). Then, the procedure explore_insertion_neighborhood is triggered. It either returns an improved solution (if $\Delta^* < 0$) or reports that the solution $p$ is locally optimal with respect to the neighborhood $N(p)$ (if $\Delta^* = 0$). In the latter case, the algorithm terminates. We apply the explore_swap_neighborhood procedure first and explore_insertion_neighborhood second because the number of possible swap moves is expected to be much smaller than the number of possible insertion moves. This is because the swap operation is restricted to pairs of facilities of equal length.
Algorithm 4 Local search
      LS(p)
Input: Solution p.
Output: Possibly improved solution p and its value f.
1:   Initialize data (matrices $C$, $C^-$, $C^+$, $X$, $D_t$, and $B_t$, $t = 1, \ldots, m$)
2:   Compute f = F ( p ) and set μ : = true
3:   while  μ = true do
4:       $\Delta^* := -1$
5:      while  Δ * < 0  do
6:          Δ * := explore_swap_neighborhood ( p )
7:         if  Δ * < 0 then f : = f + Δ *  end if
8:      end while
9:       μ : = false
10:      Δ * := explore_insertion_neighborhood ( p )
11:     if  Δ * < 0  then
12:          f : = f + Δ *
13:          μ : = true
14:      end if
15:   end while
16:   return f // possibly improved solution p is also returned
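For concreteness, the control flow of Algorithm 4 can be sketched as follows. This is a simplified single-period Python rendering of ours, with naive full-recomputation gain evaluation standing in for the fast $O(1)$ formulas; all identifiers are illustrative.

```python
import itertools

def cost(perm, lengths, w):
    # Single-period material handling cost: flows times center distances.
    x, c = 0.0, {}
    for f in perm:
        c[f] = x + lengths[f] / 2.0
        x += lengths[f]
    return sum(w[a][b] * abs(c[a] - c[b]) for a, b in itertools.combinations(perm, 2))

def local_search(perm, lengths, w):
    # Algorithm 4's control flow: exhaust the reduced (equal-length) swap
    # neighborhood, then try the insertion neighborhood; repeat until neither
    # neighborhood yields an improving move.
    perm = perm[:]
    f = cost(perm, lengths, w)
    improved = True
    while improved:
        gain = -1.0
        while gain < 0:                      # swap phase (Lines 4-8)
            gain, move = 0.0, None
            for k in range(len(perm) - 1):
                for l in range(k + 1, len(perm)):
                    if lengths[perm[k]] != lengths[perm[l]]:
                        continue             # reduced neighborhood only
                    q = perm[:]
                    q[k], q[l] = q[l], q[k]
                    g = cost(q, lengths, w) - f
                    if g < gain:
                        gain, move = g, q
            if gain < 0:
                perm, f = move, f + gain
        improved = False                     # insertion phase (Lines 10-14)
        best, move = 0.0, None
        for k in range(len(perm)):
            for l in range(len(perm)):
                if l == k:
                    continue
                q = perm[:]
                s = q.pop(k)
                q.insert(l, s)
                g = cost(q, lengths, w) - f
                if g < best:
                    best, move = g, q
        if best < 0:
            perm, f = move, f + best
            improved = True
    return perm, f
```

The returned layout admits no improving equal-length swap and no improving insertion, mirroring the termination condition of Algorithm 4.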
The pseudocode of the procedure explore_swap_neighborhood is given in Algorithm 5. The nested loops in Lines 2–10 implement Formulas (15) and (16). An improving move is represented by the triplet ( t * , k * , l * ) . If such a move has been detected, then the positions of the selected facilities are swapped in p t * (Line 12). This step is followed by updating the matrices used in the calculation of the move gain (Line 13). The following statement shows the efficiency of the procedure.
Algorithm 5 Swap neighborhood exploration procedure
      explore_swap_neighborhood(p)
Input: Solution p.
Output: Best move gain Δ * and solution p (improved if Δ * < 0 ).
1:    Δ * : = 0
2:   for  t = 1 , , m  do
3:      for each pair $s = p_t(k) \in H$ and $u = p_t(l) \in H$ such that $l > k$ and $L_s = L_u$  do
4:         Compute Δ : = Δ ( p , k , l ) by (15) if l > k + 1 or by (16) if l = k + 1
5:         if  Δ < Δ *  then
6:             Δ * : = Δ
7:            Set t * : = t , k * : = k , and  l * : = l
8:         end if
9:      end for
10:   end for
11:   if  Δ * < 0  then
12:      Swap positions of facilities p t * ( k * ) and p t * ( l * ) in p t *
13:      Update matrices $C$, $C^-$, $C^+$, $X$, $D_{t^*}$, and $B_{t^*}$
14:   end if
15:   return  Δ *
Proposition 3.
The computational complexity of the procedure explore_swap_neighborhood is $O(mn^2)$.
Proof. 
Observe that the loops in Lines 2–10 run in $O(mn^2)$ time. This is implied by the fact that the time complexity of the $\Delta$ calculation (Line 4) is $O(1)$. The other parts of the procedure have lower complexity. The procedure performs $O(n^2)$ operations to update the matrices $D_{t^*}$ and $B_{t^*}$, $O(n)$ operations to update the matrices $C$, $C^-$, and $C^+$, and $O(1)$ operations to update the matrix $X$.    □
We remark that the computational complexity of Algorithm 5 asymptotically matches the size of the neighborhood $\tilde N_1'$, which is $O(mn^2)$. Thus, the neighborhood exploration is performed efficiently. When the number of planning periods $m$ is a constant, the running time of Algorithm 5 reduces to $O(n^2)$. As an alternative to the described procedure, one might use Equation (14) instead of (15) and (16) in Line 4 of Algorithm 5. However, the worst-case complexity of such an implementation would be $O(mn^3)$.
To present our procedure for exploring the insertion neighborhood, we rewrite (12) as $\delta(p,k,l) = \delta(p,k,l+1) + \delta^1_l$, where
$$\delta^1_l = L_s\left(w_{tsv} - c^-_{tl}\right) + L_v\left(\alpha + \alpha' + c^-_{tk}\right) + G(v,t,x_{tv} + L_s), \quad (21)$$
$\alpha = \alpha_l$, and $\alpha' = \alpha_{l+1}$. Similarly, we rewrite (13) as $\delta(p,k,l) = \delta(p,k,l-1) + \delta^2_l$, where
$$\delta^2_l = L_s\left(w_{tsv} + c^-_{tl}\right) + L_v\left(\alpha + \alpha' - c^-_{tk}\right) + G(v,t,x_{tv} - L_s), \quad (22)$$
$\alpha' = \alpha_{l-1}$, and $\alpha = \alpha_l$.
The pseudocode of the procedure explore_insertion_neighborhood is shown in Algorithm 6. It can be seen that Lines 4–14 implement the first part of Proposition 1 (with (12)) and Lines 15–25 implement the second part of this proposition (with (13)). The sum $\Delta + \gamma$ stands for $\delta(p,k,l)$ as given by (6). The best move is stored as a triplet $(t^*, k^*, l^*)$, where $t^*$ is the selected period and $k^*$ and $l^*$ are the current and target positions, respectively, of the selected facility in permutation $p_{t^*}$. If an improving move is found, then the facility $p_{t^*}(k^*)$ is moved to position $l^*$ (Line 29). Then, the matrices $C$, $C^-$, $C^+$, $X$, $D_{t^*}$, and $B_{t^*}$ are updated in Line 30. The following result is similar to Proposition 3, and its proof is analogous.
Proposition 4.
The computational complexity of the procedure explore_insertion_neighborhood is $O(mn^2)$.
Algorithm 6 Insertion neighborhood exploration procedure
      explore_insertion_neighborhood(p)
Input: Solution p.
Output: Best move gain Δ * and solution p (improved if Δ * < 0 ).
1:    Δ * : = 0
2:   for  t = 1 , , m  do
3:      for  k = 1 , , n and s = p t ( k )  do
4:         Set $\Delta := 0$ and $\alpha := 0$ // $\alpha$ stands for $\alpha_l$ in (21)
5:         for $l = k-1$ to 1 by $-1$, and $v = p_t(l)$  do
6:             $\alpha' := \alpha$ // $\alpha'$ stands for $\alpha_{l+1}$ in (21)
7:             $\alpha := \alpha + w_{tsv}$
8:            Compute $\delta^1_l$ by (21) and $\gamma = G(s,t,x_{ts}(l))$
9:             $\Delta := \Delta + \delta^1_l$ // $\Delta$ stands for $\delta(p,k,l)$ in (12)
10:           if  Δ + γ < Δ *  then
11:                Δ * : = Δ + γ
12:               Set t * : = t , k * : = k , and  l * : = l
13:            end if
14:         end for
15:         Set $\Delta := 0$ and $\alpha := 0$ // $\alpha$ stands for $\alpha_l$ in (22)
16:         for $l = k+1$ to $n$ by 1, and $v = p_t(l)$  do
17:             $\alpha' := \alpha$ // $\alpha'$ stands for $\alpha_{l-1}$ in (22)
18:             $\alpha := \alpha + w_{tsv}$
19:            Compute $\delta^2_l$ by (22) and $\gamma = G(s,t,x_{ts}(l))$
20:             $\Delta := \Delta + \delta^2_l$ // $\Delta$ stands for $\delta(p,k,l)$ in (13)
21:            if  Δ + γ < Δ *  then
22:                Δ * : = Δ + γ
23:               Set t * : = t , k * : = k , and  l * : = l
24:            end if
25:         end for
26:      end for
27:   end for
28:   if  Δ * < 0  then
29:      Move facility p t * ( k * ) to position l * in p t *
30:      Update matrices $C$, $C^-$, $C^+$, $X$, $D_{t^*}$, and $B_{t^*}$
31:   end if
32:   return  Δ *
We note that the size of the insertion neighborhood is $O(mn^2)$. It thus follows that the procedure takes $O(1)$ time per move. A variant of the procedure can be implemented by replacing Equations (12) and (13) with (7) and (9). However, such a replacement increases the time complexity to $O(mn^3)$.
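To illustrate how insertion gains can be obtained in $O(1)$ per move after quadratic preprocessing, the following single-period sketch slides a facility one position at a time and updates a running gain from precomputed prefix flow sums. It mirrors the running-sum idea behind (12) and (13) rather than reproducing Equations (21) and (22) verbatim; a symmetric flow matrix is assumed, and all identifiers are ours.

```python
import itertools

def layout_cost(perm, lengths, w):
    # Single-period objective: flows times center-to-center distances.
    x, c = 0.0, {}
    for f in perm:
        c[f] = x + lengths[f] / 2.0
        x += lengths[f]
    return sum(w[a][b] * abs(c[a] - c[b]) for a, b in itertools.combinations(perm, 2))

def insertion_gains(perm, lengths, w):
    # Gains F(p') - F(p) for moving the facility at position k to position l,
    # for all pairs (k, l), in O(1) per move after O(n^2) setup: each one-hop
    # transposition's cost change is expressed through prefix flow sums.
    n = len(perm)
    T = [sum(row) for row in w]                 # total flow of each facility
    # A[v][j]: flow between facility v and the facilities at positions < j
    A = [[0] * (n + 1) for _ in range(n)]
    for v in range(n):
        for j in range(n):
            A[v][j + 1] = A[v][j] + w[v][perm[j]]
    gains = {}
    for k in range(n):
        s = perm[k]
        delta = 0.0
        for j in range(k - 1, -1, -1):          # slide s to the left
            v = perm[j]
            wl_s = A[s][j]                      # flows of s to the left of v
            wr_s = T[s] - A[s][j] - w[s][v]     # flows of s to its right
            wl_v = A[v][j]                      # flows of v to its left
            wr_v = T[v] - A[v][j] - w[v][s]     # flows of v to its right, s excluded
            delta += lengths[v] * (wr_s - wl_s) + lengths[s] * (wl_v - wr_v)
            gains[(k, j)] = delta
        delta = 0.0
        for j in range(k + 1, n):               # slide s to the right
            v = perm[j]
            wl_s = A[s][j]
            wr_s = T[s] - A[s][j] - w[s][v]
            wl_v = A[v][j] - w[v][s]            # s excluded from v's left side
            wr_v = T[v] - A[v][j]
            delta += lengths[v] * (wl_s - wr_s) + lengths[s] * (wr_v - wl_v)
            gains[(k, j)] = delta
    return gains
```

Every gain produced this way matches a full recomputation of the objective, while the whole table of $n(n-1)$ moves is filled in $O(n^2)$ time.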

4. Computational Results

In this section, we report on the results of computational tests to assess the performance of the developed variable neighborhood search algorithm for solving the DSRFLP. The effectiveness of the approach is evaluated by comparing the VNS against the simulated annealing algorithm proposed by Şahin et al. [2].

4.1. Experimental Setup

The VNS algorithm described in the previous sections was coded in the C++ programming language. For comparison purposes, we also coded the simulated annealing algorithm of Şahin et al. [2]. These authors used the name SA-R to refer to this algorithm, and we keep this name here. We ran SA-R with the parameter settings used in [2]. However, to be able to apply a time-based stopping criterion, we implemented SA-R as a multistart algorithm, where each restart begins from a randomly generated solution. The experiments with VNS and SA-R were carried out on a PC with an Intel Core i5-9400F CPU running at 2.90 GHz.
Since research on the DSRFLP is in its infancy, there are no benchmarks available in the literature. Therefore, we performed our experiments on a set of randomly generated problem instances. The entries of the cost matrix $(\psi_{su})$ and the material flow matrices $(\phi_{tsu})$, $t \in \{1, \ldots, m\}$, in these instances are random numbers drawn uniformly between 1 and 5 and between 1 and 10, respectively. The costs associated with rearranging facilities are randomly and uniformly sampled from the interval $[250, 500]$ if $n \le 100$ and from the interval $[1000, 2000]$ if $n > 100$. The length of each facility is a random integer between 1 and 5. The generated instances have from 10 to 200 facilities. The number of planning periods is either 3 or 5. The dataset is publicly available (https://drive.google.com/file/d/1P36xdv4QpZxFmROhAa8Z2dOXU9zJOJKl/view?usp=sharing, accessed on 1 May 2022). Here, it is important to mention that, as the number of facilities increases, a more realistic scenario is to place the facilities in a multi-row configuration. Our main reason for using single-row layout instances with a large number of facilities ($n > 100$) was to more comprehensively evaluate the performance of the described algorithms.
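A generator following these distributions can be sketched as shown below. This is illustrative only: the file format of our dataset is not specified here, and the symmetry of the cost and flow matrices is an assumption of the sketch.

```python
import random

def symmetric_matrix(rng, n, lo, hi):
    # Symmetric integer matrix with zero diagonal (symmetry is our assumption).
    m = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            m[i][j] = m[j][i] = rng.randint(lo, hi)
    return m

def generate_instance(n, m, seed=0):
    # Raw DSRFLP data following the distributions described above.
    rng = random.Random(seed)
    lengths = [rng.randint(1, 5) for _ in range(n)]            # facility lengths
    psi = symmetric_matrix(rng, n, 1, 5)                       # unit costs
    phi = [symmetric_matrix(rng, n, 1, 10) for _ in range(m)]  # flows per period
    lo, hi = (250, 500) if n <= 100 else (1000, 2000)
    rearr = [rng.uniform(lo, hi) for _ in range(n)]            # rearrangement costs
    return lengths, psi, phi, rearr
```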
In our computational experiments, we ran both VNS and SA-R 10 times per instance. Maximum CPU time limits for a run of an algorithm were as follows: 180 s for $n \le 20$, 600 s for $n = 30$, 1800 s for $40 \le n \le 60$, and 3600 s for $n \ge 70$. To measure the performance of the algorithms, we use the following statistics: the best objective function value over the 10 independent runs, the average objective function value over the 10 runs, and the average time needed to reach the best result.

4.2. Parameter Settings

In Section 2, we described two versions of our VNS implementation, VNS1 and VNS2. The first of them starts with a randomly generated permutation of facilities. Its parameters are $\rho$, $z_{\min}$, and $\theta$ (see Section 2). The second version (VNS2) starts with an initial solution produced by a VNS heuristic applied to an SRFLP instance. The running time of this heuristic is controlled through the parameter $\beta$. Naturally, the parameters $\rho$, $z_{\min}$, and $\theta$ apply to VNS2 as well.
To find good parameter settings for ρ , z min , and  θ , we examined the performance of VNS on a training sample consisting of 10 DSRFLP instances with n = 60 , 70 , 80 , 90 , and 100 and m = 3 and 5. These instances were randomly generated as described in Section 4.1. Of course, the training sample is disjoint from the main dataset, which is reserved for the final testing stage. In each experiment, we ran several configurations of VNS. To evaluate them, we used the following formula
$$F_{\mathrm{avdev}} = \sum \left( F - F_{\min} \right) / 10, \quad (23)$$
where $F$ denotes the objective function value achieved by the tested configuration and $F_{\min}$ represents the minimum objective function value obtained across all configurations. The sum in (23) is taken over all instances in the sample. During the parameter tuning phase, we set the cutoff time of VNS to 300 s per instance.
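In code, Equation (23) is simply a mean of per-instance gaps (a sketch of ours; the names are illustrative):

```python
def avg_deviation(config_values, per_instance_minima):
    # Equation (23): mean gap between a configuration's objective values and
    # the per-instance minima over the 10-instance training sample.
    assert len(config_values) == len(per_instance_minima) == 10
    return sum(f - fmin for f, fmin in zip(config_values, per_instance_minima)) / 10.0
```

A configuration that matches the best-known value on every training instance obtains $F_{\mathrm{avdev}} = 0$.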
In the stage of preliminary experimentation, we ran VNS1 and VNS2 using various combinations of the values of the parameters ρ , z min , and  θ . We observed that the performance of VNS1 and VNS2 was not very sensitive to the choice of these parameters. Therefore, we relied on a simple parameter setting procedure for our algorithm. Based on early tests, we first identified a range of potential values for each parameter. Then, we allowed one parameter to take on different values from its range while keeping the other parameters fixed at reasonable values chosen on the basis of preliminary numerical experiments.
We started our numerical analysis by investigating the influence of the parameter $\rho$ on the performance of the VNS algorithm. This parameter defines the maximum size of the neighborhood in the search process. We ran VNS with $\rho \in \{0.1, 0.2, 0.3, 0.4, 0.5, 0.7, 1, 1.5, 2\}$. The results are plotted in Figure 4. We see that VNS performed best for $\rho \le 0.4$, with a slight edge for $\rho = 0.3$. Therefore, we fixed $\rho$ at 0.3 for all further experiments with VNS. We then examined the following five values of the parameter $z_{\min}$: 1, 3, 5, 10, and 20. Figure 5 shows that the algorithm was fairly robust to the choice of $z_{\min}$. We decided to fix $z_{\min}$ at 3. Furthermore, we investigated the effect of the parameter $\theta$ on the performance of VNS. We ran the algorithm with $\theta = 1, 2, 3, 4, 5, 7, 10$, and 20. From Figure 6, we see that setting $\theta = 1$ is a bad choice. The best values of $\theta$ appeared to be 2 and 5, and the results for the other tested values of $\theta$ were very similar. Based on these results, we fixed $\theta$ at 5.
As we alluded to at the beginning of this section, the VNS2 version of the algorithm makes use of the time share parameter β . This parameter represents the proportion of time allotted to a heuristic for providing an initial solution. We tested the values of β from 0 to 0.06 in increments of 0.01 . In addition, we ran VNS2 with β = 0.08 , 0.1 , and  0.2 . The results are displayed in Figure 7. We observe that the best performance of VNS2 was achieved for β { 0.04 , 0.05 , 0.06 } . We elected to set β to 0.04 for all further experiments with VNS2 reported in this paper.

4.3. Computational Results for Smaller Sized Instances

We first report the computational experiments on a set of DSRFLP instances of size up to 100 facilities. Table 2 compares the best results achieved by VNS1, VNS2, and SA-R. Its first column contains the instance names; the first integer in the name gives the number of facilities and the second integer gives the number of planning periods. In the next columns, $F^*$ is the objective function value of the best solution out of 10 runs. The average results of VNS1, VNS2, and SA-R are listed in Table 3. The two columns for each algorithm contain the average objective function value of the 10 solutions, denoted $\bar F$, and the average time (in seconds) taken to find the best solution in a run. The bottom rows of Table 2 and Table 3 show the results averaged over all 20 problem instances. The best value of $F^*$ (in Table 2) and $\bar F$ (in Table 3) for each instance is highlighted in boldface.
By analyzing the results in Table 2 and Table 3, we find that both VNS versions outperformed the SA-R algorithm. For each instance with more than 10 facilities, VNS1 showed results superior to SA-R in terms of both performance measures, $F^*$ and $\bar F$. Each algorithm found the best solution for the two smallest instances; however, SA-R produced better average solutions than VNS1 for these instances. We also see that VNS2 obtained better solutions than SA-R for 17 instances, matched the performance of SA-R for 2 instances, and was inferior in one case (instance p-40-5). This observation holds for both tables. The superiority of VNS over SA-R is also evidenced by the statistics presented in the last row of each table, where we show the averaged results for each algorithm.
Figure 8 depicts the boxplots of the performance of the tested algorithms on a subset of DSRFLP instances from Table 2 and Table 3. The horizontal lines in each boxplot from bottom to top show the minimum, lower quartile, median, upper quartile, and maximum objective function values. As we can see in the figure, our VNS implementations were capable of delivering solutions of better quality than the SA-R algorithm.
Comparing VNS1 and VNS2, we find that VNS2 exhibits better performance than VNS1 in terms of $\bar F$, but is slightly worse in terms of $F^*$. To assess the results more rigorously, we applied the Wilcoxon signed-rank test. The comparison results are summarized in Table 4. The first column indicates which objective function values are compared: best solution values ($F^*$ in Table 2) in the first row and average solution values ($\bar F$ in Table 3) in the second row. The next three columns, #wins, #ties, and #losses, count the number of instances on which VNS2 found a better, an equally good, or an inferior solution compared with VNS1. The p-values from the Wilcoxon test are given in the penultimate column. We use a standard significance level of 0.05 to judge whether a significant difference exists between the algorithms. In the last column, the value "Yes" means that the average results of VNS2 are significantly better than those of VNS1, while "No" means that there is no statistically significant difference between the two algorithms when the comparison is based on the $F^*$ values.
The average running time taken by each algorithm to reach the last improvement in solution quality is shown in Table 3. We see that SA-R took less time to find the best solution in a run than VNS1 and VNS2. The running times of the latter two algorithms are quite comparable. Part (a) of Figure 9 shows the convergence speed of the tested algorithms for problem instance p-100-5. For each algorithm, a plot of the objective function value of the best solution versus computational time is provided. The first point plotted for VNS2 represents the solution generated by a VNS heuristic for the SRFLP and improved by running LS once. From the figure, we see that SA-R stops improving the best solution earlier than VNS1 and VNS2.

4.4. Computational Results for Larger Sized Instances

Our second experiment aimed to assess the performance of algorithms on a set of DSRFLP instances with the number of facilities ranging from 110 to 200. The results are reported in Table 5 and Table 6. They have the same structure as Table 2 and Table 3. The findings from the experiment slightly differ from those discussed in the previous section. The differences are not only due to an increase in the size of problem instances. We used slightly different parameters to generate instances of size n > 100 . The rearrangement costs for these instances are four times higher than in the case of instances of a size up to 100 (see Section 4.1).
Perhaps the main observation from Table 5 and Table 6 is the overwhelming superiority of VNS2 over the other two algorithms. VNS2 dominated SA-R and VNS1 across all 20 problem instances. By contrasting the penultimate column of Table 6 with the second and third columns of Table 5, we can see that the average result of VNS2 is better than the best SA-R (or VNS1) result for all instances except p-130-3. Comparing SA-R with VNS1, it can be noticed from the bottom rows of the tables that VNS1 found better best solutions (Table 5), while SA-R produced better solutions on average (Table 6). The boxplots for four DSRFLP instances with $n > 100$ are shown in Figure 10. They confirm the effectiveness of VNS2 in comparison with the other evaluated algorithms.
Table 7 summarizes comparison results between SA-R and VNS1. Regarding the average quality of solutions, the Wilcoxon signed-rank test demonstrated a statistically significant difference in favor of SA-R. However, there was no significant difference between the results of SA-R and VNS1 in the case of the best solutions.
In Table 6, we also report the average running time of the tested algorithms. We see that SA-R found the best solution in a run earlier than the VNS configurations. We notice that VNS2, and especially VNS1, obtained an improved solution in a situation where the time limit of 1 h on a run was close to expiring. In part (b) of Figure 9, we compare the convergence speed of the algorithms for the problem instance p-200-3. We increased the cutoff time for a run to 5 h. However, as we can see from the figure, after 3 h of execution, the improvement in solution quality was very marginal.
We finalized this section with a couple of remarks concerning the performance of the tested algorithms. We experimentally compared our two algorithms (VNS1 and VNS2) for the DSRFLP. The results clearly indicate that VNS2 is our key algorithm in this study. We compared VNS2 with the simulated annealing algorithm (SA-R) from the literature. Experiments showed that the performance of VNS2 was superior to that of SA-R.

5. Analysis of Local Search Variants

To demonstrate the computational efficiency of our LS component of the algorithm, we experimentally compared it with alternative local search implementations. One idea was to abandon the use of the swap operator and base the search entirely on insertion moves. Other attempts were directed towards simplifying the gain calculation process by replacing the use of Propositions 1 and 2 with Equations (7), (9), and (14). Each of the resulting procedures, however, should not be considered as an independent algorithm. Basically, these procedures can be treated as modifications of VNS2. They are obtained by making small changes to the LS part of VNS2. We investigated alternative LS implementations in order to justify various design choices made in the construction of the VNS2 algorithm. To assess the performance of the variable neighborhood search algorithm with alternative LS approaches, we used the main VNS2 variant (described in Section 2 and Section 3) as a reference method.

5.1. Usefulness of Swap Moves

We numerically analyzed the performance of a VNS2 version in which the LS procedure does not employ the swap neighborhood structure. We refer to this version as VNS2a. Basically, VNS2a is obtained from VNS2 by deleting Lines 4–8 in Algorithm 4.
To avoid unnecessarily long computations, we performed the comparison between VNS2 and VNS2a on the set of DSRFLP instances of Section 4.1 with $n \le 100$. The comparison results are shown in Figure 11. On the x axis, $F_{\mathrm{VNS2a}} - F_{\mathrm{VNS2}}$ represents the difference in the $F^*$ values between VNS2a and VNS2 in the bars labeled "Best", and the difference in the $\bar F$ values between VNS2a and VNS2 in the bars labeled "Average". We provide results for problem instances with at least 40 facilities. For smaller instances, the two versions of VNS2 either obtained the same solution or the difference between the objective function values was relatively small. We see in the figure that VNS2a found a better solution than VNS2 for p-40-5. The objective function value achieved by VNS2a for this instance is 1,982,841.80. This value, however, is larger than that reported for VNS1 (see Table 2). We also observed that, for all problem instances of size greater than 40, VNS2 produced better solutions than VNS2a. The difference $F_{\mathrm{VNS2a}} - F_{\mathrm{VNS2}}$ averaged over all 20 instances was 8486.30 in the case of the $F^*$ values and 11,322.90 in the case of the $\bar F$ values. From the figure, we can conclude that the swap neighborhood exploration procedure (Algorithm 5) is an important component of the approach: it helps to significantly improve the quality of the solutions produced by VNS.

5.2. Benefit of Fast Neighborhood Exploration

The purpose of this section is to show that our idea to calculate the move gain in constant time is fruitful. To this end, we consider the following variations of the VNS2 algorithm:
  • VNS2b obtained from VNS2 by replacing the equations of Proposition 1 with Equations (7) and (9);
  • VNS2c obtained from VNS2 by replacing Equation (15) with Equation (14);
  • VNS2d obtained from VNS2 by replacing the equations of Propositions 1 and 2 with Equations (7), (9), and (14).
Since the VNS2 algorithm employs fast neighborhood exploration procedures, it was expected that the above-listed alternative versions could not improve on the results obtained by VNS2. Our experiment confirmed this expectation. As in Section 5.1, we ran the algorithms on problem instances of size $n \le 100$. The results of solving these instances are reported in Figure 12 and Figure 13. The differences shown in these figures for each VNS2 version were obtained in the same way as those in Figure 11. We note that all versions of VNS2 managed to find the best solution for the first five instances in the dataset, and therefore results are only provided for p-30-5 and all instances with $n \ge 40$. Figure 12 depicts the performance differences calculated for the $F^*$ values, and Figure 13 shows those obtained for the $\bar F$ values. As is evident from the figures, VNS2 produced better solutions than VNS2b, VNS2c, and VNS2d for all problem instances. The average values of $F_{\mathrm{VNS2b}} - F_{\mathrm{VNS2}}$, $F_{\mathrm{VNS2c}} - F_{\mathrm{VNS2}}$, and $F_{\mathrm{VNS2d}} - F_{\mathrm{VNS2}}$ in Figure 12 are 1146.54, 1305.63, and 1580.42, respectively, and those in Figure 13 are 1045.29, 1322.69, and 1686.44, respectively. As expected, VNS2d obtained worse solutions than VNS2b and VNS2c. In general, it can be concluded that the use of the proposed neighborhood exploration procedures has a large impact on the performance of the variable neighborhood search method for solving the DSRFLP.

6. Concluding Remarks

In this paper, we developed a variable neighborhood search algorithm for solving the dynamic single row facility layout problem. The effectiveness of this algorithm strongly depends on the quality of the adopted LS strategy. The main contribution of this work is an LS algorithm based on fast neighborhood exploration procedures developed for the swap and insertion neighborhood structures. Provided that the number of planning periods is a constant, these procedures have $O(n^2)$ time complexity. In one version of VNS, a starting solution is generated by performing a short run of a VNS procedure for solving the SRFLP; an instance of the latter problem is constructed from the given instance of the DSRFLP. The other version of VNS starts with a randomly generated solution.
Both VNS versions were numerically compared against the SA approach of Şahin et al. [2], which is the state-of-the-art algorithm for the DSRFLP. Computational experiments were conducted on problem instances of size up to 200 facilities and 3 or 5 planning periods. The results indicate that VNS with a heuristically constructed initial solution outperformed the SA algorithm. The superiority of VNS over SA is more pronounced for DSRFLP instances of a size greater than 100.
Additional experiments were carried out to show the effectiveness of the proposed LS procedure. A conclusion was reached that it is advantageous to explore the swap neighborhood, and not to restrict LS to performing insertion operations only. Another conclusion is that our methods to calculate the move gain largely outperformed traditional techniques.
As a general conclusion, we may note that the DSRFLP is a very difficult combinatorial optimization problem, much harder than the SRFLP. Our experience is that, for the large instances in our test suite, the best solutions obtained are likely not the best possible, and finding improved solutions using VNS may require a large amount of CPU time. An obvious direction for further research is to develop new powerful metaheuristic-based algorithms for solving the DSRFLP. One idea is to combine SA and VNS into a single approach; this hybridization strategy has been successfully applied to several permutation-based problems, for example, the profile minimization problem [58]. Another promising idea is to develop evolutionary algorithms for the DSRFLP. Such algorithms may use the proposed LS procedure as a means of search intensification. In general, due to its computational hardness and simple formulation, the DSRFLP can serve as a good candidate problem for the performance evaluation of newly introduced metaheuristic optimization methods.
Another possible area of future research is the design and implementation of metaheuristic algorithms for dynamic versions of other facility layout problems. In particular, effort could be devoted to a dynamic formulation of the multi-row facility layout problem. Loop layout problems are another example for which dynamic models can be considered. Finally, research could also be directed towards new optimization techniques for the dynamic parallel row ordering problem.

Author Contributions

Conceptualization, G.P.; methodology, G.P. and A.O.; software, G.P. and J.P.; validation, G.P., A.O. and J.P.; formal analysis, J.P.; investigation, G.P. and J.P.; resources, A.O.; writing—original draft preparation, G.P., A.O. and J.P.; writing—review and editing, G.P., A.O. and J.P.; supervision, G.P.; project administration, A.O. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Keller, B.; Buscher, U. Single row layout models. Eur. J. Oper. Res. 2015, 245, 629–644.
2. Şahin, R.; Niroomand, S.; Durmaz, E.D.; Molla-Alizadeh-Zavardehi, S. Mathematical formulation and hybrid meta-heuristic solution approaches for dynamic single row facility layout problem. Ann. Oper. Res. 2020, 295, 313–336.
3. Amaral, A.R.S. An exact approach to the one-dimensional facility layout problem. Oper. Res. 2008, 56, 1026–1033.
4. Amaral, A.R.S. A new lower bound for the single row facility layout problem. Discret. Appl. Math. 2009, 157, 183–190.
5. Amaral, A.R.S.; Letchford, A.N. A polyhedral approach to the single row facility layout problem. Math. Program. 2013, 141, 453–477.
6. Anjos, M.F.; Vannelli, A. Computing globally optimal solutions for single-row layout problems using semidefinite programming and cutting planes. INFORMS J. Comput. 2008, 20, 611–617.
7. Hungerländer, P.; Rendl, F. A computational study and survey of methods for the single-row facility layout problem. Comput. Optim. Appl. 2013, 55, 1–20.
8. Samarghandi, H.; Eshghi, K. An efficient tabu algorithm for the single row facility layout problem. Eur. J. Oper. Res. 2010, 205, 98–105.
9. Kothari, R.; Ghosh, D. Tabu search for the single row facility layout problem using exhaustive 2-opt and insertion neighborhoods. Eur. J. Oper. Res. 2013, 224, 93–100.
10. Datta, D.; Amaral, A.R.S.; Figueira, J.R. Single row facility layout problem using a permutation-based genetic algorithm. Eur. J. Oper. Res. 2011, 213, 388–394.
11. Ozcelik, F. A hybrid genetic algorithm for the single row layout problem. Int. J. Prod. Res. 2012, 50, 5872–5886.
12. Kothari, R.; Ghosh, D. An efficient genetic algorithm for single row facility layout. Optim. Lett. 2014, 8, 679–690.
13. Kothari, R.; Ghosh, D. Insertion based Lin-Kernighan heuristic for single row facility layout. Comput. Oper. Res. 2013, 40, 129–136.
14. Ou-Yang, C.; Utamima, A. Hybrid estimation of distribution algorithm for solving single row facility layout problem. Comput. Ind. Eng. 2013, 66, 95–103.
15. Kothari, R.; Ghosh, D. A scatter search algorithm for the single row facility layout problem. J. Heuristics 2014, 20, 125–142.
16. Palubeckis, G. Fast local search for single row facility layout. Eur. J. Oper. Res. 2015, 246, 800–814.
17. Rubio-Sánchez, M.; Gallego, M.; Gortázar, F.; Duarte, A. GRASP with path relinking for the single row facility layout problem. Knowl. Based Syst. 2016, 106, 1–13.
18. Guan, J.; Lin, G. Hybridizing variable neighborhood search with ant colony optimization for solving the single row facility layout problem. Eur. J. Oper. Res. 2016, 248, 899–909.
19. Palubeckis, G. Single row facility layout using multi-start simulated annealing. Comput. Ind. Eng. 2017, 103, 1–16.
20. Ning, X.; Li, P. A cross-entropy approach to the single row facility layout problem. Int. J. Prod. Res. 2018, 56, 3781–3794.
21. Atta, S.; Sinha Mahapatra, P.R. Population-based improvement heuristic with local search for single-row facility layout problem. Sādhanā 2019, 44, 222.
22. Cravo, G.L.; Amaral, A.R.S. A GRASP algorithm for solving large-scale single row facility layout problems. Comput. Oper. Res. 2019, 106, 49–61.
23. Yeh, W.-C.; Lai, C.-M.; Ting, H.-Y.; Jiang, Y.; Huang, H.-P. Solving single row facility layout problem with simplified swarm optimization. In Proceedings of the 13th International Conference on Natural Computation, Fuzzy Systems and Knowledge Discovery (ICNC-FSKD), Guilin, China, 29–31 July 2017; pp. 267–270.
24. Krömer, P.; Platoš, J.; Snášel, V. Solving the single row facility layout problem by differential evolution. In Proceedings of the 2020 Genetic and Evolutionary Computation Conference (GECCO '20), Cancún, Mexico, 8–12 July 2020; ACM: New York, NY, USA, 2020.
25. Di Bari, G.; Baioletti, M.; Santucci, V. An experimental evaluation of the algebraic differential evolution algorithm on the single row facility layout problem. In Proceedings of the 2020 Genetic and Evolutionary Computation Conference Companion (GECCO '20 Companion), Cancún, Mexico, 8–12 July 2020; ACM: New York, NY, USA, 2020.
26. Sun, X.; Chou, P.; Koong, C.-S.; Wu, C.-C.; Chen, L.-R. Optimizing 2-opt-based heuristics on GPU for solving the single-row facility layout problem. Future Gener. Comput. Syst. 2022, 126, 91–109.
27. Anjos, M.F.; Vieira, M.V.C. Layout on a single row. In Facility Layout. EURO Advanced Tutorials on Operational Research; Springer: Cham, Switzerland, 2021.
28. Kalita, Z.; Datta, D. A constrained single-row facility layout problem. Int. J. Adv. Manuf. Technol. 2018, 98, 2173–2184.
29. Liu, S.; Zhang, Z.; Guan, C.; Zhu, L.; Zhang, M.; Guo, P. An improved fireworks algorithm for the constrained single-row facility layout problem. Int. J. Prod. Res. 2021, 59, 2309–2327.
30. Keller, B. Construction heuristics for the single row layout problem with machine-spanning clearances. INFOR Inf. Syst. Oper. Res. 2019, 57, 32–55.
31. Amaral, A.R.S. A parallel ordering problem in facilities layout. Comput. Oper. Res. 2013, 40, 2930–2939.
32. Yang, X.; Cheng, W.; Smith, A.E.; Amaral, A.R.S. An improved model for the parallel row ordering problem. J. Oper. Res. Soc. 2020, 71, 475–490.
33. Ahonen, H.; de Alvarenga, A.G.; Amaral, A.R.S. Simulated annealing and tabu search approaches for the corridor allocation problem. Eur. J. Oper. Res. 2014, 232, 221–233.
34. Kalita, Z.; Datta, D.; Palubeckis, G. Bi-objective corridor allocation problem using a permutation-based genetic algorithm hybridized with a local search technique. Soft Comput. 2019, 23, 961–986.
35. Zhang, Z.; Mao, L.; Guan, C.; Zhu, L.; Wang, Y. An improved scatter search algorithm for the corridor allocation problem considering corridor width. Soft Comput. 2020, 24, 461–481.
36. Fischer, A.; Fischer, F.; Hungerländer, P. New exact approaches to row layout problems. Math. Program. Comput. 2019, 11, 703–754.
37. Herrán, A.; Colmenar, J.M.; Duarte, A. An efficient variable neighborhood search for the space-free multi-row facility layout problem. Eur. J. Oper. Res. 2021, 295, 893–907.
38. Gong, J.; Zhang, Z.; Liu, J.; Guan, C.; Liu, S. Hybrid algorithm of harmony search for dynamic parallel row ordering problem. J. Manuf. Syst. 2021, 58, 159–175.
39. Guan, C.; Zhang, Z.; Zhu, L.; Liu, S. Mathematical formulation and a hybrid evolution algorithm for solving an extended row facility layout problem of a dynamic manufacturing system. Robot. Comput.-Integr. Manuf. 2022, 78, 102379.
40. Rosenblatt, M.J. The dynamics of plant layout. Manage. Sci. 1986, 32, 76–86.
41. Balakrishnan, J.; Cheng, C.H. Genetic search and the dynamic layout problem. Comput. Oper. Res. 2000, 27, 587–593.
42. Balakrishnan, J.; Cheng, C.H.; Conway, D.G.; Lau, C.M. A hybrid genetic algorithm for the dynamic plant layout problem. Int. J. Prod. Econ. 2003, 86, 107–120.
43. McKendall, A.R.; Shang, J.; Kuppusamy, S. Simulated annealing heuristics for the dynamic facility layout problem. Comput. Oper. Res. 2006, 33, 2431–2444.
44. Baykasoglu, A.; Dereli, T.; Sabuncu, I. An ant colony algorithm for solving budget constrained and unconstrained dynamic facility layout problems. Omega 2006, 34, 385–396.
45. Zouein, P.P.; Kattan, S. An improved construction approach using ant colony optimization for solving the dynamic facility layout problem. J. Oper. Res. Soc. 2021.
46. McKendall, A.R.; Shang, J. Hybrid ant systems for the dynamic facility layout problem. Comput. Oper. Res. 2006, 33, 790–803.
47. Şahin, R.; Türkbey, O. A new hybrid tabu-simulated annealing heuristic for the dynamic facility layout problem. Int. J. Prod. Res. 2009, 47, 6855–6873.
48. McKendall, A.R.; Liu, W.-H. New tabu search heuristics for the dynamic facility layout problem. Int. J. Prod. Res. 2012, 50, 867–878.
49. Hosseini-Nasab, H.; Emami, L. A hybrid particle swarm optimisation for dynamic facility layout problem. Int. J. Prod. Res. 2013, 51, 4325–4335.
50. Turanoğlu, B.; Akkaya, G. A new hybrid heuristic algorithm based on bacterial foraging optimization for the dynamic facility layout problem. Expert Syst. Appl. 2018, 98, 93–104.
51. Zhu, T.; Balakrishnan, J.; Cheng, C.H. Recent advances in dynamic facility layout research. INFOR Inf. Syst. Oper. Res. 2018, 56, 428–456.
52. Hosseini-Nasab, H.; Fereidouni, S.; Fatemi Ghomi, S.M.T.; Fakhrzad, M.B. Classification of facility layout problems: A review study. Int. J. Adv. Manuf. Technol. 2018, 94, 957–977.
53. Mladenović, N.; Hansen, P. Variable neighborhood search. Comput. Oper. Res. 1997, 24, 1097–1100.
54. Hansen, P.; Mladenović, N.; Moreno Pérez, J.A. Variable neighbourhood search: Methods and applications. 4OR 2008, 6, 319–360.
55. Schiavinotto, T.; Stützle, T. A review of metrics on permutations for search landscape analysis. Comput. Oper. Res. 2007, 34, 3143–3153.
56. Baioletti, M.; Milani, A.; Santucci, V. Variable neighborhood algebraic differential evolution: An application to the linear ordering problem with cumulative costs. Inf. Sci. 2020, 507, 37–52.
57. Zaefferer, M.; Stork, J.; Bartz-Beielstein, T. Distance measures for permutations in combinatorial efficient global optimization. In Lecture Notes in Computer Science, Proceedings of the Parallel Problem Solving from Nature—PPSN XIII, Ljubljana, Slovenia, 13–17 September 2014; Bartz-Beielstein, T., Branke, J., Filipič, B., Smith, J., Eds.; Springer: Cham, Switzerland, 2014; Volume 8672, pp. 373–383.
58. Palubeckis, G. A variable neighborhood search and simulated annealing hybrid for the profile minimization problem. Comput. Oper. Res. 2017, 87, 83–97.
Figure 1. Example layout with three planning periods.
Figure 2. Relocating facility 1 from position 3 to position 5 during planning period 2.
Figure 3. Interchanging facility s with its left neighbor v.
Figure 4. Average deviation F avdev versus parameter ρ .
Figure 5. Average deviation F avdev versus parameter z min .
Figure 6. Average deviation F avdev versus parameter θ .
Figure 7. Average deviation F avdev versus parameter β .
Figure 8. Objective function values achieved by SA-R, VNS1, and VNS2 on four DSRFLP instances with n ≤ 100: (a) p-70-3; (b) p-80-5; (c) p-90-3; and (d) p-100-5.
Figure 9. Convergence speed of SA-R, VNS1, and VNS2: (a) p-100-5; and (b) p-200-3.
Figure 10. Objective function values achieved by SA-R, VNS1, and VNS2 on four DSRFLP instances with n > 100 : (a) p-120-5; (b) p-150-3; (c) p-180-5; and (d) p-200-3.
Figure 11. Difference in the solution values between VNS2a (that is, VNS2 with no swap operator) and the VNS2 algorithm.
Figure 12. Difference in solution values between the VNS2 algorithm and VNS2 variations VNS2b, VNS2c, and VNS2d: the case of the F * values.
Figure 13. Difference in solution values between the VNS2 algorithm and VNS2 variations VNS2b, VNS2c, and VNS2d: the case of the F ¯ values.
Table 1. Main notations used in this paper.

Notation: Description
H: Set of facilities
n: Number of facilities (n = |H|)
m: Number of planning periods
s, u, v: Indices used for facilities (s, u, v ∈ H)
t: Index of planning periods
L_s: Length of facility s
ϕ_tsu: Material flow between facilities s and u in period t
ψ_su: Cost of transferring a unit of material per distance unit between facilities s and u
w_tsu: Material flow cost between facilities s and u in period t
x_ts: Center coordinate of facility s in period t
d_t(s, u): Distance between the centers of facilities s and u in period t (d_t(s, u) = |x_ts − x_tu|)
r_ts: Rearrangement cost of facility s at the beginning of period t
I(p_{t−1}, p_t): Set of facilities whose location during period t differs from that during period t − 1
w_su: Material flow cost between facilities s and u in the SRFLP instance
λ⁺_uv: Half-sum of the lengths of facilities u and v
λ⁻_uv: Half-difference of the lengths of facilities u and v
p_t: Layout (permutation of facilities) during period t
p = {p_1, …, p_m}: Solution (layout plan) of the DSRFLP
Π_m: Set of all feasible solutions
F(p): Objective function of the DSRFLP
F*: Objective function value of the best solution
p*: Best solution
F̄: Average objective function value
F_avdev: Average deviation of the objective function value from the reference value
c_tq: Sum of flow costs between the first q facilities and the remaining n − q facilities during period t
G(u, t, x): Change in rearrangement cost incurred by placing facility u at location x during period t
x_ts(l): Center coordinate of facility s when it is inserted at position l in permutation p_t
Ñ_z(p): Interchange neighborhood of depth z of solution p
N(p): Insertion neighborhood of solution p
T_lim: Maximum time limit for algorithm execution
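Using these notations, the objective F(p) is the total material handling cost over all periods plus the rearrangement cost of facilities relocated between consecutive periods. The following Python sketch makes this concrete; the data structures (flow costs keyed by unordered facility pairs, per-period rearrangement cost maps) are illustrative assumptions rather than the paper's implementation.

```python
from itertools import combinations

def centers(perm, lengths):
    """Center coordinate x_ts of each facility for one period's layout."""
    x, pos = {}, 0.0
    for s in perm:
        x[s] = pos + lengths[s] / 2.0
        pos += lengths[s]
    return x

def dsrflp_objective(layout_plan, lengths, flow_cost, rearr_cost):
    """F(p): sum over periods of material handling cost w_tsu * d_t(s, u),
    plus rearrangement costs r_ts for facilities relocated between periods."""
    total, prev_x = 0.0, None
    for t, perm in enumerate(layout_plan):
        x = centers(perm, lengths)
        for s, u in combinations(perm, 2):
            total += flow_cost[t][frozenset((s, u))] * abs(x[s] - x[u])
        if prev_x is not None:
            # a changed center coordinate stands in for membership in I(p_{t-1}, p_t)
            total += sum(rearr_cost[t][s] for s in perm if x[s] != prev_x[s])
        prev_x = x
    return total
```

Here a facility is counted as rearranged whenever its center coordinate changes between consecutive periods, matching the set I(p_{t−1}, p_t) above.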
Table 2. Best results of VNS1, VNS2, and SA-R for smaller-size instances.

Instance | F* (SA-R [2]) | F* (VNS1) | F* (VNS2)
p-10-3 | 11,705.04 | 11,705.04 | 11,705.04
p-10-5 | 36,542.40 | 36,542.40 | 36,542.40
p-20-3 | 135,854.28 | 135,836.72 | 135,836.72
p-20-5 | 237,005.63 | 235,702.67 | 236,978.68
p-30-3 | 544,000.65 | 543,651.77 | 543,652.57
p-30-5 | 764,617.91 | 760,389.18 | 760,734.14
p-40-3 | 1,107,986.23 | 1,105,774.53 | 1,106,968.57
p-40-5 | 1,984,973.00 | 1,977,976.69 | 1,986,389.10
p-50-3 | 2,386,487.04 | 2,383,485.99 | 2,383,192.05
p-50-5 | 3,306,923.09 | 3,298,814.02 | 3,298,351.85
p-60-3 | 4,049,420.99 | 4,044,397.12 | 4,043,248.10
p-60-5 | 6,628,448.99 | 6,618,996.14 | 6,618,257.45
p-70-3 | 6,373,466.80 | 6,367,620.87 | 6,368,145.96
p-70-5 | 10,871,763.60 | 10,846,635.18 | 10,856,120.03
p-80-3 | 9,076,332.09 | 9,068,545.51 | 9,067,154.61
p-80-5 | 15,671,590.42 | 15,647,210.09 | 15,644,661.53
p-90-3 | 13,308,092.00 | 13,293,451.79 | 13,289,264.47
p-90-5 | 19,678,980.89 | 19,642,957.24 | 19,644,995.21
p-100-3 | 20,432,892.88 | 20,406,423.73 | 20,408,596.84
p-100-5 | 32,439,455.78 | 32,376,234.74 | 32,380,043.46
Average | 7,452,326.99 | 7,440,117.57 | 7,441,041.94
Table 3. Average results of VNS1, VNS2, and SA-R for smaller-size instances (the time is in seconds).

Instance | F̄ (SA-R [2]) | Time | F̄ (VNS1) | Time | F̄ (VNS2) | Time
p-10-3 | 11,705.04 | 4 | 11,799.45 | <1 | 11,705.04 | <1
p-10-5 | 36,542.40 | 5 | 36,733.37 | <1 | 36,542.40 | <1
p-20-3 | 136,171.10 | 88 | 135,873.75 | 23 | 135,836.72 | 8
p-20-5 | 238,345.93 | 89 | 236,178.93 | 42 | 236,978.68 | 10
p-30-3 | 544,574.88 | 420 | 544,238.64 | 98 | 544,018.00 | 70
p-30-5 | 766,188.96 | 356 | 761,610.41 | 269 | 760,892.22 | 269
p-40-3 | 1,109,630.59 | 1161 | 1,108,546.91 | 627 | 1,107,759.88 | 922
p-40-5 | 1,986,043.95 | 840 | 1,980,224.46 | 837 | 1,986,508.74 | 1329
p-50-3 | 2,387,389.06 | 985 | 2,385,376.52 | 691 | 2,383,939.81 | 985
p-50-5 | 3,311,036.01 | 947 | 3,303,230.87 | 1262 | 3,299,774.87 | 1429
p-60-3 | 4,051,257.88 | 806 | 4,050,500.71 | 1244 | 4,044,226.34 | 1447
p-60-5 | 6,632,170.27 | 1089 | 6,622,791.04 | 1392 | 6,621,988.34 | 1437
p-70-3 | 6,374,969.44 | 1611 | 6,369,823.46 | 1646 | 6,369,592.16 | 2138
p-70-5 | 10,875,948.72 | 1806 | 10,853,532.66 | 2662 | 10,868,110.32 | 3093
p-80-3 | 9,079,287.72 | 1326 | 9,075,464.00 | 2553 | 9,068,322.47 | 2928
p-80-5 | 15,675,848.78 | 2682 | 15,656,867.55 | 3172 | 15,650,223.41 | 3223
p-90-3 | 13,316,487.34 | 1661 | 13,301,726.25 | 2830 | 13,291,835.26 | 2929
p-90-5 | 19,687,737.35 | 1394 | 19,659,048.67 | 3263 | 19,658,507.91 | 3330
p-100-3 | 20,437,133.73 | 1892 | 20,424,806.76 | 3004 | 20,409,215.19 | 2633
p-100-5 | 32,448,789.19 | 1563 | 32,398,327.96 | 3511 | 32,394,880.52 | 3320
Average | 7,455,362.92 | 1036 | 7,445,835.12 | 1456 | 7,444,042.91 | 1575
Table 4. Comparison of VNS2 vs. VNS1 for smaller-size instances.

Objective Function Value | #wins | #ties | #losses | p-Value | Statistical Significance
Best | 7 | 3 | 10 | >0.2 | No
Average | 17 | 0 | 3 | <0.025 | Yes
Table 5. Best results of VNS1, VNS2, and SA-R for larger-size instances.

Instance | F* (SA-R [2]) | F* (VNS1) | F* (VNS2)
p-110-3 | 24,426,599.17 | 24,412,874.74 | 24,352,004.50
p-110-5 | 44,394,528.07 | 44,394,371.91 | 44,219,359.24
p-120-3 | 32,482,176.39 | 32,492,680.33 | 32,396,003.86
p-120-5 | 51,521,726.72 | 51,584,284.77 | 51,323,374.62
p-130-3 | 43,120,283.15 | 43,112,625.52 | 43,087,223.83
p-130-5 | 67,282,522.47 | 67,318,731.00 | 67,042,244.45
p-140-3 | 52,320,015.69 | 52,301,770.34 | 52,248,995.04
p-140-5 | 80,707,509.29 | 80,793,212.05 | 80,397,730.12
p-150-3 | 66,670,525.25 | 66,685,317.12 | 66,523,798.21
p-150-5 | 110,264,451.42 | 110,259,943.20 | 109,963,594.81
p-160-3 | 75,165,009.15 | 75,141,439.40 | 74,998,806.00
p-160-5 | 143,801,397.84 | 143,699,497.42 | 143,375,603.20
p-170-3 | 90,472,044.23 | 90,527,250.24 | 90,356,544.17
p-170-5 | 140,374,941.98 | 140,448,285.02 | 140,007,379.91
p-180-3 | 100,099,923.84 | 100,029,906.48 | 99,830,173.50
p-180-5 | 187,124,398.15 | 186,995,144.87 | 186,599,842.96
p-190-3 | 129,442,600.58 | 129,392,623.33 | 129,243,420.34
p-190-5 | 223,475,308.22 | 223,303,403.18 | 222,881,668.11
p-200-3 | 159,575,676.04 | 159,458,326.81 | 159,332,994.59
p-200-5 | 256,724,704.71 | 256,210,522.64 | 256,019,625.59
Average | 103,972,317.12 | 103,928,110.52 | 103,710,019.35
Table 6. Average results of VNS1, VNS2, and SA-R for larger-size instances (the time is in seconds).

Instance | F̄ (SA-R [2]) | Time | F̄ (VNS1) | Time | F̄ (VNS2) | Time
p-110-3 | 24,438,826.09 | 1369 | 24,463,207.92 | 3531 | 24,357,516.47 | 3176
p-110-5 | 44,421,154.10 | 1626 | 44,472,395.59 | 3532 | 44,247,884.94 | 3278
p-120-3 | 32,498,608.51 | 1658 | 32,539,824.79 | 3525 | 32,427,541.45 | 3381
p-120-5 | 51,551,795.94 | 1235 | 51,656,799.26 | 3554 | 51,361,807.61 | 3394
p-130-3 | 43,137,728.60 | 1739 | 43,193,211.39 | 3543 | 43,129,976.04 | 3299
p-130-5 | 67,299,514.49 | 1496 | 67,404,238.70 | 3565 | 67,102,663.67 | 3379
p-140-3 | 52,348,807.27 | 1624 | 52,416,130.51 | 3509 | 52,255,781.98 | 3269
p-140-5 | 80,760,725.04 | 1928 | 80,875,690.25 | 3546 | 80,425,261.19 | 3509
p-150-3 | 66,695,736.62 | 1763 | 66,739,463.80 | 3546 | 66,549,419.06 | 3367
p-150-5 | 110,327,901.90 | 2040 | 110,406,239.53 | 3569 | 110,076,167.87 | 3346
p-160-3 | 75,188,758.41 | 2226 | 75,216,760.29 | 3518 | 75,057,013.61 | 3380
p-160-5 | 143,875,834.70 | 1459 | 143,886,630.77 | 3562 | 143,565,705.90 | 3396
p-170-3 | 90,533,129.91 | 2125 | 90,602,175.38 | 3547 | 90,386,770.54 | 3299
p-170-5 | 140,422,925.58 | 1657 | 140,499,276.62 | 3553 | 140,057,222.24 | 3219
p-180-3 | 100,119,474.17 | 2367 | 100,173,614.69 | 3570 | 99,907,725.90 | 3443
p-180-5 | 187,200,857.72 | 2420 | 187,149,490.63 | 3559 | 186,717,534.74 | 3425
p-190-3 | 129,490,670.18 | 1919 | 129,535,688.36 | 3525 | 129,380,952.41 | 3214
p-190-5 | 223,579,637.94 | 1709 | 223,546,537.36 | 3557 | 223,003,189.23 | 3456
p-200-3 | 159,609,830.84 | 1799 | 159,576,299.00 | 3561 | 159,434,020.05 | 3044
p-200-5 | 256,817,931.64 | 1475 | 256,646,600.68 | 3559 | 256,200,019.90 | 3214
Average | 104,015,992.48 | 1782 | 104,050,013.78 | 3546 | 103,782,208.74 | 3324
Table 7. Comparison of SA-R vs. VNS1 for larger-size instances.

Objective Function Value | #wins | #losses | p-Value | Statistical Significance
Best | 7 | 13 | >0.2 | No
Average | 16 | 4 | <0.025 | Yes
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

MDPI and ACS Style

Palubeckis, G.; Ostreika, A.; Platužienė, J. A Variable Neighborhood Search Approach for the Dynamic Single Row Facility Layout Problem. Mathematics 2022, 10, 2174. https://doi.org/10.3390/math10132174


