Article

Dynamic Optimization of Chemical Processes Based on Modified Sailfish Optimizer Combined with an Equal Division Method

1
School of Electronic Information, Guangxi University for Nationalities, Nanning 530006, China
2
Institute of Artificial Intelligence, Guangxi University for Nationalities, Nanning 530006, China
3
Guangxi Key Laboratory of Hybrid Computation and IC Design Analysis, Guangxi University for Nationalities, Nanning 530006, China
*
Author to whom correspondence should be addressed.
Processes 2021, 9(10), 1806; https://doi.org/10.3390/pr9101806
Submission received: 15 September 2021 / Revised: 1 October 2021 / Accepted: 5 October 2021 / Published: 12 October 2021

Abstract:
The optimal solution of the chemical dynamic optimization problem is the basis of automatic control operation in the chemical process, which can reduce energy consumption, increase production efficiency, and maximize economic benefit. In this paper, a modified sailfish optimizer (MSFO) combined with an equal division method is proposed for solving chemical dynamic optimization problems. Based on the basic sailfish optimizer, firstly, the tent chaotic mapping strategy is introduced to disturb the initialization of sailfish and sardine populations to avoid the loss of population diversity. Secondly, an adaptive linear reduction strategy of attack parameters is proposed to enhance the exploration and exploitation ability of sailfish. Thirdly, the updating formula of sardine position is modified, and the global optimal solution is used to attract all sardine positions, which can avoid the premature phenomenon of the algorithm. Finally, the MSFO is applied to solve six classical optimization cases of chemical engineering to evaluate its feasibility. The experimental results are analyzed and compared with other optimization methods to prove the superiority of the MSFO in solving chemical dynamic optimization problems.

1. Introduction

With the aggravation of environmental and energy problems, reducing energy consumption, increasing production efficiency, and maximizing economic benefits in the chemical process have become the focus of industry and academia. However, the chemical production model has the characteristics of a complex structure, strong nonlinearity, and uncertainty. Srinivasan et al. [1] classified the optimal control of chemical processes as a dynamic optimization problem, so dynamic optimization technology has become an effective means of solving chemical process control problems.
The traditional methods to solve chemical dynamic optimization problems are control vector parameterization (CVP) [2], the iterative dynamic programming method (IDP) [3,4], and intelligent optimization methods. Among them, the meta-heuristic intelligent optimization algorithm, which is simple, easy to implement, and has a strong searching ability, has attracted the most scholarly attention. The chemical dynamic optimization problem is transformed into an NLP problem by a discretization method, which gives it a form and characteristics similar to a static optimization problem. Therefore, many intelligent optimization methods suitable for chemical dynamic optimization have been put forward. Pham [5] introduced smoothing and rotation into a genetic algorithm to improve the diversity of the population and solved the chemical dynamic optimization problem. Chiou and Wang [6] introduced a migration strategy and an acceleration mechanism to improve the convergence speed of the algorithm and proposed a hybrid differential evolution algorithm for chemical dynamic optimization. Zhang et al. [7] proposed a sequential ant colony algorithm for optimizing chemical dynamic problems. An adaptive cuckoo algorithm for chemical optimization based on adaptive technology was introduced by Yuanbin et al. [8]. Jiang et al. [9] proposed an efficient multi-objective artificial raindrop algorithm to solve the dynamic optimization problem of the chemical process. Shi et al. [10] combined the particle swarm optimization algorithm with the CVP method and proposed the PSO-CVP algorithm to solve the chemical optimization problem.
The sailfish optimizer (SFO) is a new meta-heuristic swarm intelligence optimization algorithm proposed by Shrivastva and Das [11] in 2019, which has been applied successfully in many fields. Ghosh et al. [12] proposed an improved binary sailfish optimizer (BSF) based on adaptive β-hill climbing to solve the feature selection problem. Li et al. [13] proposed a novel discrete binary multi-objective SFO (MOSFO) hybridized with a genetic algorithm, which was applied in an expert recommendation system. Hammouti et al. [14] presented a modified sailfish optimizer (MSFO) that adds a local search technique at the last stage of the SFO algorithm (ATK < 0.5) to solve the berth allocation problem. Li et al. [15] proposed an improved sailfish optimization algorithm (ISFO) that was successfully applied to the hybrid dynamic economic emission scheduling of power systems.
In this paper, a modified sailfish optimizer called MSFO is proposed to address the problems of dependence on population initialization, loss of population diversity, low precision, and premature convergence in the SFO. Firstly, the tent chaotic mapping strategy is introduced to disturb the initialization of the sailfish and sardine populations to avoid the decline of population diversity. Secondly, the linearly decreasing attack parameter of the sailfish is improved, and an adaptive linearly decreasing attack parameter strategy is proposed to enhance the exploration and exploitation ability. Thirdly, the updating formula for the sardine positions is improved, and the global optimal solution is used to attract all sardine positions to avoid the premature convergence of the algorithm. The modified sailfish optimizer (MSFO) is then combined with an equal division method to solve six classical dynamic optimization cases of chemical engineering. The experimental results show that the MSFO is feasible and advantageous in solving dynamic optimization problems of chemical engineering.
The paper is organized as follows: Section 2 describes dynamic optimization problems and the equal division method. Section 3 briefly reviews the classic sailfish algorithm and its flow. Section 4 sets out the main contribution of this paper, the modified sailfish optimizer (MSFO). Finally, the optimization performance of the MSFO is verified on six classical dynamic optimization cases of chemical engineering in Section 5.

2. Problem Description and Equal Division Method

2.1. DOP Description

The goal of the dynamic optimization problem (DOP) is to make the optimal control quantity continuously approach the current optimal operating point and, in turn, to maximize the economic benefit while satisfying the safety requirements and constraints [16]. The research object of dynamic optimization is generally a time-varying system, which is mathematically called a dynamic system. The dynamic model in an industrial process is formulated as a differential-algebraic equation (DAE) system [17]. The differential equations describe the dynamic characteristics of the system, such as the conservation of mass, momentum, and energy, and the algebraic equations enforce the physical and thermodynamic equilibrium relations. The standard form of a dynamic optimization problem is as follows:
min J = Φ[x(t_f), t_f] + ∫_{t_0}^{t_f} φ[t, x(t), u(t)] dt  (1)
s.t. ẋ(t) = f[t, x(t), u(t)], x(t_0) = x_0, u_{min} ≤ u(t) ≤ u_{max}, t ∈ [t_0, t_f]  (2)
where x(t) is the state vector and x_0 is its initial value at time t_0. u(t) and J are the control vector and the performance index, respectively. f and φ are the differential-equation and algebraic constraints, respectively. t_0 and t_f are the initial and terminal times, and u_{min} and u_{max} are the lower and upper bounds of the control vector. Solving the dynamic optimization problem means finding the optimal control u(t), t ∈ [t_0, t_f], such that the performance index J is minimized while the process satisfies the constraints.

2.2. Equal Division Method

The direct method to solve the dynamic optimization problem is to transform the standard dynamic optimization problem into a finite parameter optimization problem or an NLP problem by means of numerical analysis. The main discrete method is control vector parameterization (CVP) [2], the core idea of which is to discretize the control variables in the time domain and then to approximate each part with a finite number of basis functions.
As shown in Figure 1, this paper divides the time domain t ∈ [t_0, t_f] of the standard dynamic optimization problem into N sub-time regions of length d = (t_f − t_0)/N by using the piecewise-constant method of CVP. Meanwhile, the Runge–Kutta method is used to solve the state trajectory on each sub-time region (σ_k, k = 1, 2, …, N). Finally, according to the theory of function approximation, the optimal control variables are obtained by a linear combination approximation of the segment-wise optimal results produced by the intelligent optimizer (MSFO) proposed in this paper.
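The discretize-and-integrate step above can be sketched in a few lines; the function names (`rk4_step`, `simulate_piecewise_constant`) and the classical fourth-order Runge–Kutta variant are illustrative choices, not the authors' code:

```python
import numpy as np

def rk4_step(f, t, x, u, h):
    """One classical 4th-order Runge-Kutta step for dx/dt = f(t, x, u)."""
    k1 = f(t, x, u)
    k2 = f(t + h / 2, x + h / 2 * k1, u)
    k3 = f(t + h / 2, x + h / 2 * k2, u)
    k4 = f(t + h, x + h * k3, u)
    return x + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def simulate_piecewise_constant(f, x0, u_params, t0, tf, steps_per_segment=10):
    """Integrate dx/dt = f(t, x, u) with the control held constant on each of
    the N equal segments of length d = (tf - t0)/N (piecewise-constant CVP)."""
    N = len(u_params)
    d = (tf - t0) / N
    h = d / steps_per_segment
    x = np.asarray(x0, dtype=float)
    t = t0
    for u_k in u_params:                 # one decision variable per segment
        for _ in range(steps_per_segment):
            x = rk4_step(f, t, x, u_k, h)
            t += h
    return x
```

An optimizer such as the MSFO then searches over the N-vector `u_params`; the objective is evaluated by integrating the model and reading the performance index off the final state.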

3. Sailfish Optimizer

Studies have found that group hunting is one of the social behaviors in groups of arthropods, fish, birds, and mammals [11]. Compared with individual hunting, group hunting saves the energy consumption of the hunters while achieving the goal of capturing prey. The sailfish optimizer is a new meta-heuristic algorithm based on a group-hunting-inspired simulation of sailfish alternately attacking sardines. The problem variables are the positions of the sailfish and sardines in the search space. The SFO moves as many sailfish and sardines randomly as possible: the sailfish are used to save the current optimal solution, while the sardines are used to search the space for the best solution. The mathematical model of the algorithm is as follows:
Randomly initialize the population positions of the sailfish and sardines: each sailfish and sardine is assigned a randomized position X_{SF(i)}^k and X_{SD(j)}^k, respectively, where i ∈ {sailfish}, j ∈ {sardines}, and k is the iteration number. Mathematically, the updated position of a sailfish is given by Equation (3):
X_{SF(i)}^{k+1} = X_{elite}^k − μ_k × (rand(0,1) × (X_{elite}^k + X_{injure}^k)/2 − X_{SF(i)}^k)  (3)
μ_k = 2 × rand(0,1) × P_d − P_d  (4)
P_d = 1 − Num_{SF}/(Num_{SF} + Num_{SD})  (5)
where X_{SF(i)}^k is the previous position of the i-th sailfish, and μ_k is a coefficient generated at the k-th iteration, derived from Equation (4). To preserve the optimal solution of each iteration, the sailfish and sardine with the best fitness values are called the "elite" sailfish and the "injured" sardine, respectively, and their positions at iteration k are denoted X_{elite}^k and X_{injure}^k. P_d is the density of the prey sardines, which represents the number of prey in each iteration, given by Equation (5). Num_{SF} and Num_{SD} represent the population sizes of the sailfish and sardines, related by Num_{SF} = Num_{SD} × percent, where percent is the initial sailfish population as a percentage of the sardine population.
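Equations (3)–(5) translate directly into code; this is a minimal sketch with illustrative names, assuming `rand(0,1)` denotes a fresh uniform draw at each use:

```python
import numpy as np

def sailfish_update(x_sf, x_elite, x_injure, num_sf, num_sd, rng):
    """One sailfish position update following Equations (3)-(5)."""
    pd = 1 - num_sf / (num_sf + num_sd)      # prey density P_d, Eq. (5)
    mu = 2 * rng.random() * pd - pd          # coefficient mu_k in [-P_d, P_d], Eq. (4)
    # Eq. (3): move relative to the midpoint of the elite sailfish and injured sardine
    return x_elite - mu * (rng.random() * (x_elite + x_injure) / 2 - x_sf)
```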
A new position of the sardines at iteration k can be calculated as given in Equation (6):
X_{SD(j)}^{k+1} = rand(0,1) × (X_{elite}^k − X_{SD(j)}^k + ATK)  (6)
ATK = A × (1 − (2 × iter × ε))  (7)
α = Num_{SD} × ATK  (8)
β = d × ATK  (9)
where X_{SD(j)}^k represents the previous position of the j-th sardine, and iter is the number of the current iteration. ATK is the sailfish's attack power, which decreases linearly at each iteration as shown in Equation (7), with A = 4 and ε = 0.001. If ATK < 0.5, the number of sardines that update their position (α) and the number of their variables that are updated (β) are calculated by Equations (8) and (9), where d is the number of variables; if ATK ≥ 0.5, all sardines update all of their positions.
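A small sketch of the attack-power schedule and of the α/β bookkeeping in Equations (7)–(9); rounding α and β down to integers is an assumption here, since the paper does not state how fractional counts are handled:

```python
def attack_power(iteration, A=4.0, eps=0.001):
    """Linearly decreasing attack power ATK, Eq. (7)."""
    return A * (1 - 2 * iteration * eps)

def update_counts(atk, num_sd, d):
    """Numbers of sardines (alpha) and variables (beta) to update, Eqs. (8)-(9).
    When ATK >= 0.5 every sardine updates all of its variables."""
    if atk >= 0.5:
        return num_sd, d
    return int(num_sd * atk), int(d * atk)   # assumed truncation to integers
```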
In order to simulate the process of the sailfish catching sardines, if f(SD_j) < f(SF_i), the position of sailfish i is replaced by the position of sardine j, as given by Equation (10):
X_{SF(i)}^k = X_{SD(j)}^k if f(SD_j) < f(SF_i)  (10)
Figure 2 shows the specific implementation flow chart of the sailfish optimizer.

4. Modified Sailfish Optimizer (MSFO)

In the search space, the sailfish optimizer has a strong searching ability, but because of the complexity of dynamic optimization, in some cases it falls into local optima, skips over the optimal solution, delivers low precision, or converges prematurely. The main reason is that the sailfish and sardine populations depend on their initialization and lose diversity in late iterations, so the exploration and exploitation of the algorithm are insufficient. Based on a thorough investigation of the sailfish optimizer and of dynamic optimization, this paper proposes the following strategies to improve the performance of the algorithm.

4.1. Tent Chaos Initialization Policy

Chaotic mapping systems exhibit stochastic behavior and nonlinear motion, combining randomness with determinism [18]. Chaos theory is the study of dynamic systems whose interesting property is that a minor change in the system affects the system as a whole [19]. Common chaotic mapping systems are the Gauss map, the Chebyshev map, the logistic map, the tent map, etc. [20]. Studies have found that, given an initial value of the chaotic system, initializing the population of a meta-heuristic algorithm according to the chaotic mapping relation generates a chaotic sequence that can effectively maintain the diversity of the population and overcome the premature convergence problem of classical optimization algorithms [21].
The population initialization of sailfish and sardines in the SFO algorithm is a stochastic strategy, so the search for the optimal solution relies too heavily on the initial population. To improve the global search ability of the algorithm and avoid the decline in the diversity of the sailfish and sardine populations in later searches, we propose to initialize the populations of sailfish and sardines with the tent chaotic operator. The tent map is defined by the following equation [21]:
T_{i+1} = T_i / 0.7 if T_i ≤ 0.7; (1 − T_i) / 0.3 if T_i > 0.7  (11)
where T_i is the value of the sequence at the i-th iteration (T_i ∈ (0, 1)). Figure 3 shows the tent chaotic sequence distribution of T_n with the initial value T_0 = 0.9 over 200 iterations.
Figure 3 shows that the tent map has small and unstable periods, so it can effectively improve the diversity of the population and avoid the loss of diversity in the sailfish and sardine populations late in the iteration. We initialize the sailfish and sardine populations using Equation (12):
X_{SF(i+1)} = T_{i+1} × (X_{ub} − X_{lb}) + X_{lb}; X_{SD(j+1)} = T_{j+1} × (X_{ub} − X_{lb}) + X_{lb}  (12)
where X S F ( i + 1 ) and X S D ( j + 1 ) are the position values of individual sailfish and sardines. X u b and X l b are the upper and lower boundaries of the individual sailfish and sardines in all dimensions.
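Equations (11) and (12) can be sketched as follows (illustrative function names; the initial value T_0 = 0.9 follows Figure 3):

```python
import numpy as np

def tent_sequence(n, t0=0.9):
    """Generate n values of the tent chaotic map, Eq. (11)."""
    seq = np.empty(n)
    t = t0
    for i in range(n):
        t = t / 0.7 if t <= 0.7 else (1 - t) / 0.3
        seq[i] = t
    return seq

def tent_init_population(pop_size, dim, lb, ub, t0=0.9):
    """Map the chaotic sequence into the search box [lb, ub], Eq. (12)."""
    chaos = tent_sequence(pop_size * dim, t0).reshape(pop_size, dim)
    return chaos * (ub - lb) + lb
```

Each population (sailfish and sardines) is drawn from the chaotic sequence rather than from an independent uniform generator, which spreads the initial individuals more evenly over the box.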

4.2. Adaptive Linear Decrease Attack Parameter

In the basic SFO algorithm, the sailfish attack value ATK = A × (1 − (2 × iter × ε)) is a linearly decreasing parameter that plays an important role in shifting the sardines from the exploration phase to the exploitation phase, where ε is a fixed value. When the dynamic optimization problem involves many local optima, sardines (X_{SD(j)}^k) searching according to the basic attack parameter ATK easily fall into a local optimum, making it difficult to find or approach the optimal solution. Accordingly, to solve the dynamic optimization problem and enhance the exploration and exploitation ability of the sardines in the search space, we improved the attack parameter formula, Equation (7), of the SFO algorithm and propose the adaptive linearly decreasing attack parameter of Equation (13):
ATK = A − (A × (iter / T))  (13)
where iter/T is the adaptive decreasing coefficient of the attack degree, which increases with the number of iterations. Figure 4 shows the linearly decreasing contrast between the original attack parameter and the adaptive attack parameter proposed in this paper over 1000 iterations. Compared with the original attack parameter, the proposed parameter keeps a higher value while decreasing, so that the sardines can conduct a longer global search in the search space, avoid local optima, and better enhance the exploration and exploitation ability of the algorithm.
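The two schedules are easy to compare side by side; as the sketch below shows, the adaptive schedule of Equation (13) stays at or above the original Equation (7) for the whole run when T = 1000 (the original reaches zero already at iteration 1/(2ε) = 500):

```python
def atk_original(iteration, A=4.0, eps=0.001):
    """Basic SFO attack power, Eq. (7)."""
    return A * (1 - 2 * iteration * eps)

def atk_adaptive(iteration, T, A=4.0):
    """Adaptive linearly decreasing attack power, Eq. (13)."""
    return A - A * (iteration / T)
```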

4.3. Modifying the Search Equation for Sardines

As shown in Figure 2, in the basic SFO algorithm flow, the sardine population determines its number of replacements based on the attack parameter ATK, thus balancing sardine exploration and exploitation capabilities. After Equation (6) adopts the adaptive attack parameter proposed in this article, the accuracy of the obtained optimal solution suffers, owing to the complexity of the dynamic optimization problem and to individual sardines being updated only around the current elite sailfish position (X_{elite}^k). Therefore, we modified the position updating formula of the sardines and propose an attraction strategy based on the global optimal solution and the injured sardine (X_{injure}^k). The proposed formula for updating sardine positions is as follows:
X_{SD(j)}^{k+1} = rand(0,1) × ATK × [X_{SD(j)}^k + (X_{injure}^k − X_{SD(j)}^k) + (G_{best}(k) − X_{SD(j)}^k)]  (14)
where G_{best}(k) is the global optimal solution recorded at iteration k. The attraction of the current global optimal solution and of the injured sardine to every sardine's position provides a greedy search direction, so the sardine population can escape from local optima, the premature convergence of the algorithm is avoided, the exploration and exploitation capacity is further enhanced, and the efficiency of the algorithm is improved.
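Equation (14) in code form (illustrative names; `rand(0,1)` is assumed to be a single uniform draw per sardine):

```python
import numpy as np

def sardine_update(x_sd, x_injure, g_best, atk, rng):
    """Modified sardine position update, Eq. (14): each sardine is pulled toward
    both the injured sardine and the recorded global best solution."""
    step = x_sd + (x_injure - x_sd) + (g_best - x_sd)
    return rng.random() * atk * step
```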
The pseudo-codes of the modified sailfish optimizer (MSFO) are given in the following table.
Algorithm 1 Pseudo-code of the MSFO
Inputs: The population size Pop and maximum number of iterations T.
Outputs: The global optimal solution.
Initialize the parameter (A = 4) and the populations of sailfish and sardines using Equations (11) and (12).
Calculate the objective fitness of each sailfish and sardine.
Find the global extreme point, elite sailfish, and injured sardine, respectively.
while (k < T) do
  for each sailfish
    Calculate μ_k using Equation (4) and update the position of the sailfish using Equation (3)
  end for
  Calculate ATK using Equation (13)
  for each sardine
    Update the position of all sardines using Equation (14)
  end for
  Check and correct the new positions based on the boundaries of the variables.
  Calculate the fitness of all sailfish and sardines.
  Sort the fitness values of the sailfish and sardines.
  if the fitness of a sardine is better than that of a sailfish
    Replace the sailfish with the injured sardine using Equation (10).
    Remove the hunted sardine from the population.
    Update the best sailfish and best sardine.
  end if
  Update the global extreme point, elite sailfish, and injured sardine, respectively.
  k = k + 1
end while
return the global optimal solution.
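Algorithm 1 can be condensed into a runnable sketch. This is an illustrative reconstruction under stated assumptions (minimization, one tent sequence split between the two populations, and the worst sailfish being the one replaced in Equation (10)), not the authors' implementation:

```python
import numpy as np

def msfo(obj, dim, lb, ub, pop=30, T=200, A=4.0, percent=0.2, seed=0):
    """Condensed MSFO sketch (minimization), following Algorithm 1."""
    rng = np.random.default_rng(seed)
    n_sf = max(2, int(pop * percent))        # sailfish count = percent of sardines
    n_sd = pop

    def tent(n, t0=0.9):                     # tent chaotic sequence, Eq. (11)
        out, t = np.empty(n), t0
        for i in range(n):
            t = t / 0.7 if t <= 0.7 else (1 - t) / 0.3
            out[i] = t
        return out

    chaos = tent((n_sf + n_sd) * dim)        # chaotic initialization, Eq. (12)
    sf = chaos[: n_sf * dim].reshape(n_sf, dim) * (ub - lb) + lb
    sd = chaos[n_sf * dim :].reshape(n_sd, dim) * (ub - lb) + lb
    f_sf = np.array([obj(p) for p in sf])
    f_sd = np.array([obj(p) for p in sd])
    g_fit = min(f_sf.min(), f_sd.min())
    g_best = (sf[f_sf.argmin()] if f_sf.min() <= f_sd.min() else sd[f_sd.argmin()]).copy()

    for k in range(T):
        elite = sf[f_sf.argmin()].copy()     # elite sailfish
        injure = sd[f_sd.argmin()].copy()    # injured sardine

        pd = 1 - n_sf / (n_sf + n_sd)        # Eq. (5)
        for i in range(n_sf):
            mu = 2 * rng.random() * pd - pd  # Eq. (4)
            sf[i] = elite - mu * (rng.random() * (elite + injure) / 2 - sf[i])  # Eq. (3)

        atk = A - A * (k / T)                # adaptive attack power, Eq. (13)
        for j in range(n_sd):
            sd[j] = rng.random() * atk * (sd[j] + (injure - sd[j]) + (g_best - sd[j]))  # Eq. (14)

        np.clip(sf, lb, ub, out=sf)          # keep positions inside the bounds
        np.clip(sd, lb, ub, out=sd)
        f_sf = np.array([obj(p) for p in sf])
        f_sd = np.array([obj(p) for p in sd])

        # Eq. (10): a sardine fitter than the worst sailfish replaces it
        j, i = f_sd.argmin(), f_sf.argmax()
        if f_sd[j] < f_sf[i]:
            sf[i], f_sf[i] = sd[j].copy(), f_sd[j]

        if f_sf.min() < g_fit:               # record the global optimum
            g_fit = f_sf.min()
            g_best = sf[f_sf.argmin()].copy()

    return g_best, g_fit
```

For example, minimizing a 3-dimensional sphere function with `pop = 30` and `T = 200` drives the best fitness close to zero.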

4.4. Performance Analysis of Modified Sailfish Optimizer

To test the performance of the MSFO, we conducted numerical experiments on benchmark functions. The operating system used in the experiments was Windows 10 Professional 64-bit, the experimental software was MATLAB R2020a, and the processor was an Intel(R) Core(TM) i5-5257U.

4.4.1. Benchmark Functions

We selected six well-known benchmark functions, shown in Table 1, which can be broadly divided into unimodal functions (F1~F3) and multimodal functions (F4~F6). The solution space of a unimodal function contains a single global optimum, so these functions can be used to examine the exploitation capability and convergence speed of the algorithm. The solution space of a multimodal function contains one or more global optima among many local optima, so these functions can be used to evaluate the exploration ability of the algorithm and its capacity to avoid local optima.

4.4.2. Parameter Settings

In order to prove the superiority of MSFO, we chose several famous meta-heuristic algorithms to compare, including the basic sailfish optimization algorithm (SFO) [11], particle swarm optimization algorithm (PSO) [22], and butterfly optimization algorithm (BOA) [23]. To obtain satisfactory results and ensure fairness, the number of the search population used in the experiment was 30 and the maximum number of iterations was 1000. The specific parameters of the relevant algorithms were as follows:
PSO: the learning factors C 1 and C 2 were 2 and the weight ω was 0.9.
BOA: the sensory modality parameter c was 0.01, the power exponent α was 0.1, and the switch probability from global search to local search was 0.8.
SFO: initial attack value A was 4, sailfish ratio was 0.3.
MSFO: initial attack value A was 4, sailfish ratio was 0.2.

4.4.3. Statistical Result Comparison

In this section, each algorithm was run 50 times independently to reduce the effect of randomness on the results. Table 2 shows the Best, Worst, Mean, and standard deviation (Std.) values for each algorithm; Rank sorts the algorithms by their best values. The experimental results show that the optimization performance of the MSFO proposed in this paper is clearly superior to that of the other three algorithms on both the unimodal test functions (F1~F3) and the multimodal functions (F4~F6), and its results have the highest accuracy and the first rank. According to the standard deviation results, the MSFO is also the most stable of the four algorithms. This illustrates the effectiveness of the three modification strategies in improving the performance of the SFO algorithm.

4.4.4. Convergence Trajectory Comparison

Figure 5 and Figure 6 show the convergence trajectories of the four algorithms when optimizing the unimodal and multimodal test functions. It can be seen from Figure 5 that the MSFO converges to a more accurate optimal target value, and converges faster, than the other methods on the unimodal functions (F1~F3). It can be observed from Figure 6 that, in optimizing the multimodal test function F4, the MSFO effectively avoids the local minima and finds the optimal target value. Therefore, it can be concluded that the MSFO has a strong search ability to approximate the global optimal solution when optimizing functions with different characteristics.

4.4.5. Box Plot Analysis

Figure 7 plots the boxplots of the four algorithms over 50 independent runs on each benchmark function, which display the characteristic information of the target values (minimum, lower quartile, median, upper quartile, and maximum). As can be seen from the figure, the MSFO is better than the other three methods not only in the mean, minimum, and maximum but also in the range and standard deviation of the target values.

4.4.6. Wilcoxon p-Value Statistical Test

The Wilcoxon test is a classical nonparametric statistical hypothesis test, which is used here to compare the performance of different methods and is expressed by the p-value. Table 3 shows the p-values of MSFO versus SFO, MSFO versus PSO, and MSFO versus BOA. As can be seen from Table 3, in the comparisons on both the unimodal and multimodal test functions, the p-value is always less than 0.05, which confirms the statistically significant superiority of the MSFO.
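For reference, the two-sided Wilcoxon rank-sum p-value can be computed with the normal approximation as below (a plain-Python sketch assuming no tied samples; in practice a library routine such as MATLAB's ranksum serves the same purpose):

```python
import math

def rank_sum_p_value(a, b):
    """Two-sided Wilcoxon rank-sum test (normal approximation, no ties assumed)."""
    n1, n2 = len(a), len(b)
    pooled = sorted([(v, 0) for v in a] + [(v, 1) for v in b])
    # W = sum of the ranks (starting at 1) held by sample a in the pooled ordering
    W = sum(rank for rank, (_, grp) in enumerate(pooled, start=1) if grp == 0)
    mean = n1 * (n1 + n2 + 1) / 2
    var = n1 * n2 * (n1 + n2 + 1) / 12
    z = (W - mean) / math.sqrt(var)
    # two-sided p-value from the standard normal distribution
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
```

A p-value below 0.05 rejects the hypothesis that the two sets of 50 run results come from the same distribution.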

5. Application MSFO to Dynamic Optimization Problems in Chemical Processes

In order to test the performance of MSFO in solving dynamic optimization problems, six classical chemical dynamic optimization problems were selected in this paper for testing. The concentration of the reaction product was used as a performance indicator to evaluate the MSFO algorithm in dealing with dynamic optimization problems in chemical processes. The experimental results were compared with those in the related literature.

5.1. Experimental Flow and Parameter Settings

The specific flow of solving chemical dynamic optimization problems using the proposed MSFO algorithm was as follows:
(1)
The time domain of the chemical dynamic optimization problem was divided into N equal parts using the equal division method.
(2)
Runge–Kutta Method was used for numerical solutions.
(3)
The MSFO algorithm was used to optimize the chemical case.
The sailfish and sardine position variables represented the optimal control variables, and the fitness value represented the product concentration. Each chemical test case was run independently 20 times. The specific experimental parameters are given in Table 4, and the flow chart of the experiment is shown in Figure 8.

5.2. Test Case and Analysis

5.2.1. Case 1: Benchmark Dynamic Optimization Problem

This case is a classic chemical dynamic optimization problem with an analytical solution, which is an unconstrained mathematical system. The case involves two state variables and has an analytical solution and a global optimum. Many researchers have tested this case to illustrate their approaches [24,25,26]. The mathematical model can be described as follows:
min J(u) = x_2(t_f)
s.t. dx_1/dt = u, dx_2/dt = x_1^2 + u^2, x(0) = [1, 0]^T, −1 ≤ u ≤ 0, t_f = 1
where x 1 , x 2 are state variables, u is the control vector, and t f is terminal time.
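As a consistency check on this model, Pontryagin's conditions give the analytical optimum u*(t) = −sinh(1 − t)/cosh(1) with J* = tanh(1) ≈ 0.761594, matching the value reported for OCT [24]; simulating this control with the equal-division scheme of Section 2.2 reproduces J* to within the discretization error. The code below is an illustrative sketch, not the authors' implementation:

```python
import math

def case1_rhs(t, x, u):
    """Case 1 dynamics: dx1/dt = u, dx2/dt = x1^2 + u^2."""
    x1, x2 = x
    return [u, x1 * x1 + u * u]

def objective(u_of_t, N=50, steps=10):
    """J = x2(tf) under a piecewise-constant control (midpoint sample per segment),
    integrated with classical RK4 over N equal divisions of [0, 1]."""
    x = [1.0, 0.0]
    d = 1.0 / N
    h = d / steps
    t = 0.0
    for _ in range(N):
        u = u_of_t(t + d / 2)
        for _ in range(steps):
            k1 = case1_rhs(t, x, u)
            k2 = case1_rhs(t + h / 2, [x[i] + h / 2 * k1[i] for i in range(2)], u)
            k3 = case1_rhs(t + h / 2, [x[i] + h / 2 * k2[i] for i in range(2)], u)
            k4 = case1_rhs(t + h, [x[i] + h * k3[i] for i in range(2)], u)
            x = [x[i] + h / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i]) for i in range(2)]
            t += h
    return x[1]

# analytical optimal control from Pontryagin's conditions (illustrative check)
u_star = lambda t: -math.sinh(1 - t) / math.cosh(1)
```

With N = 50 segments the piecewise-constant approximation of u*(t) yields J within about 10^-4 of tanh(1), which is the gap an optimizer such as the MSFO tries to close.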

5.2.2. Case 2: Batch Reactor Consecutive Reaction

The schematic of a typical fed-batch reactor is presented in Figure 9 for illustration.
The objective is to find the optimal temperature profile that maximizes the target product B at the final time of operation in a batch reactor where the consecutive reaction A → B → C takes place. The mathematical model can be described as follows [25]:
max J(T) = C_B(t_f)
s.t. dC_A/dt = −4000 exp(−2500/T) C_A^2
dC_B/dt = 4000 exp(−2500/T) C_A^2 − 6.2 × 10^5 exp(−5000/T) C_B
C(0) = [1, 0]^T, 298 ≤ T ≤ 398, t_f = 1
where C A , C B are the concentration of A and B, respectively. t f is the final moment of reaction. T is the temperature.

5.2.3. Case 3: Parallel Reactions in Tubular Reactor

This case describes two parallel chemical reactions, A → B and A → C, taking place in a tubular reactor. The reaction trajectory of the optimal control variable is to be determined such that the concentration of the target product B is maximized. The problem is formulated as follows [27]:
max J(t_f) = x_2(t_f)
s.t. dx_1/dt = −[u(t) + 0.5u^2(t)] x_1(t), dx_2/dt = u(t) x_1(t), x(0) = [1, 0]^T, 0 ≤ u(t) ≤ 5, t_f = 1
where x_1(t) and x_2(t) are the concentrations of A and B, respectively, J is the performance index, u(t) is the control variable, and t_f is the terminal time.

5.2.4. Case 4: Catalyst Mixing Problem

This problem considers a plug flow reactor packed with two catalysts. The objective is to obtain the optimal catalyst concentration profile that maximizes the yield of the product C for a fixed reactor length where the reaction A ⇄ B → C occurs. The dynamic optimization problem can be described as follows [28]:
max J(z_f) = 1 − x_A(z_f) − x_B(z_f)
s.t. dx_A/dz = u(z)[10 x_B(z) − x_A(z)]
dx_B/dz = −u(z)[10 x_B(z) − x_A(z)] − [1 − u(z)] x_B(z)
x(0) = [1, 0]^T, 0 ≤ u(z) ≤ 1, z_f = 12 m
where x_A and x_B denote the mole fractions of substances A and B, and J is the mole fraction of C. z is the position along the tubular reactor, and u(z) is the mixing fraction of catalyst A.

5.2.5. Case 5: Plug Flow Tubular Reactor

This case study considers a plug flow reactor, as studied by Reddy and Husain [29], Luus et al. [30], and Mekarapiruk and Luus [31], to find the optimal control trajectory of the normalized coolant flow rate u(t) that maximizes the normalized concentration of the desired product. The mathematical statement is as follows:
max J = x_1(t_f)
s.t. dx_1/dt = (1 − x_1) k_1 − x_1 k_2
dx_2/dt = 300[(1 − x_1) k_1 − x_1 k_2] − u(x_2 − 290)
k_1 = 1.7536 × 10^5 × exp(−1.1374 × 10^4 / (1.9872 x_2))
k_2 = 2.4885 × 10^10 × exp(−2.2748 × 10^4 / (1.9872 x_2))
x(0) = [0, 380]^T, 0 ≤ u ≤ 0.5, t_f = 5, x_2 ≤ 460
where x_1 denotes the normalized concentration of the desired product, x_2 is the temperature, and k_1 and k_2 are rate constants. t_f is the terminal time.

5.2.6. Case 6: Fed Batch Bioreactor

This case study deals with the optimal production of a secreted protein in a fed-batch reactor. It was originally formulated by Park and Ramirez [32] in 1988. The objective is to maximize the secreted heterologous protein by a yeast strain in a fed-batch culture. The dynamic model accounts for host-cell growth, gene expression, and the secretion of expressed polypeptides. The mathematical statement is as follows:
max J(t_f) = z_1(t_f) z_5(t_f)
s.t. dz_1/dt = g_1 (z_2 − z_1) − (u/z_5) z_1
dz_2/dt = g_2 z_3 − (u/z_5) z_2
dz_3/dt = g_3 z_3 − (u/z_5) z_3
dz_4/dt = −7.3 g_3 z_3 + (u/z_5)(20 − z_4)
dz_5/dt = u
z_1(0) = 0, z_2(0) = 0, z_3(0) = 1, z_4(0) = 5, z_5(0) = 1
g_1 = 4.75 g_3 / (0.12 + g_3), g_2 = [z_4 / (0.1 + z_4)] exp(−5 z_4), g_3 = 21.87 z_4 / [(z_4 + 0.4)(z_4 + 62.5)], t_f = 15
where z_1 and z_2 are, respectively, the concentrations of the secreted protein and the total protein, z_3 is the culture cell density, z_4 is the substrate concentration, z_5 is the holdup volume, u is the nutrient (glucose) feed rate, and J is the mass of protein produced. The final time is t_f = 15, the control variable is constrained by 0 ≤ u ≤ 2, and g_1, g_2, g_3 are protein secretion rate constants.

5.3. Results and Discussions

5.3.1. Analysis of the Experimental Results of Case 1

Table 5 shows the analytical solution and the global optimum results of MSFO versus OTC, ACO-CP, IACO-CVP, IGA-CVP, IWO-CVP, and ADIWO-CVP in the benchmark dynamic optimization problem. The “~” in Table 5 means that the reference did not provide a value. Figure 10 is the optimal control variable trajectory with N = 20 and N = 50, Figure 11 is the optimal state variable trajectory, and Figure 12 is the iteration curve of the optimal result of the benchmark dynamic optimization problem.
In the open literature, the analytical solution and the global optimum is J = 0.761594156 by adopting classic optimal control theory (OCT) [24] for Case 1. Rajesh et al. [25] got a value of 0.76238 using the ant colony framework with control profile approximation (ACO-CP). Asgari and Pishvaie [26] obtained a value of 0.76160 using region reduction strategy and control vector parameterization with an ant colony optimization algorithm (IACO-CVP). Qian et al. [33] got a value of 0.761595 using a control vector parameterization method with an iterative genetic algorithm (IGA-CVP). Tian et al. [34] obtained values of 0.76159793 (N = 50) and 0.76159417 (N = 50) using control vector parameterization based invasive weed optimization (IWO-CVP) and control vector parameterization based adaptive invasive weed optimization (ADIWO-CVP).
The global optimum obtained by the MSFO after 20 independent runs was 0.76165319 (N = 20) and 0.761594199 (N = 50). Tian et al. [34] reached a value of 0.76165319 by control vector parameterization based adaptive invasive weed optimization (ADIWO-CVP), which is the best literature result close to the analytical result. The results of MSFO were close to the ADIWO-CVP and superior to those of other methods, which shows the validity of the proposed algorithm.

5.3.2. Analysis of the Experimental Results of Case 2

Table 6 shows the optimal target product concentration results of MSFO and 10 different methods for optimizing batch reactor continuous reaction problem. The “~” in Table 6 means that the reference did not provide a value. Figure 13 is the optimal temperature control variable curve with N = 20 and N = 50, Figure 14 is the optimal state variable curve, and Figure 15 is the iteration curve for the optimal result.
In Table 6, Jiang et al. [9] used an efficient multi-objective artificial raindrop algorithm (MOARA) to obtain a value of 5.54 × 10−2. Shi et al. [10] reached 0.6105359 using optimal control strategies combining PSO and control vector parameterization (PSO-CVP). Tian et al. [34] obtained 0.61079180 using control vector parameterization based invasive weed optimization (IWO-CVP). Zhang and Chen [35] got 0.6100 (N = 10) and 0.6104 (N = 20) using an iterative ant colony algorithm (IACA). Peng et al. [36] obtained 0.6101 (N = 10), 0.610426 (N = 20), and values in the range 0.610781–0.610789 (N = 100) using an improved knowledge evolution algorithm (IKEA). Zhou and Liu [37] got 0.6107847 and 0.6107850 using the control parameterization-based particle swarm approach (CP-PSO) and the control parameterization-based adaptive particle swarm approach (CP-APSO). Liu et al. [38] obtained 0.6101 (N = 10), 0.610454 (N = 20), and values in the range 0.610779–0.610787 (N = 100) using an improved knowledge-based cultural algorithm (IKBCA). Xu et al. [39] proposed an improved seagull optimization algorithm combined with an unequal division method (ISOA) and obtained 0.6101 (N = 10), 0.61053 (N = 25), and 0.6107724 (N = 50).
As can be seen from the results in Table 6, the optimal target product concentration found by MSFO increases with the number of partitions. After 20 independent runs, MSFO obtained 0.610118 (N = 10), 0.610537 (N = 25), and 0.610771–0.610785 (N = 50). These results are close to the best literature value, the range 0.610779–0.610787 (N = 100) achieved by the IKBCA of Liu et al. [38]. The result shows the feasibility and superiority of solving dynamic optimization problems with MSFO.

5.3.3. Analysis of the Experimental Results of Case 3

Table 7 shows the product concentration results of MSFO versus CP-PSO, CP-APSO, ISOA, CVP, CVI, CPT, and MCB for the optimization of parallel reactions in a tubular reactor. The “~” in Table 7 means that the reference did not provide a value. Figure 16 shows the optimal control variable curves for N = 10 and N = 40, Figure 17 shows the optimal state variable trajectory, and Figure 18 shows the iteration curve of the optimal result.
As can be seen from the results in Table 7, Zhou and Liu [37] got 0.573543 and 0.573544 using the control parameterization-based particle swarm approach (CP-PSO) and the control parameterization-based adaptive particle swarm approach (CP-APSO). Xu et al. [39] obtained 0.572226 (N = 10) and 0.57348 (N = 35) using the improved seagull optimization algorithm (ISOA). Biegler [40] got 0.56910 (CVP) and 0.57322 (CVI) using successive quadratic programming and orthogonal collocation. Vassiliadis et al. [41] got 0.57353 using CPT. Banga and Seider [28] obtained 0.57353 using stochastic algorithms.
In this paper, the optimal target product concentration obtained by the MSFO after 20 independent runs was 0.572143 (N = 10) and 0.573212–0.574831 (N = 40). Comparing these with the literature, the optimal product concentration obtained by MSFO differs little from that of the other reported methods.

5.3.4. Analysis of the Experimental Results of Case 4

Table 8 shows the optimal results of MSFO versus IKEA, ISOA, STA, GA, TDE, and NDCVP-HGPSO for the catalyst mixing problem. Figure 19 shows the optimal control variable curves for N = 10 and N = 70, Figure 20 shows the optimal state variable curve, and Figure 21 shows the iteration curve of the optimal result.
In the open literature, Peng et al. [36] obtained 0.4757 (N = 20) and values in the range 0.47761–0.47768 (N = 100) using an improved knowledge evolution algorithm (IKEA). Xu et al. [39] obtained 0.47721 (N = 40) using the improved seagull optimization algorithm (ISOA). Huang et al. [42] applied both a genetic algorithm (GA) and control vector parameterization with the state transition algorithm (STA), obtaining 0.47260 (N = 5), 0.47363 (N = 10), and 0.47453 (N = 15) with each. Angira and Santosh [43] obtained 0.47527 (N = 20) and 0.47683 (N = 40) using a trigonometric differential evolution approach (TDE). Chen et al. [44] reached 0.47771 (N = 15) using nonuniform discretization-based control vector parameterization (NDCVP-HGPSO).
Experimental results show that the optimal solution of MSFO was 0.47562 when the number of partitions was N = 20, which is better than the TDE value of 0.47527 (N = 20) of Angira and Santosh [43] and the STA value of 0.47453 (N = 15) of Huang et al. [42]. When N = 70, the optimal solution of MSFO was stable between 0.477544 and 0.47760, which is better than STA, TDE, GA, and ISOA but slightly worse than IKEA and NDCVP-HGPSO. The experimental results further demonstrate the ability of MSFO to solve dynamic optimization problems.

5.3.5. Analysis of the Experimental Results of Case 5

Table 9 shows the optimal results of MSFO versus IPSO, SVTN, IDP, CMM, CGM, NU-CVP, S-CVP, and PWV-CVP for the plug flow tubular reactor optimization. Figure 22 shows the optimal control trajectory of the normalized coolant flow rate for N = 10 and N = 20, Figure 23 shows the optimal state variable curve, and Figure 24 shows the iteration curve of the optimal result.
In the open literature, Zhang et al. [45] used iterative multi-objective particle swarm optimization-based control vector parameterization (IPSO) to obtain 0.677219 (N = 20). Xiao and Liu [46] reached 0.677389 (N = 20) using an effective pseudospectral optimization approach with sparse variable time nodes (SVTN). Luus [47] used iterative dynamic programming (IDP) to obtain 0.67531 (N = 10). Ko [48] used the combination mode method (CMM) to obtain 0.7226 (N = 10). Reddy and Husain [29] got 0.7227 (N = 10) using the conjugate gradient method (CGM). Lei et al. [49] reached 0.77298 (N = 20) using a nonuniform control vector parameterization approach (NU-CVP). Wei et al. [50] reached 0.7234708 with both S-CVP and PWV-CVP, which they reported as the best literature result.
The global optimum obtained by the MSFO after 20 independent runs was 0.7226987 (N = 10) and 0.7234724 (N = 20). When N = 20, MSFO was slightly better than S-CVP and PWV-CVP and thus outperformed the other approaches mentioned in this article.

5.3.6. Analysis of the Experimental Results of Case 6

Table 10 shows the optimal results of MSFO versus VSACS, IKBCA, IKEA, TDE, NDCVP-HGPSO, FIDP, DCM-PSO, PADIWO-CVP, and GADIWO-CVP for the PR-b problem. The “~” in Table 10 means that the reference did not provide a value. Figure 25, Figure 26 and Figure 27 show the optimal control variable curves for N = 10, N = 20, and N = 100, and Figure 28 shows the optimal state variable curve.
In the open literature, Yuanbin et al. [8] got values of 32.18175–32.18246 (N = 10), 32.45614–32.45629 (N = 20), and 32.81001–32.81224 (N = 100) using an adaptive cuckoo search algorithm (VSACS). Tian et al. [34] got 32.68649 and 32.68720 using control vector parameterization based adaptive invasive weed optimization (PADIWO-CVP and GADIWO-CVP, respectively). Peng et al. [36] used an improved knowledge evolution algorithm (IKEA) to obtain values in the range 32.4225–32.6783 (N = 20). Liu et al. [38] obtained values in the ranges 32.4556–32.4561 (N = 20) and 32.7114–32.7180 (N = 150) using an improved knowledge-based cultural algorithm (IKBCA). Angira and Santosh [43] used a trigonometric differential evolution approach (TDE) to obtain 32.684494. Chen et al. [44] used nonuniform discretization-based control vector parameterization (NDCVP-HGPSO) to obtain 32.68712. Tholudur and Ramirez [51] obtained 32.47 using a modified iterative dynamic programming algorithm (FIDP). Wang and Li [52] obtained 32.68851327 (N = 10) and 32.68851335 (N = 20) using dynamic chaotic mutation based particle swarm optimization (DCM-PSO).
In this paper, the secreted heterologous protein obtained by the MSFO after 20 independent runs was in the ranges 32.18024–32.18197 (N = 10), 32.45332–32.45446 (N = 20), and 32.69236–32.71112 (N = 100). The comparison in Table 10 shows that MSFO is clearly better than TDE, FIDP, PADIWO-CVP, and GADIWO-CVP, although a gap remains between MSFO and the IKBCA and VSACS results. Because this test case contains many local extrema, it places high demands on the global search ability of the algorithm. The experimental results show that MSFO has clear advantages in solving dynamic optimization problems.

6. Conclusions

In this paper, a novel dynamic optimization method named MSFO, which integrates an equal division method into a modified sailfish optimizer, was proposed. To address the basic SFO's reliance on population initialization, its loss of population diversity in later iterations, its low solution accuracy, and its premature convergence, a tent chaotic mapping strategy was introduced to perturb the initialization of the sailfish and sardine populations, an adaptive linear reduction strategy for the attack parameter was proposed, and the sardine position update formula was modified. Through the equal division method, the infinite-dimensional optimization problem of a dynamic system is transformed into a finite-dimensional NLP, providing a computable fitness function for the subsequent optimization steps. Six chemical dynamic optimization cases were solved by MSFO. The experimental results showed that MSFO has good optimization ability and satisfactory solution precision. Consequently, the proposed MSFO can be applied to a wider scope of real-life DOPs.
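Two of the three modifications summarized above can be sketched compactly. The snippet below is an assumption-laden illustration, not the paper's exact formulas (which appear in its methods section): it uses the standard tent map for chaotic initialization and one plausible linear schedule for the attack parameter; the names `tent_init` and `attack_power` and the default A0 = 4 (from Table 4) are the only details taken from this excerpt.

```python
import random

def tent_sequence(n, x0=0.37):
    """Tent chaotic map: x_{k+1} = 2x_k if x_k < 0.5 else 2(1 - x_k).
    A reseed guard is needed because the mu = 2 tent map collapses to 0 in
    binary floating point after roughly 50 iterations (each step is a shift)."""
    xs, x = [], x0
    for _ in range(n):
        x = 2.0 * x if x < 0.5 else 2.0 * (1.0 - x)
        if x <= 0.0 or x >= 1.0:
            x = random.uniform(0.1, 0.9)  # restart the chaotic orbit
        xs.append(x)
    return xs

def tent_init(pop_size, dim, lb, ub, x0=0.37):
    """Chaotic population initialization: map tent values into [lb, ub]^dim,
    spreading sailfish/sardine individuals more evenly than uniform sampling."""
    seq = tent_sequence(pop_size * dim, x0)
    return [[lb + (ub - lb) * seq[i * dim + d] for d in range(dim)]
            for i in range(pop_size)]

def attack_power(t, T, A0=4.0):
    """One plausible *linear* reduction of the attack parameter from A0 down
    to 0 over T iterations, shifting the search from exploration to
    exploitation; the paper's exact adaptive schedule may differ."""
    return A0 * (1.0 - t / float(T))
```

The linear schedule makes early iterations take large, exploratory steps and late iterations take small, refining ones, which is the stated purpose of the adaptive attack-parameter strategy.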
The equal division of the time domain used here entails a large computational burden when fine discretizations are required. Future work will therefore combine the modified sailfish optimizer with a control vector parameterization based on unequal division of the time domain to solve chemical dynamic optimization problems.

Author Contributions

Conceptualization, Y.Z. and Y.M.; methodology, Y.Z.; software, Y.M.; validation, Y.M.; writing—original draft preparation, Y.Z.; writing—review and editing, Y.Z. and Y.M.; supervision, Y.M. Both authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Science Foundation of China under grant no. 21466008 and by the Project of the Natural Science Foundation of Guangxi Province under grant 2019GXNSFAA185017.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors acknowledge support by the National Science Foundation of China under grant no. 21466008 and by the Project of the Natural Science Foundation of Guangxi Province under grant 2019GXNSFAA185017.

Conflicts of Interest

The authors declare no potential conflict of interest.

References

  1. Srinivasan, B.; Palanki, S.; Bonvin, D. Dynamic Optimization of Batch Processes: I. Characterization of the Nominal Solution. Comput. Chem. Eng. 2003, 27, 1–26. [Google Scholar] [CrossRef]
  2. Pollard, G.P.; Sargent, R.W.H. Offline Computation of Optimum Controls for a Plate Distillation Column. Automatica 1970, 6, 59–76. [Google Scholar] [CrossRef]
  3. Luus, R. Optimization of Fed-Batch Fermentors by Iterative Dynamic Programming. Biotechnol. Bioeng. 1993, 41, 599–602. [Google Scholar] [CrossRef] [PubMed]
  4. Jing, S.; Zhu’an, C. Application of Iterative Dynamic Programming to Dynamic Optimization Problems. J. Chem. Ind. Eng. China 1999, 50, 125–129. [Google Scholar]
  5. Pham, Q.T. Dynamic Optimization of Chemical Engineering Processes by an Evolutionary Method. Comput. Chem. Eng. 1998, 22, 1089–1097. [Google Scholar] [CrossRef]
  6. Chiou, J.-P.; Wang, F.-S. A Hybrid Method of Differential Evolution with Application to Optimal Control Problems of a Bioprocess System. In Proceedings of the 1998 IEEE International Conference on Evolutionary Computation Proceedings, IEEE World Congress on Computational Intelligence (Cat. No. 98TH8360), Anchorage, AK, USA, 4–9 May 1998; pp. 627–632. [Google Scholar]
  7. Zhang, B.; Yu, H.; Chen, D. Sequential Optimization of Chemical Dynamic Problems by Ant-Colony Algorithm. J. Chem. Eng. Chin. Univ. 2006, 20, 120. [Google Scholar]
  8. Yuanbin, M.; Qiaoyan, Z.; Yanzhui, M.; Weijun, Y. Adaptive Cuckoo Search Algorithm and Its Application to Chemical Engineering Optimization Problem. Comput. Appl. Chem. 2015, 3, 292–294. [Google Scholar] [CrossRef]
  9. Jiang, Q.; Wang, L.; Lin, Y.; Hei, X.; Yu, G.; Lu, X. An Efficient Multi-Objective Artificial Raindrop Algorithm and Its Application to Dynamic Optimization Problems in Chemical Processes. Appl. Soft Comput. 2017, 58, 354–377. [Google Scholar] [CrossRef]
  10. Shi, B.; Yin, Y.; Liu, F. Optimal control strategies combined with PSO and control vector parameterization for batchwise chemical process. J. Chem. Ind. Eng. 2018, 70, 979–986. [Google Scholar]
  11. Srivastava, A.; Das, D.K. A Sailfish Optimization Technique to Solve Combined Heat And Power Economic Dispatch Problem. In Proceedings of the 2020 IEEE Students Conference on Engineering & Systems (SCES), Prayagraj, India, 10–12 July 2020; pp. 1–6. [Google Scholar]
  12. Ghosh, K.K.; Ahmed, S.; Singh, P.K.; Geem, Z.W.; Sarkar, R. Improved Binary Sailfish Optimizer Based on Adaptive β-Hill Climbing for Feature Selection. IEEE Access 2020, 8, 83548–83560. [Google Scholar] [CrossRef]
  13. Li, M.; Li, Y.; Chen, Y.; Xu, Y. Batch Recommendation of Experts to Questions in Community-Based Question-Answering with a Sailfish Optimizer. Expert Syst. Appl. 2021, 169, 114484. [Google Scholar] [CrossRef]
  14. Hammouti, I.E.; Lajjam, A.; Merouani, M.E.; Tabaa, Y. A Modified Sailfish Optimizer to Solve Dynamic Berth Allocation Problem in Conventional Container Terminal. Int. J. Ind. Eng. Comput. 2019, 10, 491–504. [Google Scholar] [CrossRef]
  15. Li, L.-L.; Shen, Q.; Tseng, M.-L.; Luo, S. Power System Hybrid Dynamic Economic Emission Dispatch with Wind Energy Based on Improved Sailfish Algorithm. J. Clean. Prod. 2021, 316, 128318. [Google Scholar] [CrossRef]
  16. Vicente, M.; Sayer, C.; Leiza, J.R.; Arzamendi, G.; Lima, E.L.; Pinto, J.C.; Asua, J.M. Dynamic Optimization of Non-Linear Emulsion Copolymerization Systems: Open-Loop Control of Composition and Molecular Weight Distribution. Chem. Eng. J. 2002, 85, 339–349. [Google Scholar] [CrossRef]
  17. Mitra, T. Introduction to dynamic optimization theory. In Optimization and Chaos; Springer: Berlin/Heidelberg, Germany, 2000; pp. 31–108. [Google Scholar]
  18. Liu, L.; Sun, S.Z.; Yu, H.; Yue, X.; Zhang, D. A Modified Fuzzy C-Means (FCM) Clustering Algorithm and Its Application on Carbonate Fluid Identification. J. Appl. Geophys. 2016, 129, 28–35. [Google Scholar] [CrossRef]
  19. Kellert, S.H. In the Wake of Chaos: Unpredictable Order in Dynamical Systems. Science 1995, 267, 550. [Google Scholar]
  20. Rather, S.A.; Bala, P.S. Swarm-Based Chaotic Gravitational Search Algorithm for Solving Mechanical Engineering Design Problems. World J. Eng. 2020, 17, 101–130. [Google Scholar] [CrossRef]
  21. Gandomi, A.H.; Yun, G.J.; Yang, X.-S.; Talatahari, S. Chaos-Enhanced Accelerated Particle Swarm Optimization. Commun. Nonlinear Sci. Numer. Simul. 2013, 18, 327–340. [Google Scholar] [CrossRef]
  22. Kennedy, J.; Eberhart, R. Particle Swarm Optimization. In Proceedings of the ICNN’95-International Conference on Neural Networks, Perth, WA, Australia, 27 November–1 December 1995; Volume 4, pp. 1942–1948. [Google Scholar]
  23. Arora, S.; Singh, S. Butterfly Optimization Algorithm: A Novel Approach for Global Optimization. Soft Comput. 2019, 23, 715–734. [Google Scholar] [CrossRef]
  24. Leonard, D.; Van Long, N.; Ngo, V.L. Optimal Control Theory and Static Optimization in Economics; Cambridge University Press: Cambridge, UK, 1992; ISBN 0-521-33746-1. [Google Scholar]
  25. Rajesh, J.; Gupta, K.; Kusumakar, H.S.; Jayaraman, V.K.; Kulkarni, B.D. Dynamic Optimization of Chemical Processes Using Ant Colony Framework. Comput. Chem. 2001, 25, 583–595. [Google Scholar] [CrossRef]
  26. Asgari, S.A.; Pishvaie, M.R. Dynamic Optimization in Chemical Processes Using Region Reduction Strategy and Control Vector Parameterization with an Ant Colony Optimization Algorithm. Chem. Eng. Technol. Ind. Chem. Plant Equip. Process. Eng. Biotechnol. 2008, 31, 507–512. [Google Scholar] [CrossRef]
  27. Westerterp, K.R.; Ptasinski, K.J. Safe Design of Cooled Tubular Reactors for Exothermic, Multiple Reactions; Parallel Reactions—I: Development of Criteria. Chem. Eng. Sci. 1984, 39, 235–244. [Google Scholar] [CrossRef]
  28. Banga, J.R.; Seider, W.D. Global optimization of chemical processes using stochastic algorithms. In State of The Art in Global Optimization; Springer: Berlin/Heidelberg, Germany, 1996; pp. 563–583. [Google Scholar]
  29. Reddy, K.V.; Husain, A. Computation of Optimal Control Policy with Singular Subarc. Can. J. Chem. Eng. 1981, 59, 557–559. [Google Scholar] [CrossRef]
  30. Luus, R. Application of Iterative Dynamic Programming to State Constrained Optimal Control Problems. Hung. J. Ind. Chem. 1991, 29, 245–254. [Google Scholar]
  31. Mekarapiruk, W.; Luus, R. Optimal Control of Inequality State Constrained Systems. Ind. Eng. Chem. Res. 1997, 36, 1686–1694. [Google Scholar] [CrossRef]
  32. Park, S.; Ramirez, W.F. Optimal Production of Secreted Protein in Fed-Batch Reactors. AIChE J. 1988, 34, 1550–1558. [Google Scholar] [CrossRef]
  33. Qian, F.; Sun, F.; Zhong, W.; Luo, N. Dynamic Optimization of Chemical Engineering Problems Using a Control Vector Parameterization Method with an Iterative Genetic Algorithm. Eng. Optim. 2013, 45, 1129–1146. [Google Scholar] [CrossRef]
  34. Tian, J.; Zhang, P.; Wang, Y.; Liu, X.; Chunhua, Y.; Lu, J.; Gui, W.; Sun, Y. Control Vector Parameterization-Based Adaptive Invasive Weed Optimization for Dynamic Processes. Chem. Eng. Technol. 2018, 41, 964–974. [Google Scholar] [CrossRef]
  35. Zhang, B.; Chen, D.; Zhao, W. Iterative Ant-Colony Algorithm and Its Application to Dynamic Optimization of Chemical Process. Comput. Chem. Eng. 2005, 29, 2078–2086. [Google Scholar] [CrossRef]
  36. Peng, X.; Qi, R.; Du, W.; Qian, F. An Improved Knowledge Evolution Algorithm and Its Application to Chemical Process Dynamic Optimization. CIESC J. 2012, 63, 841–850. [Google Scholar]
  37. Zhou, Y.; Liu, X. Control Parameterization-based Adaptive Particle Swarm Approach for Solving Chemical Dynamic Optimization Problems. Chem. Eng. Technol. 2014, 37, 692–702. [Google Scholar] [CrossRef]
  38. Liu, Z.; Du, W.; Qi, R.; Qian, F. Dynamic Optimization in Chemical Processes Using Improved Knowledge-Based Cultural Algorithm. CIESC J. 2010, 11, 2890–2895. [Google Scholar]
  39. Xu, L.; Mo, Y.; Lu, Y.; Li, J. Improved Seagull Optimization Algorithm Combined with an Unequal Division Method to Solve Dynamic Optimization Problems. Processes 2021, 9, 1037. [Google Scholar] [CrossRef]
  40. Biegler, L.T. Solution of Dynamic Optimization Problems by Successive Quadratic Programming and Orthogonal Collocation. Comput. Chem. Eng. 1984, 8, 243–247. [Google Scholar] [CrossRef]
  41. Vassiliadis, V.S.; Sargent, R.W.; Pantelides, C.C. Solution of a Class of Multistage Dynamic Optimization Problems. 2. Problems with Path Constraints. Ind. Eng. Chem. Res. 1994, 33, 2123–2133. [Google Scholar] [CrossRef]
  42. Huang, M.; Zhou, X.; Yang, C.; Gui, W. Dynamic Optimization Using Control Vector Parameterization with State Transition Algorithm. In Proceedings of the 2017 36th Chinese Control Conference (CCC), Dalian, China, 26–28 July 2017; pp. 4407–4412. [Google Scholar]
  43. Angira, R.; Santosh, A. Optimization of Dynamic Systems: A Trigonometric Differential Evolution Approach. Comput. Chem. Eng. 2007, 31, 1055–1063. [Google Scholar] [CrossRef]
  44. Chen, X.; Du, W.; Tianfield, H.; Qi, R.; He, W.; Qian, F. Dynamic Optimization of Industrial Processes with Nonuniform Discretization-Based Control Vector Parameterization. IEEE Trans. Autom. Sci. Eng. 2013, 11, 1289–1299. [Google Scholar] [CrossRef]
  45. Zhang, P.; Chen, H.; Liu, X.; Zhang, Z. An Iterative Multi-Objective Particle Swarm Optimization-Based Control Vector Parameterization for State Constrained Chemical and Biochemical Engineering Problems. Biochem. Eng. J. 2015, 103, 138–151. [Google Scholar] [CrossRef]
  46. Xiao, L.; Liu, X. An Effective Pseudospectral Optimization Approach with Sparse Variable Time Nodes for Maximum Production of Chemical Engineering Problems. Can. J. Chem. Eng. 2017, 95, 1313–1322. [Google Scholar] [CrossRef]
  47. Luus, R. Iterative Dynamic Programming; Chapman and Hall/CRC: Boca Raton, FL, USA, 2019; ISBN 0-429-12364-7. [Google Scholar]
  48. Ko, D.Y.-C. Studies of Singular Solutions in Dynamic Optimization; Northwestern University: Evanston, IL, USA, 1969; ISBN 9798659357156. [Google Scholar]
  49. Lei, Y.; Li, S.; Zhang, Q.; Zhang, X. A Non-Uniform Control Vector Parameterization Approach for Optimal Control Problems. J. China Univ. Pet. Ed. Nat. Sci. 2011, 5. [Google Scholar]
  50. Wei, F.X.; Aipeng, J. A grid reconstruction strategy based on pseudo Wigner-Ville analysis for dynamic optimization problem. J. Chem. Ind. Eng. 2019, 70, 158–167. [Google Scholar]
  51. Tholudur, A.; Ramirez, W.F. Obtaining Smoother Singular Arc Policies Using a Modified Iterative Dynamic Programming Algorithm. Int. J. Control. 1997, 68, 1115–1128. [Google Scholar] [CrossRef]
  52. Wang, K.; Li, F. A Dynamic Chaotic Mutation Based Particle Swarm Optimization for Dynamic Optimization of Biochemical Process. In Proceedings of the 2017 4th International Conference on Information Science and Control Engineering (ICISCE), Changsha, China, 21–23 July 2017; pp. 788–791. [Google Scholar]
Figure 1. The method of piecewise constant approximation.
Figure 2. Flowchart of sailfish optimizer.
Figure 3. Tent chaotic sequence distribution.
Figure 4. Comparison of attack parameters.
Figure 5. Algorithm convergence curve of unimodal test function.
Figure 6. Algorithm convergence curve of multi-peak test function.
Figure 7. Box plot of four methods for six benchmark functions.
Figure 8. Flowchart of test experiment.
Figure 9. Batch reactor.
Figure 10. Optimal control trajectory.
Figure 11. Optimal state variable trajectory.
Figure 12. Iteration curve of Case 1 divided into 50 segments.
Figure 13. Optimal temperature control trajectory.
Figure 14. Optimal state variable trajectory.
Figure 15. Iteration curve of Case 2 divided into 50 segments.
Figure 16. Optimal control trajectory.
Figure 17. Optimal state variable trajectory.
Figure 18. Iteration curve of Case 3 divided into 40 segments.
Figure 19. Optimal control trajectory.
Figure 20. Optimal state variable trajectory.
Figure 21. Iteration curve of Case 4 divided into 70 segments.
Figure 22. Optimal control trajectory.
Figure 23. Optimal state variable trajectory.
Figure 24. Iteration curve of Case 5 divided into 20 segments.
Figure 25. Optimal control trajectory (N = 10).
Figure 26. Optimal control trajectory (N = 20).
Figure 27. Optimal control trajectory (N = 100).
Figure 28. Optimal state variable trajectory.
Table 1. Benchmark functions.

| Function | Formulation | Range | Dim | f_min |
| Sphere | $F_1(x)=\sum_{i=1}^{n} x_i^2$ | [−100, 100] | 30 | 0 |
| Schwefel | $F_2(x)=\sum_{i=1}^{n}|x_i|+\prod_{i=1}^{n}|x_i|$ | [−10, 10] | 30 | 0 |
| Rastrigin | $F_3(x)=\sum_{i=1}^{n}\left[x_i^2-10\cos(2\pi x_i)+10\right]$ | [−5.12, 5.12] | 30 | 0 |
| Ackley | $F_4(x)=-20\exp\!\left(-0.2\sqrt{\tfrac{1}{n}\sum_{i=1}^{n}x_i^2}\right)-\exp\!\left(\tfrac{1}{n}\sum_{i=1}^{n}\cos(2\pi x_i)\right)+20+e$ | [−32, 32] | 30 | 0 |
| Kowalik | $F_5(x)=\sum_{i=1}^{11}\left[a_i-\dfrac{x_1(b_i^2+b_i x_2)}{b_i^2+b_i x_3+x_4}\right]^2$ | [−5, 5] | 4 | 0.00030 |
| Six-Hump Camel Back | $F_6(x)=4x_1^2-2.1x_1^4+\tfrac{1}{3}x_1^6+x_1x_2-4x_2^2+4x_2^4$ | [−5, 5] | 2 | −1.0316 |
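For reproducibility, five of the Table 1 functions can be written directly in code; this is a plain restatement of the formulas above, with Kowalik omitted because it requires its 11 tabulated data pairs (a_i, b_i), which are not listed in this excerpt. The minimizer of the six-hump camel function used in the check below, approximately (0.0898, −0.7126), is a standard published value rather than one given in this paper.

```python
import math

def sphere(x):                      # F1, minimum 0 at the origin
    return sum(v * v for v in x)

def schwefel_2_22(x):               # F2, minimum 0 at the origin
    s = sum(abs(v) for v in x)
    p = 1.0
    for v in x:
        p *= abs(v)
    return s + p

def rastrigin(x):                   # F3, minimum 0 at the origin
    return sum(v * v - 10.0 * math.cos(2.0 * math.pi * v) + 10.0 for v in x)

def ackley(x):                      # F4, minimum 0 at the origin
    n = len(x)
    s1 = sum(v * v for v in x) / n
    s2 = sum(math.cos(2.0 * math.pi * v) for v in x) / n
    return -20.0 * math.exp(-0.2 * math.sqrt(s1)) - math.exp(s2) + 20.0 + math.e

def six_hump_camel(x1, x2):         # F6, minimum about -1.0316
    return (4.0 * x1 ** 2 - 2.1 * x1 ** 4 + x1 ** 6 / 3.0
            + x1 * x2 - 4.0 * x2 ** 2 + 4.0 * x2 ** 4)
```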
Table 2. Test results on the benchmark functions.

| Function | Result | MSFO | SFO | PSO | BOA |
| F1 | Best | 6.8104 × 10^−94 | 9.7670 × 10^−16 | 1.6556 × 10^3 | 2.7743 × 10^0 |
| | Worst | 3.6275 × 10^−54 | 1.0181 × 10^−9 | 8.6603 × 10^3 | 7.0283 × 10^0 |
| | Mean | 8.4117 × 10^−56 | 9.5079 × 10^−11 | 4.8832 × 10^3 | 5.0551 × 10^0 |
| | Std. | 5.1736 × 10^−55 | 1.9546 × 10^−10 | 1.8340 × 10^3 | 9.6613 × 10^−1 |
| | Rank | 1 | 2 | 4 | 3 |
| F2 | Best | 2.7641 × 10^−52 | 5.3076 × 10^−7 | 3.4437 × 10^1 | 8.0016 × 10^−2 |
| | Worst | 5.4745 × 10^−27 | 1.8220 × 10^−4 | 7.9924 × 10^2 | 1.2217 × 10^0 |
| | Mean | 1.5945 × 10^−28 | 3.9648 × 10^−5 | 3.5138 × 10^2 | 4.7870 × 10^−1 |
| | Std. | 8.0045 × 10^−28 | 3.9331 × 10^−5 | 1.3872 × 10^3 | 2.6753 × 10^−1 |
| | Rank | 1 | 2 | 4 | 3 |
| F3 | Best | 0.0000 × 10^0 | 2.3124 × 10^−10 | 2.1053 × 10^2 | 1.5309 × 10^2 |
| | Worst | 0.0000 × 10^0 | 2.5092 × 10^−6 | 3.6692 × 10^2 | 2.3210 × 10^2 |
| | Mean | 0.0000 × 10^0 | 1.9599 × 10^−7 | 2.9085 × 10^2 | 1.9280 × 10^2 |
| | Std. | 0.0000 × 10^0 | 4.36977 × 10^−7 | 2.8042 × 10^1 | 1.7471 × 10^1 |
| | Rank | 1 | 2 | 4 | 3 |
| F4 | Best | 8.8818 × 10^−16 | 6.2225 × 10^−8 | 8.3902 × 10^0 | 1.3961 × 10^1 |
| | Worst | 8.8818 × 10^−16 | 2.8007 × 10^−5 | 1.5680 × 10^1 | 1.9874 × 10^1 |
| | Mean | 8.8818 × 10^−16 | 6.3357 × 10^−6 | 1.2945 × 10^1 | 1.8706 × 10^1 |
| | Std. | 5.9765 × 10^−31 | 5.2560 × 10^−6 | 1.5215 × 10^0 | 1.0767 × 10^0 |
| | Rank | 1 | 2 | 3 | 4 |
| F5 | Best | 3.1071 × 10^−4 | 3.1082 × 10^−4 | 1.2370 × 10^−3 | 3.0955 × 10^−4 |
| | Worst | 5.2036 × 10^−4 | 5.7377 × 10^−4 | 2.9944 × 10^−2 | 3.5654 × 10^−3 |
| | Mean | 3.6986 × 10^−4 | 3.6003 × 10^−4 | 1.1166 × 10^−2 | 7.8221 × 10^−4 |
| | Std. | 2.0186 × 10^−4 | 5.7242 × 10^−5 | 9.7893 × 10^−3 | 5.2900 × 10^−4 |
| | Rank | 1 | 2 | 3 | 4 |
| F6 | Best | −1.0316 × 10^0 | −1.0316 × 10^0 | −1.0316 × 10^0 | −1.0316 × 10^0 |
| | Worst | −1.0316 × 10^0 | −9.9951 × 10^−1 | −1.0246 × 10^0 | −1.0316 × 10^0 |
| | Mean | −1.0316 × 10^0 | −1.0306 × 10^0 | −1.0294 × 10^0 | −1.0316 × 10^0 |
| | Std. | 4.4860 × 10^−16 | 4.5484 × 10^−3 | 1.8578 × 10^−3 | 4.4860 × 10^−16 |
| | Rank | 1 | 3 | 4 | 2 |
Table 3. The p-value test results of analysis over unimodal and multimodal benchmark functions.

| Function | MSFO versus SFO | MSFO versus PSO | MSFO versus BOA |
| F1 | 7.0661 × 10^−18 | 7.0661 × 10^−18 | 7.0661 × 10^−18 |
| F2 | 7.0661 × 10^−18 | 4.8495 × 10^−18 | 4.8495 × 10^−18 |
| F3 | 3.3111 × 10^−20 | 3.3111 × 10^−20 | 3.3111 × 10^−20 |
| F4 | 3.3111 × 10^−20 | 3.3111 × 10^−20 | 3.3111 × 10^−20 |
| F5 | 0.0483 | 8.0196 × 10^−6 | 5.3702 × 10^−10 |
| F6 | 8.4620 × 10^−18 | 6.6308 × 10^−20 | 0.0011 |
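The very small p-values in Table 3 are consistent with a nonparametric pairwise significance test applied to the independent runs of each pair of algorithms. Assuming a Wilcoxon rank-sum (Mann–Whitney) test with the usual normal approximation, which is the common choice in metaheuristics benchmarking but is not explicitly stated in this excerpt, it can be sketched in pure Python as follows (`rank_sum_p` is an illustrative name; no tie correction is applied):

```python
import math

def rank_sum_p(a, b):
    """Two-sided Wilcoxon rank-sum test via the normal approximation,
    as commonly used to compare the run results of two optimizers."""
    n1, n2 = len(a), len(b)
    pooled = sorted([(v, 0) for v in a] + [(v, 1) for v in b])
    # Assign 1-based ranks, averaging ranks across tied values.
    ranks = [0.0] * (n1 + n2)
    i = 0
    while i < n1 + n2:
        j = i
        while j + 1 < n1 + n2 and pooled[j + 1][0] == pooled[i][0]:
            j += 1
        avg = (i + j) / 2.0 + 1.0
        for k in range(i, j + 1):
            ranks[k] = avg
        i = j + 1
    r1 = sum(r for r, (_, g) in zip(ranks, pooled) if g == 0)
    mu = n1 * (n1 + n2 + 1) / 2.0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (r1 - mu) / sigma
    # Two-sided p-value from the standard normal distribution.
    return math.erfc(abs(z) / math.sqrt(2.0))
```

With 30 well-separated runs per algorithm this yields p-values far below 0.05, matching the magnitudes reported in Table 3; for production use, `scipy.stats.ranksums` implements the same test with tie handling.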
Table 4. Setting of experimental parameters.

| Parameter | Quantity | Value |
| A | Initial attack value | 4 |
| percent | Sailfish ratio | 0.2 |
| Pop | Population | 800 |
| T | Maximum number of iterations | 1000 |
Table 5. Results of benchmark dynamic optimization by different methods.

| Ref. | Method | N | Optimum |
| [24] | OCT | - | 0.761594156 |
| [25] | ACO-CP | - | 0.76238 |
| [26] | IACO-CVP | - | 0.76160 |
| [33] | IGA-CVP | - | 0.761595 |
| [34] | IWO-CVP | 50 | 0.76159793 |
| [34] | ADIWO-CVP | 50 | 0.76159417 |
| This work | MSFO | 20 | 0.76165319 |
| This work | MSFO | 50 | 0.761594199 |
Table 6. Results of batch reactor optimization by different methods.

| Ref. | Method | N | Optimum |
| [9] | MOARA | - | 5.54 × 10^−2 |
| [10] | PSO-CVP | - | 0.6105359 |
| [34] | IWO-CVP | - | 0.61079180 |
| [35] | IACA | 10 | 0.6100 |
| [35] | IACA | 20 | 0.6104 |
| [36] | GA | - | 0.61072 |
| [36] | IKEA | 10 | 0.6101 |
| [36] | IKEA | 20 | 0.610426 |
| [36] | IKEA | 100 | 0.610781–0.610789 |
| [37] | CP-PSO | - | 0.6107847 |
| [37] | CP-APSO | - | 0.6107850 |
| [38] | IKBCA | 10 | 0.6101 |
| [38] | IKBCA | 20 | 0.610454 |
| [38] | IKBCA | 100 | 0.610779–0.610787 |
| [39] | ISOA | 10 | 0.6101 |
| [39] | ISOA | 25 | 0.61053 |
| [39] | ISOA | 50 | 0.6107724 |
| This work | MSFO | 10 | 0.610118 |
| This work | MSFO | 25 | 0.610537 |
| This work | MSFO | 50 | 0.610771–0.610785 |
Table 7. Results of parallel reaction problems in tubular reactors by different methods.

| Ref. | Method | N | Optimum |
| [28] | MCB | ~ | 0.57353 |
| [37] | CP-PSO | ~ | 0.573543 |
| [37] | CP-APSO | ~ | 0.573544 |
| [39] | ISOA | 10 | 0.572226 |
| [39] | ISOA | 35 | 0.57348 |
| [40] | CVP | ~ | 0.56910 |
| [40] | CVI | ~ | 0.57322 |
| [41] | CPT | ~ | 0.57353 |
| This work | MSFO | 10 | 0.572143 |
| This work | MSFO | 40 | 0.573212–0.574831 |
Table 8. Results of catalyst mixing problem by different methods.

| Ref. | Method | N | Optimum |
| [36] | IKEA | 20 | 0.4757 |
| [36] | IKEA | 100 | 0.47761–0.47768 |
| [39] | ISOA | 40 | 0.47721 |
| [42] | STA | 5 | 0.47260 |
| [42] | STA | 10 | 0.47363 |
| [42] | STA | 15 | 0.47453 |
| [42] | GA | 5 | 0.47260 |
| [42] | GA | 10 | 0.47363 |
| [42] | GA | 15 | 0.47453 |
| [43] | TDE | 20 | 0.47527 |
| [43] | TDE | 40 | 0.47683 |
| [44] | NDCVP-HGPSO | 15 | 0.47771 |
| This work | MSFO | 20 | 0.47562 |
| This work | MSFO | 70 | 0.477544–0.47760 |
Table 9. Results of plug flow tubular reactor by different methods.

| Ref. | Method | N | Optimum |
| [45] | IPSO | 20 | 0.677219 |
| [46] | SVTN | 20 | 0.677389 |
| [47] | Iterative dynamic programming (IDP) | 10 | 0.67531 |
| [48] | Combination mode method (CMM) | 10 | 0.7226 |
| [29] | Conjugate gradient method (CGM) | 10 | 0.7227 |
| [49] | Nonuniform control vector parameterization (NU-CVP) | 20 | 0.77298 |
| [50] | S-CVP | 20 | 0.7234708 |
| [50] | PWV-CVP | 20 | 0.7234708 |
| This work | MSFO | 10 | 0.7226987 |
| This work | MSFO | 20 | 0.7234724 |
Table 10. Results of the PR-b problem by different methods.

| Ref. | Method | N | Optimum |
| [8] | VSACS | 10 | 32.18175–32.18246 |
| [8] | VSACS | 20 | 32.45614–32.45629 |
| [8] | VSACS | 100 | 32.81001–32.81224 |
| [34] | PADIWO-CVP | ~ | 32.68649 |
| [34] | GADIWO-CVP | ~ | 32.68720 |
| [36] | IKEA | 20 | 32.4225–32.6783 |
| [38] | IKBCA | 20 | 32.4556–32.4561 |
| [38] | IKBCA | 150 | 32.7114–32.7180 |
| [43] | TDE | ~ | 32.684494 |
| [44] | NDCVP-HGPSO | ~ | 32.68712 |
| [51] | FIDP | ~ | 32.47 |
| [52] | DCM-PSO | 10 | 32.68851327 |
| [52] | DCM-PSO | 20 | 32.68851335 |
| This work | MSFO | 10 | 32.18024–32.18197 |
| This work | MSFO | 20 | 32.45332–32.45446 |
| This work | MSFO | 100 | 32.69236–32.71112 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Zhang, Y.; Mo, Y. Dynamic Optimization of Chemical Processes Based on Modified Sailfish Optimizer Combined with an Equal Division Method. Processes 2021, 9, 1806. https://doi.org/10.3390/pr9101806