Article

Trust-Region Based Penalty Barrier Algorithm for Constrained Nonlinear Programming Problems: An Application of Design of Minimum Cost Canal Sections

by
Bothina El-Sobky
1,
Yousria Abo-Elnaga
2,
Abd Allah A. Mousa
3,* and
Mohamed A. El-Shorbagy
4,5
1
Department of Mathematics and Computer Science, Faculty of Science, Alexandria University, Qism Bab Sharqi 21568, Egypt
2
Department of Basic Science, Higher Technological Institute, Tenth of Ramadan City 44629, Egypt
3
Department of Mathematics and Statistics, College of Science, Taif University, P.O. Box 11099, Taif 21944, Saudi Arabia
4
Department of Mathematics, College of Science and Humanities in Al-Kharj, Prince Sattam Bin Abdulaziz University, Al-Kharj 11942, Saudi Arabia
5
Department of Basic Engineering Science, Faculty of Engineering, Menofia University, Shebin El-Kom 32511, Egypt
*
Author to whom correspondence should be addressed.
Mathematics 2021, 9(13), 1551; https://doi.org/10.3390/math9131551
Submission received: 9 May 2021 / Revised: 24 June 2021 / Accepted: 28 June 2021 / Published: 1 July 2021

Abstract

In this paper, a penalty method is used together with a barrier method to transform a constrained nonlinear programming problem into an unconstrained nonlinear programming problem. In the proposed approach, Newton's method is applied to the barrier Karush–Kuhn–Tucker conditions. To ensure global convergence from any starting point, a trust-region globalization strategy is used. A global convergence theory of the penalty–barrier trust-region (PBTR) algorithm is studied under four standard assumptions. The PBTR algorithm has new features: it is simpler, converges rapidly, and is easy to implement. Numerical simulation was performed on some benchmark problems. The proposed algorithm was implemented to find the optimal design of a canal section with minimum water loss for a triangular cross-section application. The results are promising when compared with well-known algorithms.

1. Introduction

Solving linear programming problems via the simplex method had no true competition until Karmarkar presented a new polynomial-time method [1,2]. The polynomial complexity of Karmarkar's algorithm has an advantage in comparison with the exponential complexity of the simplex procedure [3,4]. Schrijver [5,6] presented an improvement of Karmarkar's method. Malek et al. [7] investigated an improved interior-point-based technique dealing with linear programming problems more efficiently. Ye and Tse [8] presented an extension of Karmarkar's algorithm for dealing with quadratic programming. The success of interior-point algorithms in handling linear programming problems has encouraged researchers to extend these methods to handle nonlinear programming problems [9,10]. Kebbiche et al. [11] investigated a projective interior-point algorithm for solving general convex nonlinear problems. Herskovits et al. [12] presented an interior-point-based approach for solving bilevel programming problems with convex lower-level problems. Zhang et al. [13] presented an interior-point method-based program combined with the homogenization theory of micromechanics to analyze shakedown behavior at the macro-scale. Schmidt [14] presented an interior-point method for nonlinear optimization problems with locatable and separable nonsmoothness. Tahmasebzadeh et al. [15] presented novel interior-point algorithms for solving nonlinear convex optimization problems. Scheunemann et al. [16] presented an algorithm for rate-independent small-strain crystal plasticity based on the infeasible primal–dual interior-point method.
In this research, a penalty method is introduced together with a barrier method to transform the constrained nonlinear programming (CNP) problem into an unconstrained nonlinear problem. Newton's method is applied to the barrier Karush–Kuhn–Tucker conditions. Newton's method has the advantage that it converges quadratically to a stationary point under reasonable assumptions, but it has the disadvantage that the starting point must be sufficiently close to the stationary point to guarantee convergence. To overcome this disadvantage and to guarantee convergence from any starting point, we use the trust-region strategy. The trust-region strategy induces strong global convergence, is a very important method for solving nonlinear programming problems, and is more robust in the presence of rounding errors. It does not require the objective function of the model to be convex, nor does it require the Hessian of the objective function to be positive definite. For more details, see [17,18,19,20,21,22,23,24,25,26,27].
A global convergence theory of the PBTR algorithm is investigated. The PBTR algorithm produces a new point at each iteration; to improve its performance, the trust-region subproblem may need to be solved several times before an acceptable step is found. We show that the PBTR algorithm is globally convergent under the required assumptions.
A numerical simulation of the PBTR algorithm was performed on benchmark problems. Additionally, the PBTR algorithm was implemented to find the optimal design of a canal section with minimum water loss for a triangular cross-section application. The numerical results on benchmark problems are promising when compared with well-known algorithms.
Our approach mainly differs from these algorithms in the method of computing trial steps, the method of ensuring that $\tilde{x}_k$ and $y_k$ are strictly positive, the method of updating the penalty parameter, the method of updating the Hessian matrix, and the analysis of global convergence. The mechanism of our method is simpler than these methods. Moreover, numerical experiments show that the proposed PBTR algorithm performs effectively and efficiently.
Notation: We use $\|\cdot\|$ to denote the Euclidean norm. The $i$th component of any vector such as $\tilde{x}$ is written as $\tilde{x}^{(i)}$. The subscript $k$ refers to the iteration index; for example, $f_k \equiv f(\tilde{x}_k)$, $G_k \equiv G(\tilde{x}_k)$, $\nabla f_k \equiv \nabla f(\tilde{x}_k)$, $\nabla G_k \equiv \nabla G(\tilde{x}_k)$, and so on, denote the value of a function at a particular point. We introduce the main framework of the PBTR algorithm for solving the CNP problem in the following sections.
This paper is structured as follows. In Section 2, we clarify the outline of the penalty–barrier Newton’s method to transform the CNP problem into an unconstrained problem. The outline of the PBTR algorithm is investigated in Section 3. Section 4 presents a global convergence analysis for the PBTR algorithm. Section 5 addresses the simulation experiments and canal application. Finally, our observations and future work are discussed in Section 6.

2. Penalty-Barrier Newton’s Method

In this paper, we consider the following CNP problem
$$\text{minimize} \quad f(x) \qquad \text{subject to} \quad g_l(x) = 0,\; g_i(x) \le 0,\; x \ge 0,$$
where $f:\mathbb{R}^n \to \mathbb{R}$, $g_l:\mathbb{R}^n \to \mathbb{R}^m$, and $g_i:\mathbb{R}^n \to \mathbb{R}^p$ are at least twice continuously differentiable functions.
By adding slack variables to the inequality constraints $g_i(x) \le 0$, the CNP problem can be reformulated into the following problem
$$\text{minimize} \quad f(x) \qquad \text{subject to} \quad g_l(x) = 0,\; g_i(x) + s = 0,\; x \ge 0,\; s \ge 0,$$
where $s \in \mathbb{R}^p$ is a vector of slack variables. To simplify, the above problem can be written in the form:
$$\text{minimize} \quad f(\tilde{x}) \qquad \text{subject to} \quad G(\tilde{x}) = 0,\; \tilde{x} \ge 0,$$
where $\tilde{x} = (x, s)^T \in \mathbb{R}^{n+p}$ and $G(\tilde{x}) = (g_l(x),\, g_i(x) + s)^T \in \mathbb{R}^{m+p}$.
By using the penalty method, we can write the above problem as follows:
$$\text{minimize} \quad f(\tilde{x}) + \frac{\nu}{2}\|G(\tilde{x})\|^2 \qquad \text{subject to} \quad \tilde{x} \ge 0,$$
where $\nu > 0$ is a penalty parameter. For more details, see [28].
Associated with the above problem, the Lagrangian function is given by
$$L(\tilde{x}, y; \nu) = f(\tilde{x}) - y^T\tilde{x} + \frac{\nu}{2}\|G(\tilde{x})\|^2,$$
where the vector $y \in \mathbb{R}^{n+p}$, $y \ge 0$, is the Lagrange multiplier vector associated with the inequality constraint $\tilde{x} \ge 0$.
Motivated by the barrier method, which is discussed in [29,30,31,32,33], Problem (2) can be written as follows
$$\text{minimize} \quad f(\tilde{x}) - \omega\sum_{i=1}^{n+p}\ln(\tilde{x}^{(i)}) + \frac{\nu}{2}\|G(\tilde{x})\|^2, \qquad \tilde{x} > 0,$$
for a decreasing sequence of barrier parameters $\omega$ converging to zero; see [34]. Let
$$\psi_\omega(\tilde{x}) = f(\tilde{x}) - \omega\sum_{i=1}^{n+p}\ln(\tilde{x}^{(i)}).$$
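As a small illustration (our own Python/NumPy sketch, not the authors' code; the callables f and G are assumed to be supplied by the user), the barrier term and the full penalty–barrier objective can be evaluated as follows.

```python
import numpy as np

def psi_omega(x, f, omega):
    """Barrier function psi_omega(x) = f(x) - omega * sum_i ln(x^(i)); requires x > 0."""
    return f(x) - omega * np.sum(np.log(x))

def penalty_barrier(x, f, G, omega, nu):
    """Objective of the penalty-barrier problem: psi_omega(x) + (nu/2) * ||G(x)||^2."""
    Gx = G(x)
    return psi_omega(x, f, omega) + 0.5 * nu * (Gx @ Gx)
```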
The first-order necessary conditions for a point $\tilde{x}_*$ to be a local minimizer of Problem (4) are $\nabla f(\tilde{x}_*) - \omega\tilde{X}_*^{-1}e + \nu\nabla G(\tilde{x}_*)G(\tilde{x}_*) = 0$ and $\tilde{x}_* > 0$, where $\tilde{X}$ is the diagonal matrix whose diagonal entries are $\tilde{x}^{(1)}, \dots, \tilde{x}^{(n+p)}$ and $e$ is the vector of all ones. Let $y \in \mathbb{R}^{n+p}$ be an auxiliary variable that is equal to $\omega\tilde{X}^{-1}e$. Then, the above conditions can be written as follows
$$\nabla f(\tilde{x}_*) - y_* + \nu\nabla G(\tilde{x}_*)G(\tilde{x}_*) = 0,$$
$$\tilde{X}_*y_* - \omega e = 0,$$
and
$$\tilde{x}_* > 0, \qquad y_* > 0.$$
The conditions (6) and (7) are called the barrier Karush–Kuhn–Tucker (KKT) conditions. For more details, see [34]. Applying Newton's method to the above nonlinear system, we obtain
$$\begin{bmatrix} H + \nu\nabla G(\tilde{x})\nabla G(\tilde{x})^T & -I \\ Y & \tilde{X} \end{bmatrix}\begin{bmatrix} d_{\tilde{x}} \\ d_y \end{bmatrix} = -\begin{bmatrix} \nabla f(\tilde{x}) - y + \nu\nabla G(\tilde{x})G(\tilde{x}) \\ \tilde{X}y - \omega e \end{bmatrix},$$
where $H$ is the Hessian matrix of the barrier function (5) or an approximation to it, and $Y$ is a diagonal matrix whose diagonal entries are $y^{(1)}, \dots, y^{(n+p)}$. Eliminating $d_y$ by using
$$d_y = -y + \omega\tilde{X}^{-1}e - \tilde{X}^{-1}Y d_{\tilde{x}},$$
we obtain the following system
$$\big(B + \nu\nabla G(\tilde{x})\nabla G(\tilde{x})^T\big)d_{\tilde{x}} = -\big(\nabla\psi_\omega(\tilde{x}) + \nu\nabla G(\tilde{x})G(\tilde{x})\big),$$
where $B = H + \tilde{X}^{-1}Y$ and $\nabla\psi_\omega(\tilde{x}) = \nabla f(\tilde{x}) - y$.
It is easy to see that the step generated by (10) coincides with the solution of the following quadratic programming subproblem
$$\text{minimize} \quad q(d) = \big(\nabla\psi_\omega(\tilde{x}) + \nu\nabla G(\tilde{x})G(\tilde{x})\big)^T d + \frac{1}{2}d^T A d,$$
where $A = B + \nu\nabla G(\tilde{x})\nabla G(\tilde{x})^T$ and, to simplify, we use $d$ instead of $d_{\tilde{x}}$.
To guarantee convergence from any starting point, we use the trust-region strategy, which is clarified in the following section.

3. Outline of PBTR Algorithm

In this section, we offer the outline for solving the following trust-region sub-problem, which is associated with Problem (11)
$$\begin{array}{ll}\text{minimize} & q_k(d) = \big(\nabla\psi_\omega(\tilde{x}_k) + \nu_k\nabla G(\tilde{x}_k)G(\tilde{x}_k)\big)^T d + \dfrac{1}{2}d^T A_k d \\[1mm] \text{subject to} & \|d\| \le \delta_k,\end{array}$$
where $\delta_k > 0$ is the radius of the trust region. To evaluate the trial step $d_k$, any approximation to the solution of the above Subproblem (12) can be used as long as a fraction of the Cauchy decrease condition holds; that is, a fraction of the predicted decrease obtained by the Cauchy step must be less than or equal to the predicted decrease obtained by $d_k$. Therefore, a conjugate gradient method [35] is used to solve Subproblem (12). Algorithm 1 gives the main steps for solving Subproblem (12).
Algorithm 1 The conjugate gradient method for solving Subproblem (12)
Step 1. Set $d_0 = 0 \in \mathbb{R}^{n+p}$, $r_0 = -(\nabla f_k - y_k + \nu_k\nabla G_k G_k)$, and $v_0 = r_0$.
Step 2. For $j = 0, \dots, n+p$ do
    Evaluate $A_k = B_k + \nu_k\nabla G_k\nabla G_k^T$,
    $c_j = \dfrac{r_j^T r_j}{v_j^T A_k v_j}$, and
    $\gamma_j$ such that $\|d_j + \gamma_j v_j\| = \delta_k$.
    If $v_j^T A_k v_j \le 0$, then set $d_k = d_j + \gamma_j v_j$ and stop.
    Else, set $d_{j+1} = d_j + c_j v_j$ and
    $r_{j+1} = r_j - c_j A_k v_j$.
    If $\|r_{j+1}\| / \|r_0\| \le \varepsilon_0$, set $d_k = d_{j+1}$ and stop.
    Evaluate $\bar{q}_j = \dfrac{r_{j+1}^T r_{j+1}}{r_j^T r_j}$ and the new direction
$$v_{j+1} = r_{j+1} + \bar{q}_j v_j.$$
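A minimal Python/NumPy rendering of Algorithm 1 is sketched below (our code, not the authors'). It adds the standard Steihaug safeguard that truncates the step at the trust-region boundary when an iterate would leave the region, which Algorithm 1 leaves implicit; eps0 plays the role of $\varepsilon_0$.

```python
import numpy as np

def boundary_tau(d, v, delta):
    """Positive root tau of ||d + tau*v|| = delta (step to the boundary)."""
    a = v @ v
    b = 2.0 * (d @ v)
    c = d @ d - delta ** 2
    return (-b + np.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)

def steihaug_cg(g, A, delta, eps0=1e-8):
    """Approximately minimize q(d) = g^T d + 0.5 d^T A d s.t. ||d|| <= delta,
    where g = grad(psi_omega) + nu*J@G and A = B + nu*J@J.T (Algorithm 1 sketch)."""
    n = g.size
    d = np.zeros(n)
    r = -g                      # residual r_0 = -g
    v = r.copy()                # first search direction
    r0_norm = np.linalg.norm(r)
    for _ in range(n):
        Av = A @ v
        curv = v @ Av
        if curv <= 0.0:         # nonpositive curvature: move to the boundary
            return d + boundary_tau(d, v, delta) * v
        c = (r @ r) / curv
        d_new = d + c * v
        if np.linalg.norm(d_new) >= delta:   # iterate leaves the region: truncate
            return d + boundary_tau(d, v, delta) * v
        r_new = r - c * Av
        if np.linalg.norm(r_new) <= eps0 * r0_norm:   # relative residual test
            return d_new
        q_bar = (r_new @ r_new) / (r @ r)
        v = r_new + q_bar * v   # new conjugate direction
        d, r = d_new, r_new
    return d
```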
Once the step $d_k$ is computed, we set $\tilde{x}_{k+1} = \tilde{x}_k + d_k$. To ensure that $\tilde{x}_{k+1} > 0$, we need to compute a damping parameter $\varphi_k$ at every iteration $k$. The damping parameter $\varphi_k$ is computed as follows:
$$\varphi_k = \min\Big\{\min_i\{h_k^{(i)}\},\; 1\Big\},$$
where
$$h_k^{(i)} = \begin{cases} -\dfrac{\tilde{x}_k^{(i)}}{d_k^{(i)}} & \text{if } d_k^{(i)} < 0, \\ 1 & \text{otherwise}. \end{cases}$$
Another damping parameter $\sigma_k$ may be needed to satisfy $\tilde{x}_{k+1} > 0$, and it is chosen as follows. If $\tilde{x}_k + \varphi_k d_k > 0$, we set $\sigma_k = 1$. Otherwise, we set $\tilde{x}_{k+1} = \tilde{x}_k + \sigma_k\varphi_k d_k$, where $\sigma_k \in (1 - \theta\|d_k\|, 1)$ and $\theta > 0$ is a pre-specified fixed constant. It is easy to see that $1 - \sigma_k = O(\|d_k\|)$. To simplify, we set $\mu_k = \sigma_k\varphi_k$.
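A minimal sketch of the damping computation follows (our code; the value of theta and the exact choice of $\sigma_k$ inside the prescribed interval are our assumptions, not the paper's).

```python
import numpy as np

def damping(x, d, theta=0.995):
    """Damping factor mu_k = sigma_k * phi_k keeping x + mu*d strictly positive.
    phi follows (13); sigma is one simple pick inside (1 - theta*||d||, 1)."""
    with np.errstate(divide="ignore"):
        h = np.where(d < 0.0, -x / d, 1.0)     # h_k^(i) as in the text
    phi = min(h.min(), 1.0)
    if np.all(x + phi * d > 0.0):
        sigma = 1.0
    else:
        # a heuristic value strictly inside the prescribed interval (assumption)
        sigma = max(1.0 - 0.5 * theta * np.linalg.norm(d), 1e-8)
    return sigma * phi
```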
To test the scaled step $\mu_k d_k$ and decide whether it will be accepted, we need the following merit function
$$\Phi_\omega(\tilde{x}_k; \nu_k) = \psi_\omega(\tilde{x}_k) + \frac{\nu_k}{2}\|G(\tilde{x}_k)\|^2.$$
Additionally, we need to define an actual reduction $Ared_k$ and a predicted reduction $Pred_k$.
The actual reduction is defined as follows
$$Ared_k = \Phi_\omega(\tilde{x}_k; \nu_k) - \Phi_\omega(\tilde{x}_k + \mu_k d_k; \nu_k),$$
and it can be written as follows,
$$Ared_k = \psi_\omega(\tilde{x}_k) - \psi_\omega(\tilde{x}_{k+1}) + \frac{\nu_k}{2}\big(\|G_k\|^2 - \|G_{k+1}\|^2\big).$$
The predicted reduction is given by
$$Pred_k = -\nabla\psi_\omega(\tilde{x}_k)^T\mu_k d_k - \frac{1}{2}\mu_k^2 d_k^T B_k d_k + \frac{\nu_k}{2}\big(\|G_k\|^2 - \|G_k + \nabla G_k^T\mu_k d_k\|^2\big),$$
and it can be written as follows,
$$Pred_k = q_k(0) - q_k(\mu_k d_k),$$
where $q_k(d)$ is defined in (11).
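Assuming the quantities above are available as NumPy arrays, $Ared_k$ and $Pred_k$ can be computed directly from their definitions, as in the following sketch (our code).

```python
import numpy as np

def actual_reduction(x, mu_d, psi, G_fun, nu):
    """Ared = psi(x) - psi(x + s) + (nu/2)(||G(x)||^2 - ||G(x+s)||^2), s = mu*d."""
    G0, G1 = G_fun(x), G_fun(x + mu_d)
    return psi(x) - psi(x + mu_d) + 0.5 * nu * (G0 @ G0 - G1 @ G1)

def predicted_reduction(mu_d, grad_psi, B, G0, J, nu):
    """Pred = -grad_psi^T s - 0.5 s^T B s + (nu/2)(||G||^2 - ||G + J^T s||^2)."""
    lin = G0 + J.T @ mu_d          # linearized constraint value
    return (-grad_psi @ mu_d - 0.5 * mu_d @ (B @ mu_d)
            + 0.5 * nu * (G0 @ G0 - lin @ lin))
```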
Our way of testing the step $d_k$ and updating the radius $\delta_k$ is given in the following Algorithm 2.
Algorithm 2 Testing the step $d_k$ and updating $\delta_k$
    Choose $0 < \beta_1 < \beta_2 < 1$, $0 < \alpha_1 < 1 < \alpha_2$, and $\delta_{\min} \le \delta_0 \le \delta_{\max}$.
    While $\frac{Ared_k}{Pred_k} \in (0, \beta_1)$ or $Pred_k \le 0$:
    put $\delta_k = \alpha_1\|d_k\|$ and return to evaluate a new $d_k$. End while.
    If $\frac{Ared_k}{Pred_k} \in [\beta_1, \beta_2)$, put $\tilde{x}_{k+1} = \tilde{x}_k + \mu_k d_k$ and $\delta_{k+1} = \max\{\delta_k, \delta_{\min}\}$. End if.
    If $\frac{Ared_k}{Pred_k} \in [\beta_2, 1]$, put $\tilde{x}_{k+1} = \tilde{x}_k + \mu_k d_k$ and $\delta_{k+1} = \min\{\delta_{\max}, \max\{\delta_{\min}, \alpha_2\delta_k\}\}$.
    End if.
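A compact rendering of Algorithm 2 as a single decision function might look as follows (our sketch; the parameter defaults are taken from the settings reported in Section 5, and the while-loop of Algorithm 2 corresponds to calling this function again after recomputing $d_k$ with the shrunken radius).

```python
def trust_region_update(ared, pred, d_norm, delta,
                        beta1=1e-4, beta2=0.75, alpha1=0.5, alpha2=2.0,
                        delta_min=1e-3, delta_max=1e3):
    """One pass of Algorithm 2: decide acceptance and the next radius.
    Returns (accepted, new_delta)."""
    if pred <= 0.0 or ared / pred < beta1:
        return False, alpha1 * d_norm            # reject: shrink and recompute d_k
    if ared / pred < beta2:
        return True, max(delta, delta_min)       # mild success: keep the radius
    return True, min(delta_max, max(delta_min, alpha2 * delta))  # very successful
```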
To update the positive penalty parameter $\nu_k$, we use the following Algorithm 3.
Algorithm 3 How to update $\nu_k$
Set $\nu_0 = 1$. Evaluate $Pred_k$, as defined above.
If
$$Pred_k \ge \|\nabla G_k G_k\|\min\{\|\nabla G_k G_k\|, \delta_k\},$$
set $\nu_{k+1} = \nu_k$.
Else, set $\nu_{k+1} = 2\nu_k$.
End if
For more details, see [36].
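In code, the update of Algorithm 3 is a one-liner; the following sketch (ours) assumes JG holds the vector $\nabla G_k G_k$.

```python
import numpy as np

def update_penalty(pred, JG, delta, nu):
    """Algorithm 3: keep nu if Pred_k >= ||JG|| * min(||JG||, delta),
    otherwise double it."""
    t = np.linalg.norm(JG)
    return nu if pred >= t * min(t, delta) else 2.0 * nu
```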
Finally, if the condition $\|\nabla\psi_\omega(\tilde{x}_k)\| + \|\tilde{X}_k Y_k e - \omega_k e\| + \|\nabla G_k G_k\| \le \epsilon$ or $\|d_k\| \le \varepsilon_1$ holds for some $\epsilon > 0$ and $\varepsilon_1 > 0$, then the algorithm is stopped.
The major steps of the PBTR algorithm are offered in the following Algorithm 4.
Algorithm 4 PBTR algorithm
Step 0. Given $\tilde{x}_0 > 0$, evaluate $y_0$. Set $\omega_0 = 0.1$ and $\nu_0 = 1$.
         Pick out $\epsilon$, $\varepsilon_1$, $\alpha_1$, $\alpha_2$, $\beta_1$, and $\beta_2$ such that $\epsilon > 0$, $\varepsilon_1 > 0$, $0 < \alpha_1 < 1 < \alpha_2$, and
          $0 < \beta_1 < \beta_2 < 1$.
         Pick out $\delta_0$, $\delta_{\min}$, and $\delta_{\max}$ such that $\delta_{\min} \le \delta_0 \le \delta_{\max}$. Set $k = 0$.
Step 1. If $\|\nabla\psi_\omega(\tilde{x}_k)\| + \|\tilde{X}_k Y_k e - \omega_k e\| + \|\nabla G_k G_k\| \le \epsilon$, then stop.
Step 2. To compute the step $d_k$, use Algorithm 1.
Step 3. If $\|d_k\| \le \varepsilon_1$, then stop.
Step 4. (a) Evaluate the damping parameter $\varphi_k$ using (13).
          (b) Put $\tilde{x}_{k+1} = \tilde{x}_k + \varphi_k d_k$.
          (c) If $\tilde{x}_{k+1} > 0$, then go to Step 5.
          Else, put $\tilde{x}_{k+1} = \tilde{x}_k + \sigma_k\varphi_k d_k$, where $\sigma_k \in (1 - \theta\|d_k\|, 1)$.
          End if.
Step 5. Update the Lagrange multiplier $y_{k+1}$ by using the following equation, obtained from (9):
$$y_{k+1} = \omega_k\tilde{X}_k^{-1}e - \tilde{X}_k^{-1}Y_k\mu_k d_k.$$
Step 6. To test the scaled step $\mu_k d_k$ and update $\delta_k$, use Algorithm 2.
Step 7. Take $\omega_{k+1} \in (0, \omega_k)$.
Step 8. Use Algorithm 3 to update the penalty parameter $\nu_k$.
Step 9. Put $k = k + 1$ and return to Step 1.
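Putting the pieces together, a driver loop in the spirit of Algorithm 4 might look as follows. This is only a skeleton under our own naming: it reuses psi_omega, steihaug_cg, damping, actual_reduction, predicted_reduction, trust_region_update, and update_penalty from the sketches above, assumes user-supplied callables f, grad_f, G_fun, jac_G, and hess_psi, and omits safeguards (e.g., keeping $y_k > 0$) that a full implementation would need.

```python
import numpy as np

def pbtr(f, grad_f, G_fun, jac_G, hess_psi, x0,
         eps=1e-7, eps1=1e-10, delta0=1.0, max_iter=200):
    """Skeleton of the PBTR algorithm (Algorithm 4); see the sketches above
    for the helper functions. hess_psi(x) returns H (or an approximation)."""
    x = x0.copy()
    omega, nu, delta = 0.1, 1.0, delta0
    y = omega / x                                   # y_0 = omega * X^{-1} e
    for _ in range(max_iter):
        Gv, J = G_fun(x), jac_G(x)                  # G(x) and J = nabla G(x)
        grad_psi = grad_f(x) - y                    # grad psi_omega
        kkt = (np.linalg.norm(grad_psi) + np.linalg.norm(x * y - omega)
               + np.linalg.norm(J @ Gv))
        if kkt <= eps:                              # Step 1: stopping test
            break
        B = hess_psi(x) + np.diag(y / x)            # B = H + X^{-1} Y
        A = B + nu * J @ J.T
        g = grad_psi + nu * J @ Gv
        d = steihaug_cg(g, A, delta)                # Step 2: Algorithm 1
        if np.linalg.norm(d) <= eps1:               # Step 3
            break
        mu = damping(x, d)                          # Step 4: mu_k = sigma_k*phi_k
        s = mu * d
        ared = actual_reduction(x, s, lambda z: psi_omega(z, f, omega), G_fun, nu)
        pred = predicted_reduction(s, grad_psi, B, Gv, J, nu)
        ok, delta = trust_region_update(ared, pred, np.linalg.norm(d), delta)
        if ok:                                      # Step 6: step accepted
            y = omega / x - (y / x) * s             # Step 5: multiplier update (9)
            x = x + s
            omega *= 0.5                            # Step 7: omega_{k+1} in (0, omega_k)
            nu = update_penalty(pred, J @ Gv, delta, nu)   # Step 8
        # on rejection, delta was shrunk and the subproblem is re-solved at x
    return x, y
```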
The analysis of the global convergence of the PBTR algorithm is presented in the following section.

4. Global Convergence Analysis for PBTR Algorithm

To prove the theory of the global convergence for the PBTR algorithm, we need the following assumptions.
The required assumptions:
Let $\{\tilde{x}_k\}_{k \ge 0}$ be the sequence of points produced by the PBTR algorithm, and let $\Omega$ be a convex subset of $\mathbb{R}^{n+p}$ that contains all iterates $\tilde{x}_k > 0$ and $\tilde{x}_k + \mu_k d_k > 0$ for all trial steps $d_k$.
We make the following assumptions on $\Omega$, which are needed to prove the global convergence theory.
[NA1.] The functions $f(\tilde{x})$ and $G(\tilde{x})$ are in $C^2$ for all $\tilde{x} \in \Omega$.
[NA2.] All of $f(\tilde{x})$, $\nabla f(\tilde{x})$, $\nabla^2 f(\tilde{x})$, $G(\tilde{x})$, $\nabla G(\tilde{x})$, and $\nabla^2 G_i(\tilde{x})$ for $i = 1, \dots, m+p$,
as well as $\nabla G_k[(\nabla G_k)^T\nabla G_k]^{-1}$, are uniformly bounded in $\Omega$.
[NA3.] The sequence $\{y_k\}$ is bounded.
[NA4.] The sequence of approximated Hessian matrices $\{B_k\}$ is bounded.
The required assumptions do not include a linear independence assumption on $\nabla G(\tilde{x})$. Hence, there are other kinds of stationary points, which are introduced in the following definitions.
Definition 1
(A Fritz John point). A point $\tilde{x}_* \in \Omega$ is called a Fritz John (FRJO) point if there exist $\eta_*$ and a Lagrange multiplier vector $\lambda_* \in \mathbb{R}^{m+p}$, not all zero, such that:
$$\eta_*\nabla f(\tilde{x}_*) - y_* + \nabla G(\tilde{x}_*)\lambda_* = 0,$$
$$G(\tilde{x}_*) = 0,$$
$$\tilde{X}_*y_* - \omega_*e = 0,$$
$$\eta_* \ge 0, \qquad y_* \ge 0.$$
The above conditions are the FRJO conditions. For more details, see [28].
If $\eta_* \ne 0$, then the FRJO conditions (20)–(23) are the KKT conditions and the point $\big(\tilde{x}_*, 1, \frac{\lambda_*}{\eta_*}\big)$ is a KKT point.
Definition 2
(Infeasible FRJO point). A point $\tilde{x}_* \in \Omega$ is called an infeasible FRJO point if there exist $\eta_*$ and a Lagrange multiplier vector $\lambda_* \in \mathbb{R}^{m+p}$, not all zero, such that:
$$\eta_*\nabla f(\tilde{x}_*) - y_* + \nabla G(\tilde{x}_*)\lambda_* = 0,$$
$$\nabla G(\tilde{x}_*)G(\tilde{x}_*) = 0, \qquad \|G(\tilde{x}_*)\| > 0,$$
$$\tilde{X}_*y_* - \omega_*e = 0,$$
$$\eta_* \ge 0, \qquad y_* \ge 0.$$
The above conditions are the infeasible FRJO conditions. For more details, see [28].
If $\eta_* \ne 0$, then conditions (24)–(27) are called the infeasible KKT conditions and the point $\big(\tilde{x}_*, 1, \frac{\lambda_*}{\eta_*}\big)$ is an infeasible KKT point.
The following two lemmas give conditions that are equivalent to the conditions that are given in Definitions 1–2.
Lemma 1.
Assume NA1–NA3 hold. A subsequence $\{\tilde{x}_{k_i}\}$ of $\{\tilde{x}_k\}_{k \ge 0}$ satisfies the infeasible FRJO conditions in the limit if it satisfies:
(1)
$\lim_{k_i \to \infty}\|\tilde{X}_{k_i}y_{k_i} - \omega_{k_i}e\| = 0$.
(2)
$\lim_{k_i \to \infty}\|G_{k_i}\| > 0$.
(3)
$\lim_{k_i \to \infty}\min_d\|G_{k_i} + \nabla G_{k_i}^T\mu_{k_i}d\|^2 = \lim_{k_i \to \infty}\|G_{k_i}\|^2$.
Proof. 
Let $\bar{d}_k$ be the solution of the problem $\min_d\|G_k + \nabla G_k^T\mu_k d\|^2$. Then, $\bar{d}_k$ satisfies the following equation
$$\mu_k^2\nabla G_k\nabla G_k^T\bar{d}_k + \mu_k\nabla G_k G_k = 0.$$
We will consider two cases. Firstly, if $\lim_{k \to \infty}\bar{d}_k = 0$ in (28), then we have
$$\lim_{k \to \infty}\mu_k\nabla G_k G_k = 0.$$
Secondly, if $\lim_{k \to \infty}\bar{d}_k \ne 0$, then from the limit in Condition (3), we have
$$\lim_{k \to \infty}\big(\mu_k^2\bar{d}_k^T\nabla G_k\nabla G_k^T\bar{d}_k + 2\mu_k\bar{d}_k^T\nabla G_k G_k\big) = 0.$$
Multiplying (28) from the left by $2\bar{d}_k^T$ and subtracting it from (29), we have $\lim_{k \to \infty}\mu_k^2\bar{d}_k^T\nabla G_k\nabla G_k^T\bar{d}_k = 0$. Hence, $\lim_{k \to \infty}\mu_k\nabla G_k G_k = 0$. This implies that, in either case, we have
$$\lim_{k \to \infty}\nabla G_k G_k = 0,$$
since $\mu_k \in (0, 1]$. Taking $(\lambda_k)^{(i)} = (G_k)^{(i)}$, $i = 1, \dots, m+p$, and using Condition (2), the vector $\lambda_k$ does not vanish in the limit. Then, $\lim_{k \to \infty}\nabla G_k\lambda_k = 0$, and hence the conditions of Definition 2 hold in the limit with $\eta_* = 0$. □
Lemma 2.
Assume NA1–NA3 hold. A subsequence $\{\tilde{x}_{k_i}\}$ of $\{\tilde{x}_k\}_{k \ge 0}$ satisfies the FRJO conditions in the limit if it satisfies:
(1)
$\lim_{k_i \to \infty}\|\tilde{X}_{k_i}y_{k_i} - \omega_{k_i}e\| = 0$.
(2)
For all $k_i$, $\|G_{k_i}\| > 0$ and $\lim_{k_i \to \infty}\|G_{k_i}\| = 0$.
(3)
$\lim_{k_i \to \infty}\dfrac{\min_d\|G_{k_i} + \nabla G_{k_i}^T\mu_{k_i}d\|^2}{\|G_{k_i}\|^2} = 1$.
Proof. 
Let $\bar{z}_k$ be the solution of the following problem
$$\min_z\|\hat{U}_k + \nabla G_k^T\mu_k z\|^2,$$
where $\hat{U}_k$ is the unit vector in the direction of $G_k$ and $z = \frac{d}{\|G_k\|}$. Then, $\bar{z}_k$ satisfies
$$\mu_k^2\nabla G_k\nabla G_k^T\bar{z}_k + \mu_k\nabla G_k\hat{U}_k = 0.$$
We will consider two cases. Firstly, if $\lim_{k \to \infty}\bar{z}_k = 0$ in the above equation, then we have
$$\lim_{k \to \infty}\mu_k\nabla G_k\hat{U}_k = 0.$$
Secondly, if $\lim_{k \to \infty}\bar{z}_k \ne 0$, consider the following limit, which is equivalent to the limit in Condition (3),
$$\lim_{k \to \infty}\min_{z \in \mathbb{R}^{n+p}}\|\hat{U}_k + \nabla G_k^T\mu_k z\|^2 = 1,$$
and, considering the fact that $\bar{z}_k$ is the minimizer of Problem (31), we have
$$\lim_{k \to \infty}\big(\mu_k^2\bar{z}_k^T\nabla G_k\nabla G_k^T\bar{z}_k + 2\mu_k\hat{U}_k^T\nabla G_k^T\bar{z}_k\big) = 0.$$
Multiplying (31) from the left by $2\bar{z}_k^T$ and subtracting it from the above limit, we have
$$\lim_{k \to \infty}\mu_k^2\bar{z}_k^T\nabla G_k\nabla G_k^T\bar{z}_k = 0.$$
This implies $\lim_{k \to \infty}\mu_k\nabla G_k\hat{U}_k = 0$. Hence, in both cases, we have
$$\lim_{k \to \infty}\nabla G_k\hat{U}_k = 0.$$
The rest of the proof utilizes arguments similar to those in the above lemma. □

4.1. Fundamental Lemmas

We offer in this section some fundamental lemmas that are required in the subsequent proofs.
In the following lemma, we prove that $Pred_k$ at any iteration $k$ is at least proportional to the decrease $q_k(0) - q_k(d_k^{cp})$ in the quadratic model obtained by the Cauchy step $d_k^{cp}$.
Lemma 3.
Suppose NA1–NA4 hold. Then, there exists a constant $K_1 > 0$ such that, for all $k > \bar{k}$,
$$Pred_k \ge K_1\mu_k\|\nabla\Phi_\omega(\tilde{x}_k;\nu_k)\|\min\Big\{\delta_k,\; \frac{\|\nabla\Phi_\omega(\tilde{x}_k;\nu_k)\|}{\|A_k\|}\Big\},$$
where $\nabla\Phi_\omega(\tilde{x}_k;\nu_k) = \nabla\psi_\omega(\tilde{x}_k) + \nu_k\nabla G(\tilde{x}_k)G(\tilde{x}_k)$; below we abbreviate it as $\nabla\Phi_k$.
Proof. 
Since the conjugate gradient method is used to evaluate an approximate solution of Subproblem (12), a fraction of the predicted decrease obtained by the Cauchy step $d_k^{cp}$ is less than or equal to the predicted decrease obtained by the step $d_k$. That is,
$$q_k(0) - q_k(d_k) \ge \vartheta\big[q_k(0) - q_k(d_k^{cp})\big]$$
for some $\vartheta \in (0, 1]$, where $d_k^{cp}$ is defined as
$$d_k^{cp} = -t_k^{cp}\nabla\Phi_k,$$
and the Cauchy parameter $t_k^{cp}$ is defined by
$$t_k^{cp} = \begin{cases} \dfrac{\|\nabla\Phi_k\|^2}{\nabla\Phi_k^T A_k\nabla\Phi_k} & \text{if } \dfrac{\|\nabla\Phi_k\|^3}{\nabla\Phi_k^T A_k\nabla\Phi_k} \le \delta_k \text{ and } \nabla\Phi_k^T A_k\nabla\Phi_k > 0, \\[2mm] \dfrac{\delta_k}{\|\nabla\Phi_k\|} & \text{otherwise}. \end{cases}$$
We will consider two cases:
(i)
If $d_k^{cp} = -\frac{\delta_k}{\|\nabla\Phi_k\|}\nabla\Phi_k$ and $\|\nabla\Phi_k\|^3 \ge \delta_k\nabla\Phi_k^T A_k\nabla\Phi_k$, then
$$q_k(0) - q_k(d_k^{cp}) = -\nabla\Phi_k^T d_k^{cp} - \frac{1}{2}(d_k^{cp})^T A_k d_k^{cp} = \delta_k\|\nabla\Phi_k\| - \frac{\delta_k^2}{2\|\nabla\Phi_k\|^2}\nabla\Phi_k^T A_k\nabla\Phi_k \ge \frac{1}{2}\delta_k\|\nabla\Phi_k\|.$$
(ii)
If $d_k^{cp} = -\frac{\|\nabla\Phi_k\|^2}{\nabla\Phi_k^T A_k\nabla\Phi_k}\nabla\Phi_k$ and $\|\nabla\Phi_k\|^3 \le \delta_k\nabla\Phi_k^T A_k\nabla\Phi_k$, then
$$q_k(0) - q_k(d_k^{cp}) = -\nabla\Phi_k^T d_k^{cp} - \frac{1}{2}(d_k^{cp})^T A_k d_k^{cp} = \frac{1}{2}\frac{\|\nabla\Phi_k\|^4}{\nabla\Phi_k^T A_k\nabla\Phi_k} \ge \frac{\|\nabla\Phi_k\|^2}{2\|A_k\|}.$$
From inequalities (33), (35), and (36), we have
$$q_k(0) - q_k(d_k) \ge K_1\|\nabla\Phi_k\|\min\Big\{\delta_k,\; \frac{\|\nabla\Phi_k\|}{\|A_k\|}\Big\}.$$
Since $\mu_k \in (0, 1]$, then from the above inequality and using the fact
$$q_k(0) - q_k(\mu_k d_k) \ge \mu_k\big[q_k(0) - q_k(d_k)\big],$$
we have
$$q_k(0) - q_k(\mu_k d_k) \ge K_1\mu_k\|\nabla\Phi_k\|\min\Big\{\delta_k,\; \frac{\|\nabla\Phi_k\|}{\|A_k\|}\Big\}.$$
From (17) and using the above inequality, we have
$$Pred_k \ge K_1\mu_k\|\nabla\Phi_k\|\min\Big\{\delta_k,\; \frac{\|\nabla\Phi_k\|}{\|A_k\|}\Big\}. \qquad \square$$
Lemma 4.
Suppose NA1–NA4 hold. Then there exists a constant $K_2 > 0$ such that
$$|Ared_k - Pred_k| \le K_2\mu_k\nu_k\|d_k\|^2.$$
Proof. 
From Equation (15), we have
$$Ared_k = \psi_\omega(\tilde{x}_k) - \psi_\omega(\tilde{x}_k + \mu_k d_k) + \frac{\nu_k}{2}\big(\|G(\tilde{x}_k)\|^2 - \|G(\tilde{x}_k + \mu_k d_k)\|^2\big).$$
From the above definition of $Ared_k$, using (16) and the Cauchy–Schwarz inequality, we have
$$|Ared_k - Pred_k| \le \frac{\mu_k^2}{2}\big|d_k^T\big(H_k - \nabla^2\psi_\omega(\tilde{x}_k + \xi_1\mu_k d_k)\big)d_k\big| + \frac{\mu_k^2}{2}\big|d_k^T\tilde{X}_k^{-1}Y_k d_k\big| + \nu_k\mu_k\|G_k\|\,\big\|\big(\nabla G(\tilde{x}_k + \xi_2\mu_k d_k) - \nabla G_k\big)^T d_k\big\| + \frac{\nu_k\mu_k^2}{2}\big|d_k^T\big(\nabla G_k\nabla G_k^T - \nabla G(\tilde{x}_k + \xi_2\mu_k d_k)\nabla G(\tilde{x}_k + \xi_2\mu_k d_k)^T\big)d_k\big|$$
for some $\xi_1, \xi_2 \in (0, 1)$. Using the required assumptions, there exist positive constants $\kappa_1$, $\kappa_2$, and $\kappa_3$ such that
$$|Ared_k - Pred_k| \le \mu_k\big(\kappa_1\|d_k\|^2 + \kappa_2\nu_k\|d_k\|^3 + \kappa_3\nu_k\|d_k\|^2\big).$$
Since $\nu_k \ge 1$ and $\|d_k\| \le \delta_{\max}$, the above inequality can be written as
$$|Ared_k - Pred_k| \le K_2\mu_k\nu_k\|d_k\|^2.$$
This completes the proof. □
In the following section, we study the convergence of $\{\tilde{x}_k\}_{k \ge 0}$ when $\nu_k$ goes to infinity as $k \to \infty$.

4.2. Studying the Convergence if $\nu_k \to \infty$

Let $\{k_j\}$ be an infinite subsequence of indices indexing iterates with acceptable steps at which the penalty parameter is increased. From Algorithm 3, we know that
$$Pred_k < \|\nabla G_k G_k\|\min\{\|\nabla G_k G_k\|, \delta_k\}$$
for all $k \in \{k_j\}$; that is, the sequence $\{\nu_k\}$ goes to infinity.
In the following two lemmas, we prove that if $\nu_k \to \infty$ as $k \to \infty$, then a subsequence of $\{\tilde{x}_k\}_{k \ge 0}$ satisfies the infeasible FRJO conditions or the FRJO conditions.
Lemma 5.
Assume NA1–NA4 hold and $\nu_k \to \infty$ as $k \to \infty$. If there exists a subsequence $\{k_i\}$ of indices indexing iterates that satisfy $\|G_k\| \ge \varepsilon_1 > 0$ for all $k \in \{k_i\}$, then, in the limit, the infeasible FRJO conditions hold on it.
Proof. 
We prove this lemma by contradiction. To simplify, let $\{k_i\}$ be the whole iteration sequence. Assume that no subsequence of iterates satisfies the infeasible FRJO conditions in the limit. Then, from Definition 2, we have $\|\nabla G_k G_k\| \ge \varepsilon_1$, and from Lemma 1, we have $\big|\|G_k\|^2 - \|G_k + \mu_k\nabla G_k^T d_k\|^2\big| \ge \varepsilon_1$ for some $\varepsilon_1 > 0$. We will consider two cases. Firstly, if $\|\nabla G_k G_k\| \ge \varepsilon_1$, then
$$\|\nabla\psi_\omega(\tilde{x}_k) + \nu_k\nabla G_k G_k\| \ge \nu_k\|\nabla G_k G_k\| - \|\nabla\psi_\omega(\tilde{x}_k)\| \ge \nu_k\varepsilon_1 - \|\nabla\psi_\omega(\tilde{x}_k)\|.$$
From inequalities (32) and (40) and the fact that
$$\|A_k\| = \|B_k + \nu_k\nabla G_k\nabla G_k^T\| \le \nu_k\big(\|\nabla G_k\nabla G_k^T\| + \|B_k\|\big),$$
we have
$$Pred_k \ge K_1\mu_k\big(\nu_k\varepsilon_1 - \|\nabla\psi_\omega(\tilde{x}_k)\|\big)\min\Big\{\delta_k,\; \frac{\nu_k\varepsilon_1 - \|\nabla\psi_\omega(\tilde{x}_k)\|}{\nu_k(\|\nabla G_k\nabla G_k^T\| + \|B_k\|)}\Big\}.$$
From assumptions NA1–NA4 and for very large $k$, we have
$$Pred_k \ge K_1\mu_k\nu_k\varepsilon_1\min\Big\{\delta_k,\; \frac{\varepsilon_1}{\|\nabla G_k\nabla G_k^T\| + \|B_k\|}\Big\},$$
after absorbing constant factors into $K_1$.
However, $\nu_k \to \infty$; hence, there are an infinite number of acceptable iterates at which inequality (39) holds. Using inequalities (39) and (41), then
$$\|\nabla G_k G_k\|\min\{\|\nabla G_k G_k\|, \delta_k\} > K_1\mu_k\nu_k\varepsilon_1\min\Big\{\delta_k,\; \frac{\varepsilon_1}{\|\nabla G_k\nabla G_k^T\| + \|B_k\|}\Big\}.$$
From the above inequality, we notice that the right-hand side goes to $\infty$ as $k \to \infty$, but the left-hand side is bounded. This gives a contradiction unless $\delta_k \to 0$.
Secondly, if $\big|\|G_k\|^2 - \|G_k + \nabla G_k^T\mu_k d_k\|^2\big| \ge \varepsilon_1$ for some $\varepsilon_1 > 0$, we consider two cases as $k \to \infty$ and $\nu_k \to \infty$:
(i)
If $\|G_k\|^2 - \|G_k + \nabla G_k^T\mu_k d_k\|^2 \ge \varepsilon_1$, then
$$\nu_k\big(\|G_k\|^2 - \|G_k + \nabla G_k^T\mu_k d_k\|^2\big) \ge \nu_k\varepsilon_1.$$
That is, $Pred_k \to \infty$, while the right-hand side of inequality (39) remains bounded. This contradicts inequality (39).
(ii)
If $\|G_k\|^2 - \|G_k + \nabla G_k^T\mu_k d_k\|^2 \le -\varepsilon_1$, then
$$\nu_k\big(\|G_k\|^2 - \|G_k + \nabla G_k^T\mu_k d_k\|^2\big) \le -\nu_k\varepsilon_1.$$
Similar to case (i), $Pred_k \to -\infty$. However, $Pred_k > 0$ at acceptable steps, and this gives a contradiction. Hence, the lemma is proven.
The behavior of Algorithm 4 when $\liminf_{k \to \infty}\|G_k\| = 0$ and $\nu_k \to \infty$ as $k \to \infty$ is studied in the following lemma. □
Lemma 6.
Assume NA1–NA4 hold and $\nu_k \to \infty$ as $k \to \infty$. If there exists a subsequence $\{k_i\}$ of iterates that satisfies $\|G_k\| > 0$ for all $k \in \{k_i\}$ and $\lim_{k_i \to \infty}\|G_{k_i}\| = 0$, then there exists a subsequence of $\{\tilde{x}_k\}_{k \ge 0}$ indexed by $\{k_i\}$ that satisfies the FRJO conditions in the limit.
Proof. 
This lemma is proven by contradiction, and to simplify the notation, we let $\{k_i\}$ be the whole iteration sequence. Assume that no subsequence of the iteration sequence $\{\tilde{x}_k\}_{k \ge 0}$ satisfies the FRJO conditions in the limit. Then, from Lemma 2, there exists $\varepsilon_1 > 0$ such that
$$\big|\|G_k\|^2 - \|G_k + \nabla G_k^T\mu_k d_k\|^2\big| \ge \|G_k\|^2\varepsilon_1,$$
for all $k$ sufficiently large. Now, we study three cases as $k \to \infty$:
(i)
If $\liminf_{k \to \infty}\frac{\|d_k\|}{\|G_k\|} = 0$, we have a contradiction with inequality (42).
(ii)
If $\limsup_{k \to \infty}\frac{\|d_k\|}{\|G_k\|} = \infty$, then, from Subproblem (12), we have
$$\nabla\psi_\omega(\tilde{x}_k) + \nu_k\nabla G(\tilde{x}_k)G(\tilde{x}_k) = -(A_k + \bar{\rho}_kI)d_k,$$
where $\bar{\rho}_k \ge 0$ is the Lagrange multiplier corresponding to the constraint $\|d\| \le \delta_k$. Hence, inequality (32) can be written as follows
$$Pred_k \ge K_1\mu_k\|\nabla\psi_\omega(\tilde{x}_k) + \nu_k\nabla G(\tilde{x}_k)G(\tilde{x}_k)\|\min\Big\{\delta_k,\; \frac{\|(A_k + \bar{\rho}_kI)d_k\|}{\|A_k\|}\Big\}.$$
However, $A_k = B_k + \nu_k\nabla G_k\nabla G_k^T$; hence
$$Pred_k \ge K_1\mu_k\|\nabla\psi_\omega(\tilde{x}_k) + \nu_k\nabla G(\tilde{x}_k)G(\tilde{x}_k)\|\min\Big\{\delta_k,\; \frac{\|(\frac{1}{\nu_k}B_k + \nabla G_k\nabla G_k^T + \frac{\bar{\rho}_k}{\nu_k}I)d_k\|}{\|\frac{1}{\nu_k}B_k + \nabla G_k\nabla G_k^T\|}\Big\}.$$
If $\nu_k \to \infty$, then inequality (39) holds at an infinite number of acceptable steps, which implies
$$Pred_k < \|\nabla G_k\|^2\|G_k\|^2.$$
From inequalities (44) and (45), we have
$$K_1\mu_k\|\nabla\psi_\omega(\tilde{x}_k) + \nu_k\nabla G(\tilde{x}_k)G(\tilde{x}_k)\|\min\Big\{\delta_k,\; \frac{\|(\frac{1}{\nu_k}B_k + \nabla G_k\nabla G_k^T + \frac{\bar{\rho}_k}{\nu_k}I)d_k\|}{\|\frac{1}{\nu_k}B_k + \nabla G_k\nabla G_k^T\|}\Big\} < \kappa_4^2\|G_k\|^2,$$
where $\kappa_4 = \sup_{\tilde{x} \in \Omega}\|\nabla G(\tilde{x})\|$. Dividing the above inequality by $\|G_k\|$, then
$$K_1\mu_k\|\nabla\psi_\omega(\tilde{x}_k) + \nu_k\nabla G(\tilde{x}_k)G(\tilde{x}_k)\|\min\Big\{\frac{\delta_k}{\|G_k\|},\; \frac{\|(\frac{1}{\nu_k}B_k + \nabla G_k\nabla G_k^T + \frac{\bar{\rho}_k}{\nu_k}I)d_k\|}{\|\frac{1}{\nu_k}B_k + \nabla G_k\nabla G_k^T\|\,\|G_k\|}\Big\} < \kappa_4^2\|G_k\|.$$
From the above inequality, we notice that the right-hand side tends to 0 as $k \to \infty$. This implies that
$$\|\nabla\psi_\omega(\tilde{x}_k) + \nu_k\nabla G(\tilde{x}_k)G(\tilde{x}_k)\|\,\frac{\big\|\nabla G_k\nabla G_k^T\frac{d_k}{\|G_k\|}\big\|}{\|\nabla G_k\nabla G_k^T\|}$$
is bounded over the subsequence of $k$ where $\lim_{k \to \infty}\frac{\|d_k\|}{\|G_k\|} = \infty$. That is, either $\frac{d_k}{\|G_k\|}$ asymptotically lies in the null space of $\nabla G_k\nabla G_k^T$ or $\|\nabla\psi_\omega(\tilde{x}_k) + \nu_k\nabla G(\tilde{x}_k)G(\tilde{x}_k)\| \to 0$. The first possibility contradicts assumption (42); that is, a subsequence of $\{\tilde{x}_k\}_{k \ge 0}$ satisfies the FRJO conditions in the limit. The second possibility implies $\|\nabla\psi_\omega(\tilde{x}_k) + \nu_k\nabla G(\tilde{x}_k)G(\tilde{x}_k)\| \to 0$ as $k \to \infty$. Hence, $\nu_k\|\nabla G_k G_k\|$ must be bounded, and we have $\nabla G_k G_k \to 0$. That is, a subsequence of $\{\tilde{x}_k\}_{k \ge 0}$ satisfies the FRJO conditions in the limit.
(iii)
If $\limsup_{k \to \infty}\frac{\|d_k\|}{\|G_k\|} < \infty$ and $\liminf_{k \to \infty}\frac{\|d_k\|}{\|G_k\|} > 0$, then $\|d_k\| \to 0$. Hence, as $k \to \infty$, we notice that the right-hand side of inequality (46) tends to zero, as in the second case. This implies that
$$\|\nabla\psi_\omega(\tilde{x}_k) + \nu_k\nabla G(\tilde{x}_k)G(\tilde{x}_k)\|\,\frac{\|\nabla G_k\nabla G_k^T d_k\|}{\|\nabla G_k\nabla G_k^T\|\,\|G_k\|} \to 0.$$
That is, either $\|\nabla\psi_\omega(\tilde{x}_k) + \nu_k\nabla G(\tilde{x}_k)G(\tilde{x}_k)\| \to 0$ or $\frac{\|\nabla G_k\nabla G_k^T d_k\|}{\|\nabla G_k\nabla G_k^T\|\,\|G_k\|} \to 0$. As in Case (ii), we can prove that, in the limit, a subsequence of $\{\tilde{x}_k\}_{k \ge 0}$ satisfies the FRJO conditions. This completes the proof.
The convergence of the sequence $\{\tilde{x}_k\}_{k \ge 0}$ when $\nu_k$ is bounded is studied in the following section. □

4.3. Studying the Convergence if $\nu_k$ Is Bounded

We study in this section the convergence when $\nu_k$ is bounded. In this case, there exist an index $\tilde{k}$ and a constant $\tilde{\nu} < \infty$ such that $\nu_k = \tilde{\nu}$ for all $k \ge \tilde{k}$.
Lemma 7.
Assume NA1–NA4 hold. Then, at any iteration $k$ at which $\|\nabla\Phi_\omega(\tilde{x}_k;\tilde{\nu})\| + \|\tilde{X}_kY_ke - \omega_ke\| + \|\nabla G(\tilde{x}_k)G(\tilde{x}_k)\| > \epsilon$, there exists a constant $K_3 > 0$ such that
$$Pred_k \ge K_3\delta_k.$$
Proof. 
Since $A_k = B_k + \tilde{\nu}\nabla G_k\nabla G_k^T$ and using the required assumptions, we can say that $\|A_k\| \le \kappa_5$, where $\kappa_5 > 0$ is a constant. From the method of updating the barrier parameter $\omega_k$, we have $\|\tilde{X}_kY_ke - \omega_ke\| \le \frac{\epsilon}{3}$ for very large $k$. Then, $\|\nabla\Phi_\omega(\tilde{x}_k;\tilde{\nu})\| + \|\nabla G(\tilde{x}_k)G(\tilde{x}_k)\| > \frac{2\epsilon}{3}$, so at least one of $\|\nabla\Phi_\omega(\tilde{x}_k;\tilde{\nu})\| > \frac{\epsilon}{3}$ or $\|\nabla G_kG_k\| > \frac{\epsilon}{3}$ holds. First, assume that $\|\nabla\Phi_\omega(\tilde{x}_k;\tilde{\nu})\| > \frac{\epsilon}{3}$; using Lemma 3, we have
$$Pred_k \ge K_1\mu_k\|\nabla\Phi_\omega(\tilde{x}_k;\tilde{\nu})\|\min\Big\{\delta_k,\; \frac{\|\nabla\Phi_\omega(\tilde{x}_k;\tilde{\nu})\|}{\|A_k\|}\Big\} \ge K_1\frac{\epsilon}{3}\min\Big\{1,\; \frac{\epsilon}{3\kappa_5\delta_{\max}}\Big\}\delta_k.$$
Now, consider the case when $\|\nabla G_kG_k\| > \frac{\epsilon}{3}$; using (18), we have
$$Pred_k \ge \frac{\epsilon}{3}\min\Big\{\frac{\epsilon}{3\delta_{\max}},\; 1\Big\}\delta_k.$$
Taking $K_3 = \min\big\{K_1\frac{\epsilon}{3}\min\{1, \frac{\epsilon}{3\kappa_5\delta_{\max}}\},\; \frac{\epsilon}{3}\min\{\frac{\epsilon}{3\delta_{\max}}, 1\}\big\}$, the result follows. □
Lemma 8.
Assume NA1–NA4 hold. If $\|\nabla\Phi_\omega(\tilde{x}_k;\tilde{\nu})\| + \|\tilde{X}_kY_ke - \omega_ke\| + \|\nabla G(\tilde{x}_k)G(\tilde{x}_k)\| > \epsilon$, then, for some finite $j$, the condition $Ared_{k^j} \ge \beta_1Pred_{k^j}$ will be satisfied. That is, after finitely many trial step computations, we can find an acceptable step.
Proof. 
From inequalities (37) and (47), we have
$$\Big|\frac{Ared_k}{Pred_k} - 1\Big| = \frac{|Ared_k - Pred_k|}{Pred_k} \le \frac{K_2\tilde{\nu}\delta_k^2}{K_3\delta_k} = \frac{K_2\tilde{\nu}\delta_k}{K_3}.$$
That is, if the trial step $d_{k^j}$ is rejected, $\delta_{k^j}$ becomes smaller, and ultimately, after a finite number of trials, the acceptance rule will be met for some finite $j$. Hence, the proof is completed. □
Lemma 9.
Assume NA1–NA4 hold and $\|\nabla\Phi_\omega(\tilde{x}_k;\tilde{\nu})\| + \|\tilde{X}_kY_ke - \omega_ke\| + \|\nabla G(\tilde{x}_k)G(\tilde{x}_k)\| > \epsilon$. If the $j$th trial step of any iteration $k$ satisfies
$$\|d_{k^j}\| \le \frac{(1 - \beta_1)K_3}{2\tilde{\nu}K_2},$$
then the trial step must be accepted.
Proof. 
We prove this lemma by contradiction. Suppose that inequality (48) holds and $d_{k^j}$ is rejected. Using inequalities (37) and (47), we have
$$1 - \beta_1 < \Big|\frac{Ared_{k^j}}{Pred_{k^j}} - 1\Big| = \frac{|Ared_{k^j} - Pred_{k^j}|}{Pred_{k^j}} \le \frac{K_2\tilde{\nu}\|d_{k^j}\|^2}{K_3\|d_{k^j}\|} \le \frac{1 - \beta_1}{2}.$$
This gives a contradiction with the assumption, which completes the proof. □
The main global convergence result for Algorithm 4 is proven in the following theorem.
Theorem 1.
Assume NA1–NA4 hold. Then, $\{\tilde{x}_k\}_{k \ge 0}$ satisfies
$$\liminf_{k \to \infty}\Big(\|\nabla\psi_\omega(\tilde{x}_k)\| + \|\tilde{X}_kY_ke - \omega_ke\| + \|\nabla G_kG_k\|\Big) = 0.$$
Proof. 
Firstly, we show that
$$\liminf_{k \to \infty}\Big(\|\nabla\Phi_\omega(\tilde{x}_k;\tilde{\nu})\| + \|\tilde{X}_kY_ke - \omega_ke\| + \|\nabla G_kG_k\|\Big) = 0.$$
We prove (50) by contradiction. Suppose that, for all $k$,
$\|\nabla\Phi_\omega(\tilde{x}_k;\tilde{\nu})\| + \|\tilde{X}_kY_ke - \omega_ke\| + \|\nabla G_kG_k\| > \epsilon$. Let $k \ge \tilde{k}$ and $k^j \ge \bar{k}$, where $j$ indexes the trial steps of iteration $k$. From Lemma 7, we have
$$\Phi^\omega_{k^j} - \Phi^\omega_{k^j+1} = Ared_{k^j} \ge \beta_1Pred_{k^j} \ge \beta_1K_3\delta_{k^j}$$
for any acceptable step indexed by $k^j$. Since the merit function is bounded below on $\Omega$ under the required assumptions, summing this inequality over the acceptable steps shows that, as $k \to \infty$,
$$\lim_{k \to \infty}\delta_{k^j} = 0.$$
This means that the radius $\delta_{k^j}$ is not bounded below. We will consider two cases:
(i)
If $j = 1$, then $\delta_{k^1} \ge \delta_{\min}$. Hence, in this case, $\delta_{k^j}$ is bounded below.
(ii)
If $j > 1$, then there exists at least one rejected trial step. Hence, at the rejected trial steps $d_{k^i}$, $i = 1, 2, \dots, j-1$, using Lemma 9, we have
$$\|d_{k^i}\| > \frac{(1 - \beta_1)K_3}{2\tilde{\nu}K_2}.$$
However, since $d_{k^{j-1}}$ is rejected, we have
$$\delta_{k^j} = \alpha_1\|d_{k^{j-1}}\| > \alpha_1\frac{(1 - \beta_1)K_3}{2\tilde{\nu}K_2};$$
see Algorithm 2. Hence, $\delta_{k^j}$ is bounded below. This contradicts (52). That is, the supposition is wrong and (50) holds. This also implies that (49) holds, and the proof of this theorem is completed. Hence, the PBTR algorithm terminates because
$$\|\nabla\psi_\omega(\tilde{x}_k)\| + \|\tilde{X}_kY_ke - \omega_ke\| + \|\nabla G_kG_k\| < \epsilon$$
is eventually satisfied. □

5. Numerical Results

In order to validate the effectiveness of the proposed algorithm, it was applied to some benchmark problems [37] and an engineering application.

5.1. Benchmark Test Problems

Benchmark problems from [37] are used to show the effectiveness of the PBTR algorithm. For comparison, we include the results of the PBTR algorithm against the corresponding numerical results in [38,39]. This is summarized in Table 1, where Niter refers to the number of iterations.
The numerical experiments show that the PBTR algorithm and the algorithms in [38,39] have a similar cost per iteration, so it is sufficient to compare them by only counting iterations.
The algorithm has the ability to locate the optimal solution from either feasible or infeasible initial reference points. For all problems, these algorithms achieved the same optimal solutions as reported in [37].
Figure 1 shows the number of iterations required for each problem with the different methods. From Figure 2, it is clear that the PBTR algorithm has an average iteration number lower than those reported in [38,39]. We used the logarithmic performance profiles proposed in [40], and our profiles are based on Niter. We observe that the PBTR algorithm performs quite well compared with the algorithms of [38,39]. The performance profile in terms of Niter is given in Figure 3 and shows an advantage for the PBTR algorithm over the algorithms of [38,39].
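For readers who want to reproduce such a comparison, a Dolan–Moré profile [40] can be computed from a matrix of iteration counts as in the following generic sketch (our code, not tied to the paper's data).

```python
import numpy as np
import matplotlib.pyplot as plt

def performance_profile(niter, labels):
    """Dolan-More performance profile from an (n_problems, n_solvers) array
    of iteration counts; plots the fraction of problems each solver solves
    within a factor tau of the best solver, on a log2 ratio axis."""
    ratios = niter / niter.min(axis=1, keepdims=True)   # r_{p,s}
    taus = np.unique(ratios)
    for s, name in enumerate(labels):
        rho = [(ratios[:, s] <= t).mean() for t in taus]
        plt.step(np.log2(taus), rho, where="post", label=name)
    plt.xlabel("log2(tau)")
    plt.ylabel("fraction of problems")
    plt.legend()
    plt.show()
```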

5.2. Engineering Application

The PBTR algorithm is also applied to an engineering application: the optimal design of a canal section with minimum water loss for a triangular cross-section. Much of the water used by mankind goes to irrigation (see Figure 4). Numerous irrigation methods have been applied over the years, but water has continuously been conveyed and distributed through canals. Today, the growing world population and the consequent demand for development have obliged researchers to design better canal networks that carry much more water while preserving its quality. The unpredictable nature of canals may cause washouts of the canals conveying water during periods of high flow, which may lead to the overall failure of many surface-water supply systems. Thus, the loss of water from irrigation canals must be minimized. More than half of the water supplied at the head of a canal is lost to seepage and evaporation by the time the water reaches the field [41]. Seepage is the most important portion of the total water loss; a noteworthy portion also comes from evaporation. A proper lining may stop this seepage loss, but the change of the lining over time makes finding the correct lining a difficult problem to solve.
Indeed, although the evaporation loss changes with time (winter or summer) and the concrete lining conditions (cracks, etc.) influence the seepage loss, both can be estimated under certain conditions. In this manner, the design of a canal cross-section ought to be optimized by minimizing the seepage and evaporation losses over time. This study aims to minimize the water losses of the triangular cross-section in the canal section problem. This application has been studied by many authors; see [42,43,44,45,46].
Problem formulation. The optimal canal section problem for the triangular cross-section is formulated as follows
$$\begin{array}{ll}
\text{minimize} & F = 10^{-6}\,x_2\big[(4\pi - \pi^2)^{0.77} + (2x_1)^{1.5}\big]^{0.77} + 5\times 10^{-6}\,x_1x_2 \\[1mm]
\text{subject to} & 1.737\,\dfrac{x_1^{1.5}x_2^{2.5}}{(1+x_1^2)^{0.25}}\,\ln\!\left(\dfrac{x_4(1+x_1^2)^{0.5}}{6x_1x_2} + \dfrac{0.625\,x_3(1+x_1^2)^{0.75}}{(x_1x_2)^{1.5}}\right) = 1, \\[1mm]
& 10^{-7} \le x_3 \le 10^{-5}, \qquad 10^{-6} \le x_4 \le 10^{-3},
\end{array}$$
where $F$ is the water loss of the triangular cross-section, $x_1$ is the side slope of the canal (m), and $x_2$ is the normal depth of the canal (m). The equality constraint is called the resistance equation, and it is the main constraint of this model. The variables $x_3$ and $x_4$ represent the average velocity (m/s) and the average roughness height of the canal lining (m). The ranges of the variables $x_3$ and $x_4$ are reported in [41].
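Since the display above had to be reconstructed from a damaged source, the exponents and signs should be checked against [41,46] before reuse. With that caveat, the following sketch (our code) encodes one reading of the objective and the resistance-equation residual, in the form in which the model would be passed to a solver such as PBTR.

```python
import numpy as np

def canal_objective(x):
    """Water-loss objective F(x1, x2) for the triangular section, as we read
    the (partially garbled) display above; verify against [46] before use."""
    x1, x2 = x[0], x[1]
    seepage = 1e-6 * x2 * ((4 * np.pi - np.pi ** 2) ** 0.77 + (2 * x1) ** 1.5) ** 0.77
    evaporation = 5e-6 * x1 * x2
    return seepage + evaporation

def resistance_residual(x):
    """Equality (resistance-equation) constraint residual, same caveat as above."""
    x1, x2, x3, x4 = x
    arg = (x4 * (1 + x1 ** 2) ** 0.5 / (6 * x1 * x2)
           + 0.625 * x3 * (1 + x1 ** 2) ** 0.75 / (x1 * x2) ** 1.5)
    return 1.737 * x1 ** 1.5 * x2 ** 2.5 / (1 + x1 ** 2) ** 0.25 * np.log(arg) - 1.0
```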
The minimum water loss of the triangular section is calculated using the PBTR algorithm, and the results are given in the last column of Table 2. The results of our approach are compared with the optimum sizes of the channels found in [44,46]. Two approaches are used in [44] to solve this problem: the genetic algorithm (GA) and the sequential quadratic programming technique (SQP). From the results, it is clear that our algorithm and the algorithms of [44,45,46] have a similar cost per iteration, and it is sufficient to compare them by only counting iterations. On this basis, the PBTR algorithm is much faster than the previous algorithms, and it generally gives better results than previous studies.
The numerical results of the PBTR algorithm were obtained on a laptop with 8 GB RAM, an Intel Core i7-2670QM CPU at 2.2 GHz, and an NVIDIA GeForce GT graphics card. The PBTR algorithm was run using MATLAB R2013a (8.2.0.701), 64-bit (win64).
Given a starting point $x_0 > 0$, the following parameter setting is used: $\delta_{\min} = 10^{-3}$, $\delta_0 = \max\{\|d_0^{cp}\|, \delta_{\min}\}$, $\delta_{\max} = 10^3\delta_0$, $\beta_1 = 10^{-4}$, $\beta_2 = 0.75$, $\alpha_1 = 0.5$, $\alpha_2 = 2$, $\epsilon = 10^{-7}$, $\varepsilon_0 = 10^{-8}$, and $\varepsilon_1 = 10^{-10}$.

6. Concluding Remarks

We described the penalty–barrier trust-region (PBTR) algorithm for solving the nonlinear programming problem. We used the penalty method with the barrier method to transform the CNP problem into an unconstrained optimization problem. Newton's method was applied to the barrier Karush–Kuhn–Tucker conditions. A drawback of Newton's method is that it may not converge at all if the starting point is far away from the solution. The trust-region approach is a very successful approach for ensuring global convergence from any starting point. The trust-region technique induces strong global convergence, is a very important technique for solving optimization problems, and is more robust when dealing with rounding errors. A global convergence theory of the PBTR algorithm was proven under four required assumptions.
The PBTR algorithm was tested on some of the problems listed in [37]. A preliminary numerical experiment on the PBTR algorithm was introduced, and its performance was reported. The numerical results show that our approach is of value and merits further investigation.
The PBTR algorithm was additionally applied to the optimal design of a canal section with minimum water loss for a triangular cross-section. This study examined the effectiveness of our algorithm in the design of minimum water-loss canal sections. Design problems were solved for the triangular canal section using our algorithm, which gives reliable results in less time. This algorithm should also be applied to other versions of the open-channel problem (having different geometries, such as parabolic sections, or considering cost as an objective).
There are many questions that should be answered for future work:
  • Updating the PBTR algorithm to treat nondifferentiable nonlinear programming problems.
  • Updating the PBTR algorithm by using a nonmonotone mechanism.

Author Contributions

Conceptualization, B.E.-S., M.A.E.-S. and Y.A.-E.; Methodology, B.E.-S., M.A.E.-S., Y.A.-E.; Investigation, B.E.-S., M.A.E.-S., Y.A.-E. and A.A.A.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors would like to express their sincere gratitude to the Deanship of Scientific Research in Taif University for funding Taif University Researcher Supporting Project number (TURSP-2020/48), Taif University, Taif, Saudi Arabia.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Karmarkar, N. A new polynomial-time algorithm for linear programming. Combinatorica 1984, 4, 373–395. [Google Scholar] [CrossRef]
  2. Navidi, H.; Malek, A.; Khosravi, P. Efficient hybrid algorithm for solving large scale constrained linear programming problems. J. Appl. Sci. 2009, 9, 3402–3406. [Google Scholar] [CrossRef] [Green Version]
  3. Hamdy, A.T. Operations Research; Macmillan: New York, NY, USA, 1992. [Google Scholar]
  4. Karp, R.M. George Dantzig’s impact on the theory of computation. Discret. Optim. 2008, 5, 174–185. [Google Scholar] [CrossRef] [Green Version]
  5. Roos, C.; Terlaky, T.; Vial, J. Theory and Algorithms for Linear Optimization: An Interior Point Approach; John Wiley & Sons: Hoboken, NJ, USA, 1997. [Google Scholar]
  6. Schrijver, A. Theory of Linear and Integer Programming; John Wiley & Sons: New York, NY, USA, 1986. [Google Scholar]
  7. Malek, A.; Naseri, R. A new fast algorithm based on Karmarkar’s gradient projected method for solving linear programming problem. Adv. Model. Optim. 2004, 6, 43–51. [Google Scholar]
  8. Ye, Y.; Tse, E. An extension of Karmarkar’s projective algorithm for convex quadratic programming. Math. Program. 1989, 44, 157–179. [Google Scholar] [CrossRef]
  9. Kebbiche, Z.; Benterki, D. A weighted path-following method for linearly constrained convex programming. Revue Roum. Math. Pures Appl. 2012, 57, 245–256. [Google Scholar]
  10. Fei, P.; Wang, Y. A primal infeasible interior point algorithm for linearly constrained convex programming. Control Cybern. 2009, 38, 687–704. [Google Scholar]
  11. Kebbiche, Z.; Keraghel, A.; Yassine, A. Extension of a projective interior pointmethod for linearly constrained convex programming. Appl. Math. Comput. 2007, 193, 553–559. [Google Scholar]
  12. Herskovits, J.; Filho, M.T.; Leontiev, A. An interior point technique for solving bilevel programming problems. Optim. Eng. 2013, 14, 381–394. [Google Scholar] [CrossRef] [Green Version]
  13. Zhang, J.; Oueslati, A.; Shen, W.; De Saxcé, G.; Nguyen, A.; Zhu, Q.; Shao, J. Shakedown analysis of a hollow sphere by interior-point method with non-linear optimization. Int. J. Mech. Sci. 2020, 175, 105515. [Google Scholar] [CrossRef]
  14. Schmidt, M. An interior-point method for nonlinear optimization problems with locatable and separable nonsmoothness. EURO J. Comput. Optim. 2015, 3, 309–348. [Google Scholar] [CrossRef]
  15. Tahmasebzadeh, S.; Navidi, H.; Malek, A. Novel interior point algorithms for solving nonlinear convex optimization problems. Adv. Oper. Res. 2015, 2015, 487271. [Google Scholar] [CrossRef] [Green Version]
  16. Scheunemann, L.; Nigro, P.; Schröder, J.; Pimenta, P. A novel algorithm for rate independent small strain crystal plasticity based on the infeasible primal-dual interior point method. Int. J. Plast. 2020, 124, 1–19. [Google Scholar] [CrossRef]
  17. Esmaeili, H.; Kimiaei, M. An efficient implementation of a trust region method for box constrained optimization. J. Appl. Math. Comput. 2015, 48, 495–517. [Google Scholar] [CrossRef]
  18. El-Sobky, B. A multiplier active-set trust-region algorithm for solving constrained optimization problem. Appl. Math. Comput. 2012, 219, 928–946. [Google Scholar] [CrossRef]
  19. El-Sobky, B. An interior-point penalty active-set trust-region algorithm. J. Egypt. Math. Soc. 2016, 24, 672–680. [Google Scholar] [CrossRef] [Green Version]
  20. El-Sobky, B. An active-set interior-point trust-region algorithm. Pac. J. Optim. 2018, 14, 125–159. [Google Scholar] [CrossRef] [Green Version]
  21. El-Sobky, B.; Abouel-Naga, Y. Multi-objective optimal load flow problem with interior-point trust-region strategy. Electr. Power Syst. Res. 2017, 148, 127–135. [Google Scholar] [CrossRef]
  22. El-Sobky, B.; Abouel-Naga, Y. A penalty method with trust-region mechanism for nonlinear bilevel optimization problem. J. Comput. Appl. Math. 2018, 340, 360–374. [Google Scholar] [CrossRef]
  23. Kimiaei, M. A new class of nonmonotone adaptive trust-region methods for nonlinear equations with box constrained. Calcolo 2017, 54, 769–812. [Google Scholar] [CrossRef]
  24. Yuan, L.N.A.Y.; Niu, L.; Yuan, Y. A new trust-region algorithm for nonlinear constrained optimization. J. Comput. Math. 2010, 28, 72–86. [Google Scholar] [CrossRef]
  25. Wang, X.; Yuan, Y.-X. A trust region method based on a new affine scaling technique for simple bounded optimization. Optim. Methods Softw. 2013, 28, 871–888. [Google Scholar] [CrossRef]
  26. Wang, X.; Yuan, Y. An augmented Lagrangian trust region method for equality constrained optimization. Optim. Methods Softw. 2014, 30, 559–582. [Google Scholar] [CrossRef]
  27. Kouri, D.; Heinkenschloss, M.; Ridzal, D.; van Bloemen Waanders, B.G. Inexact objective function evaluations in a trust-region algorithm for PDE-constrained optimization under uncertainty. SIAM J. Sci. Comput. 2014, 36, A3011–A3029. [Google Scholar] [CrossRef]
  28. Bazaraa, M.S.; Sherali, H.D.; Shetty, C.M. Nonlinear Programming: Theory and Algorithms; John Wiley & Sons: Hoboken, NJ, USA, 2013; pp. 390–410. [Google Scholar]
  29. Byrd, R.H.; Gilbert, J.C.; Nocedal, J. A trust region method based on interior point techniques for nonlinear programming. Math. Program. 2000, 89, 149–185. [Google Scholar] [CrossRef] [Green Version]
  30. Kouri, D.; Heinkenschloss, M.; Ridzal, D.; van Bloemen Waanders, B.G. A trust-region algorithm with adaptive stochastic collocation for PDE optimization under uncertainty. SIAM J. Sci. Comput. 2020, 35, 1847–1879. [Google Scholar] [CrossRef] [Green Version]
  31. El-Bakry, A.S.; Tapia, R.A.; Tsuchiya, T.; Zhang, Y. On the formulation and theory of the Newton interior-point method for nonlinear programming. J. Optim. Theory Appl. 1996, 89, 507–541. [Google Scholar] [CrossRef]
  32. Ulbrich, M.; Ulbrich, S. A globally convergent primal-dual interior-point filter method for nonlinear programming. Math. Program. 2003, 100, 379–410. [Google Scholar] [CrossRef] [Green Version]
  33. Yamashita, H. A globally convergent primal-dual interior point method for constrained optimization. Optim. Methods Softw. 1998, 10, 443–469. [Google Scholar] [CrossRef]
  34. Fiacco, A.; McCormick, G. Nonlinear Programming: Sequential Unconstrained Minimization Techniques; John Wiley & Sons: Hoboken, NJ, USA, 1968. [Google Scholar]
  35. Steihaug, T. The conjugate gradient method and trust regions in large scale optimization. SIAM J. Numer. Anal. 1983, 20, 626–637. [Google Scholar] [CrossRef] [Green Version]
  36. Yuan, Y.-X. On the convergence of a new trust region algorithm. Numer. Math. 1995, 70, 515–539. [Google Scholar] [CrossRef]
  37. Hock, W.; Schittkowski, K. Test Examples for Nonlinear Programming Codes; Springer Science and Business Media LLC: Berlin, Germany, 1981; Volume 187. [Google Scholar]
  38. Tits, A.L.; Wächter, A.; Bakhtiari, S.; Urban, T.J.; Lawrence, C.T. A primal-dual interior-point method for nonlinear programming with strong global and local convergence properties. SIAM J. Optim. 2003, 14, 173–199. [Google Scholar] [CrossRef] [Green Version]
  39. Nocedal, J.; Öztoprak, F.; Waltz, R.A. An interior point method for nonlinear programming with infeasibility detection capabilities. Optim. Methods Softw. 2013, 29, 837–854. [Google Scholar] [CrossRef]
  40. Dolan, E.D.; Moré, J.J. Benchmarking optimization software with performance profiles. Math. Program. 2002, 91, 201–213. [Google Scholar] [CrossRef]
  41. Swamee, P.K.; Mishra, G.C.; Chahar, B.R. Design of minimum seepage loss canal sections. J. Irrig. Drain. Eng. 2000, 126, 28–32. [Google Scholar] [CrossRef]
  42. Bhattacharjya, R.K. Optimal design of open channel section incorporating critical flow condition. J. Irrig. Drain. Eng. 2006, 132, 513–518. [Google Scholar] [CrossRef]
  43. Chahar, B.R. Optimal design of a special class of curvilinear bottomed channel section. J. Hydraul. Eng. 2007, 133, 571–576. [Google Scholar] [CrossRef] [Green Version]
  44. Kentli, A.; Mercan, O. Application of different algorithms to optimal design of canal sections. J. Appl. Res. Technol. 2014, 12, 762–768. [Google Scholar] [CrossRef]
  45. Kacimov, A. Seepage Optimization for Trapezoidal Channel. J. Irrig. Drain. Eng. 1992, 118, 520–526. [Google Scholar] [CrossRef]
  46. Swamee, P.K.; Mishra, G.C.; Chahar, B.R. Design of minimum water-loss canal sections. J. Hydraul. Res. 2002, 40, 215–220. [Google Scholar] [CrossRef]
Figure 1. Comparison of methods in [38,39] with the PBTR algorithm.
Figure 2. Average number of iterations.
Figure 3. Performance profile based on Niter of methods in [38,39] and the PBTR algorithm.
Figure 4. Triangular canal section.
Table 1. Comparison of methods in [38,39] with the PBTR algorithm (Niter = number of iterations).

| Problem | Name | No. of Variables | Feasibility of Initial Solution | Niter: Method [38] | Niter: Method [39] | Niter: PBTR |
|---|---|---|---|---|---|---|
| P1 | hs006 | 2 | Infeasible | 7 | 5 | 4 |
| P2 | hs007 | 2 | Infeasible | 9 | 8 | 7 |
| P3 | hs008 | 2 | Infeasible | 14 | 6 | 8 |
| P4 | hs009 | 2 | Feasible | 10 | 6 | 7 |
| P5 | hs012 | 2 | Feasible | 5 | 7 | 4 |
| P6 | hs024 | 2 | Feasible | 14 | 4 | 9 |
| P7 | hs026 | 3 | Feasible | 19 | 19 | 14 |
| P8 | hs027 | 3 | Infeasible | 14 | 18 | 12 |
| P9 | hs028 | 3 | Feasible | 6 | 2 | 3 |
| P10 | hs029 | 3 | Feasible | 8 | 6 | 9 |
| P11 | hs030 | 3 | Feasible | 7 | 6 | 8 |
| P12 | hs032 | 3 | Feasible | 24 | 5 | 6 |
| P13 | hs033 | 3 | Feasible | 29 | 6 | 8 |
| P14 | hs034 | 3 | Feasible | 30 | 5 | 7 |
| P15 | hs036 | 3 | Feasible | 10 | 7 | 9 |
| P16 | hs037 | 3 | Feasible | 7 | 6 | 6 |
| P17 | hs039 | 4 | Infeasible | 19 | 23 | 5 |
| P18 | hs040 | 4 | Infeasible | 4 | 3 | 6 |
| P19 | hs042 | 4 | Infeasible | 6 | 3 | 7 |
| P20 | hs043 | 4 | Feasible | 9 | 7 | 6 |
| P21 | hs046 | 4 | Feasible | 25 | 10 | 10 |
| P22 | hs047 | 5 | Feasible | 25 | 17 | 12 |
| P23 | hs048 | 5 | Feasible | 6 | 2 | 3 |
| P24 | hs049 | 5 | Feasible | 69 | 16 | 10 |
| P25 | hs050 | 5 | Feasible | 11 | 8 | 6 |
| P26 | hs051 | 5 | Feasible | 8 | 2 | 3 |
| P27 | hs052 | 5 | Infeasible | 4 | 2 | 3 |
| P28 | hs053 | 5 | Infeasible | 5 | 4 | 4 |
| P29 | hs056 | 7 | Feasible | 12 | 5 | 4 |
| P30 | hs060 | 3 | Infeasible | 7 | 7 | 5 |
| P31 | hs061 | 3 | Infeasible | 44 | 7 | 8 |
| P32 | hs063 | 3 | Infeasible | 5 | 5 | 4 |
| P33 | hs073 | 4 | Infeasible | 16 | 7 | 8 |
| P34 | hs078 | 5 | Infeasible | 4 | 4 | 5 |
| P35 | hs079 | 5 | Infeasible | 7 | 4 | 4 |
| P36 | hs080 | 5 | Infeasible | 6 | 5 | 6 |
| P37 | hs081 | 5 | Infeasible | 9 | 6 | 7 |
| P38 | hs093 | 6 | Feasible | 12 | 6 | 5 |
P38hs0936feasible1265
Table 2. Comparison of methods in [44,46] with the PBTR algorithm.

|  | Method (GA) [44] | Method (SQP) [44] | Method [46] | Algorithm (PBTR) |
|---|---|---|---|---|
| Best F | 1.699 × 10−5 | 1.699 × 10−5 | 1.885 × 10−5 | 1.643 × 10−5 |
| $x_1$ | 0.622 | 0.617 | 0.520 | 0.606 |
| $x_2$ | 2.885 | 2.899 | 3.203 | 2.976 |
| $x_3$ | 1.932 | 1.928 | 1.874 | 1.918 |
| max/min constraint | 8.49 × 10−8 | 8.53 × 10−8 | 8.86 × 10−8 | 8.23 × 10−8 |
| No. of iterations | 175 | 27 | 64 | 12 |
