Article

The Alternating Direction Search Pattern Method for Solving Constrained Nonlinear Optimization Problems

School of Mathematics and Statistics, Henan University of Science and Technology, Luoyang 471023, China
* Author to whom correspondence should be addressed.
Mathematics 2023, 11(8), 1863; https://doi.org/10.3390/math11081863
Submission received: 6 March 2023 / Revised: 10 April 2023 / Accepted: 12 April 2023 / Published: 14 April 2023

Abstract

We adopt the alternating direction search pattern method to solve equality and inequality constrained nonlinear optimization problems. Firstly, a new augmented Lagrangian function with a nonlinear complementarity function is proposed to transform the original constrained problem into a new unconstrained problem. Under appropriate conditions, it is proven that the local and global optimal solutions of the new unconstrained problem correspond one-to-one with those of the original constrained problem, so the optimal solution of the original problem can be obtained by solving the new unconstrained optimization problem. Furthermore, based on the characteristics of the new problem, an alternating direction search pattern method is designed and its convergence is proven. Numerical experiments illustrate the effectiveness of the new augmented Lagrangian function and the algorithm.

1. Introduction

With the coming of the big data era, the demand for optimization in resource allocation, transportation, production management, marketing planning, engineering design, and other areas is constantly increasing. As one of the most basic models, constrained nonlinear optimization problems are widely used in many fields, such as finance [1], power grids [2], dynamics [3,4], logistics management [5], portfolio selection [6], etc. There are many ways to solve constrained nonlinear optimization problems, including the filter method [7], the penalty function method [8,9], and so on. The alternating direction method, in particular, has been continuously developed in recent years [10,11]; for example, Ref. [10] proposes a new accelerated proximal ADMM algorithm.
The augmented Lagrangian approach adds the constraints to the objective function as penalty terms to form a new optimization problem; it was introduced by Di Pillo and Grippo, see [12,13,14]. Furthermore, Pu [15,16] proposed a new class of augmented Lagrangian functions and methods with piecewise nonlinear complementarity (NCP) functions for constrained nonlinear programming. A class of new augmented Lagrangian functions and methods with Fischer–Burmeister NCP functions was presented in [17]. A globally convergent generalized Newton method for solving nonsmooth equations with Lipschitz continuous but nondifferentiable functions was presented in [18]. A class of new Lagrangian multiplier methods with NCP functions was presented in [19] for solving nonlinear programming problems with equality and inequality constraints. Reference [20] developed and analyzed several new constructions of nonlinear complementarity functions. Reference [21] provided a new class of complementarity functions for regularization problems, which use smooth complementarity functions to transform the complementarity problem into a nonlinear system of equations. In [22,23], a class of new augmented Lagrangian functions with NCP functions was presented for constrained nonlinear programming.
We can use the Lagrange multiplier method to solve constrained nonlinear optimization problems. The following constrained nonlinear optimization problem (P) is studied in this paper:
$$\min f(x)\quad\mathrm{s.t.}\quad c(x)=0,\ c:\mathbb{R}^n\to\mathbb{R}^p,\qquad s(x)\le 0,\ s:\mathbb{R}^n\to\mathbb{R}^m,$$
where $c(x)=(c_1(x),c_2(x),\ldots,c_p(x))^T$ and $s(x)=(s_1(x),s_2(x),\ldots,s_m(x))^T$ are twice continuously differentiable functions.
The Lagrangian function of problem (P) is defined as follows:
$$L(x,\omega,\lambda)=f(x)+\omega^Tc(x)+\lambda^Ts(x),$$
where $\omega=(\omega_1,\omega_2,\ldots,\omega_p)^T\in\mathbb{R}^p$ and $\lambda=(\lambda_1,\lambda_2,\ldots,\lambda_m)^T\in\mathbb{R}^m$ are the multiplier vectors. For convenience, $(x,\omega,\lambda)$ denotes the column vector $(x^T,\omega^T,\lambda^T)^T$.
A Karush–Kuhn–Tucker (KKT) point $(\bar{x},\bar{\omega},\bar{\lambda})\in\mathbb{R}^n\times\mathbb{R}^p\times\mathbb{R}^m$ satisfies the following conditions:
$$\nabla_xL(\bar{x},\bar{\omega},\bar{\lambda})=0,\quad c(\bar{x})=0,\quad s(\bar{x})\le0,\quad\bar{\lambda}\ge0,\quad\bar{\lambda}_js_j(\bar{x})=0,\ 1\le j\le m.$$
These conditions are called the first-order necessary optimality conditions for problem (P).
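To make these conditions concrete, the following minimal Python/NumPy sketch checks the first-order KKT conditions numerically at a candidate point. It is an illustration of the definition only, not part of the authors' method; the user-supplied callables `grad_f`, `jac_c`, `jac_s`, `c`, and `s` are hypothetical names.

```python
import numpy as np

def kkt_residuals(grad_f, jac_c, jac_s, c, s, x, w, lam, tol=1e-8):
    """Numerically check the first-order KKT conditions of problem (P).

    grad_f(x): gradient of f (length n); jac_c(x), jac_s(x): Jacobians of c
    and s (shapes p x n and m x n); c(x), s(x): constraint values.
    """
    # stationarity: grad_x L = grad f(x) + jac_c(x)^T w + jac_s(x)^T lam
    grad_L = grad_f(x) + jac_c(x).T @ w + jac_s(x).T @ lam
    cx, sx = c(x), s(x)
    return {
        "stationarity":      np.linalg.norm(grad_L) <= tol,
        "primal_equality":   np.linalg.norm(cx) <= tol,       # c(x) = 0
        "primal_inequality": np.all(sx <= tol),                # s(x) <= 0
        "dual_feasibility":  np.all(lam >= -tol),              # lam >= 0
        "complementarity":   np.all(np.abs(lam * sx) <= tol),  # lam_j * s_j(x) = 0
    }
```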
We also define a class of new augmented Lagrangian functions with NCP functions for problem (P). The difference between the new function and previous ones lies in the parameters: previous studies used the same parameter in the penalty terms for the equality and inequality constraints, whereas we now add two penalty terms with different parameters, which improves the efficiency of the algorithm. We then prove a one-to-one correspondence between the local and global optimal solutions of the new unconstrained problem and those of the original constrained problem. In particular, considering the characteristics of the new problem, an alternating direction search pattern method is designed and its convergence is proven, and numerical simulations are performed with the new augmented Lagrangian function and algorithm.

2. Constructing the New Augmented Lagrangian Function

In order to construct the new augmented Lagrangian function, some necessary preparations and assumptions are given in this section.
An NCP function is a function $\psi$ that satisfies the following condition:
$$\psi(a,b)=0\iff a\ge0,\ b\ge0,\ ab=0.$$
Suppose the point $(x,\omega,\lambda)$ satisfies the following conditions:
  • The first-order KKT conditions are satisfied;
  • For any $d\in P(x)=\{d\mid d^T\nabla c_i(x)=0,\ i=1,2,\ldots,p;\ d^T\nabla s_j(x)\le0,\ j\in J(x)\}$ with $d\ne0$, it holds that $d^T\nabla^2_{xx}L(x,\omega,\lambda)d>0$, where $J(x)=\{j\mid j=1,2,\ldots,m,\ s_j(x)=0\}$.
Then, $(x,\omega,\lambda)$ is said to satisfy the strong second-order sufficiency condition [20].
In this paper, the NCP function $\psi(a,b)=(a+b)\sqrt{a^2+b^2}-a^2-b^2$ is used; it is continuously differentiable everywhere except at the origin, where it is strongly semismooth, see [19,20,21].
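As a quick illustration, the following sketch implements this NCP function as reconstructed above and verifies the defining property at a few points; the function name `ncp_psi` is purely illustrative.

```python
import numpy as np

def ncp_psi(a, b):
    """NCP function used in this paper (as reconstructed here):
    psi(a, b) = (a + b) * sqrt(a^2 + b^2) - (a^2 + b^2),
    which vanishes exactly when a >= 0, b >= 0 and a * b = 0."""
    r = np.hypot(a, b)            # sqrt(a^2 + b^2), works componentwise
    return (a + b) * r - r**2

# sanity checks of the complementarity property
assert abs(ncp_psi(0.0, 3.0)) < 1e-12    # a = 0, b >= 0  -> psi = 0
assert abs(ncp_psi(2.0, 0.0)) < 1e-12    # b = 0, a >= 0  -> psi = 0
assert ncp_psi(1.0, 1.0) != 0.0          # a * b != 0     -> psi != 0
assert ncp_psi(-1.0, 0.0) != 0.0         # a < 0          -> psi != 0
```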
Denote
$$\psi_j(x,\lambda_j,\gamma)=\psi\big(-\gamma s_j(x),\lambda_j\big),\quad 1\le j\le m,$$
where γ > 0 is a parameter.
Then $\psi_j(x,\lambda_j,\gamma)=0$ if and only if $s_j(x)\le0$, $\lambda_j\ge0$, and $\lambda_js_j(x)=0$, for any $\gamma>0$. Denote $\Psi(x,\lambda,\gamma)=\big(\psi_1(x,\lambda_1,\gamma),\ldots,\psi_m(x,\lambda_m,\gamma)\big)^T$.
For problem (P), we define the new augmented Lagrangian function F as follows:
$$F(x,\omega,\lambda,\gamma,\rho)\overset{\mathrm{def}}{=}f(x)+\omega^Tc(x)+\frac{\rho}{2}\|c(x)\|^2+\frac{1}{2\gamma^4}\Big(\big\|\Psi(x,\lambda,\gamma)+\lambda^2\big\|^2-\big\|\lambda^2\big\|^2\Big)+\frac12\big\|\nabla_xL(x,\omega,\lambda)\big\|^2,$$
where $\rho>0$ is the other parameter and $\lambda^2$ denotes the vector $(\lambda_1^2,\ldots,\lambda_m^2)^T$.
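For concreteness, the sketch below evaluates F as reconstructed above (including the $1/(2\gamma^4)$ scaling that appears in the gradient formulas later in the paper). It reuses `ncp_psi` from the earlier sketch; all problem callables (`f`, `grad_f`, `c`, `jac_c`, `s`, `jac_s`) are hypothetical, and the whole block is an illustration under those assumptions rather than the authors' implementation.

```python
import numpy as np

def aug_lagrangian_F(f, grad_f, c, jac_c, s, jac_s, x, w, lam, gamma, rho):
    """Evaluate the new augmented Lagrangian F(x, w, lam, gamma, rho).

    rho may be a scalar or a vector of per-constraint penalty parameters;
    lam**2 plays the role of the vector lambda^2 in the text."""
    cx, sx = c(x), s(x)
    grad_L = grad_f(x) + jac_c(x).T @ w + jac_s(x).T @ lam     # grad_x L(x, w, lam)
    psi = ncp_psi(-gamma * sx, lam)                            # Psi(x, lam, gamma)
    lam2 = lam**2
    return (f(x) + w @ cx + 0.5 * np.sum(rho * cx**2)
            + (np.sum((psi + lam2)**2) - np.sum(lam2**2)) / (2.0 * gamma**4)
            + 0.5 * grad_L @ grad_L)
```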
Obviously, the KKT conditions are equivalent to the following system:
$$\Psi(x,\lambda,\gamma)=0,\quad c(x)=0,\quad\nabla_xL(x,\omega,\lambda)=0.$$
Some assumptions are as follows:
A1. $f(x)$, $c_i(x)$, and $s_j(x)$ are twice Lipschitz continuously differentiable. For example, for $f(x)$ this means that there is a positive number $L$ such that, for all points $x$ in a neighborhood of a point $x_0$,
$$|f(x)-f(x_0)|\le L\|x-x_0\|^2;$$
then $f(x)$ is said to satisfy the second-order Lipschitz condition at $x_0$. The same holds for $c_i(x)$ and $s_j(x)$.
A2. At any point with $\nabla F(x,\omega,\lambda,\gamma,\rho)=0$ for any $\gamma>0$ and $\rho>0$, the gradients $\{\nabla c_i(x),\nabla s_j(x)\mid i=1,2,\ldots,p,\ j=1,2,\ldots,m\}$ are linearly independent.
A3. The strict complementarity condition holds at any KKT point of problem (P).
Suppose $(x,\omega,\lambda)$ is a KKT point. The condition $\lambda_js_j(x)=0$, $j=1,\ldots,m$, is called the complementarity condition. If, in addition, $\lambda_j^2+(s_j(x))^2>0$ for each $j$, then the strict complementarity condition holds.
A4. The strong second-order sufficiency condition holds at any KKT point of problem (P).

3. The Equivalence of Local Optimal Solution

To prove the equivalence of locally optimal solutions, we give the following two lemmas.
Lemma 1.
For sufficiently large $\gamma>0$ and $\rho>0$, $\nabla F(x,\omega,\lambda,\gamma,\rho)=0$ holds if and only if $(x,\omega,\lambda)$ is a KKT point of problem (P).
Proof. 
The index sets are denoted as
$$I_0(x,\lambda)=\{j\mid(s_j(x),\lambda_j)=(0,0)\},\qquad I_1(x,\lambda)=\{j\mid(s_j(x),\lambda_j)\ne(0,0)\}.$$
By the definition of $\psi_j(x,\lambda_j,\gamma)$, for any $j\in I_0$ we have $\psi_j(x,\lambda_j,\gamma)=0$, $\psi_j(x,\lambda_j,\gamma)+\lambda_j^2=0$, and $\nabla\big(\psi_j(x,\lambda_j,\gamma)+\lambda_j^2\big)=0$.
For any $j\in I_1$, the partial derivatives of F are:
$$\begin{aligned}\nabla_xF(x,\omega,\lambda,\gamma,\rho)={}&\nabla f(x)+\nabla c(x)\omega+\rho\nabla c(x)c(x)+\frac{1}{2\gamma^4}\nabla_x\Big[\|\Psi(x,\lambda,\gamma)+\lambda^2\|^2-\|\lambda^2\|^2\Big]+\nabla^2_{xx}L(x,\omega,\lambda)\nabla_xL(x,\omega,\lambda)\\={}&\nabla_xL(x,\omega,\lambda)+\rho\nabla c(x)c(x)+\nabla s(x)\Psi(x,\lambda,\gamma)+\nabla^2_{xx}L(x,\omega,\lambda)\nabla_xL(x,\omega,\lambda)+\sum_{j\in I_1}\tau_j,\end{aligned}$$
$$\nabla_\omega F(x,\omega,\lambda,\gamma,\rho)=c(x)+(\nabla c(x))^T\nabla_xL(x,\omega,\lambda),$$
$$\nabla_\lambda F(x,\omega,\lambda,\gamma,\rho)=\frac{1}{2\gamma^4}\nabla_\lambda\Big[\|\Psi(x,\lambda,\gamma)+\lambda^2\|^2-\|\lambda^2\|^2\Big]+(\nabla s(x))^T\nabla_xL(x,\omega,\lambda)=\sum_{j\in I_1}\xi_j+(\nabla s(x))^T\nabla_xL(x,\omega,\lambda),$$
where $\tau_j$ and $\xi_j$ denote the following:
$$\tau_j=\psi_j(x,\lambda_j,\gamma)\nabla s_j(x)-\lambda_j\nabla s_j(x)+\frac{1}{\gamma^3}\big(\psi_j(x,\lambda_j,\gamma)+\lambda_j^2\big)\left[\frac{\gamma s_j(x)(\lambda_j-\gamma s_j(x))}{\sqrt{\lambda_j^2+(\gamma s_j(x))^2}}-\sqrt{\lambda_j^2+(\gamma s_j(x))^2}+2\gamma s_j(x)\right]\nabla s_j(x),$$
$$\xi_j=\frac{1}{2\gamma^4}\left\{\big(\psi_j(x,\lambda_j,\gamma)+\lambda_j^2\big)\left[\frac{\lambda_j(\lambda_j-\gamma s_j(x))}{\sqrt{\lambda_j^2+(\gamma s_j(x))^2}}+\sqrt{\lambda_j^2+(\gamma s_j(x))^2}\right]-4\lambda_j^3\right\}.$$
Suppose $(x,\omega,\lambda)$ is a KKT point; then for any $\gamma>0$,
$$\nabla_xL(x,\omega,\lambda)=0,\quad c(x)=0,\quad s(x)\le0,\quad\lambda\ge0,\quad\lambda_js_j(x)=0,\ j=1,\ldots,m.$$
Combining this with (5), and noting that $\nabla_xL(x,\omega,\lambda)=0$ and $c(x)=0$, we obtain
$$\nabla_xF(x,\omega,\lambda,\gamma,\rho)=\frac{1}{2\gamma^4}\nabla_x\Big[\|\Psi(x,\lambda,\gamma)+\lambda^2\|^2-\|\lambda^2\|^2\Big].$$
Substituting (4) into the above equation, we obtain
$$\frac{1}{2\gamma^4}\nabla_x\Big[\|\Psi(x,\lambda,\gamma)+\lambda^2\|^2-\|\lambda^2\|^2\Big]=0.$$
Therefore, $\nabla_xF(x,\omega,\lambda,\gamma,\rho)=0$.
It can be obtained according to (6),
$$\nabla_\omega F(x,\omega,\lambda,\gamma,\rho)=c(x)+(\nabla c(x))^T\nabla_xL(x,\omega,\lambda)=0.$$
By (7),
$$\nabla_\lambda F(x,\omega,\lambda,\gamma,\rho)=\frac{1}{2\gamma^4}\nabla_\lambda\Big[\|\Psi(x,\lambda,\gamma)+\lambda^2\|^2-\|\lambda^2\|^2\Big]+(\nabla s(x))^T\nabla_xL(x,\omega,\lambda)=0.$$
As a result, $\nabla F(x,\omega,\lambda,\gamma,\rho)=0$.
Conversely, suppose that $\nabla F(x,\omega,\lambda,\gamma,\rho)=0$ for some $\gamma>0$, and that $\psi_j(x,\lambda_j,\gamma)=0$ and $\nabla_xL(x,\omega,\lambda)=0$.
Then, we have
$$s(x)\le0,\quad\lambda\ge0,\quad\lambda_js_j(x)=0,\quad j=1,2,\ldots,m.$$
By (6), it is easy to see that
$$\nabla_\omega F(x,\omega,\lambda,\gamma,\rho)=c(x)+(\nabla c(x))^T\nabla_xL(x,\omega,\lambda)=0.$$
Since $\nabla_xL(x,\omega,\lambda)=0$, we have $c(x)=0$.
Thus, $(x,\omega,\lambda)$ satisfies the KKT conditions of problem (P).
Furthermore, we assume $\nabla F(x,\omega,\lambda,\gamma,\rho)=0$ for a sufficiently large $\gamma>0$.
By A1, $\nabla s_j(x)$ is bounded, and we have
$$\xi_j=\frac{1}{\gamma^4}\left\{2\lambda_j(\gamma s_j(x))^2-3\lambda_j^2(\gamma s_j(x))-(\gamma s_j(x))^3-(\gamma s_j(x))^2\left[\sqrt{\lambda_j^2+(\gamma s_j(x))^2}+\frac{\lambda_j(\lambda_j-\gamma s_j(x))}{\sqrt{\lambda_j^2+(\gamma s_j(x))^2}}\right]\right\},$$
so that, as $\gamma\to\infty$, $\xi_j\to0$.
By (7), we obtain
$$(\nabla s(x))^T\nabla_xL(x,\omega,\lambda)=0.$$
By A2, the gradients $\nabla s_j(x)$ are linearly independent, so
$$\nabla_xL(x,\omega,\lambda)=0,$$
and furthermore
$$\lambda_js_j(x)=0.$$
Otherwise, if $\lambda_js_j(x)>0$, then for any $\gamma>0$ we would have $\tau_j>0$ or $\xi_j<0$, which contradicts $\nabla F(x,\omega,\lambda,\gamma,\rho)=0$.
By (6) and (9), we have
$$c(x)=0.$$
Substituting (9)–(11) into (5), for $j\in I_1$ we obtain
$$\nabla_xF(x,\omega,\lambda,\gamma,\rho)=\frac{1}{\gamma^3}\sum_{j\in I_1}\big(\psi_j(x,\lambda_j,\gamma)+\lambda_j^2\big)\left[\frac{\gamma s_j(x)(\lambda_j-\gamma s_j(x))}{\sqrt{\lambda_j^2+(\gamma s_j(x))^2}}-\sqrt{\lambda_j^2+(\gamma s_j(x))^2}-2\gamma s_j(x)\right]\nabla s_j(x).$$
By A2, the gradients $\nabla s_j(x)$ are linearly independent, and then we obtain
$$\frac{1}{\gamma^3}\big(\psi_j(x,\lambda_j,\gamma)+\lambda_j^2\big)\left[\sqrt{\lambda_j^2+(\gamma s_j(x))^2}+\frac{\gamma s_j(x)(\gamma s_j(x)-\lambda_j)}{\sqrt{\lambda_j^2+(\gamma s_j(x))^2}}+2\gamma s_j(x)\right]=0.$$
If $s_j(x)=0$, then $\psi_j(x,\lambda_j,\gamma)=0$.
If $s_j(x)\ne0$, then $\lambda_j=0$.
For a sufficiently large γ ,
$$|s_j(x)|+\frac{s_j(x)^2}{|s_j(x)|}+2s_j(x)=2\big(|s_j(x)|+s_j(x)\big)=0,$$
then $s_j(x)<0$.
So, it follows from (13) that $\psi_j(x,\lambda_j,\gamma)=0$; that means $s(x)\le0$, $\lambda_j\ge0$, and $\lambda_js_j(x)=0$. In other words, $(x,\omega,\lambda)$ is a KKT point of problem (P).    □
Lemma 2.
For $\rho>0$ and a sufficiently large γ, if $(x^*,\omega^*,\lambda^*)$ is a local minimizer of $F(x,\omega,\lambda,\gamma,\rho)$, then $x^*$ is a local minimizer of problem (P).
The proof of this lemma can be found in [17].

4. The Equivalence of Global Optimal Solution

Lemma 3.
For a sufficiently large γ, suppose $(\bar{x},\bar{\omega},\bar{\lambda})$ is a KKT point of problem (P); then $F(x,\omega,\lambda,\gamma,\rho)$ is strongly convex at $(\bar{x},\bar{\omega},\bar{\lambda})$ when A1–A3 hold.
Proof. 
Suppose ( x ¯ , ω ¯ , λ ¯ ) is a KKT point of problem (P), thus
$$\nabla_xL(\bar{x},\bar{\omega},\bar{\lambda})=0,\quad c_i(\bar{x})=0,\quad\psi_j(\bar{x},\bar{\lambda}_j,\gamma)=0.$$
In other words,
$$\big(\bar{\lambda}_j-\gamma s_j(\bar{x})\big)\sqrt{\bar{\lambda}_j^2+(\gamma s_j(\bar{x}))^2}-(\gamma s_j(\bar{x}))^2=\bar{\lambda}_j^2\ge0.$$
The strict complementarity assumption A3 implies $\bar{\lambda}_j^2+(\gamma s_j(\bar{x}))^2\ne0$. For any $j\in I_1$, the blocks of the Hessian matrix of $F(x,\omega,\lambda,\gamma,\rho)$ at the KKT point $(\bar{x},\bar{\omega},\bar{\lambda})$ are:
$$\nabla^2_{xx}F(\bar{x},\bar{\omega},\bar{\lambda},\gamma,\rho)=\nabla^2_{xx}L(\bar{x},\bar{\omega},\bar{\lambda})+\rho\nabla c(\bar{x})(\nabla c(\bar{x}))^T+\nabla^2_{xx}L(\bar{x},\bar{\omega},\bar{\lambda})\nabla^2_{xx}L(\bar{x},\bar{\omega},\bar{\lambda})+\sum_{j=1}^{m}\big[\lambda_j^2+16(\gamma s_j(\bar{x}))^2+\lambda_j^4\big]\nabla s_j(\bar{x})(\nabla s_j(\bar{x}))^T,$$
$$\nabla^2_{x\omega}F(\bar{x},\bar{\omega},\bar{\lambda},\gamma,\rho)=\nabla c(\bar{x})+\nabla^2_{xx}L(\bar{x},\bar{\omega},\bar{\lambda})\nabla c(\bar{x}),$$
$$\nabla^2_{x\lambda}F(\bar{x},\bar{\omega},\bar{\lambda},\gamma,\rho)=\nabla s(\bar{x})\,\mathrm{diag}(\zeta)+\nabla^2_{xx}L(\bar{x},\bar{\omega},\bar{\lambda})\nabla s(\bar{x}),$$
$$\nabla^2_{\lambda\lambda}F(\bar{x},\bar{\omega},\bar{\lambda},\gamma,\rho)=\mathrm{diag}(\eta)+(\nabla s(\bar{x}))^T\nabla s(\bar{x}),$$
$$\nabla^2_{\omega\omega}F(\bar{x},\bar{\omega},\bar{\lambda},\gamma,\rho)=(\nabla c(\bar{x}))^T\nabla c(\bar{x}),$$
$$\nabla^2_{\lambda\omega}F(\bar{x},\bar{\omega},\bar{\lambda},\gamma,\rho)=(\nabla s(\bar{x}))^T\nabla c(\bar{x}),$$
where ζ and η represent the column vectors with components
$$\zeta_j=\frac{\lambda_j^3}{\sqrt{\lambda_j^2+(\gamma s_j(\bar{x}))^2}},\qquad\eta_j=6\lambda_j^2+\frac{(\gamma s_j(\bar{x}))^4}{\lambda_j^2+(\gamma s_j(\bar{x}))^2}.$$
Therefore,
$$(\triangle x,\triangle\omega,\triangle\lambda)^T\nabla^2F(\bar{x},\bar{\omega},\bar{\lambda},\gamma,\rho)(\triangle x,\triangle\omega,\triangle\lambda)=\triangle x^T\nabla^2_{xx}L(\bar{x},\bar{\omega},\bar{\lambda})\triangle x+\sum_{i=1}^{p}\rho_i\big(\triangle x^T\nabla c_i(\bar{x})\big)^2+\sum_{j=1}^{m}\big[\lambda_j^2+16(\gamma s_j(\bar{x}))^2+\lambda_j^4\big]\big(\triangle x^T\nabla s_j(\bar{x})\big)^2.$$
Moreover,
$$\big(\triangle x^T\nabla c_i(\bar{x})\big)^2\ge0,\qquad\big(\triangle x^T\nabla s_j(\bar{x})\big)^2\ge0,\qquad\lambda_j^2+16(\gamma s_j(\bar{x}))^2+\lambda_j^4\ge0.$$
So,
$$(\triangle x,\triangle\omega,\triangle\lambda)^T\nabla^2F(\bar{x},\bar{\omega},\bar{\lambda},\gamma,\rho)(\triangle x,\triangle\omega,\triangle\lambda)\ge0.$$
The lemma is proved.    □
Lemma 4.
Let $(\bar{x},\bar{\omega},\bar{\lambda})\in X\times P\times M$ be a KKT point of problem (P) in the compact set $X\times P\times M$, and suppose $\bar{x}$ is the unique global minimizer of problem (P) on X. Then $(\bar{x},\bar{\omega},\bar{\lambda})$ is the unique global minimizer of $F(x,\omega,\lambda,\gamma,\rho)$ when A2–A4 hold.
Lemma 4 follows easily from the above lemmas, so its proof is omitted here.

5. Solution Algorithms

Based on the above four lemmas, we can obtain the solution of problem (P) by solving the unconstrained nonlinear optimization problem. Considering the nature of problem (P), we design an algorithm for solving the nonlinear constrained problem using the alternating direction search pattern method. The specific algorithm is shown below.
Algorithm 1: Alternating direction search pattern method
Input: Parameter initialization: $\gamma^0>0$, $\rho^0>0$, $0\le\eta\ll1$, and $0<\theta_1<1<\theta_2$.
Give a starting point $x^0\in\mathbb{R}^n$, $\omega^0=(\omega_1^0,\omega_2^0,\ldots,\omega_p^0)\in\mathbb{R}^p$, and $\lambda^0=(\lambda_1^0,\lambda_2^0,\ldots,\lambda_m^0)\in\mathbb{R}^m$.
Let k = 0 .
Output: Stop until a certain stopping criterion is met.
Step 1: Solve the subproblem.
Use Algorithm 2 to solve $\min_xF(x,\omega^k,\lambda^k,\gamma^k,\rho^k)$ and obtain $x^{k+1}$.
If $\|c(x^{k+1})\|\le\eta$ and $\|\Psi(x^{k+1},\lambda^k,\gamma^k)\|\le\eta$, then stop.
Step 2: If $|c_i(x^{k+1})|\le\theta_1|c_i(x^k)|$ for each i, then $\rho_i^{k+1}=\rho_i^k$; otherwise $\rho_i^{k+1}=\theta_2\rho_i^k$.
If $|\psi_j(x^{k+1},\lambda_j^k,\gamma^k)|\le\theta_1|\psi_j(x^k,\lambda_j^k,\gamma^k)|$ for each j, then $\gamma^{k+1}=\gamma^k$; otherwise $\gamma^{k+1}=\theta_2\gamma^k$.
Step 3: Compute ω k + 1 and λ k + 1 :
$$\omega_i^{k+1}=\omega_i^k+\rho_i^kc_i(x^{k+1}),$$
$$(\lambda_j^{k+1})^2=\big(\lambda_j^k-\gamma^ks_j(x^{k+1})\big)\sqrt{(\gamma^ks_j(x^{k+1}))^2+(\lambda_j^k)^2}-\big(\gamma^ks_j(x^{k+1})\big)^2.$$
Step 4: Set k = k + 1 , go to Step 1.
Algorithm 2 is a sub-algorithm of Algorithm 1.
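The outer loop can be sketched as follows in Python/NumPy. Here `F`, `c`, `s`, and `Psi` are the hypothetical callables from the earlier sketches, `pattern_search` refers to the sketch of Algorithm 2 given after the next listing, and the multiplier update uses the reconstruction of Step 3 above (with small negative values clamped to zero for numerical safety). This is an illustration of the scheme under those assumptions, not the authors' reference implementation.

```python
import numpy as np

def alternating_direction_method(F, c, s, Psi, x0, w0, lam0, gamma0=1.0, rho0=1.0,
                                 eta=1e-5, theta1=0.6, theta2=1.6, max_iter=100):
    """Sketch of Algorithm 1 (alternating direction search pattern method)."""
    x = np.asarray(x0, dtype=float)
    w = np.asarray(w0, dtype=float)
    lam = np.asarray(lam0, dtype=float)
    gamma, rho = gamma0, rho0 * np.ones_like(w)
    for _ in range(max_iter):
        # Step 1: minimize F over x with multipliers and parameters fixed
        x_new = pattern_search(lambda z: F(z, w, lam, gamma, rho), x)
        if (np.linalg.norm(c(x_new)) <= eta
                and np.linalg.norm(Psi(x_new, lam, gamma)) <= eta):
            return x_new, w, lam
        # Step 2: enlarge penalty parameters where the residuals did not
        # shrink by the factor theta1 (keep the old values for Step 3)
        rho_k, gamma_k = rho, gamma
        rho = np.where(np.abs(c(x_new)) > theta1 * np.abs(c(x)), theta2 * rho, rho)
        if np.any(np.abs(Psi(x_new, lam, gamma_k)) > theta1 * np.abs(Psi(x, lam, gamma_k))):
            gamma = theta2 * gamma
        # Step 3: multiplier updates with the previous rho_k and gamma_k
        w = w + rho_k * c(x_new)
        gs = gamma_k * s(x_new)
        lam = np.sqrt(np.maximum((lam - gs) * np.hypot(lam, gs) - gs**2, 0.0))
        x = x_new
    return x, w, lam
```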
Algorithm 2: The search pattern method
Input: Select an initial point $x^0\in\mathbb{R}^n$, $\varepsilon>0$, and set $k=1$.
Output: Stop until a certain stopping criterion is met.
Step 1: Let $x_1^k=x^k$; for $i=1,2,\ldots,n$, compute
$$\alpha_i^k=\arg\min_\alpha F(x_i^k+\alpha e_i,\omega^k,\lambda^k,\gamma^k,\rho^k),$$
$$x_{i+1}^k=x_i^k+\alpha_i^ke_i.$$
Step 2: Set $d^k=x_{n+1}^k-x^k$; if $\|d^k\|\le\varepsilon$, stop. Otherwise, let
$$\alpha^k=\arg\min_\alpha F(x_{n+1}^k+\alpha d^k,\omega^k,\lambda^k,\gamma^k,\rho^k),$$
$$x^{k+1}=x_{n+1}^k+\alpha^kd^k.$$
Let $k:=k+1$ and go to Step 1.
Note: $e_i$ is the $i$-th coordinate direction. Algorithm 2 is also called the pattern search method; its main idea is to search along each of the n coordinate directions in turn to obtain a new point, which defines a pattern direction, and then to conduct a further search along that direction.
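A minimal sketch of Algorithm 2 follows, using scipy.optimize.minimize_scalar for the one-dimensional line searches. The function name `pattern_search` and the use of an off-the-shelf scalar minimizer are assumptions made purely for illustration.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def pattern_search(phi, x0, eps=1e-6, max_iter=200):
    """Sketch of Algorithm 2: coordinate searches followed by a pattern move.

    phi(x) is the function being minimized (here F with the multipliers fixed)."""
    x = np.asarray(x0, dtype=float)
    n = x.size
    for _ in range(max_iter):
        x_old = x.copy()
        # Step 1: line search along each coordinate direction e_i
        for i in range(n):
            e_i = np.zeros(n); e_i[i] = 1.0
            alpha = minimize_scalar(lambda a: phi(x + a * e_i)).x
            x = x + alpha * e_i
        # Step 2: search along the pattern direction d = x_new - x_old
        d = x - x_old
        if np.linalg.norm(d) <= eps:
            return x
        alpha = minimize_scalar(lambda a: phi(x + a * d)).x
        x = x + alpha * d
    return x
```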

6. Convergence

In this section, let $\mu_i^k=\omega_i^k/\rho_i^k$, $\nu_j^k=(\lambda_j^k)^2/\gamma^k$, $\mu^k=(\mu_1^k,\mu_2^k,\ldots,\mu_p^k)^T$, and $\nu^k=(\nu_1^k,\nu_2^k,\ldots,\nu_m^k)^T$.
Lemma 5.
Let the non-empty set X be the feasible solution set of problem (P) and let $\bar{x}\in X$. Suppose that $f(x)$ has a lower bound. Then there exists τ such that $\|\mu^k\|^2+\|\nu^k\|^2\le\tau k$.
Proof. 
For any k, to prove $\|\mu^k\|^2+\|\nu^k\|^2\le\tau k$, it suffices to prove
$$\sum_{i=1}^{p}\frac{(\omega_i^k)^2}{\rho_i^k}+\sum_{j=1}^{m}\frac{(\lambda_j^k)^4}{2(\gamma^k)^4}\le\tau k.$$
Because
$$\begin{aligned}&F(x^{k+1},\omega^k,\lambda^k,\gamma^k,\rho^k)-f(x^{k+1})-\|\nabla_xL(x^{k+1},\omega^k,\lambda^k)\|^2/2\\&=\sum_{i=1}^{p}\frac{(\omega_i^{k+1})^2-(\omega_i^k)^2}{2\rho_i^k}+\sum_{j=1}^{m}\frac{(\lambda_j^{k+1})^4-(\lambda_j^k)^4}{2(\gamma^k)^4}\\&=\sum_{i=1}^{p}\frac{(\omega_i^{k+1})^2}{2\rho_i^k}+\sum_{j=1}^{m}\frac{(\lambda_j^{k+1})^4}{2(\gamma^k)^4}-\sum_{i=1}^{p}\frac{(\omega_i^k)^2}{2\rho_i^k}-\sum_{j=1}^{m}\frac{(\lambda_j^k)^4}{2(\gamma^k)^4}\\&=\frac12\big(\|\mu^{k+1}\|^2+\|\nu^{k+1}\|^2\big)-\frac12\big(\|\mu^k\|^2+\|\nu^k\|^2\big).\end{aligned}$$
Therefore,
$$\frac12\big(\|\mu^{k+1}\|^2+\|\nu^{k+1}\|^2\big)=\frac12\big(\|\mu^k\|^2+\|\nu^k\|^2\big)+F(x^{k+1},\omega^k,\lambda^k,\gamma^k,\rho^k)-f(x^{k+1})-\|\nabla_xL(x^{k+1},\omega^k,\lambda^k)\|^2/2.$$
For any $\bar{x}\in X$, we have
$$c(\bar{x})=0,\quad s(\bar{x})\le0,\quad\bar{\lambda}\ge0,\quad\bar{\lambda}_js_j(\bar{x})=0,\quad\psi_j(\bar{x},\lambda_j^k,\gamma^k)+(\lambda_j^k)^2\le(\lambda_j^k)^2,$$
so,
$$F(x^{k+1},\omega^k,\lambda^k,\gamma^k,\rho^k)-f(x^{k+1})-\|\nabla_xL(x^{k+1},\omega^k,\lambda^k)\|^2/2=\sum_{j=1}^{m}\frac{\big(\psi_j(\bar{x},\lambda_j^k,\gamma^k)+(\lambda_j^k)^2\big)^2-(\lambda_j^k)^4}{2(\gamma^k)^4}\le0,$$
$$\|\mu^{k+1}\|^2+\|\nu^{k+1}\|^2\le\|\mu^k\|^2+\|\nu^k\|^2+2\big(f(\bar{x})-f(x^{k+1})\big).$$
Since $f(x)$ has a lower bound, there is a τ such that $\|\mu^k\|^2+\|\nu^k\|^2\le\tau k$.
The lemma is proved.
Theorem 1.
Let the non-empty set X be the feasible solution set of problem (P). Then either Algorithm 1 reaches the stopping criterion after finitely many iterations, or $\liminf_{k\to\infty}f(x^k)=-\infty$.
Proof. 
Suppose the contrary: the algorithm does not stop after a finite number of iterations and $\liminf_{k\to\infty}f(x^k)>-\infty$. Let
$$J_i=\Big\{i\mid\liminf_{k\to\infty}c_i(x^k)\ne0\Big\},\quad\bar{J}_i=\Big\{i\mid\liminf_{k\to\infty}\rho_i^k=\infty\Big\},\quad J_j=\Big\{j\mid\liminf_{k\to\infty}\psi_j(x^k)\ne0\Big\},\quad\bar{J}_j=\Big\{j\mid\liminf_{k\to\infty}\gamma^k=\infty\Big\}.$$
By the assumption of this theorem, $J_i\cup J_j$ and $\bar{J}_i\cup\bar{J}_j$ are not empty. For any $\bar{x}\in X$,
$$f(\bar{x})+\sum_{j=1}^{m}\frac{\big(\psi_j(\bar{x},\lambda_j^k,\gamma^k)+(\lambda_j^k)^2\big)^2-(\lambda_j^k)^4}{2(\gamma^k)^4}\ge f(x^{k+1})+(\omega^k)^Tc(x^{k+1})+\sum_{i=1}^{p}\frac{\rho_i^k}{2}c_i(x^{k+1})^2+\sum_{j=1}^{m}\frac{\big(\psi_j(x^{k+1},\lambda_j^k,\gamma^k)+(\lambda_j^k)^2\big)^2-(\lambda_j^k)^4}{2(\gamma^k)^4}.$$
Because $\gamma^{k+1}\ge\gamma^k$ and $\rho_i^{k+1}\ge\rho_i^k$, we therefore have
$$f(\bar{x})-f(x^{k+1})\ge O(k)+\sum_{i\in\bar{J}_i}\frac{\rho_i^k}{2}\Big[\big(c_i(x^{k+1})+\omega_i^k/\rho_i^k\big)^2-\big(\omega_i^k/\rho_i^k\big)^2\Big]+\sum_{j\in\bar{J}_j}\frac{\gamma^k}{2}\Big[\big(\psi_j(x^{k+1},\lambda_j^k,\gamma^k)/\gamma^k+\lambda_j^k/\gamma^k\big)^2-\big(\lambda_j^k/\gamma^k\big)^2\Big],$$
where $O(k)$ denotes a term of lower order in k.
Since Algorithm 1 does not stop, for any $\bar{k}$ there exists $k>\bar{k}$ such that either, for some $i\in\bar{J}_i$ and $\tau_1>0$, $\rho_i^{k+1}\ge\rho_i^k$, $|c_i(x^{k+1})|\ge\tau_1$, and $\rho_i^k\ge\theta_2^{(k-1)/p}\ge k^2$, so that
$$\rho_i^k\big(c_i(x^{k+1})+\omega_i^k/\rho_i^k\big)^2/2>\rho_i^k\tau_1^2/2\ge k^2\tau_1^2/2,$$
or, for some $j\in\bar{J}_j$, $\gamma^k\ge\theta_2^{(k-1)/m}\ge k^2$, and
$$\gamma^k\big(s_j(x^{k+1})+\lambda_j^k/\gamma^k\big)^2/2>\gamma^k\tau_1^2/2\ge k^2\tau_1^2/2.$$
The above formulas show that
$$f(\bar{x})-f(x^{k+1})\ge(k\tau_1)^2/4.$$
This is a contradiction, which completes the proof. □
Theorem 2.
Let the non-empty set X be the feasible solution set of problem (P), and suppose $\omega^k$ and $\lambda^k$ are bounded. Let η = 0 in Algorithm 1. If Algorithm 1 satisfies the stopping criterion at the k-th iteration, then $x^k$ is a solution to problem (P); otherwise, any accumulation point $x^*$ of $\{x^k\}$ is a solution to problem (P).
Proof. 
Suppose Algorithm 1 stops at the k-th iteration; then
$$\psi_j(x^k,\lambda_j^k,\gamma^k)=0,\quad s(x^k)\le0,\quad\lambda_j^k\ge0,\quad\lambda_j^ks_j(x^k)=0.$$
It is clear that, for any γ > 0,
$$\lambda_j^{k+1}=\sqrt{\big(\lambda_j^k-\gamma^ks_j(x^k)\big)\sqrt{(\lambda_j^k)^2+(\gamma^ks_j(x^k))^2}-\big(\gamma^ks_j(x^k)\big)^2}=\lambda_j^k.$$
Since $\psi_j(x^k,\lambda_j^k,\gamma^k)=0$ and $c(x^k)=0$, by Equations (16) and (21),
$$\nabla F(x^k,\omega^k,\lambda^k,\gamma^k,\rho^k)=\nabla_xL(x^k,\omega^k,\lambda^{k+1})=0,$$
so $x^k$ is the solution of (P).
Otherwise, Algorithm 1 does not satisfy the stopping criterion at any iteration. For any accumulation point $(x^*,\omega^*,\lambda^*)$ of the sequence $(x^k,\omega^k,\lambda^k)$, where γ > 0, and for any $\bar{x}\in X$, we have $\psi_j(x^*,\lambda_j^*,\gamma)=0$, $c(x^*)=0$, and
$$f(\bar{x})\ge f(x^{k+1})+O(k).$$
As $k\to\infty$,
$$f(\bar{x})\ge f(x^*).$$
It is clear that $x^*$ is a local minimizer of the primal constrained problem (P).
This completes the proof. □

7. Numerical Experiment

To verify the reliability of the algorithm, some numerical experiments were conducted. The results are shown in Table 1. The test problems in Table 1 are constrained nonlinear programming problems from [24], where the problem numbers coincide with those in [24].
In the experiments, the parameters are set to η = 0.00001, $\theta_1=0.6$, $\theta_2=1.6$, and $\|\Psi\|\le10^{-5}$ is used as the termination condition. Different initial points are selected, and "NIT, NF, and NG" are used as the evaluation criteria: NIT denotes the number of iterations; NF is the number of function evaluations (NF increases by one each time all functions are evaluated once); NG is the number of evaluations of Ψ (or the gradient).
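As an illustration only, the snippet below shows how the parameter settings above would be passed to the solver sketched in Section 5, applied to a small toy problem (not one of the test problems from [24]); all function and solver names come from the earlier hypothetical sketches.

```python
import numpy as np

# toy problem: min (x1-1)^2 + (x2-2)^2  s.t.  x1 + x2 - 2 = 0,  x1 - 1 <= 0
f = lambda x: (x[0] - 1.0)**2 + (x[1] - 2.0)**2
grad_f = lambda x: np.array([2.0 * (x[0] - 1.0), 2.0 * (x[1] - 2.0)])
c = lambda x: np.array([x[0] + x[1] - 2.0]);  jac_c = lambda x: np.array([[1.0, 1.0]])
s = lambda x: np.array([x[0] - 1.0]);         jac_s = lambda x: np.array([[1.0, 0.0]])

Psi = lambda x, lam, gamma: ncp_psi(-gamma * s(x), lam)
F = lambda x, w, lam, gamma, rho: aug_lagrangian_F(
        f, grad_f, c, jac_c, s, jac_s, x, w, lam, gamma, rho)

x, w, lam = alternating_direction_method(
    F, c, s, Psi, x0=[0.6, 0.6], w0=[0.0], lam0=[0.0],
    eta=1e-5, theta1=0.6, theta2=1.6)   # parameter values used in the experiments
print(x)   # the constrained minimizer of the toy problem is (0.5, 1.5)
```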
It is clear from Table 1 that NIT, NF, and NG are small for the different problems with different initial points, which shows that the new augmented Lagrangian method is effective.
In conclusion, a new augmented Lagrangian function with nonlinear complementarity functions is proposed to solve constrained nonlinear optimization problems. Under the given conditions, solving the new unconstrained problem is equivalent to solving the original constrained problem. Furthermore, the alternating direction search pattern method was designed, and its convergence was proven. At the same time, experiments were conducted, and the results demonstrate the effectiveness of the presented new augmented Lagrangian function and algorithms.

Author Contributions

Conceptualization, A.F.; Methodology, A.F. and X.C.; Software, X.C.; Validation, A.F. and Y.S.; Resources, J.F.; Data curation, A.F.; Writing—original draft, A.F. and J.F.; Writing—review & editing, X.C.; Supervision, Y.S.; Project administration, Y.S. All authors have read and agreed to the published version of the manuscript.

Funding

Supported by NSFC (No. 12071112, 12101195).

Data Availability Statement

Not applicable.

Acknowledgments

The authors would very much like to thank the reviewers for their helpful suggestions.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Gilli, M.; Maringer, D.; Schumann, E. Numerical Methods and Optimization in Finance; Academic Press: Cambridge, MA, USA, 2019.
  2. Yan, M.; Shahidehpour, M.; Paaso, A. Distribution network-constrained optimization of peer-to-peer transactive energy trading among multi-microgrids. IEEE Trans. Smart Grid 2020, 12, 1033–1047.
  3. Guo, N.; Lenzo, B.; Zhang, X. A real-time nonlinear model predictive controller for yaw motion optimization of distributed drive electric vehicles. IEEE Trans. Veh. Technol. 2020, 69, 4935–4946.
  4. Shi, D.; Wang, S.; Cai, Y. Model Predictive Control for Nonlinear Energy Management of a Power Split Hybrid Electric Vehicle. Intell. Autom. Soft Comput. 2020, 26, 27–39.
  5. Zhang, Y.; Kou, X.; Song, Z. Research on logistics management layout optimization and real-time application based on nonlinear programming. Nonlinear Eng. 2022, 10, 526–534.
  6. Janabi, A.; Mazin, A. Optimization algorithms and investment portfolio analytics with machine learning techniques under time-varying liquidity constraints. J. Model. Manag. 2022, 17, 864–895.
  7. Liu, Z.; Reynolds, A. Robust multiobjective nonlinear constrained optimization with ensemble stochastic gradient sequential quadratic programming-filter algorithm. SPE J. 2021, 26, 1964–1979.
  8. Nguyen, B.T.; Bai, Y.; Yan, X. Perturbed smoothing approach to the lower order exact penalty functions for nonlinear inequality constrained optimization. Tamkang J. Math. 2019, 50, 37–60.
  9. Tsipianitis, A.; Tsompanakis, Y. Improved Cuckoo Search algorithmic variants for constrained nonlinear optimization. Adv. Eng. Softw. 2020, 149, 102865.
  10. Liu, J.; Chen, J.; Zheng, J. A new accelerated positive-indefinite proximal ADMM for constrained separable convex optimization problems. J. Nonlinear Var. Anal. 2022, 6, 707–723.
  11. Zhang, X.L.; Zhang, Y.Q.; Wang, Y.Q. Viscosity approximation of a relaxed alternating CQ algorithm for the split equality problem. J. Nonlinear Funct. Anal. 2022, 43, 335.
  12. Di Pillo, G.; Grippo, L. A new class of augmented Lagrangians in nonlinear programming. SIAM J. Control Optim. 1979, 17, 618–628.
  13. Di Pillo, G.; Grippo, L. A new augmented Lagrangian function for inequality constraints in nonlinear programming problems. J. Optim. Theory Appl. 1982, 36, 495–519.
  14. Di Pillo, G.; Lucidi, S. On exact augmented Lagrangian functions in nonlinear programming. Nonlinear Optim. Appl. 1996, 25, 85–100.
  15. Pu, D.G. A class of augmented Lagrangian multiplier function. J. Inst. Railw. Technol. 1984, 5, 45.
  16. Pu, D.; Yang, P. A class of new Lagrangian multiplier methods. In Proceedings of the 2013 Sixth International Conference on Business Intelligence and Financial Engineering, Hangzhou, China, 14–16 November 2013; pp. 647–651.
  17. Pu, D.G.; Zhu, J. New Lagrangian Multiplier Methods. J. Tongji Univ. (Nat. Sci.) 2010, 38, 1387–1391.
  18. Pu, D.G.; Tian, W.W. Globally inexact generalized Newton methods for nonsmooth equation. J. Comput. Appl. Math. 2002, 138, 37–49.
  19. Shao, Y.F.; Pu, D.G. A Class of New Lagrangian Multiplier Methods with NCP function. J. Tongji Univ. (Nat. Sci.) 2008, 36, 695–698.
  20. Galántai, A. Properties and construction of NCP functions. Comput. Optim. Appl. 2012, 52, 805–824.
  21. Yu, H.D.; Xu, C.X.; Pu, D.G. Smooth Complementarily Function and 2-Regular Solution of Complementarity Problem. J. Henan Univ. Sci. 2011, 32, 1.
  22. Feng, A.F.; Xu, C.X.; Pu, D.G. New Form of Lagrangian Multiplier Methods. In Proceedings of the 2012 Fifth International Joint Conference on Computational Sciences and Optimization, Harbin, China, 23–26 June 2012; Volume 74, pp. 302–306.
  23. Feng, A.F.; Zhang, L.M.; Xue, Z.X. Alternating Direction Method of Solving Nonlinear Programming with Inequality Constrained. In Applied Mechanics and Materials; Trans Tech Publications Ltd.: Kanton Schwyz, Switzerland, 2014; Volume 651, pp. 2107–2111.
  24. Schittkowski, K. More Test Examples for Nonlinear Programming Codes; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2012.
Table 1. Computational results of Algorithm 1.
Problem No.  | Initial Point      | NIT | NF | NG | Initial Point        | NIT | NF | NG
Problem 215  | 0.6, 0.6           | 11  | 15 | 19 | 1.8, 1.8             | 18  | 25 | 33
Problem 227  | 0.8, 0.8           | 7   | 14 | 25 | 1.5, 1.2             | 18  | 23 | 34
Problem 232  | 4, 3               | 12  | 16 | 23 | 6, 6                 | 9   | 19 | 22
Problem 250  | 8, 6, 9            | 15  | 18 | 39 | −6, −7, −8           | 14  | 19 | 27
Problem 264  | 1, 0.8, 1, 0.8     | 18  | 24 | 21 | 1.2, 1.2, 1.2, 1.2   | 24  | 36 | 35
