Article

Multi-Objective LQG Design with Primal-Dual Method

Department of Electrical Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon 34141, Republic of Korea
Mathematics 2023, 11(8), 1857; https://doi.org/10.3390/math11081857
Submission received: 23 March 2023 / Revised: 10 April 2023 / Accepted: 11 April 2023 / Published: 13 April 2023
(This article belongs to the Topic Dynamical Systems: Theory and Applications)

Abstract

The objective of this paper is to investigate a multi-objective linear quadratic Gaussian (LQG) control problem. Specifically, we examine an optimal control problem that minimizes a quadratic cost over a finite time horizon for linear stochastic systems subject to control energy constraints. To tackle this problem, we propose an efficient bisection line search algorithm that outperforms other approaches such as semidefinite programming in terms of computational efficiency. The primary idea behind our algorithm is to use the Lagrangian function and Karush–Kuhn–Tucker (KKT) optimality conditions to address the constrained optimization problem. The bisection line search is employed to search for the Lagrange multiplier. Furthermore, we provide numerical examples to illustrate the efficacy of our proposed methods.

1. Introduction

Optimal control of dynamical systems has long been one of the fundamental problems in the control community [1,2,3,4]. Among various optimal control scenarios, the linear quadratic Gaussian (LQG) control problem is our main concern. It covers various applications such as mobile robots, industrial quality control systems, and flight control, to name just a few. The LQG problem can be efficiently solved using classical dynamic programming and the Riccati equation [1,2]. In many applications, however, there exist several possibly competing objectives, and the controller must mediate among them. Sometimes, those multiple objectives can be formulated as constraints, for instance, bounds on different objectives, risk measures, or costs. In classical dynamic programming, multiple objectives can be encoded into the cost to be minimized. However, since they are blended into a single cost, designing a policy that satisfies constraints or minimizes multiple objectives is challenging. In this respect, an optimization-based multi-objective LQG design offers greater potential than traditional dynamic-programming-based approaches by leveraging existing constrained optimization algorithms and theories [5].
The emergence of convex optimization [5] and semidefinite programming (SDP) [6] techniques in control analysis and design promoted new optimization formulations of control problems [7,8,9,10,11,12,13,14,15,16,17,18,19]. They have also provided greater convenience and flexibility in control design with various objectives and constraints. However, when the size of the problem is large, the computational complexity of such SDP-based algorithms is known to grow rapidly, which makes them numerically inefficient.
Motivated by the discussions above, this paper's primary objective is to investigate a numerically efficient algorithm for linear quadratic Gaussian (LQG) problems with an energy constraint, utilizing optimization theory and Lagrangian duality. The problem has many applications, such as building control [20,21,22], where only limited resources are available for the control task. To find an optimal solution to the problem, we suggest a simple and efficient bisection line search algorithm whose computational complexity is, in general, lower than that of SDP-based methods. The main idea is to formulate a constrained optimization problem and then use the Lagrangian function and Karush–Kuhn–Tucker (KKT) optimality conditions [23] to solve it. The Lagrange multiplier is found using the bisection line search. A numerical example of a building control problem is given to demonstrate the effectiveness of the proposed methods.
Related works are summarized as follows. Ref. [24] presented a technique to compute the explicit state-feedback solution to both the finite and infinite horizon optimal linear quadratic regulator (LQR) problems subject to state and input constraints, and showed that this closed-form solution is piecewise linear and continuous. Ref. [25] developed efficient algorithms to compute such piecewise linear solutions of constrained LQR problems. The constrained LQR problem was also considered for hybrid systems in [26]. Ref. [27] presented an efficient algorithmic solution to the infinite horizon LQR problem for a discrete-time SISO plant subject to constraints on a scalar variable; the solution to the corresponding quadratic programming problem is based on the active set method and on dynamic programming. Ref. [28] considered a feedback control problem in which the controlled signals have a guaranteed maximum peak value in response to arbitrary but bounded energy exogenous inputs. Ref. [29] proposed a new SDP formulation, where the finite-horizon LQG problem was converted into an optimal covariance matrix selection problem, and addressed energy-constrained LQG. Other SDP formulations of control problems with various constraints have been developed in several papers [7,29,30,31,32], to name just a few.
Considering the types of constraints, systems, and problems, the most closely related previous work is [29], which considers energy bounds on the weighted state and control input and uses SDP solutions. Our main contribution is the development of a simple and numerically efficient algorithm for the energy-bounded LQG problem that does not rely on SDPs. The new method uses a simple and efficient bisection line search whose computational complexity is, in general, lower than that of SDP-based methods. We demonstrate the efficiency of the algorithm through a numerical example.
Notation 1. 
The adopted notation is as follows: $\mathbb{N}$ and $\mathbb{N}_+$: sets of non-negative and positive integers, respectively; $\mathbb{R}$: set of real numbers; $\mathbb{R}_+$: set of non-negative real numbers; $\mathbb{R}_{++}$: set of positive real numbers; $\mathbb{R}^n$: $n$-dimensional Euclidean space; $\mathbb{R}^{n \times m}$: set of all $n \times m$ real matrices; $A^T$: transpose of matrix $A$; $A^{-T}$: transpose of matrix $A^{-1}$; $A \succ 0$ ($A \prec 0$, $A \succeq 0$, and $A \preceq 0$, respectively): symmetric positive definite (negative definite, positive semi-definite, and negative semi-definite, respectively) matrix $A$; $I_n$: $n \times n$ identity matrix; $\mathbb{S}^n$: symmetric $n \times n$ matrices; $\mathbb{S}^n_+$: cone of symmetric $n \times n$ positive semi-definite matrices; $\mathbb{S}^n_{++}$: symmetric $n \times n$ positive definite matrices; and $\mathrm{Tr}(A)$: trace of matrix $A$.

2. Finite-Horizon LQG Problem

Consider the stochastic linear time-invariant (LTI) system
$$x(k+1) = A x(k) + B u(k) + w(k), \qquad (1)$$
where $k \in \mathbb{N}$, $A \in \mathbb{R}^{n \times n}$ and $B \in \mathbb{R}^{n \times m}$ are system matrices, $x(k) \in \mathbb{R}^n$ is the state vector, $u(k) \in \mathbb{R}^m$ is the input vector, and $x(0) \sim \mathcal{N}(z, V)$ and $w(k) \sim \mathcal{N}(0, W)$ are mutually independent Gaussian random vectors so that $\mathbb{E}[x(0)] = z$, $\mathbb{E}[w(k)] = 0$, $\mathbb{E}[(x(0) - z)(x(0) - z)^T] = V$, and $\mathbb{E}[w(k) w(k)^T] = W$. In this paper, we consider the following multi-objective finite-horizon LQG problem:
Problem 1 
(Multi-objective LQG problem). Solve
$$
\begin{aligned}
\min_{F_0, \ldots, F_{N-1} \in \mathbb{R}^{m \times n}} \quad & \mathbb{E}[x(N)^T Q_f x(N)] + \sum_{k=0}^{N-1} \mathbb{E}\left[ \begin{bmatrix} x(k) \\ u(k) \end{bmatrix}^T \begin{bmatrix} Q_k & 0 \\ 0 & R_k \end{bmatrix} \begin{bmatrix} x(k) \\ u(k) \end{bmatrix} \right] \\
\text{subject to} \quad & x(k+1) = A x(k) + B u(k) + w(k), \\
& u(k) = F_k x(k), \\
& \mathbb{E}[x(N)^T \tilde{Q}_f x(N)] + \sum_{k=0}^{N-1} \mathbb{E}\left[ \begin{bmatrix} x(k) \\ u(k) \end{bmatrix}^T \begin{bmatrix} \tilde{Q}_k & 0 \\ 0 & \tilde{R}_k \end{bmatrix} \begin{bmatrix} x(k) \\ u(k) \end{bmatrix} \right] \le \gamma.
\end{aligned}
$$
Note that the second objective is encoded into the inequality instead of the objective function. The problem may be useful in many optimal control applications, for example, the building control problem [20,21,22], where the goal is to reduce the indoor temperature tracking error as much as possible while using limited energy for the control input within a certain time horizon.
Example 1. 
Consider a room’s thermal dynamic model expressed as (1) with
$$
A = \begin{bmatrix} 0.9500 & 0.0250 & 0.0250 & 0 \\ 0.0250 & 0.9750 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}, \qquad
B = \begin{bmatrix} 0.0250 \\ 0 \\ 0 \\ 0 \end{bmatrix},
$$
where $x_1(k)$ is the indoor air temperature (°C), $x_2(k)$ is the wall temperature (°C), $x_3(k)$ is the outdoor air temperature (°C), $x_4(k)$ is the reference temperature (°C), and $u(k)$ (W) is the control input, which represents the amount of energy injected into the room. For instance, when the goal is to heat the room, $u(k) > 0$, while $u(k) < 0$ when the objective is to cool the room. The outdoor air temperature and reference temperature are kept constant ($30$ °C and $24$ °C, respectively) over time. To this end, the corresponding elements of the initial state should be deterministic and fixed, and the last two elements of the noise $w(k)$ should be zero over time. In this case, the initial state should be set to be
$$x(0) = \begin{bmatrix} x_1(0) & x_2(0) & 30 & 24 \end{bmatrix}^T,$$
where $x_1(0)$ is the initial indoor air temperature, and $x_2(0)$ is the initial wall temperature. We want to enforce the indoor temperature to track the reference temperature $24$ °C as closely as possible while satisfying the total input energy constraint
$$\mathbb{E}\left[ \sum_{k=0}^{N-1} u(k)^2 \right] \le \gamma.$$
The problem can be formulated as Problem 1 with
$$
Q_f = Q_k = \begin{bmatrix} 1 & 0 & 0 & -1 \end{bmatrix}^T \begin{bmatrix} 1 & 0 & 0 & -1 \end{bmatrix}, \qquad R_k = 0, \qquad k \in \{0, 1, \ldots, N-1\},
$$
and
$$
\tilde{Q}_f = \tilde{Q}_k = 0, \qquad \tilde{R}_k = 1, \qquad k \in \{0, 1, \ldots, N-1\}.
$$
The cost function enforces the indoor temperature to track the desired reference temperature.
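To make the setup concrete, the following is a minimal NumPy sketch (my own illustration, not part of the paper) that assembles the matrices of Example 1; the horizon and bound follow Example 2 below, and the noise statistics follow the numerical setting of Section 3.6, so all specific values here are simply taken from those later sections.

```python
import numpy as np

# Room thermal model of Example 1: states are the indoor air temperature,
# the wall temperature, the (constant) outdoor temperature, and the
# (constant) reference temperature; u(k) is the injected power (W).
A = np.array([[0.950, 0.025, 0.025, 0.0],
              [0.025, 0.975, 0.0,   0.0],
              [0.0,   0.0,   1.0,   0.0],
              [0.0,   0.0,   0.0,   1.0]])
B = np.array([[0.025], [0.0], [0.0], [0.0]])
n, m = A.shape[0], B.shape[1]

# Primary cost: track the reference, i.e., penalize (x1(k) - x4(k))^2.
C_track = np.array([[1.0, 0.0, 0.0, -1.0]])
Qf = Qk = C_track.T @ C_track      # terminal and stage state weights
Rk = np.zeros((m, m))              # no input penalty in the primary objective

# Constraint cost: total input energy E[sum_k u(k)^2] <= gamma.
Qf_t = np.zeros((n, n))            # \tilde{Q}_f
Qk_t = np.zeros((n, n))            # \tilde{Q}_k
Rk_t = np.eye(m)                   # \tilde{R}_k

N, gamma = 10, 3.0                 # horizon and bound used in Example 2 below
z = np.array([25.0, 25.0, 30.0, 24.0])   # mean initial state (Section 3.6)
V = np.zeros((n, n))               # deterministic initial state (Section 3.6)
W = 0.01 * np.eye(n)               # noise covariance used in Section 3.6
```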
Another important remark is that the control policy in Problem 1 is restricted to a linear state-feedback law. It is well-known that the optimal solution to the LQG problem without the energy constraint is a linear state-feedback policy. However, more careful attention should be paid to the constrained case. If it admits a nonlinear optimal solution, then Problem 1 may give only a suboptimal linear solution. Fortunately, the SDP design method in [29] suggests that the optimal solution of the energy constrained problem is also linear so that no conservatism exists in Problem 1.
Throughout the paper, we assume that there exists a strictly feasible solution, i.e., there exists a solution such that the strict inequality constraint is satisfied. The following is a summary of the assumptions that will be utilized throughout this paper.
Assumption 1. 
The following assumptions are made:
  • $Q_f \succeq 0$, $\tilde{Q}_f \succeq 0$, $Q_k \succeq 0$, $\tilde{Q}_k \succeq 0$, $R_k + \lambda \tilde{R}_k \succ 0$ for all $k \ge 0$ and $\lambda > 0$;
  • $V \succ 0$, $W \succ 0$.
The assumptions $V \succ 0$ and $W \succ 0$ imply that all elements of $x(0)$ and $w(k)$ are stochastic. For the deterministic case, we need to set $V = 0$ and $W = 0$, and if some of the elements of $x(0)$ and $w(k)$ are partially deterministic, then we need to set $V \succeq 0$ and $W \succeq 0$. These cases will be briefly addressed later.
If we define the covariance of the augmented vector $[x(k)^T, u(k)^T]^T \in \mathbb{R}^{n+m}$ as
$$
S_k = \mathbb{E}\left[ \begin{bmatrix} x(k) \\ u(k) \end{bmatrix} \begin{bmatrix} x(k) \\ u(k) \end{bmatrix}^T \right], \qquad k \in \{0, \ldots, N-1\},
$$
then Problem 1 can be equivalently converted into a matrix equality constrained optimization problem, the covariance selection problem.
Problem 2 
(Covariance selection problem). Solve
$$
\begin{aligned}
J_p^\star := \min_{\substack{S_0, \ldots, S_{N-1} \in \mathbb{S}^{n+m}, \\ F_0, \ldots, F_{N-1} \in \mathbb{R}^{m \times n}}} \quad & J_p(\{S_k\}_{k=0}^{N-1}) \\
\text{subject to} \quad & \Phi(F_k, S_{k-1}) = S_k, \quad k \in \{1, \ldots, N-1\}, \\
& \begin{bmatrix} I_n \\ F_0 \end{bmatrix} (V + z z^T) \begin{bmatrix} I_n \\ F_0 \end{bmatrix}^T = S_0, \\
& C(\{S_k\}_{k=0}^{N-1}) \le \gamma,
\end{aligned}
$$
where
$$
\begin{aligned}
J_p(\{S_k\}_{k=0}^{N-1}) &:= \mathrm{Tr}\left( Q_f \left( \begin{bmatrix} A^T \\ B^T \end{bmatrix}^T S_{N-1} \begin{bmatrix} A^T \\ B^T \end{bmatrix} + W \right) \right) + \sum_{k=0}^{N-1} \mathrm{Tr}\left( \begin{bmatrix} Q_k & 0 \\ 0 & R_k \end{bmatrix} S_k \right), \\
C(\{S_k\}_{k=0}^{N-1}) &:= \mathrm{Tr}\left( \tilde{Q}_f \left( \begin{bmatrix} A^T \\ B^T \end{bmatrix}^T S_{N-1} \begin{bmatrix} A^T \\ B^T \end{bmatrix} + W \right) \right) + \sum_{k=0}^{N-1} \mathrm{Tr}\left( \begin{bmatrix} \tilde{Q}_k & 0 \\ 0 & \tilde{R}_k \end{bmatrix} S_k \right), \\
\Phi(F, S) &:= \begin{bmatrix} I_n \\ F \end{bmatrix} \left( \begin{bmatrix} A^T \\ B^T \end{bmatrix}^T S \begin{bmatrix} A^T \\ B^T \end{bmatrix} + W \right) \begin{bmatrix} I_n \\ F \end{bmatrix}^T.
\end{aligned}
$$
In Problem 2, the matrix equality constraints represent the covariance updates. Since Problem 1 is strictly feasible, we can prove that Problem 2 is also strictly feasible. Since this fact will be used later, we make a formal assumption for convenience.
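As a sanity check on the reformulation, the covariance update $\Phi$ and the two costs $J_p$ and $C$ can be evaluated directly for any fixed gain sequence. The following NumPy sketch is my own illustration (with stage weights assumed constant in $k$ for brevity); it mirrors the definitions above, and all function and variable names are illustrative.

```python
import numpy as np

def Phi(F, S, A, B, W):
    """Covariance update S_k = [I; F] ([A B] S_{k-1} [A B]^T + W) [I; F]^T."""
    n = A.shape[0]
    AB = np.hstack([A, B])                          # [A B]
    M = AB @ S @ AB.T + W                           # covariance of x(k)
    IF = np.vstack([np.eye(n), F])                  # [I; F]
    return IF @ M @ IF.T                            # covariance of (x(k), u(k))

def costs(F_list, A, B, W, V, z, Q, R, Qf, Qt, Rt, Qft):
    """Return (J_p, C) of Problem 2 for given gains F_0, ..., F_{N-1};
    stage weights are assumed constant in k for brevity."""
    n, m = A.shape[0], B.shape[1]
    AB = np.hstack([A, B])
    blk   = np.block([[Q,  np.zeros((n, m))], [np.zeros((m, n)), R ]])
    blk_t = np.block([[Qt, np.zeros((n, m))], [np.zeros((m, n)), Rt]])
    IF0 = np.vstack([np.eye(n), F_list[0]])
    S = IF0 @ (V + np.outer(z, z)) @ IF0.T          # S_0
    Jp = C = 0.0
    for k in range(len(F_list)):
        Jp += np.trace(blk   @ S)
        C  += np.trace(blk_t @ S)
        if k < len(F_list) - 1:
            S = Phi(F_list[k + 1], S, A, B, W)      # S_{k+1}
    XN = AB @ S @ AB.T + W                          # covariance of x(N)
    Jp += np.trace(Qf  @ XN)
    C  += np.trace(Qft @ XN)
    return Jp, C
```

For instance, with the matrices from the Example 1 sketch and any gain sequence, `costs(F_list, A, B, W, V, z, Qk, Rk, Qf, Qk_t, Rk_t, Qf_t)` returns the pair $(J_p, C)$ appearing in Problem 2.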
Assumption 2 
(Strict feasibility). There exists at least one set of matrices $\{(S_k, F_k)\}_{k=0}^{N-1}$ such that all the equalities in Problem 2 are satisfied and all inequalities are strictly satisfied.
Based on the assumptions and definitions in this section, we will address the main results in the next section.

3. Main Results

In this section, we present the main results of this paper. We first present the KKT condition [23] for the optimization Problem 2, and find potential optimal solution candidates satisfying the KKT condition. Then, we find the set of optimal solutions and their properties. Based on the analysis, a bisection algorithm is developed.

3.1. Lagrangian Solution

For any $P_0, \ldots, P_{N-1} \in \mathbb{S}^{n+m}_+$ and $\lambda \in \mathbb{R}_+$, define the Lagrangian function of Problem 2 as
$$
\begin{aligned}
L(\{(S_k, F_k, P_k)\}_{k=0}^{N-1}, \lambda) := {} & J_p(\{S_k\}_{k=0}^{N-1}) + \sum_{k=1}^{N-1} \mathrm{Tr}\big( (\Phi(F_k, S_{k-1}) - S_k) P_k \big) \\
& + \mathrm{Tr}\left( \left( \begin{bmatrix} I \\ F_0 \end{bmatrix} (V + z z^T) \begin{bmatrix} I \\ F_0 \end{bmatrix}^T - S_0 \right) P_0 \right) + \lambda \big( C(\{S_k\}_{k=0}^{N-1}) - \gamma \big),
\end{aligned}
$$
where $P_0, \ldots, P_{N-1} \in \mathbb{S}^{n+m}_+$ are called the Lagrange multipliers or dual variables.
Rearranging some terms, it can be rewritten as
$$
\begin{aligned}
L(\{(S_k, F_k, P_k)\}_{k=0}^{N-1}, \lambda) = {} & J_d(\{P_k, F_k\}_{k=0}^{N-1}) \\
& + \mathrm{Tr}\left( \left( \begin{bmatrix} A^T \\ B^T \end{bmatrix} (Q_f + \lambda \tilde{Q}_f) \begin{bmatrix} A^T \\ B^T \end{bmatrix}^T - P_{N-1} + \begin{bmatrix} Q_{N-1} & 0 \\ 0 & R_{N-1} \end{bmatrix} + \lambda \begin{bmatrix} \tilde{Q}_{N-1} & 0 \\ 0 & \tilde{R}_{N-1} \end{bmatrix} \right) S_{N-1} \right) \\
& + \sum_{k=1}^{N-1} \mathrm{Tr}\big( (\Gamma_k(F_k, P_k, \lambda) - P_{k-1}) S_{k-1} \big),
\end{aligned}
$$
where
$$
J_d(\{P_k, F_k\}_{k=0}^{N-1}) := \mathrm{Tr}\left( \begin{bmatrix} I \\ F_0 \end{bmatrix} (V + z z^T) \begin{bmatrix} I \\ F_0 \end{bmatrix}^T P_0 \right) + \sum_{k=1}^{N} \mathrm{Tr}\left( \begin{bmatrix} I \\ F_k \end{bmatrix} W \begin{bmatrix} I \\ F_k \end{bmatrix}^T P_k \right)
$$
and
$$
\Gamma_k(F, P, \lambda) := \begin{bmatrix} A^T \\ B^T \end{bmatrix} \begin{bmatrix} I \\ F \end{bmatrix}^T P \begin{bmatrix} I \\ F \end{bmatrix} \begin{bmatrix} A^T \\ B^T \end{bmatrix}^T + \begin{bmatrix} Q_k & 0 \\ 0 & R_k \end{bmatrix} + \lambda \begin{bmatrix} \tilde{Q}_k & 0 \\ 0 & \tilde{R}_k \end{bmatrix}.
$$
Based on the Lagrangian function, the KKT condition can be summarized as
  • Primal feasibility condition:
    $$\begin{bmatrix} I_n \\ F_0 \end{bmatrix} (V + z z^T) \begin{bmatrix} I_n \\ F_0 \end{bmatrix}^T = S_0, \qquad (2)$$
    $$\Phi(F_k, S_{k-1}) = S_k, \quad k \in \{1, 2, \ldots, N-1\}, \qquad (3)$$
    $$C(\{S_k\}_{k=0}^{N-1}) \le \gamma. \qquad (4)$$
  • Complementary slackness condition:
    $$\lambda \big( C(\{S_k\}_{k=0}^{N-1}) - \gamma \big) = 0. \qquad (5)$$
  • Dual feasibility condition:
    $$\lambda \ge 0. \qquad (6)$$
  • Stationary condition $\nabla_{S_k, F_k} L(\{(S_k, F_k, P_k)\}_{k=0}^{N-1}, \lambda) = 0$:
    $$P_N = \begin{bmatrix} Q_f + \lambda \tilde{Q}_f & 0 \\ 0 & 0 \end{bmatrix}, \qquad \Gamma_N(0, P_N, \lambda) = P_{N-1}, \qquad (7)$$
    $$\Gamma_k(F_k, P_k, \lambda) = P_{k-1}, \quad k \in \{1, 2, \ldots, N-1\}, \qquad (8)$$
    $$(V + z z^T)(P_{0,12} + F_0^T P_{0,22}) + (P_{0,12}^T + P_{0,22} F_0)(V + z z^T) = 0, \qquad (9)$$
    $$M_k (P_{k+1,12} + F_{k+1}^T P_{k+1,22}) + (P_{k+1,12}^T + P_{k+1,22} F_{k+1}) M_k = 0, \quad k \in \{1, 2, \ldots, N-1\}, \qquad (10)$$
    where $M_k = \begin{bmatrix} A & B \end{bmatrix} S_k \begin{bmatrix} A & B \end{bmatrix}^T + W$.
Using the KKT condition, we establish a modified Riccati equation for solving the multi-objective problem in the following.
Proposition 1. 
Suppose that $\lambda \ge 0$ is fixed and arbitrary. Consider the Riccati equation
$$A^T X_{k+1}^\lambda A - A^T X_{k+1}^\lambda B (R_k + \lambda \tilde{R}_k + B^T X_{k+1}^\lambda B)^{-1} B^T X_{k+1}^\lambda A + Q_k + \lambda \tilde{Q}_k = X_k^\lambda \qquad (11)$$
for all $k \in \{0, \ldots, N-1\}$ with $X_N^\lambda = Q_f + \lambda \tilde{Q}_f$, and define $\{(S_k^\lambda, F_k^\lambda, P_k^\lambda)\}_{k=0}^{N-1}$ with
$$
\begin{aligned}
F_k^\lambda &= -(R_k + \lambda \tilde{R}_k + B^T X_{k+1}^\lambda B)^{-1} B^T X_{k+1}^\lambda A, \\
S_k^\lambda &= \Phi(F_k^\lambda, S_{k-1}^\lambda), \qquad S_0^\lambda = \begin{bmatrix} I \\ F_0^\lambda \end{bmatrix} (V + z z^T) \begin{bmatrix} I \\ F_0^\lambda \end{bmatrix}^T, \\
P_k^\lambda &= \begin{bmatrix} Q_k + \lambda \tilde{Q}_k + A^T X_{k+1}^\lambda A & A^T X_{k+1}^\lambda B \\ B^T X_{k+1}^\lambda A & R_k + \lambda \tilde{R}_k + B^T X_{k+1}^\lambda B \end{bmatrix},
\end{aligned} \qquad (12)
$$
where the superscript $\lambda$ is included to designate the dependence on $\lambda$. Then, $\{(S_k^\lambda, F_k^\lambda)\}_{k=0}^{N-1}$ is a primal feasible point of Problem 2 uniquely satisfying the primal feasibility condition (3), and $\{(P_k^\lambda, F_k^\lambda)\}_{k=0}^{N-1}$ uniquely satisfies the stationary conditions (7)–(10).
Proof. 
Using Assumption 1, $V \succ 0$ implies that $V + z z^T$ is nonsingular, and consequently, (9) implies $P_{0,12}^T + P_{0,22} F_0 = 0$. Similarly, $W \succ 0$ with (10) implies $P_{k,12}^T + P_{k,22} F_k = 0$ for all $k \in \{0, 1, \ldots, N-1\}$. On the other hand, (7) and (8) with the assumption that $R_k + \lambda \tilde{R}_k \succ 0$ for any $\lambda > 0$ in Assumption 1 ensure that $P_{k,22}$ is nonsingular for all $k \in \{0, 1, \ldots, N\}$. Therefore, the feedback gains are uniquely determined by $F_k = -P_{k,22}^{-1} P_{k,12}^T$ for all $k \in \{0, 1, \ldots, N-1\}$. Plugging this expression into (7) and (8) leads to the construction in (12) with the Riccati Equation (11). Note that under Assumption 1 and fixed $\lambda > 0$, the KKT point is uniquely determined.    □
If the inequality constraint is removed and $\lambda = 0$, then the Riccati equation in (11) reduces to the standard Riccati equation. In this case, it is clear that the solution satisfying the KKT condition is unique. Therefore, the solution obtained from the Riccati equation is the unique optimal solution, which is a well-known fact.
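For a fixed $\lambda \ge 0$, the recursion in Proposition 1 is an ordinary backward Riccati pass with the blended weights; the following minimal NumPy sketch is my own reading of (11) and (12) (stage weights assumed constant in $k$ for brevity), not the author's implementation.

```python
import numpy as np

def riccati_gains(lam, A, B, Q, R, Qf, Qt, Rt, Qft, N):
    """Backward pass of the modified Riccati equation (11) for a fixed lam >= 0.

    Returns F_0^lam, ..., F_{N-1}^lam for the blended weights Q + lam*Qt,
    R + lam*Rt and terminal weight Qf + lam*Qft.  Stage weights are assumed
    constant in k; lam = 0 recovers the standard LQG gains."""
    X = Qf + lam * Qft                              # X_N^lam
    F = [None] * N
    for k in reversed(range(N)):
        Rbar = R + lam * Rt + B.T @ X @ B           # assumed invertible (Assumption 1)
        K = np.linalg.solve(Rbar, B.T @ X @ A)      # (R + lam*Rt + B'XB)^{-1} B'XA
        F[k] = -K                                   # u(k) = F_k x(k)
        X = A.T @ X @ A - A.T @ X @ B @ K + Q + lam * Qt   # X_k^lam
    return F
```

For example, `riccati_gains(0.5, A, B, Qk, Rk, Qf, Qk_t, Rk_t, Qf_t, N)` with the matrices from the Example 1 sketch returns the gains for the multiplier $\lambda = 0.5$.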
Proposition 2. 
Consider Problem 2 without the inequality constraint. Then, $\{(S_k^0, F_k^0)\}_{k=0}^{N-1}$ is the unique optimal solution of Problem 2.
Proof. 
It is clear that the tuples $\{S_k^0, F_k^0, P_k^0\}_{k=0}^{N-1}$ uniquely satisfy the KKT condition. Since the KKT condition is a necessary condition for optimality, this is the unique optimal solution of Problem 2 without the inequality constraint.    □
Proposition 1 tells us that the Riccati equation can be induced from the Lagrangian function and the KKT condition in optimization theory instead of the classical argument based on the value function and the HJB equation. Moreover, we can see that the solution of the multi-objective LQG problem defined in Problem 1 is nothing but the solution of a standard LQG problem with the modified weights $Q_f + \lambda \tilde{Q}_f$, $Q_k + \lambda \tilde{Q}_k$, $R_k + \lambda \tilde{R}_k$ and an appropriately chosen $\lambda > 0$. Let us now focus on how to determine the Lagrange multiplier $\lambda$ satisfying the KKT condition. We need to consider the following three scenarios:
  • If the strict inequality $C(\{S_k^0\}_{k=0}^{N-1}) < \gamma$ is already satisfied with $\{F_k^0\}_{k=0}^{N-1}$ obtained using the standard Riccati equation, then $\lambda = 0$ solves the complementary slackness condition. We do not need to do anything in this case.
  • Moreover, if the equality $C(\{S_k^0\}_{k=0}^{N-1}) = \gamma$ is satisfied with $\{F_k^0\}_{k=0}^{N-1}$ obtained using the standard Riccati equation, then any $\lambda \ge 0$ solves the complementary slackness condition. However, when $\lambda > 0$, the corresponding $\{S_k^\lambda, F_k^\lambda, P_k^\lambda\}_{k=0}^{N-1}$ may be different from $\{S_k^0, F_k^0, P_k^0\}_{k=0}^{N-1}$. Therefore, to use the variables obtained in Proposition 1 as a solution to the KKT condition, we need to set $\lambda = 0$.
  • Lastly, assume that $C(\{S_k^0\}_{k=0}^{N-1}) > \gamma$ holds with $\{F_k^0\}_{k=0}^{N-1}$ obtained using the standard Riccati equation. Then, some $\lambda > 0$ solves the complementary slackness condition if $C(\{S_k^\lambda\}_{k=0}^{N-1}) = \gamma$. Suppose that $\lambda > 0$ is such a number. Then, the corresponding tuple $(\lambda, S_k^\lambda, F_k^\lambda, P_k^\lambda)$ satisfies the KKT condition.
For simplicity of the presentation, we only focus on the last case because the other cases are trivial, and we formalize it in the following assumption.
Assumption 3 
(Nontrivial scenario). Throughout the paper, we assume that $C(\{S_k^0\}_{k=0}^{N-1}) > \gamma$ holds with $\{F_k^0\}_{k=0}^{N-1}$ obtained using the standard Riccati equation.
To proceed further, we need to establish some properties of the function $f : \mathbb{R}_+ \to \mathbb{R}$ defined as
$$f(\lambda) := C(\{S_k^\lambda\}_{k=0}^{N-1}) - \gamma, \qquad \lambda \ge 0,$$
which evaluates the error in the inequality constraint. In the following, we study various properties of f, which play important roles throughout this paper.
Proposition 3 
(Properties of f). Define the function $f : \mathbb{R}_+ \to \mathbb{R}$ as
$$f(\lambda) := C(\{S_k^\lambda\}_{k=0}^{N-1}) - \gamma.$$
Then, the following statements hold:
1. $f$ is continuous over $\mathbb{R}_+$;
2. $f(\lambda) \ge f(\lambda + \varepsilon)$ holds for any $\varepsilon > 0$;
3. If $f(\lambda) = f(\lambda + \varepsilon)$ holds for some $\varepsilon > 0$, then $J_p(\{S_k^\lambda\}_{k=0}^{N-1}) = J_p(\{S_k^{\lambda+\varepsilon}\}_{k=0}^{N-1})$;
4. $f(0) > 0$;
5. There exists a $\lambda > 0$ such that $f(\lambda) < 0$;
6. Define the set-valued mapping $T : (V, W) \mapsto \{\lambda > 0 : f(\lambda) = 0\}$. Then, $T(V, W)$ is a closed line segment.
Proof. 
  • From the definition, $P_k^\lambda$ and $F_k^\lambda$ are rational functions of $\lambda$ whose entries are finite for any finite $\lambda \in \mathbb{R}_+$, because the inverse matrix $(R_k + \lambda \tilde{R}_k + B^T X_{k+1}^\lambda B)^{-1}$ in $F_k^\lambda = -(R_k + \lambda \tilde{R}_k + B^T X_{k+1}^\lambda B)^{-1} B^T X_{k+1}^\lambda A$ exists and is finite for all $\lambda \in \mathbb{R}_+$. Therefore, from the definition, $S_k^\lambda$ is also rational and finite over $\lambda \in \mathbb{R}_+$, which implies that $S_k^\lambda$ is continuous in $\lambda \in \mathbb{R}_+$. This completes the proof.
  • We only need to prove the inequality for $C(\{S_k^\lambda\}_{k=0}^{N-1})$. By contradiction, suppose that $C(\{S_k^{\lambda+\varepsilon}\}_{k=0}^{N-1}) > C(\{S_k^\lambda\}_{k=0}^{N-1})$ holds. For a fixed $\lambda$, we see from the KKT condition that the problem is nothing but the optimization
    $$
    \begin{aligned}
    J_p^\star := \min_{\substack{S_0, \ldots, S_{N-1} \in \mathbb{S}^{n+m}, \\ F_0, \ldots, F_{N-1} \in \mathbb{R}^{m \times n}}} \quad & J_p(\{S_k\}_{k=0}^{N-1}) + \lambda \big[ C(\{S_k\}_{k=0}^{N-1}) - \gamma \big] \\
    \text{subject to} \quad & \Phi(F_k, S_{k-1}) = S_k, \quad k \in \{1, \ldots, N-1\}, \\
    & \begin{bmatrix} I_n \\ F_0 \end{bmatrix} (V + z z^T) \begin{bmatrix} I_n \\ F_0 \end{bmatrix}^T = S_0,
    \end{aligned} \qquad (13)
    $$
    with an augmented objective. Since $\{S_k^\lambda\}_{k=0}^{N-1}$ is the optimal solution corresponding to $\lambda$, it follows that
    $$
    J_p(\{S_k^\lambda\}_{k=0}^{N-1}) + \lambda \big[ C(\{S_k^\lambda\}_{k=0}^{N-1}) - \gamma \big] \le J_p(\{S_k^{\lambda+\varepsilon}\}_{k=0}^{N-1}) + \lambda \big[ C(\{S_k^{\lambda+\varepsilon}\}_{k=0}^{N-1}) - \gamma \big], \qquad (14)
    $$
    where $\{S_k^{\lambda+\varepsilon}\}_{k=0}^{N-1}$ is the optimal solution corresponding to $\lambda + \varepsilon$. On the other hand, we have
    $$
    J_p(\{S_k^{\lambda+\varepsilon}\}_{k=0}^{N-1}) + (\lambda + \varepsilon) \big[ C(\{S_k^{\lambda+\varepsilon}\}_{k=0}^{N-1}) - \gamma \big] \le J_p(\{S_k^\lambda\}_{k=0}^{N-1}) + (\lambda + \varepsilon) \big[ C(\{S_k^\lambda\}_{k=0}^{N-1}) - \gamma \big], \qquad (15)
    $$
    which leads to
    $$
    J_p(\{S_k^{\lambda+\varepsilon}\}_{k=0}^{N-1}) + \lambda \big[ C(\{S_k^{\lambda+\varepsilon}\}_{k=0}^{N-1}) - \gamma \big] \le J_p(\{S_k^\lambda\}_{k=0}^{N-1}) + \lambda \big[ C(\{S_k^\lambda\}_{k=0}^{N-1}) - \gamma \big] + \varepsilon \big[ C(\{S_k^\lambda\}_{k=0}^{N-1}) - C(\{S_k^{\lambda+\varepsilon}\}_{k=0}^{N-1}) \big]. \qquad (16)
    $$
    Combining (16) with (14) yields
    $$
    0 \le \varepsilon \big[ C(\{S_k^\lambda\}_{k=0}^{N-1}) - C(\{S_k^{\lambda+\varepsilon}\}_{k=0}^{N-1}) \big],
    $$
    which contradicts our hypothesis. This completes the proof.
  • Assume $f(\lambda) = f(\lambda + \varepsilon)$ holds for some $\varepsilon > 0$. Then, (14) leads to $J_p(\{S_k^\lambda\}_{k=0}^{N-1}) \le J_p(\{S_k^{\lambda+\varepsilon}\}_{k=0}^{N-1})$, while (15) yields $J_p(\{S_k^{\lambda+\varepsilon}\}_{k=0}^{N-1}) \le J_p(\{S_k^\lambda\}_{k=0}^{N-1})$. Combining the two inequalities leads to the desired conclusion.
  • The fourth statement is true due to Assumption 3.
  • Note that the objective in (13) can be replaced with $\lambda^{-1} J_p(\{S_k\}_{k=0}^{N-1}) + C(\{S_k\}_{k=0}^{N-1}) - \gamma$ without changing the optimal solutions. As $\lambda \to \infty$, the objective converges to $C(\{S_k^\lambda\}_{k=0}^{N-1}) - \gamma = f(\lambda)$, which implies $C(\{S_k^\lambda\}_{k=0}^{N-1}) - \gamma = f(\lambda) < 0$ as $\lambda \to \infty$ due to the strict feasibility assumption in Assumption 2. Since $f$ is continuous over $\mathbb{R}_+$ from the first statement, there should exist $\lambda > 0$ such that $f(\lambda) < 0$.
  • Define $a = \sup\{\lambda > 0 : f(\lambda) = 0\}$ and $b = \inf\{\lambda > 0 : f(\lambda) = 0\}$. From the continuity of $f$, the supremum and infimum are attained; otherwise, $f$ would be discontinuous. Therefore, we can define $a = \max\{\lambda > 0 : f(\lambda) = 0\}$ and $b = \min\{\lambda > 0 : f(\lambda) = 0\}$. From the second statement, we see that $f(\lambda) = 0$ for all $\lambda \in [b, a]$. This completes the proof.
   □
Proposition 3 suggests that $f$ is monotonically non-increasing over the non-negative real numbers. Moreover, we can choose a $\bar{\lambda} > 0$ such that $f(\lambda) < 0$ for all $\lambda \in [\bar{\lambda}, \infty)$. Let $\tilde{\lambda} \in [\bar{\lambda}, \infty)$. Then, $f$ is monotonically non-increasing over $\lambda \in [0, \tilde{\lambda}]$ and connects $f(0) > 0$ and $f(\tilde{\lambda}) < 0$. The graph of $f$ is illustrated in the following example.
Example 2. 
Let us consider Example 1 again. With $\gamma = 3$ and $N = 10$, the function value $f(\lambda)$ is plotted in Figure 1, demonstrating the monotonically non-increasing property and the zero-crossing property in Proposition 3.
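Proposition 3 can also be sanity-checked numerically. The following self-contained sketch (my own toy scalar example, unrelated to the building model and not from the paper) evaluates $f(\lambda)$ on a grid by running the backward Riccati pass of Proposition 1 and the forward variance propagation for a one-dimensional system; the printed values should be non-increasing in $\lambda$ and eventually change sign.

```python
import numpy as np

# Toy scalar system x(k+1) = a x(k) + b u(k) + w(k); all values are illustrative.
a, b, Wn, N = 0.95, 1.0, 0.1, 20
z0, V0 = 2.0, 0.0                  # deterministic initial state x(0) = 2
Q, R, Qf = 1.0, 0.0, 1.0           # primary cost: sum of E[x(k)^2]
Qt, Rt, Qft = 0.0, 1.0, 0.0        # constraint cost: sum of E[u(k)^2]
gamma = 1.0

def f(lam):
    # Backward Riccati pass with the blended weights (all quantities scalar).
    X = Qf + lam * Qft
    F = np.empty(N)
    for k in reversed(range(N)):
        Rbar = R + lam * Rt + b * X * b
        F[k] = -(b * X * a) / Rbar
        X = a * X * a - (a * X * b) ** 2 / Rbar + Q + lam * Qt
    # Forward pass: propagate E[x(k)^2] and accumulate the constraint cost.
    s = V0 + z0 * z0               # E[x(0)^2]
    c = 0.0
    for k in range(N):
        c += Qt * s + Rt * (F[k] ** 2) * s   # stage constraint cost
        s = (a + b * F[k]) ** 2 * s + Wn     # E[x(k+1)^2]
    c += Qft * s                   # terminal constraint cost (zero here)
    return c - gamma

for lam in [0.0, 0.1, 0.5, 1.0, 5.0, 20.0]:
    print(f"lambda = {lam:6.2f}   f(lambda) = {f(lam): .4f}")
```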
Based on Proposition 3, we can characterize the set of the KKT points. In particular, it turns out that the KKT point corresponding to $\lambda$ should satisfy the constraint $C(\{S_k^\lambda\}_{k=0}^{N-1}) = \gamma$.
Proposition 4. 
The set of variables satisfying the KKT condition is the set of all tuples $(\lambda, S_k^\lambda, F_k^\lambda, P_k^\lambda)_{k=0}^{N-1}$ such that $\lambda > 0$ and $C(\{S_k^\lambda\}_{k=0}^{N-1}) = \gamma$.
Proof. 
If $C(\{S_k^\lambda\}_{k=0}^{N-1}) > \gamma$, the KKT condition is obviously not satisfied because the complementary slackness condition (5) is not satisfied with $\lambda > 0$. If $C(\{S_k^\lambda\}_{k=0}^{N-1}) < \gamma$, then the complementary slackness condition is again not satisfied with $\lambda > 0$. The only case in which the KKT condition is satisfied with $\lambda > 0$ is when $C(\{S_k^\lambda\}_{k=0}^{N-1}) = \gamma$ holds. This completes the proof.    □
Proposition 4 gives us a clue on how to find the KKT point. However, since the KKT condition is only a necessary condition for optimality, there is no guarantee that a KKT point found is actually an optimal solution. Fortunately, we can prove that all the KKT points characterized in Proposition 4 constitute optimal solutions.
Proposition 5. 
Consider any tuple $(\lambda, S_k^\lambda, F_k^\lambda, P_k^\lambda)_{k=0}^{N-1}$ such that $\lambda > 0$ and $C(\{S_k^\lambda\}_{k=0}^{N-1}) = \gamma$. The set of such tuples is formally defined as
$$\Lambda := \{ (\lambda, S_k^\lambda, F_k^\lambda, P_k^\lambda)_{k=0}^{N-1} : \lambda > 0, \ C(\{S_k^\lambda\}_{k=0}^{N-1}) = \gamma \}.$$
Then, the corresponding $\{(S_k^\lambda, F_k^\lambda)\}_{k=0}^{N-1}$ with $(\lambda, S_k^\lambda, F_k^\lambda, P_k^\lambda)_{k=0}^{N-1} \in \Lambda$ is an optimal solution of the constrained LQG problem in Problem 2.
Proof. 
From Proposition 4, we conclude that $\Lambda$ is the set of all KKT points. Therefore, there exists at least one $(\lambda, S_k^\lambda, F_k^\lambda, P_k^\lambda)_{k=0}^{N-1} \in \Lambda$ such that the corresponding $\{(S_k^\lambda, F_k^\lambda)\}_{k=0}^{N-1}$ is an optimal solution of the constrained LQG problem in Problem 2. From the third statement of Proposition 3, the other elements in $\Lambda$ have the same objective function value $J_p(\{S_k^\lambda\}_{k=0}^{N-1})$. Therefore, for all $(\lambda, S_k^\lambda, F_k^\lambda, P_k^\lambda)_{k=0}^{N-1} \in \Lambda$, the corresponding $\{(S_k^\lambda, F_k^\lambda)\}_{k=0}^{N-1}$ is an optimal solution of the constrained LQG problem in Problem 2. This completes the proof.    □
Proposition 5 tells us that if we can find a root λ > 0 satisfying f ( λ ) = 0 , then we can find an optimal solution of Problem 2. Therefore, the problem is reduced to finding a root of f ( λ ) = 0 . Our next goal is to develop a simple algorithm to solve the multi-objective LQG problem.

3.2. Algorithm

A natural approach is to perform a line search over $\lambda \ge 0$ until $C(\{S_k^\lambda\}_{k=0}^{N-1}) = \gamma$ holds. For instance, we can gradually increase $\lambda$ from 0 with a certain step size $\Delta\lambda > 0$ until $f(\lambda) = 0$ holds. Another way is to perform a bisection line search over a certain interval $\lambda \in [0, \bar{\lambda}]$, $\bar{\lambda} > 0$, and find a root $\lambda > 0$ satisfying $f(\lambda) = 0$. In this paper, we adopt the bisection search summarized in Algorithm 1. Note that the bisection search is valid because $f$ is monotone in its argument, which reduces to the monotonicity of $C(\{S_k^\lambda\}_{k=0}^{N-1})$. Moreover, Algorithm 1 can be seen as a primal-dual method because it alternates primal and dual variable updates to evaluate $f$. In Algorithm 1, $f$ can be computed using Algorithm 2.
Algorithm 1 Primal-Dual Method with Bisection Line Search.
1: Input: accuracy $\varepsilon > 0$, line search interval $\tilde{\lambda} > 0$
2: Compute $f(0) = \mathrm{Eval}(0)$.
3: if $f(0) \le 0$ then
4:     Stop and output 0.
5: end if
6: Let $a = 0$ and $b = \tilde{\lambda}$.
7: for $k \in \{0, 1, \ldots\}$ do
8:     Compute $f(a) = \mathrm{Eval}(a)$, $f(b) = \mathrm{Eval}(b)$, $c = (a+b)/2$, and $f(c) = \mathrm{Eval}(c)$
9:     if $|(b-a)/2| \le \varepsilon$ then
10:        Stop and output $c$
11:    end if
12:    if $\mathrm{sign}(f(c)) = \mathrm{sign}(f(a))$ then
13:        $a \leftarrow c$
14:    else
15:        $b \leftarrow c$
16:    end if
17: end for
Algorithm 2 Policy evaluation $f(\lambda) = \mathrm{Eval}(\lambda)$.
1: Input: $\lambda$
2: Dual update: Perform the recursion (Riccati equation)
   $$A^T X_{k+1}^\lambda A - A^T X_{k+1}^\lambda B (R_k + \lambda \tilde{R}_k + B^T X_{k+1}^\lambda B)^{-1} B^T X_{k+1}^\lambda A + Q_k + \lambda \tilde{Q}_k = X_k^\lambda$$
   for all $k \in \{0, \ldots, N-1\}$ with $X_N^\lambda = Q_f + \lambda \tilde{Q}_f$.
3: Primal update: Compute the feedback gains
   $$F_k = -(R_k + \lambda \tilde{R}_k + B^T X_{k+1}^\lambda B)^{-1} B^T X_{k+1}^\lambda A, \quad k \in \{0, \ldots, N-1\}$$
4: Primal update: Perform the recursion
   $$\Phi(F_k, S_{k-1}) = S_k, \quad k \in \{1, \ldots, N-1\},$$
   with $\begin{bmatrix} I_n \\ F_0 \end{bmatrix} (V + z z^T) \begin{bmatrix} I_n \\ F_0 \end{bmatrix}^T = S_0$.
5: Compute
   $$C(\{S_k^\lambda\}_{k=0}^{N-1}) := \mathrm{Tr}\left( \tilde{Q}_f \left( \begin{bmatrix} A^T \\ B^T \end{bmatrix}^T S_{N-1}^\lambda \begin{bmatrix} A^T \\ B^T \end{bmatrix} + W \right) \right) + \sum_{k=0}^{N-1} \mathrm{Tr}\left( \begin{bmatrix} \tilde{Q}_k & 0 \\ 0 & \tilde{R}_k \end{bmatrix} S_k^\lambda \right)$$
6: Output: $f(\lambda) = C(\{S_k^\lambda\}_{k=0}^{N-1}) - \gamma$
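Putting Algorithms 1 and 2 together yields a compact procedure. The following NumPy sketch is my own reading of the pseudocode (constant stage weights assumed, illustrative default values for $\tilde{\lambda}$ and $\varepsilon$), not the author's reference implementation.

```python
import numpy as np

def eval_f(lam, A, B, W, V, z, Q, R, Qf, Qt, Rt, Qft, N, gamma):
    """Algorithm 2 (sketch): dual/Riccati update, primal/covariance update,
    and the constraint residual f(lam) = C({S_k^lam}) - gamma.
    Stage weights are assumed constant in k for brevity."""
    n, m = A.shape[0], B.shape[1]
    AB = np.hstack([A, B])
    # Dual update: backward Riccati recursion with blended weights.
    X = Qf + lam * Qft
    F = [None] * N
    for k in reversed(range(N)):
        Rbar = R + lam * Rt + B.T @ X @ B      # assumed invertible (Assumption 1)
        K = np.linalg.solve(Rbar, B.T @ X @ A)
        F[k] = -K                              # u(k) = F_k x(k)
        X = A.T @ X @ A - A.T @ X @ B @ K + Q + lam * Qt
    # Primal update: forward covariance recursion S_0, ..., S_{N-1}.
    IF0 = np.vstack([np.eye(n), F[0]])
    S = IF0 @ (V + np.outer(z, z)) @ IF0.T
    blk_t = np.block([[Qt, np.zeros((n, m))], [np.zeros((m, n)), Rt]])
    C = 0.0
    for k in range(N):
        C += np.trace(blk_t @ S)
        if k < N - 1:
            IF = np.vstack([np.eye(n), F[k + 1]])
            S = IF @ (AB @ S @ AB.T + W) @ IF.T
    C += np.trace(Qft @ (AB @ S @ AB.T + W))   # terminal constraint cost
    return C - gamma, F

def bisection(A, B, W, V, z, Q, R, Qf, Qt, Rt, Qft, N, gamma,
              lam_max=100.0, eps=1e-6):
    """Algorithm 1 (sketch): bisection line search for the multiplier lambda."""
    f0, F0 = eval_f(0.0, A, B, W, V, z, Q, R, Qf, Qt, Rt, Qft, N, gamma)
    if f0 <= 0:                                # constraint inactive: standard LQG
        return 0.0, F0
    a, b = 0.0, lam_max                        # assumes f(lam_max) < 0 (Prop. 3)
    while (b - a) / 2 > eps:
        c = (a + b) / 2
        fc, _ = eval_f(c, A, B, W, V, z, Q, R, Qf, Qt, Rt, Qft, N, gamma)
        if fc > 0:                             # same sign as f(a): move left end
            a = c
        else:
            b = c
    lam_star = (a + b) / 2
    _, F_star = eval_f(lam_star, A, B, W, V, z, Q, R, Qf, Qt, Rt, Qft, N, gamma)
    return lam_star, F_star
```

For instance, with the matrices from the Example 1 sketch, `bisection(A, B, W, V, z, Qk, Rk, Qf, Qk_t, Rk_t, Qf_t, N, gamma)` returns an approximate multiplier and the corresponding gains; because only the multiplier changes between iterations, each iteration costs one Riccati pass and one covariance pass, consistent with the primal-dual interpretation above.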

3.3. Suboptimality

Once an approximate $\lambda$ is found from Algorithm 1, the corresponding solution $\{S_k^\lambda, P_k^\lambda, F_k^\lambda\}_{k=0}^{N-1}$ can be easily found. The solution obtained by Algorithm 1 is $\varepsilon$-accurate in terms of $\lambda$, while it does not guarantee $\varepsilon$-accuracy in terms of the objective $f(\lambda)$ or the other variables $\{S_k^\lambda, P_k^\lambda, F_k^\lambda\}_{k=0}^{N-1}$ induced from the $\varepsilon$-accuracy of $\lambda$, which depend on their sensitivities with respect to $\lambda$. From the structures of $f(\lambda)$ and $\{S_k^\lambda, P_k^\lambda, F_k^\lambda\}_{k=0}^{N-1}$, we can conclude that if $\lambda$ is $\varepsilon$-accurate, then $f(\lambda)$ is $\rho(\varepsilon)$-accurate, i.e., $|f(\lambda)| \le \rho(\varepsilon)$ for some function $\rho$ such that $\rho(\varepsilon) \to 0$ as $\varepsilon \to 0$. The function $\rho$ depends on the system parameters such as $(A, B)$, $Q_f$, $Q_k$, $R_k$, $\tilde{Q}_f$, $\tilde{Q}_k$, $\tilde{R}_k$, $k \ge 0$, and $N$.
Due to the finite precision error in the bisection search, it is hard to satisfy the equality $C(\{S_k^\lambda\}_{k=0}^{N-1}) = \gamma$ or $f(\lambda) = 0$ exactly. Assume that $f(\lambda)$ is $\rho$-accurate, i.e., $|f(\lambda)| \le \rho$. This implies
$$C(\{S_k^\lambda\}_{k=0}^{N-1}) - \gamma =: a \in [-\rho, \rho].$$
Then, such a $\lambda > 0$ is the dual variable such that
$$C(\{S_k^\lambda\}_{k=0}^{N-1}) = \gamma + a$$
is satisfied, and the corresponding tuple $\{\lambda, S_k^\lambda, P_k^\lambda, F_k^\lambda\}_{k=0}^{N-1}$ satisfies the KKT condition with $\gamma$ replaced by $\gamma + a$. We can conclude that $\{S_k^\lambda, F_k^\lambda\}_{k=0}^{N-1}$ is an optimal solution of Problem 2 with $\gamma$ replaced by $\gamma + a \in [\gamma - \rho, \gamma + \rho]$.
Proposition 6. 
Suppose that for a given $\lambda > 0$, $f(\lambda)$ is $\rho$-accurate, i.e., $|f(\lambda)| \le \rho$, and define
$$C(\{S_k^\lambda\}_{k=0}^{N-1}) - \gamma =: a \in [-\rho, \rho].$$
Then, for the corresponding tuple $\{\lambda, S_k^\lambda, P_k^\lambda, F_k^\lambda\}_{k=0}^{N-1}$, $\{S_k^\lambda, F_k^\lambda\}_{k=0}^{N-1}$ is an optimal solution of Problem 2 with $\gamma$ replaced by $\gamma + a \in [\gamma - \rho, \gamma + \rho]$.
Proof. 
The corresponding tuple $\{\lambda, S_k^\lambda, P_k^\lambda, F_k^\lambda\}_{k=0}^{N-1}$ satisfies the KKT condition with the complementary slackness condition replaced by $\lambda ( C(\{S_k\}_{k=0}^{N-1}) - \gamma - a ) = 0$. The proof is concluded using Proposition 5. □
Proposition 6 suggests that the solution obtained using Algorithm 1 is a suboptimal solution of Problem 2, namely an optimal solution of Problem 2 with $\gamma$ replaced by $\gamma + a$.
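In practice, the slack $a$ can be quantified directly: the achieved constraint level is simply $\gamma$ plus the residual $f(\lambda)$ at the returned multiplier. The snippet below is a hypothetical continuation of the sketch after Algorithm 2; `eval_f`, `bisection`, and the Example 1 matrices are the illustrative objects defined in the earlier sketches, not the paper's code.

```python
# Continuing from the earlier sketches (eval_f, bisection, and the Example 1
# matrices defined there); all names are illustrative.
lam_star, F_star = bisection(A, B, W, V, z, Qk, Rk, Qf, Qk_t, Rk_t, Qf_t, N, gamma)
residual, _ = eval_f(lam_star, A, B, W, V, z, Qk, Rk, Qf, Qk_t, Rk_t, Qf_t, N, gamma)
a_slack = residual                 # a = C({S_k^lambda}) - gamma, with |a| <= rho
print(f"lambda* = {lam_star:.4f}, achieved constraint level = {gamma + a_slack:.4f}")
```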

3.4. Computational Efficiency

The number of variables in the problem is upper bounded by $O(n^2 \cdot N)$. If we use an SDP to solve the multi-objective problem using interior point algorithms, the time complexity is known to be upper bounded by $O(n^6 \cdot N^3 \cdot \log(1/\varepsilon))$ to obtain an $\varepsilon$-accurate solution [33]. Therefore, the computational time may grow cubically as $N \to \infty$. On the other hand, the bisection line search is known to find an $\varepsilon$-accurate solution within a number of iterations bounded by $O(\log(\varepsilon_0/\varepsilon))$, where $\varepsilon_0 = |b - a|$ is the initial bracket size. The time complexity of the proposed algorithm per iteration is $O(n^2 \cdot N)$. Therefore, the overall time complexity is bounded by $O(n^2 \cdot N \cdot \log(\varepsilon_0/\varepsilon))$, which is linear in $N$. Assuming that the two notions of $\varepsilon$-accuracy are reasonably compatible for a fair comparison, the proposed bisection algorithm may perform much faster than interior-point algorithms, especially when $N$ is large, which is the case in most applications. In particular, when model predictive control is applied, where Problem 2 is solved at every iteration, the proposed scheme could play an important role. The compatibility of the $\varepsilon$-accuracy of both approaches is hard to address within the scope of this paper. We will provide a numerical comparative analysis at the end of this paper to demonstrate the efficiency of the algorithm.
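As a rough, constants-ignored illustration (my own back-of-the-envelope estimate, not a claim from the paper) with the values used later in Section 3.6 ($n = 4$, $N = 1000$, $\varepsilon_0 = 100$, $\varepsilon = 10^{-6}$, taking base-2 logarithms), the two bounds differ by many orders of magnitude:
$$n^6 \cdot N^3 \cdot \log_2(1/\varepsilon) \approx 4096 \cdot 10^9 \cdot 20 \approx 8 \times 10^{13}, \qquad n^2 \cdot N \cdot \log_2(\varepsilon_0/\varepsilon) \approx 16 \cdot 1000 \cdot 27 \approx 4 \times 10^{5}.$$
The hidden constants and the differing notions of accuracy make this only an order-of-magnitude comparison, but it is consistent with the elapsed-time results reported in Section 3.6.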

3.5. Deterministic Cases

The previous results assume that the noises are stochastic and the covariance matrices $V$ and $W$ are positive definite. However, this does not cover important applications where some variables are deterministic or fixed. To cover more practical cases, we extend the results to the generic case where some elements of the noise vectors are deterministic. In this case, $V \succ 0$, $W \succ 0$ should be relaxed to $V \succeq 0$, $W \succeq 0$. Note that if $V \succ 0$, $W \succ 0$, then the KKT point in Proposition 1 uniquely satisfies the KKT condition for any fixed $\lambda > 0$. For the deterministic case $V = W = 0$ or $V, W \succeq 0$, it still satisfies the KKT condition, while it may not be a unique solution. Therefore, we cannot directly preserve the optimality arguments in Proposition 5. Fortunately, the previous results still hold in this case under mild assumptions.
Proposition 7. 
Suppose that the second condition, $V \succ 0$, $W \succ 0$, in Assumption 1 is relaxed to $V \succeq 0$, $W \succeq 0$. Assume that $f$ is a bijection for the given $V \succeq 0$, $W \succeq 0$. Consider any tuple $(\lambda, S_k^\lambda, F_k^\lambda, P_k^\lambda)_{k=0}^{N-1}$ such that $\lambda > 0$ and $C(\{S_k^\lambda\}_{k=0}^{N-1}) = \gamma$. Then, the corresponding $\{(S_k^\lambda, F_k^\lambda)\}_{k=0}^{N-1}$ is an optimal solution of the constrained LQG problem in Problem 2.
Proof. 
We first prove the continuity of $(\lambda, S_k^\lambda, F_k^\lambda, P_k^\lambda)_{k=0}^{N-1}$ as a function of $V$ and $W$, and denote this dependence by $(\lambda, S_k^{\lambda, V, W}, F_k^{\lambda, V, W}, P_k^{\lambda, V, W})_{k=0}^{N-1}$. First of all, suppose that $\lambda > 0$ is fixed. Then, $F_k^{\lambda, V, W}$ and $P_k^{\lambda, V, W}$ do not depend on $V$ and $W$, and hence are continuous as functions of $(V, W)$. Moreover, $S_k^{\lambda, V, W}$ depends on $(V, W)$ linearly, and thus is continuous in $(V, W)$. Similarly, so are $J_p^{V,W}$ and $C^{V,W}$ as functions of $(V, W)$. Now, $f^{V,W}$ is also continuous as a function of $(V, W)$ and $\lambda$. Consider the set-valued mapping $T : (V, W) \mapsto \{\lambda > 0 : f^{V,W}(\lambda) = 0\}$. If $f$ is bijective for $V \succeq 0$ and $W \succeq 0$, then the output of $T$ is a singleton, and $T(V, W)$ is the point where the graph of $f^{V,W}$ crosses zero, because the Lagrange multiplier $\lambda > 0$ that solves the KKT condition is a root of $f^{V,W}(\lambda)$. Therefore, from the continuity of $f^{V,W}$ in $(V, W)$ and $\lambda \ge 0$, we can prove that $T$ is also continuous as follows. First of all, note that by the continuity of $f^{V,W}$ in $(V, W)$, $f^{V',W'}$ is a bijection for all $(V', W')$ around $(V, W)$. In the sequel, assume that $(V', W')$ always lies inside such a set. Note also that $T(V, W) = (f^{V,W})^{-1}(0)$ and $(f^{V,W})^{-1}(y)$ is continuous in $y$ by the continuity of $f^{V,W}(\lambda)$ in $\lambda$. We will show that for any $\varepsilon > 0$, there exists $\delta > 0$ such that $\|V - V'\|_2 + \|W - W'\|_2 < \delta$ implies $|T(V, W) - T(V', W')| < \varepsilon$. To proceed, let us define $T(V, W) = (f^{V,W})^{-1}(0) = \lambda$ and $T(V', W') = (f^{V',W'})^{-1}(0) = \lambda'$. By the continuity of $f^{V,W}$ in $V$ and $W$, for any $\varepsilon'' > 0$, there exists $\delta > 0$ such that $\|V - V'\|_2 + \|W - W'\|_2 < \delta$ implies $|f^{V,W}(\lambda') - f^{V',W'}(\lambda')| < \varepsilon''$. Moreover, by the continuity of $(f^{V,W})^{-1}(y)$ in $y$, for any $\varepsilon > 0$, there exists $\varepsilon'' > 0$ such that $|x - y| < \varepsilon''$ implies $|(f^{V,W})^{-1}(x) - (f^{V,W})^{-1}(y)| < \varepsilon$. With $x = f^{V,W}(\lambda')$ and $y = f^{V',W'}(\lambda') = 0$, we have
$$\left| (f^{V,W})^{-1}(f^{V,W}(\lambda')) - (f^{V,W})^{-1}(f^{V',W'}(\lambda')) \right| = \left| \lambda' - (f^{V,W})^{-1}(0) \right| = \left| (f^{V',W'})^{-1}(0) - (f^{V,W})^{-1}(0) \right| < \varepsilon.$$
Therefore, this proves the continuity of $T(V, W) = (f^{V,W})^{-1}(0)$ in $V$ and $W$. Hence, $(\lambda, S_k^{\lambda, V, W}, F_k^{\lambda, V, W}, P_k^{\lambda, V, W})_{k=0}^{N-1}$ is continuous as a function of $V$ and $W$. Note that $\lambda$ is also a function of $(V, W)$. As the next step, consider a sequence $(V_i, W_i)_{i=0}^\infty$ such that $V_i = V + (0.5)^i I$ and $W_i = W + (0.5)^i I$, so that $(V_i, W_i) \to (V, W)$ as $i \to \infty$ and $V_i \succ 0$, $W_i \succ 0$ for all $i \ge 0$. Then, the corresponding $(S_k^{\lambda_i, V_i, W_i}, F_k^{\lambda_i, V_i, W_i})_{k=0}^{N-1}$ is the unique optimal solution corresponding to $V_i$ and $W_i$, where $\lambda_i$ is the Lagrange multiplier corresponding to $V_i$ and $W_i$. From the continuity of the KKT point in $(V, W)$, we arrive at the desired conclusion. □

3.6. Example

In this section, we provide a simple example to illustrate the validity and efficiency of the proposed approach. The numerical example was solved with the help of MATLAB R2020a. The SDPs were solved with SeDuMi [34] and YALMIP [35]. Let us consider Example 1 again with the following setting:
$$N = 1000, \quad \gamma = 25{,}000, \quad V = 0, \quad W = 0.01 I, \quad z = \begin{bmatrix} 25 \\ 25 \\ 30 \\ 24 \end{bmatrix}.$$
Running the proposed algorithm with the initial search interval $[0, \tilde{\lambda}] = [0, 100]$ and accuracy $\varepsilon = 10^{-6}$ leads to $\lambda = 0.2448$ with an elapsed time of 1.83 s. With the obtained control policy, the evolution of the indoor temperature $x_1(t)$ (°C), reference temperature $x_4(t)$ (°C), and input $u(t)$ (W), together with the histograms of the objective function and constraint cost, are depicted in Figure 2. The histograms were obtained over 3000 samples, and the empirical average of the constraint cost is 25,129, which meets the inequality constraint approximately.
Running the proposed algorithm with the same setting except for $\gamma = 10{,}000$ leads to $\lambda = 0.8959$. The corresponding simulation results are given in Figure 3. The empirical average of the constraint cost over the sampled histograms is 9956, which meets the inequality constraint approximately.
A semidefinite programming (SDP) formulation for solving the same problem, Problem 2, is given by
$$
\begin{aligned}
\max_{S_k, \, k \in \{0, 1, \ldots, N\}, \ \lambda \ge 0} \quad & \sum_{k=0}^{N} \mathrm{Tr}(S_k) - \lambda \gamma \\
\text{subject to} \quad & \begin{bmatrix} Q_k + \lambda \tilde{Q}_k & 0 \\ 0 & R_k + \lambda \tilde{R}_k \end{bmatrix} + \begin{bmatrix} A^T S_{k+1} A - S_k & A^T S_{k+1} B \\ B^T S_{k+1} A & B^T S_{k+1} B \end{bmatrix} \succeq 0, \quad k \in \{0, 1, \ldots, N\},
\end{aligned}
$$
with $S_{N+1} = 0$, $Q_N = Q_f$, $\tilde{Q}_N = \tilde{Q}_f$, $R_N = \tilde{R}_N = 0$, which can be readily obtained by modifying the results in [29]. The histograms of the elapsed times of the proposed algorithm and the above SDP problem for solving the building problem are shown in Figure 4 over 30 samples.
The average elapsed times are 1.7551 s and 11.8987 s for the proposed method and the SDP problem, respectively. The result demonstrates that the proposed algorithm is computationally more efficient than the SDP approach.

4. Conclusions

In this paper, we have studied a multi-objective linear quadratic Gaussian (LQG) control problem subject to an input energy constraint. To solve this problem efficiently, we have proposed a bisection line search algorithm based on optimization and Lagrangian theories. Our approach has been thoroughly investigated by analyzing optimal solutions using the Karush–Kuhn–Tucker (KKT) condition, and we have proven the convergence guarantees of the algorithm. We have also demonstrated the applicability and efficiency of the proposed algorithm by applying it to a building control problem.
The proposed algorithm has the potential to be applied to fast model predictive control and to provide new insights into LQG problems. However, we acknowledge that the bisection line search algorithm may become inefficient in multi-constraint scenarios due to the high-dimensional search space. Hence, in future work, we aim to investigate and develop new algorithms that can efficiently handle problems with multiple constraints.

Funding

This work was supported by the National Research Foundation under Grant NRF-2021R1F1A1061613 and by an Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. 2022-0-00469).

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Bertsekas, D.P.; Tsitsiklis, J.N. Neuro-Dynamic Programming; Athena Scientific: Belmont, MA, USA, 1996.
  2. Bertsekas, D.P. Dynamic Programming and Optimal Control, 3rd ed.; Athena Scientific: Nashua, MA, USA, 2005; Volume 1.
  3. Lee, D. Convergence of Dynamic Programming on the Semidefinite Cone for Discrete-Time Infinite-Horizon LQR. IEEE Trans. Autom. Control 2022, 67, 5661–5668.
  4. Sabermahani, S.; Ordokhani, Y. Fibonacci wavelets and Galerkin method to investigate fractional optimal control problems with bibliometric analysis. J. Vib. Control 2021, 27, 1778–1792.
  5. Boyd, S.; Vandenberghe, L. Convex Optimization; Cambridge University Press: Cambridge, UK, 2004.
  6. Vandenberghe, L.; Boyd, S. Semidefinite programming. SIAM Rev. 1996, 38, 49–95.
  7. Boyd, S.; El Ghaoui, L.; Feron, E.; Balakrishnan, V. Linear Matrix Inequalities in Systems and Control Theory; SIAM: Philadelphia, PA, USA, 1994.
  8. El Ghaoui, L.; Niculescu, S.I. Advances in Linear Matrix Inequality Methods in Control; SIAM: University City, PA, USA, 2000; Volume 2.
  9. Scherer, C.; Weiland, S. Linear Matrix Inequalities in Control; Lecture Notes; Dutch Institute for Systems and Control: Delft, The Netherlands, 2000; Volume 3.
  10. Lee, D.H.; Park, J.B.; Joo, Y.H. A less conservative LMI condition for robust D-stability of polynomial matrix polytopes—A projection approach. IEEE Trans. Autom. Control 2010, 56, 868–873.
  11. Lee, D.H.; Joo, Y.H. A note on sampled-data stabilization of LTI systems with aperiodic sampling. IEEE Trans. Autom. Control 2015, 60, 2746–2751.
  12. Cao, Y.Y.; Lin, Z. A descriptor system approach to robust stability analysis and controller synthesis. IEEE Trans. Autom. Control 2004, 49, 2081–2084.
  13. Oliveira, R.C.; Peres, P.L. Parameter-dependent LMIs in robust analysis: Characterization of homogeneous polynomially parameter-dependent solutions via LMI relaxations. IEEE Trans. Autom. Control 2007, 52, 1334–1340.
  14. Ramos, D.C.; Peres, P.L. An LMI condition for the robust stability of uncertain continuous-time linear systems. IEEE Trans. Autom. Control 2002, 47, 675–678.
  15. Chesi, G.; Garulli, A.; Tesi, A.; Vicino, A. Polynomially parameter-dependent Lyapunov functions for robust stability of polytopic systems: An LMI approach. IEEE Trans. Autom. Control 2005, 50, 365–370.
  16. Fridman, E.; Shaked, U. Parameter dependent stability and stabilization of uncertain time-delay systems. IEEE Trans. Autom. Control 2003, 48, 861–866.
  17. Yan, S.; Shen, M.; Nguang, S.K.; Zhang, G. Event-triggered H∞ control of networked control systems with distributed transmission delay. IEEE Trans. Autom. Control 2019, 65, 4295–4301.
  18. Felipe, A.; Oliveira, R.C. An LMI-based algorithm to compute robust stabilizing feedback gains directly as optimization variables. IEEE Trans. Autom. Control 2020, 66, 4365–4370.
  19. Guo, M.; De Persis, C.; Tesi, P. Data-driven stabilization of nonlinear polynomial systems with noisy data. IEEE Trans. Autom. Control 2021, 67, 4210–4217.
  20. Lee, D.; Lee, S.; Karava, P.; Hu, J. Simulation-based policy gradient and its building control application. In Proceedings of the 2018 Annual American Control Conference (ACC), Milwaukee, WI, USA, 27–29 June 2018; pp. 5424–5429.
  21. Ma, Y.; Borrelli, F. Fast stochastic predictive control for building temperature regulation. In Proceedings of the 2012 American Control Conference (ACC), Montreal, QC, Canada, 27–29 June 2012; pp. 3075–3080.
  22. Oldewurtel, F.; Parisio, A.; Jones, C.N.; Morari, M.; Gyalistras, D.; Gwerder, M.; Stauch, V.; Lehmann, B.; Wirth, K. Energy efficient building climate control using stochastic model predictive control and weather predictions. In Proceedings of the 2010 American Control Conference, Baltimore, MD, USA, 30 June–2 July 2010; pp. 5100–5105.
  23. Luenberger, D.G.; Ye, Y. Linear and Nonlinear Programming; Addison-Wesley: Reading, MA, USA, 1984; Volume 2.
  24. Bemporad, A.; Morari, M.; Dua, V.; Pistikopoulos, E.N. The explicit linear quadratic regulator for constrained systems. Automatica 2002, 38, 3–20.
  25. Grieder, P.; Borrelli, F.; Torrisi, F.; Morari, M. Computation of the constrained infinite time linear quadratic regulator. Automatica 2004, 40, 701–708.
  26. Borrelli, F. Constrained Optimal Control of Linear and Hybrid Systems; Springer: Berlin/Heidelberg, Germany, 2003; Volume 290.
  27. Chisci, L.; Zappa, G. Fast algorithm for a constrained infinite horizon LQ problem. Int. J. Control 1999, 72, 1020–1026.
  28. Rotea, M.A. The generalized H2 control problem. Automatica 1993, 29, 373–385.
  29. Gattami, A. Generalized linear quadratic control. IEEE Trans. Autom. Control 2010, 55, 131–136.
  30. Kothare, M.V.; Balakrishnan, V.; Morari, M. Robust constrained model predictive control using linear matrix inequalities. Automatica 1996, 32, 1361–1379.
  31. Primbs, J.A.; Sung, C.H. Stochastic receding horizon control of constrained linear systems with state and control multiplicative noise. IEEE Trans. Autom. Control 2009, 54, 221–230.
  32. Costa, O.L.V.; Assumpção Filho, E.; Boukas, E.; Marques, R. Constrained quadratic state feedback control of discrete-time Markovian jump linear systems. Automatica 1999, 35, 617–626.
  33. Gahinet, P.; Nemirovski, A.; Laub, A.J.; Chilali, M. LMI Control Toolbox; The MathWorks Inc.: Natick, MA, USA, 1996.
  34. Sturm, J.F. Using SeDuMi 1.02, a MATLAB toolbox for optimization over symmetric cones. Optim. Methods Softw. 1999, 11, 625–653.
  35. Lofberg, J. YALMIP: A toolbox for modeling and optimization in MATLAB. In Proceedings of the 2004 IEEE International Conference on Robotics and Automation (IEEE Cat. No. 04CH37508), New Orleans, LA, USA, 2–4 September 2004; pp. 284–289.
Figure 1. $f(\lambda)$ for different $\lambda \in [0, 5]$.
Figure 2. With $\gamma = 25{,}000$, the top figures show the evolution of the indoor temperature $x_1(t)$, the reference temperature $x_4(t)$, and the input $u(t)$. The bottom figure depicts the histograms of the objective function and constraint cost.
Figure 3. With $\gamma = 10{,}000$, the top figures show the evolution of the indoor temperature $x_1(t)$, the reference temperature $x_4(t)$, and the input $u(t)$. The bottom figure depicts the histograms of the objective function and constraint cost.
Figure 4. The histograms of the elapsed times of the proposed algorithm and the SDP problem.