
Optimal Dynamic Control of Proxy War Arms Support

Peter Lohmander Optimal Solutions, Hoppets Grand 6, SE-901 34 Umea, Sweden
Automation 2023, 4(1), 31-56; https://doi.org/10.3390/automation4010004
Submission received: 10 November 2022 / Revised: 16 January 2023 / Accepted: 25 January 2023 / Published: 30 January 2023
(This article belongs to the Special Issue Networked Predictive Control for Complex Systems)

Abstract

A proxy war between a coalition of countries, BLUE, and a country, RED, is considered. RED wants to increase the size of the RED territory. BLUE wants to involve more regions in trade and other types of cooperation. GREEN is a small and independent nation that wants to become a member of BLUE. RED attacks GREEN and tries to invade. BLUE decides to give optimal arms support to GREEN. This support can help GREEN in the war against RED and simultaneously reduce the military power of RED, which is valuable to BLUE outside this proxy war as well, since RED may confront BLUE in other regions. The optimal control problem of dynamic arms support, from the BLUE perspective, is defined in general form. From the BLUE perspective, there is an optimal position of the front. This position is a function of the weights in the objective function and all other parameters. Optimal control theory is used to determine the optimal dynamic BLUE strategy, conditional on a RED strategy that is observed by BLUE military intelligence. The optimal arms support strategy for BLUE is to initially send a large volume of arms support to GREEN, to rapidly move the front to the optimal position. Then, the support should be almost constant during most of the war, keeping the war front location stationary. In the final part of the conflict, when RED has almost no military resources left and tries to retire from the GREEN territory, BLUE should strongly increase the arms support and make sure that GREEN can rapidly regain the complete territory and end the war.

1. Introduction

Military operations analysis, based on quantitative game-theoretic models of conflicts, has traditionally been founded on some fundamental, but seldom questioned, assumptions: typically, that the players and decision makers have known objective functions and that their decisions can be shown to be rational, with consideration of these objective functions, the information available to the players and other facts of relevance. Several articles and books of this type are presented and discussed below. Recent severe military conflicts, in particular the Russian attack on Ukraine, make it difficult, and perhaps impossible, to understand and model the observed military activities of the attacker as the result of a well-defined game solution in which the objective function of the attacker reflects rational national interests. This paper's objective is to develop a theory and a methodology that are suitable and relevant in this partly new type of war. In order to preserve a stable world, it is important to optimize the defense against aggressive attacks from rulers and nations that cannot be considered rational. For these reasons, the model developed and analyzed in this paper is founded on the assumption that a coalition of countries defends a region attacked by another country. The attacker is described and modeled as a predictable agent. The defense, however, is optimized via optimal control theory.

1.1. Theoretical Framework

Mathematical modeling of military conflicts can be conducted with many alternative analytical and numerical methods. The purpose may be descriptive or normative. In the first case, the models should predict the future development of a conflict, as a function of initial conditions and selected strategies. In the latter, the purpose is usually to optimize the decisions of one or several decision makers that affect the outcome of the conflict.
Differential equations are key to studies of all kinds of dynamic phenomena. Braun [1] is an excellent resource that not only contains general mathematical theories and methods but also highly relevant applications, such as war dynamics. Conventional and conventional-guerilla combat problems are discussed and analyzed in detail, and the dynamics of actual World War II battles are described based on differential equations determined from empirical war data. Fleming and Rishel [2] give a complete introduction to deterministic and stochastic optimal control. The book also covers stochastic differential equations and Markov diffusion processes. Sethi and Thompson [3] is another excellent reference on optimal control, and provides many detailed applications in management science and economics.
When we have more than one decision maker that affects the outcome of the situation at hand, we are in the realm of game theory. Typically, different participants in a game have different objective functions. This is something quite different from optimization with one objective function. A classical book on game theory is Luce and Raiffa [4]. This book does however not handle game dynamics via differential equations. That step was first taken by Isaacs [5], who initiated the theory of differential games. When there are two players with completely opposite interests and objective functions, we have a zero-sum game. Then, one player wants to maximize an objective function, and the other player wants to minimize the same objective function. Classical examples of zero-sum games are chess, where one player wins if the other loses, and military duels, where one participant survives if the other dies. In such cases, differential games can be used to determine how the situation develops. Isaacs [5] does not only invent the methodology but also develops and analyzes many combat problems where the new methodology is useful. The War of Attrition and Attack (WAA), is one of the central models in the book.
Kim et al. [6] is one example of a study based on a symmetric two-player war of attrition game. They show the existence of an equilibrium and calculate the dynamic consequences.
Discrete time often allows real situations to be modeled more realistically. More types of functions can be used when stochastic dynamic programming solutions replace continuous time differential games. Lohmander [7,8] contains models that are conceptually generalizations of the WAA model. Lohmander replaces continuous time with discrete time, which makes it possible to include nonlinear functions at every stage, to describe the outcomes of different decision combinations, and to use mixed strategies at different points in time. With such generalizations, the optimal dynamic two-person zero-sum game strategies can be quite different from the optimal strategies in differential games.
From a general, high-level strategic perspective, wars are simply not zero-sum problems. Clearly, in most wars, it is possible for two armies to increase the total number of survivors if both armies jointly decide to stop the war. In the same way, real wars cause very expensive damage. As the war continues, more and more houses, roads, bridges, etc., are destroyed, and civilians are hurt or killed. All these costs of death and destruction should also be considered in a policy-relevant analysis of the war. Hence, we must accept the fact that zero-sum games are, in most cases, simply not relevant descriptions of large conflicts. Nevertheless, within a war, there are many local battles at lower levels of command. There and then, local officers regularly face situations that may be viewed as zero-sum games. As a commander of such a local unit, you often face decision problems where you survive or die, depending on your action and the action selected by your enemy.
Washburn [9] introduces the fundamentals of zero-sum game theory in the context of tactical military decision problems typical of the Navy. Pursuit-evasion games are common in Navy applications. A special kind of such games, a two-on-one pursuit-evasion game, is studied by Zhang et al. [10]. They combine theoretical modeling with simulation studies to determine and study the optimal decisions. This approach is very convincing and motivating from the perspective of Navy officers. In a similar way, Lohmander [11] presents fundamental zero-sum game theory in combination with solutions to four central and frequent decision problems for army officers at the platoon, company and battalion levels. Of course, optimal strategies in such games of conflict are functions of all parameters in the problems. Such parameters cannot, however, be exactly known. Lohmander [12] determines how parameter estimation errors in mixed-strategy zero-sum game problems affect the optimal strategy frequencies and expected results. These and recent connected theoretical and applied results in the field of zero-sum games are presented by Lohmander [13].
Two-participant zero-sum games are mostly defined with a table, where the rows and columns represent the possible decisions of the parties. The table shows how much each participant will gain or lose for each possible combination of decisions. In order to solve such problems numerically and/or analytically, the common method is linear programming. Zhang et al. [14] model a matrix game. They apply an optimal search algorithm and use simulation to verify the effectiveness of the method. In many types of problems of conflict, however, the values of the consequences of alternative decisions to the decision makers are nonlinear functions of the decision combinations. Furthermore, a particular decision may be a continuous variable, such as the level of arms support, and the consequences of the action are typically functions of time. Partly for this reason, the development of the general theory of optimal control was in focus during the Cold War period. Pesch and Plail [15] describe this development by Pontryagin and colleagues at the Steklov Institute in the USSR, and by Hestenes and others at the RAND Corporation in the USA. Of course, optimal control is not, and was not, only useful for military problems. Rocket science and space research are, and have been, other areas of application.
Gillispie et al. [16] explicitly used optimal control to study the arms race problem. They defined national goals as objective functions, based on the arms balance. In the analysis, they also applied Richardson differential equations and performed equilibrium and stability analysis. They obtained results of extremely high relevance to world security such as these: direct confrontation between the USA and the Soviet Union would not lead to a stable equilibrium. Stable equilibria could, however, be found if the USA and the Soviet Union acted within NATO and WTO. In the Middle East, the Israeli policy could give a stable equilibrium, which was not the case with the Arab policy.
Optimal control theory has important military applications at many levels. Chen and Zhang [17] use optimal control theory to model warfare as a dynamic system where different kinds of troops meet a homogenous enemy force. Such problems are often denoted Hybrid Warfare problems. They also apply dynamic programming and simulation. In recent years, Unmanned Aerial Vehicles, UAVs, or drones, have become important tools in the battle fields. Louadj et al. [18] utilize optimal control theory to optimize the moves and stability of UAVs, minimizing the distance between the true multidimensional state and the desired state, at a particular point in time.
Optimal control can, by definition, be used to derive the best possible strategy, in problems with dynamic consequences. However, the application of optimal control is impossible if the objective functions of the decision makers are unknown. This may seem obvious but is often forgotten. In most systems with many decision makers, such as the world market of food, there are extremely many sellers and buyers. If some of these behave irrationally, it has very small consequences for the world market prices and trade. In wars, however, the consequences of war-related strategies can be enormous. If national war strategies are created by some dictator with personal and unknown motives (and objective functions) and perhaps with an unrealistic view on the real situation and consequences of alternative actions, the world faces a difficult future.
Käihkö [19] explains that it is unclear how to define the nation Russia and how and why the Russian strategy is determined. He concludes that it is necessary to understand these things if a good strategy is to be determined. It is important for possible opponents to be aware that the Russian strategy may be formulated based on unknown motives.

1.2. Problem Statement

In this paper, we define and analyze a military strategy optimization problem with three parties, BLUE, GREEN and RED, using optimal control theory. We motivate and investigate one way, open to BLUE, to reduce the military power of the aggressive power, RED. BLUE optimizes the arms support to GREEN, when RED tries to invade GREEN, via optimal control theory. Hence, the main conflict really concerns BLUE and RED, but takes place in the GREEN territory, in the form of a proxy war. The reader may note that the problem described and analyzed in this paper has similarities to a real war that started in Europe in 2022.

2. Materials and Methods

2.1. Description of the Initial Military Situation and the Decision Problem

A proxy war between a coalition of countries, BLUE, and a country, RED, is considered. RED wants to increase the size of the RED territory and rule more regions. BLUE wants to involve more regions in trade and other types of cooperation. GREEN is a small and independent nation that wants to become a member of BLUE. RED attacks GREEN and tries to take control of that country.
The map in Figure 1 shows the territory of country GREEN. The X axis, with direction east, is used to determine the location of the war front, x, at time t, denoted x(t). In order to simplify the notation, we define the western border from the condition X = 0 and the eastern border from X = 10. In some parts of the analysis, we use X = K, as a more general definition of the eastern border.
The war front is illustrated as a dashed line from south to north.
The war starts this way: RED attacks GREEN from the east and rapidly sends combined armor and infantry units along the roads, in direction west. At time t, the RED units reach the frontline x(t). The area east of the frontline x(t) is not controlled by RED, since GREEN has several GREEN military units in the area. GREEN can attack RED east of the front line. GREEN army units have been positioned to secure the area west of x(t). BLUE supports GREEN with ammunition, combat service support and artillery. This way, GREEN can temporarily stop RED from going further west from x(t).
BLUE and RED both have large amounts of nuclear weapons and other weapons of mass destruction. BLUE wants to avoid using these in order not to start a world war that would completely destroy the territories of BLUE, RED and most other parts of the planet. BLUE is economically stronger than RED and has more advanced conventional weapons, artillery with longer shooting ranges, more efficient missiles and antitank weapons.
BLUE decides not to participate in the war with troops on the ground, in order not to make RED start using nuclear weapons. However, BLUE decides to give arms support to GREEN. This support can help GREEN in the war against RED and simultaneously reduce the military power of RED, which is valuable to BLUE, also outside this particular proxy war, since RED may confront BLUE also in other regions. BLUE demands that the arms support is only used within the territory of GREEN.

2.2. Briefing on the Determination of the Optimal Strategy

The analysis contains the following parts:
The optimal dynamic arms support problem, from the BLUE perspective, is defined in general form.
The objective function is a weighted sum of the present value of the free GREEN territory, west of the front line, and the present value obtained by BLUE, represented by the net loss of military resources in the RED army, during the war.
The net loss of RED at a particular point in time is a function of the location of the front line and the size of the mobile GREEN forces east of the front line.
First, it is assumed that the expected RED net loss is proportional to (a particular definition of) the force ratio east of x, the location of the front line. Then, it is proved that the net loss function is a strictly concave quadratic function of x. It is also proved that the unique maximum of the expected RED net loss function occurs at the same warfront location, x; this also occurs if the net loss function is proportional to the force ratio raised to some strictly positive exponent plus some constant. Neither the particular value of the exponent nor the value of the added constant influence the value of x that maximizes the RED net loss function.
The location of the front line is dynamically changing and determined by a differential equation, influenced by the level of attack from RED and the level of arms support from BLUE.
Since military analysis has already convinced BLUE that RED has too limited resources and competence to win this proxy war and gain the GREEN territory, BLUE does not think that RED is optimizing its strategy in a logical way. Furthermore, the war clearly implies considerable costs of dead and injured soldiers and noncombatants, destroyed cities, infrastructure and military resources. These costs hurt all participants in the war, in particular GREEN and RED. Furthermore, these costs are in general nonlinear functions of the strategies of all parties. For these reasons, a zero-sum game theory approach is simply not relevant. Since the outcome of the war involves much more than a modified location of the front line and of the borders between GREEN and RED, a standard differential game model of the war cannot capture the true and relevant problem.
The observed level of attack from RED is not possible to interpret, by BLUE, as economically optimized by RED, in the interest of the people in country RED. The BLUE interpretation is that the RED command has other motives for the attack on GREEN. BLUE has, however, qualified intelligence resources that can give a reliable prediction of the time path of the military resources that RED can and will send to the front.
From the BLUE perspective, there is an optimal position of the front. This position is a function of the weights in the objective function and all other parameters.
The optimal control solution shows that the optimal arms support strategy for BLUE is to initially send an optimized volume of arms to GREEN, which rapidly makes it possible for GREEN to move the front to the optimal position. Then, the support should be almost constant during most of the war, keeping the war front location stationary. In the final part of the conflict, when RED has almost no military resources left and has to retire from the GREEN territory, BLUE should strongly increase the arms support and make sure that GREEN rapidly can regain the complete territory and end the war.

2.3. Derivation of the Optimal Net Profit Principle

Below, fundamental mathematical methods will be used. These are well presented by Chiang (1974). We consider a country, GREEN, with a rectangular land surface. Compare the illustration in Figure 1. The coordinate in the west to east direction is denoted X. At the border to the west, X = 0. At the border to the east, X = K. At time t, the war front has the X coordinate x(t). The war front is a line in direction north, from the southern border to the northern border.
GREEN has complete control of the territory to the west of the front. RED attacks GREEN from the east.
The number of troops that can be supported by GREEN and can be active at the front and to the east of the front, behind the RED line, attacking RED logistics during transport to the front, is proportional to the area controlled by GREEN and denoted nG.
n_G = c_G x(t), \quad c_G > 0
The distance that the RED logistics support has to travel, at time t, from the RED border to the front, is K-x(t). RED has a fixed number of tanks that can be used to protect the RED logistics. Hence, if the distance from the eastern border to the front increases, and the amount of support needed at the front per time unit is constant, then nR, the number of tanks per protected and transported unit, decreases.
n_R = c_R (K - x(t))^{-1}, \quad c_R > 0
In Figure 2, the war front is located to the west of the war front in Figure 1. Furthermore, in Figure 2, the number of GREEN units east of the front is lower than in Figure 1. This illustrates Equation (1). In Figure 2, the RED logistics arrows are thinner than in Figure 1. This illustrates Equation (2).
Figure 2. The war map of the country GREEN at time t. Explanations are given in the main text. In this case, the frontline x(t) = 3.0. Compare Figure 1, where the frontline at the same point in time has another location.
y is a particular military force ratio, defined in (3).
y = \frac{n_G}{n_R}
Clearly, as we see from Equations (4) and (5), y is a quadratic function of x.
y(x) = \frac{c_G x}{c_R (K - x)^{-1}}
y(x) = \frac{c_G}{c_R} x (K - x) = \frac{c_G}{c_R} (Kx - x^2), \quad c_G > 0, c_R > 0
Equations (6) and (7) show that y takes the value zero if the front coincides with the western or the eastern border. At all other war front locations, y is different from zero.
y(0) = 0, \quad y(K) = 0
(y(x) = 0) \Leftrightarrow (x = 0 \vee x = K)
First, we assume that the expected net profit of BLUE caused by RED losses is proportional to y. Equations (8)–(10) show that y has one unique optimum and that this is a unique maximum. A star indicates an optimal value. This optimum occurs when the war front is located exactly in the middle of the country GREEN, when the war front has location K/2. The absolute values of the constants cG and cR do not affect this result, as long as they are both strictly positive.
y(x) = \frac{c_G}{c_R} (Kx - x^2)
\left( \frac{dy(x)}{dx} = \frac{c_G}{c_R} (K - 2x) = 0 \right) \Rightarrow x^* = \frac{K}{2}
\frac{d^2 y(x)}{dx^2} = -2 \frac{c_G}{c_R} < 0
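The result that the force ratio is maximized exactly at the middle of the GREEN territory, for any positive constants, can be verified with a direct numerical check. This is a minimal sketch; the values of c_G, c_R and K below are illustrative assumptions, not values from the analysis.

```python
import numpy as np

# Force ratio y(x) = (c_G / c_R) * (K*x - x**2), as in Equation (5).
# c_G, c_R and K are illustrative assumptions.
c_G, c_R, K = 1.0, 2.0, 10.0

def y(x):
    return (c_G / c_R) * (K * x - x ** 2)

# Evaluate y on a fine grid over the GREEN territory [0, K].
x_grid = np.linspace(0.0, K, 100001)
x_star = x_grid[np.argmax(y(x_grid))]

print(x_star)        # approximately K/2 = 5.0, independent of c_G and c_R
print(y(0.0), y(K))  # both 0.0, matching Equations (6) and (7)
```

Changing c_G and c_R rescales y but leaves the grid maximizer at K/2, in line with the derivative condition in Equation (9).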

2.4. The Optimal Front Location and the Functional Form of the BLUE Net Profit Function

Would the optimal location of the war front, from the BLUE perspective, be different from K/2, if the expected profit would not be proportional to the force ratio, y, but proportional to y2, y3 or y raised to some other exponent? Would some added constant influence the optimal x-value? To answer these questions, let z(x) be a generalized function of the force ratio, according to (11).
z(x) = z_0 + (y(x))^{\varphi}, \quad \varphi > 0
Equations (12)–(16) show that the unique value of x that maximizes z(x) also is the unique value of x that maximizes y(x).
\frac{dz(x)}{dx} = \varphi (y(x))^{\varphi - 1} \frac{dy(x)}{dx}
\left( \frac{dz(x)}{dx} = 0 \right) \Leftrightarrow \left( \frac{dy(x)}{dx} = 0 \right) \Rightarrow x^* = \frac{K}{2}
\frac{d^2 z(x)}{dx^2} = (\varphi - 1) \varphi (y(x))^{\varphi - 2} \left( \frac{dy(x)}{dx} \right)^2 + \varphi (y(x))^{\varphi - 1} \frac{d^2 y(x)}{dx^2}
\left. \frac{d^2 z(x)}{dx^2} \right|_{\frac{dy(x)}{dx} = 0} = \varphi (y(x))^{\varphi - 1} \frac{d^2 y(x)}{dx^2}
\operatorname{sgn} \left( \left. \frac{d^2 z(x)}{dx^2} \right|_{\frac{dy(x)}{dx} = 0} \right) = \operatorname{sgn} \left( \frac{d^2 y(x)}{dx^2} \right) < 0
Hence, the unique value of x that maximizes y(x) also is the unique value of x that maximizes z(x).
If BLUE is interested in maximizing the expected present value of the net profit of RED losses, the optimal location of the war front is K/2. Hence, if the value of the free GREEN territory is not at all considered in the strategy optimization, it does not matter if the expected profit of BLUE is proportional to the strength ratio y, or the strength ratio raised to some other power strictly greater than 0, such as 2 or 3. Furthermore, the optimal value of x is not affected by constants such as z0.
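The invariance of the maximizer under the transformation in Equation (11) can also be checked numerically. A small sketch, with illustrative values of c_G, c_R, K and z_0 and several exponents φ:

```python
import numpy as np

# z(x) = z0 + y(x)**phi, as in Equation (11). Constants are illustrative assumptions.
c_G, c_R, K, z0 = 1.0, 2.0, 10.0, 7.0

x = np.linspace(0.0, K, 100001)
y = (c_G / c_R) * (K * x - x ** 2)

# The grid maximizer of z should equal the maximizer of y for every phi > 0.
maximizers = [x[np.argmax(z0 + y ** phi)] for phi in (0.5, 1.0, 2.0, 3.0)]
print(maximizers)  # all approximately K/2 = 5.0
```

Neither the exponent nor the added constant moves the maximizer, exactly as Equations (12)-(16) state.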

2.5. The Objective Function, Partial Functions and Motivation

The following functions will now be considered as parts of the objective function. The value of the “free GREEN territory” is considered to be proportional to the area to the west of the war front, f1(x), according to Equation (17).
f_1(x(t)) = a_1 x(t), \quad a_1 > 0
The value of the expected net loss of RED, is f2(x), as seen in (18). The motivation is found in Equation (8).
f_2(x(t)) = a_2 x(t) - b_2 (x(t))^2, \quad a_2 > 0, b_2 > 0
The cost of arms support at time t, f3(t), is a strictly increasing and strictly convex function of the support level, u(t), according to Equation (19).
f_3(u(t)) = g u(t) + h (u(t))^2, \quad g > 0, h > 0
In the optimization, the different revenues and costs are all included in f(t), as seen in (20)–(22).
f(t) = f_1(x(t)) + f_2(x(t)) - f_3(u(t))
f(t) = (a_1 + a_2) x(t) - b_2 (x(t))^2 - g u(t) - h (u(t))^2
f(t) = a x(t) - b (x(t))^2 - g u(t) - h (u(t))^2, \quad a = (a_1 + a_2) > 0, \; b = b_2 > 0, \; g > 0, \; h > 0
If we simplify the notation, we obtain Equation (23).
f = a x - b x^2 - g u - h u^2, \quad a > 0, b > 0, g > 0, h > 0

2.6. The Simplified Stationary Problem with Optimal Solutions in Three Different Cases

In this paper, we determine the optimal solutions to the dynamic decision problems. The optimal solutions will be reported as explicit functions and as graphical solutions to alternative numerically specified cases. First, however, we determine the optimal locations of the war front via static problems. In the later analysis, these optimal static solutions are compared to the optimal dynamic solutions. We have to select x within the region defined in (24).
0 \le x \le K = 10
STATIC CASE A:
The value of the free GREEN region, to the west of the war front, is given in (25) and illustrated in Figure 3.
f_1 = 50x
The value of the expected net profit of BLUE, caused by expected RED losses, is given in (26) and illustrated in Figure 4.
f_2 = 100x - 10x^2
Now, we construct the total static objective function, (27), using (25) and (26). Since we are still only interested in the optimal static warfront solution, we do not have to specify the details of the arms support cost function yet. That will, however, be relevant and important in the later parts of this paper.
f = 150x - 10x^2 - g u - h u^2
If we only care about the value of the free GREEN region, the optimal value of x is 10 = K, which means that all of the GREEN territory should be liberated from RED troops. This is also found via (28) and (29). The optimal objective function value is then found in (30). Compare Figure 3.
\frac{df_1}{dx} = 50 > 0
\left( \frac{df_1}{dx} > 0 \right) \Rightarrow \left( x_1^* = K \right)
f_1^*(x_1^*) = \max_{0 \le x_1 \le K = 10} f_1 = 50 x_1^* = 500
If we want to maximize the expected net profit of BLUE, caused by expected RED losses, given in (26) and illustrated in Figure 4, we should use Equations (31)–(33). Hence, as we already know from Equation (9), the optimal value of x would be K/2 = 5.
\frac{df_2}{dx} = 100 - 20x
\frac{d^2 f_2}{dx^2} = -20 < 0
\left( \frac{df_2}{dx} = 0 \right) \Rightarrow \left( x_2^* = 5 \right) \Rightarrow \left( f_2^* = 100 x_2^* - 10 (x_2^*)^2 = 250 \right)
Now, we optimize the location of the war front based on the total objective function (27). Equations (34)–(36) show how this is achieved. The optimal war front is now located between the different solutions that were optimal with consideration of the objective functions f1(x) and f2(x). This is shown in (37). These results are also illustrated in Figure 5.
\frac{df}{dx} = 150 - 20x
\frac{d^2 f}{dx^2} = -20 < 0
\left( \frac{df}{dx} = 0 \right) \Rightarrow \left( x^* = 7.5 \right) \Rightarrow \left( f^* = 150 x^* - 10 (x^*)^2 = 562.5 \right)
x_2^* < x^* < x_1^*
Observe that we have now analyzed a function of the type (38). Then, the maximum value of (38) is less than or equal to the sum of the maximum values of the two components f1(x) and f2(x), as stated in (39). The reader can check that this also holds in Figure 5.
f(x) = f_1(x) + f_2(x)
f^* \le (f_1^* + f_2^*)
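The three static optima of CASE A can be reproduced with a direct grid search. A minimal sketch of the computation (the grid resolution is an arbitrary choice):

```python
import numpy as np

# STATIC CASE A: f1 = 50x, f2 = 100x - 10x^2, f = f1 + f2 = 150x - 10x^2.
# The arms support cost terms in u do not affect the static front choice.
K = 10.0
x = np.linspace(0.0, K, 100001)

f1 = 50.0 * x
f2 = 100.0 * x - 10.0 * x ** 2
f = f1 + f2

x1_star = x[np.argmax(f1)]  # approximately 10.0 = K, Equation (29)
x2_star = x[np.argmax(f2)]  # approximately 5.0 = K/2, Equation (33)
x_star = x[np.argmax(f)]    # approximately 7.5, Equation (36)

print(x1_star, x2_star, x_star, f.max())  # f.max() is approximately 562.5
assert x2_star < x_star < x1_star         # Equation (37)
```

The combined maximum, about 562.5, is below the sum of the component maxima, 500 + 250 = 750, consistent with (39).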
STATIC CASE B:
Now, we see how the optimal location of the war front is affected if the value per area unit of the free GREEN territory increases, by 40%, as illustrated in Figure 6. Then, in Figure 7, we observe that the optimal location of the war front moves to the east.
STATIC CASE C:
In Figure 8, the net profit of BLUE, caused by expected net losses of RED, increases by 100%, for every possible level of x. Then, Figure 9 illustrates how the optimal location of the war front moves west.

2.7. Observations and Conclusions from the Static Optimizations

We may consider the decision problem as a multi-objective optimization problem. We make the following observations concerning the optimal static solutions:
Let us consider the total objective functions (20) and (23). We regard them as weighted objective functions, where, in STATIC CASE A, the weight of component f1(x) is 1, and the weight of component f2(x) is 1. Then, the optimal location of the war front is 7.5, as shown in Equation (36) and Figure 5. The optimal total objective function value is 562.5. Compare Equation (36) and Figure 5.
In STATIC CASE B, we have increased the weight of f1(x), by 40%, to the new value 1.4. Then, the optimal static location of the front moves to the east, and the total objective function value increases, compared to STATIC CASE A. Compare Figure 6 and Figure 7.
In STATIC CASE C, we have increased the weight of f2(x) by 100% to the new value 2.0. Then, the optimal static location of the front moves to the west and the total objective function value increases, compared to STATIC CASE A. Compare Figure 8 and Figure 9.
We conclude that, in a cooperative strategy negotiation between GREEN and BLUE, it is natural that GREEN is more interested in a high value of the weight of f1(x), since the inhabitants of the GREEN territory want to have a large free territory, and that BLUE is more interested in a high value of the weight of f2(x), since the expected net value of the war to BLUE is expressed by that function.
Hence, depending on the relative negotiation powers of the parties GREEN and BLUE, the optimal static solution of x is found in some location in the interval between 5 and 10, or more generally, between K/2 and K.

2.8. The General Dynamic Optimal Control Problem

Now, we move on to the optimal control problem in continuous time. We consider a proxy war that starts at t = 0 and ends at t = T. The rate of interest in the capital market is r, and the total present value is F. At every point in time, we have the total objective function (23). Then, the objective function, which we want to maximize, is (40).
F = \int_0^T e^{-rt} \left( a x - b x^2 - g u - h u^2 \right) dt
The location of the war front, x, is governed by the differential Equation (41). This is based on the following assumptions: if the arms support, u, from BLUE to GREEN increases, the time derivative of the war front increases. If the level of attack from RED, v0 + v1t, increases, the time derivative of the war front decreases. Hence, if u = v0 + v1t, the war front stays in one place. If u > v0 + v1t, the front moves east and if u < v0 + v1t, the front moves west. The following procedure optimizes the time path of u, and the optimal function u(t) will be determined.
\dot{x} = u - v_0 - v_1 t \quad (41)
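The three regimes of the front dynamics can be illustrated with a short numerical sketch. The fragment below is not from the paper; it simply integrates the state equation with the forward Euler method, using hypothetical support functions u(t), and confirms that the front is stationary when u equals the RED attack level and moves east when u exceeds it.

```python
# Numerical sketch of the front dynamics x' = u - v0 - v1*t (Equation (41)).
# Not from the paper; the support functions below are hypothetical examples.

def front_path(u, v0, v1, x0, T, steps=1000):
    """Integrate x' = u(t) - v0 - v1*t with the forward Euler method."""
    dt = T / steps
    x, t = x0, 0.0
    for _ in range(steps):
        x += (u(t) - v0 - v1 * t) * dt
        t += dt
    return x

v0, v1, x0, T = 1.0, -0.1, 5.0, 1.0

# u equal to the RED attack level: the front stays in one place.
print(abs(front_path(lambda t: v0 + v1 * t, v0, v1, x0, T) - x0) < 1e-12)       # True
# u above the attack level: the front moves east (x increases).
print(front_path(lambda t: v0 + v1 * t + 2.0, v0, v1, x0, T) > x0)              # True
```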
The Hamiltonian function is (42), where λ denotes the adjoint variable, which is also a function of time.
H = e^{-rt} \left( ax - bx^2 - gu - hu^2 \right) + \lambda \left( u - v_0 - v_1 t \right) \quad (42)
The first order optimum condition is:
\frac{dH}{du} = e^{-rt} \left( -g - 2hu \right) + \lambda = 0 \quad (43)
The second order maximum condition is:
\frac{d^2 H}{du^2} = -2h e^{-rt} < 0 \quad (44)
The first order maximum condition gives:
\left( \frac{dH}{du} = 0 \right) \Rightarrow \left( \lambda = e^{-rt} \left( g + 2hu \right) \right) \quad (45)
\left( \dot{x} = u - v_0 - v_1 t \right) \Rightarrow \left( u = \dot{x} + v_0 + v_1 t \right) \quad (46)
\lambda = e^{-rt} \left( g + 2h \left( \dot{x} + v_0 + v_1 t \right) \right) \quad (47)
\dot{\lambda} = e^{-rt} \left( -gr - 2hr \left( \dot{x} + v_0 + v_1 t \right) + 2h \left( \ddot{x} + v_1 \right) \right) \quad (48)
The adjoint equation is:
\frac{dH}{dx} = -\dot{\lambda} \quad (49)
\frac{dH}{dx} = e^{-rt} \left( a - 2bx \right) \quad (50)
\dot{\lambda} = e^{-rt} \left( -a + 2bx \right) \quad (51)
e^{-rt} \left( -gr - 2hr \left( \dot{x} + v_0 + v_1 t \right) + 2h \left( \ddot{x} + v_1 \right) \right) = e^{-rt} \left( -a + 2bx \right) \quad (52)
-gr - 2hr \left( \dot{x} + v_0 + v_1 t \right) + 2h \left( \ddot{x} + v_1 \right) = -a + 2bx \quad (53)
\ddot{x} - r\dot{x} - \frac{b}{h} x = \left( \frac{gr - a}{2h} + r v_0 - v_1 \right) + \left( r v_1 \right) t \quad (54)
\ddot{x} - r\dot{x} - \frac{b}{h} x = m + nt \ , \quad m = \frac{gr - a}{2h} + r v_0 - v_1 \ , \quad n = r v_1 \quad (55)
Complementary solution:
\ddot{x}_c - r\dot{x}_c - \frac{b}{h} x_c = 0 \quad (56)
x_c(t) = A e^{st} \quad (57)
\left( s^2 - rs - \frac{b}{h} \right) x_c = 0 \quad (58)
\left( x_c \neq 0 \right) \Rightarrow \left( s^2 - rs - \frac{b}{h} = 0 \right) \quad (59)
s_1 = -\frac{p}{2} - \sqrt{\frac{p^2}{4} - q} \ , \quad (p, q) = \left( -r, -\frac{b}{h} \right) \quad (60)
s_2 = -\frac{p}{2} + \sqrt{\frac{p^2}{4} - q} \ , \quad (p, q) = \left( -r, -\frac{b}{h} \right) \quad (61)
\left( \frac{r^2}{4} + \frac{b}{h} > 0 \right) \Rightarrow \left( s_1 \neq s_2 \ \wedge \ s_1 \in \mathbb{R} \ \wedge \ s_2 \in \mathbb{R} \right) \ , \quad b > 0, \ h > 0 \quad (62)
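As a quick numerical illustration (the parameter values below are hypothetical, chosen only for this sketch), the two roots of the characteristic equation can be computed directly, and their standard properties checked:

```python
import math

# Roots of the characteristic equation s^2 - r*s - b/h = 0.
# Illustrative parameter values only; not taken from any specific case.

def roots(r, b, h):
    d = math.sqrt(r**2 / 4 + b / h)  # always real, since b > 0 and h > 0
    return r / 2 - d, r / 2 + d      # s1 and s2

s1, s2 = roots(r=0.05, b=10.0, h=0.1)
print(s1 < 0 < s2)   # True: one strictly negative and one strictly positive root
print(s1 != s2)      # True: the roots are always distinct
```

The sum of the roots equals r and their product equals -b/h, which is a convenient sanity check on any implementation.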

2.9. Observations Concerning the Nature of the Complementary Solution

We observe that exactly two real valued roots always exist. These roots are always different from each other. Hence, the relevant complementary solution to the differential equation can never be based on two equal roots. Furthermore, the roots can never contain imaginary parts, which means that the complementary solution can never contain trigonometric functions such as sine and cosine functions. The complementary solution always has this form:
x_c(t) = A_1 e^{s_1 t} + A_2 e^{s_2 t} \quad (63)
A deeper investigation of the two real roots:
s_1 = \frac{r}{2} - \sqrt{\frac{r^2}{4} + \frac{b}{h}} \quad (64)
\left( \sqrt{\frac{r^2}{4} + \frac{b}{h}} > \sqrt{\frac{r^2}{4}} = \frac{r}{2} \right) \Rightarrow \left( s_1 < 0 \right) \ , \quad b > 0, \ h > 0 \quad (65)
s_2 = \frac{r}{2} + \sqrt{\frac{r^2}{4} + \frac{b}{h}} \quad (66)
\left( \sqrt{\frac{r^2}{4} + \frac{b}{h}} > \sqrt{\frac{r^2}{4}} = \frac{r}{2} > 0 \right) \Rightarrow \left( s_2 > 0 \right) \ , \quad b > 0, \ h > 0, \ r > 0 \quad (67)
We now know that one of the real roots always is strictly negative, and the other always is strictly positive.
\lim_{t \to \infty} A_1 e^{s_1 t} = 0 \ , \quad \forall A_1 \quad (68)
\lim_{t \to \infty} A_2 e^{s_2 t} = \begin{cases} \infty & \text{for } A_2 > 0 \\ 0 & \text{for } A_2 = 0 \\ -\infty & \text{for } A_2 < 0 \end{cases} \quad (69)
\lim_{t \to \infty} x_c(t) = \begin{cases} \infty & \text{for } A_2 > 0 \\ 0 & \text{for } A_2 = 0 \\ -\infty & \text{for } A_2 < 0 \end{cases} \quad (70)

2.10. The Limiting Value of the Complementary Solution

As time goes to infinity, the complementary function converges to plus infinity, zero, or minus infinity, depending on whether A2 is strictly positive, zero, or strictly negative.
The Particular Solution:
Let the particular solution have this functional form:
x_P(t) = w_0 + w_1 t \quad (71)
\ddot{x}_P - r\dot{x}_P - \frac{b}{h} x_P = m + nt \quad (72)
-r w_1 - \frac{b}{h} w_0 - \frac{b}{h} w_1 t = m + nt \quad (73)
\begin{cases} -r w_1 - \dfrac{b}{h} w_0 = m \\ -\dfrac{b}{h} w_1 = n \end{cases} \quad (74)
(w_0, w_1) = \left( \frac{h^2 n r}{b^2} - \frac{h m}{b} \ , \ -\frac{h}{b} n \right) \quad (75)
Finally, we conclude that the particular solution is:
x_P(t) = \left( \frac{h^2 n r}{b^2} - \frac{h m}{b} \right) - \left( \frac{h}{b} n \right) t \quad (76)
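A simple numerical check (with illustrative parameter values, not results from the paper) confirms that these coefficients do satisfy the differential equation:

```python
# Check that x_P(t) = w0 + w1*t, with w0 = h^2*n*r/b^2 - h*m/b and
# w1 = -(h/b)*n, satisfies x_P'' - r*x_P' - (b/h)*x_P = m + n*t.
# Since x_P is linear in t, x_P'' = 0 and x_P' = w1.
# The parameter values below are illustrative only.

r, b, h = 0.05, 10.0, 0.1
g, a, v0, v1 = 1.0, 150.0, 1.0, -0.1
m = (g * r - a) / (2 * h) + r * v0 - v1
n = r * v1

w1 = -(h / b) * n
w0 = h**2 * n * r / b**2 - h * m / b

for t in (0.0, 0.5, 1.0):
    lhs = -r * w1 - (b / h) * (w0 + w1 * t)   # left-hand side with x_P'' = 0
    print(abs(lhs - (m + n * t)) < 1e-9)      # True at every t
```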
Observation:
The particular solution is always a linear function of time.
The solution:
Since the complete solution is the sum of the complementary solution and the particular solution, we have:
x(t) = x_c(t) + x_P(t) \quad (77)
This can be explicitly stated as:
x(t) = A_1 e^{s_1 t} + A_2 e^{s_2 t} + w_0 + w_1 t \quad (78)
The time derivative of x is:
\dot{x}(t) = s_1 A_1 e^{s_1 t} + s_2 A_2 e^{s_2 t} + w_1 \quad (79)
We remember the expression of the adjoint variable from Equation (47). Now, thanks to the explicit form of the time derivative of x, we can express the adjoint variable as an explicit function of time:
\lambda(t) = e^{-rt} \left( g + 2h \left( \left( s_1 A_1 e^{s_1 t} + s_2 A_2 e^{s_2 t} + w_1 \right) + v_0 + v_1 t \right) \right) \quad (80)
We note that x and the adjoint variable are both explicit functions of time. These functions contain a number of parameters that are already known. They also contain two more parameters that have not yet been determined, namely A1 and A2.
We can now determine A1 and A2 from two boundary conditions. These two boundary conditions have strictly logical motivations:
We can observe the initial value of x, at time t = 0, denoted x0.
x(0) = A_1 e^{s_1 \cdot 0} + A_2 e^{s_2 \cdot 0} + w_0 + w_1 \cdot 0 = x_0 \quad (81)
A_1 + A_2 = x_0 - w_0 \quad (82)
At time T, we want x to take the value xT.
x(T) = A_1 e^{s_1 T} + A_2 e^{s_2 T} + w_0 + w_1 T = x_T \quad (83)
e^{s_1 T} A_1 + e^{s_2 T} A_2 = x_T - w_0 - w_1 T \quad (84)
In some cases, we may know that the “shadow price”, the marginal capacity value or the adjoint variable, at the time horizon, T, has to be zero. In such cases, the following equations are relevant:
\lambda(T) = e^{-rT} \left( g + 2h \left( \left( s_1 A_1 e^{s_1 T} + s_2 A_2 e^{s_2 T} + w_1 \right) + v_0 + v_1 T \right) \right) = 0 \quad (85)
\left( e^{-rT} \neq 0 \right) \Rightarrow \left( g + 2h \left( \left( s_1 A_1 e^{s_1 T} + s_2 A_2 e^{s_2 T} + w_1 \right) + v_0 + v_1 T \right) = 0 \right) \quad (86)
So, if the adjoint variable, at the time horizon, T, has to be zero, the following equation would have to be included in the linear equation system that should determine A1 and A2.
s_1 e^{s_1 T} A_1 + s_2 e^{s_2 T} A_2 = -\frac{g}{2h} - w_1 - v_0 - v_1 T \quad (87)
However, in this particular analysis, we will not make the assumption that the marginal capacity value has to be zero at time T. On the other hand, we will demand that x takes the value xT at time T. This way, we have a linear equation system with two equations that will be used to determine the relevant values of A1 and A2.
Here is the linear simultaneous equation system with two equations and two unknowns:
\begin{bmatrix} 1 & 1 \\ e^{s_1 T} & e^{s_2 T} \end{bmatrix} \begin{bmatrix} A_1 \\ A_2 \end{bmatrix} = \begin{bmatrix} x_0 - w_0 \\ x_T - w_0 - w_1 T \end{bmatrix} \quad (88)
Thanks to Cramer’s rule, we instantly obtain the solutions:
A_1 = \frac{ \begin{vmatrix} (x_0 - w_0) & 1 \\ (x_T - w_0 - w_1 T) & e^{s_2 T} \end{vmatrix} }{ \begin{vmatrix} 1 & 1 \\ e^{s_1 T} & e^{s_2 T} \end{vmatrix} } = \frac{(x_0 - w_0) e^{s_2 T} - (x_T - w_0 - w_1 T)}{e^{s_2 T} - e^{s_1 T}} \quad (89)
A_2 = \frac{ \begin{vmatrix} 1 & (x_0 - w_0) \\ e^{s_1 T} & (x_T - w_0 - w_1 T) \end{vmatrix} }{ \begin{vmatrix} 1 & 1 \\ e^{s_1 T} & e^{s_2 T} \end{vmatrix} } = \frac{(x_T - w_0 - w_1 T) - e^{s_1 T} (x_0 - w_0)}{e^{s_2 T} - e^{s_1 T}} \quad (90)
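The 2x2 boundary-value system and its Cramer's rule solution can be sketched in Python as follows. The numerical values of s1, s2, w0 and w1 below are hypothetical placeholders (not computed from the paper's cases); the check simply confirms that the returned A1 and A2 satisfy both boundary equations.

```python
import math

# Cramer's rule for the system
# [[1, 1], [e^(s1 T), e^(s2 T)]] [A1, A2]^T = [x0 - w0, xT - w0 - w1*T]^T

def solve_A1_A2(s1, s2, T, x0, xT, w0, w1):
    b1 = x0 - w0               # right-hand side of the first boundary equation
    b2 = xT - w0 - w1 * T      # right-hand side of the second boundary equation
    det = math.exp(s2 * T) - math.exp(s1 * T)  # > 0, since s2 > s1 and T > 0
    A1 = (b1 * math.exp(s2 * T) - b2) / det
    A2 = (b2 - b1 * math.exp(s1 * T)) / det
    return A1, A2

# Hypothetical values, roughly in the range of the numerical cases:
s1, s2, T = -9.975, 10.025, 1.0
A1, A2 = solve_A1_A2(s1, s2, T, x0=5.0, xT=10.0, w0=7.4, w1=0.001)

# Both boundary equations are satisfied:
print(abs(A1 + A2 - (5.0 - 7.4)) < 1e-9)                    # True
print(abs(math.exp(s1 * T) * A1 + math.exp(s2 * T) * A2
          - (10.0 - 7.4 - 0.001)) < 1e-6)                   # True
```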
Observations concerning the values A1 and A2:
We observe that the expressions for A1 and A2, (89) and (90), have real-valued numerators, divided by the denominator:
e^{s_2 T} - e^{s_1 T} \quad (91)
We have already determined that s1 and s2 are both real and strictly different, and that s2 > s1. Hence, for all values of T strictly greater than 0, the denominator is strictly positive. Hence, A1 and A2 both exist, are uniquely determined via the expressions, and are real.
The signs of A1 and A2 are the same as the signs of the numerators in the corresponding expressions.
As a result, we now know these two functions:
\left( x(t), \lambda(t) \right) \ , \quad 0 \le t \le T \quad (92)
We also already know how u is linked to the adjoint variable. Compare Equation (45).
We can obtain the optimal control as a function of time, via the adjoint function:
u(t) = \frac{e^{rt} \lambda(t) - g}{2h} \quad (93)
Clearly, this can also be expressed as an explicit function of time:
u(t) = \frac{ e^{rt} \left( e^{-rt} \left( g + 2h \left( \left( s_1 A_1 e^{s_1 t} + s_2 A_2 e^{s_2 t} + w_1 \right) + v_0 + v_1 t \right) \right) \right) - g }{2h} \quad (94)
u(t) = s_1 A_1 e^{s_1 t} + s_2 A_2 e^{s_2 t} + w_1 + v_0 + v_1 t \quad (95)
Observation:
The optimal control function values can also be obtained in another way, as seen below. The two alternative ways to calculate u(t) can be used to confirm the correctness of the calculations.
\left( \dot{x}(t) = u(t) - v_0 - v_1 t \right) \Rightarrow \left( u(t) = \dot{x}(t) + v_0 + v_1 t \right) \quad (96)
The reader may confirm that Equation (95) corresponds to Equation (96).

2.11. Results Based on the General Dynamic Optimal Control Problem

The optimal and explicit time-dependent functions of the war front location, the adjoint variable, and the arms support level are found in Equations (78), (80) and (95), respectively.
Now, we use the optimal general dynamic results, expressed in the forms of equations, to derive some optimal dynamic results for numerically specified cases. These optimal dynamic results are compared to the optimal statics results derived in the earlier parts of this paper.
Below, six different dynamic cases will be investigated in detail. DYNAMIC CASE 0 represents the standard case and may be viewed as a dynamic version of STATIC CASE A.
Parameter values in DYNAMIC CASE 0: a = 150, b = 10, g = 1, h = 0.1, r = 0.05, v0 = 1, v1 = −0.1, x0 = 5, T = 1 and xT = 10.
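Using the parameter values listed above, the closed-form solution can be assembled end to end. The following sketch is a plain Python transcription of Equations (78) and (95), not the author's own code; it reproduces the boundary behavior and the mid-war front location described below.

```python
import math

# Parameters from DYNAMIC CASE 0 (values taken from the text):
a, b, g, h, r = 150.0, 10.0, 1.0, 0.1, 0.05
v0, v1, x0, T, xT = 1.0, -0.1, 5.0, 1.0, 10.0

# Coefficients of the differential equation x'' - r x' - (b/h) x = m + n t:
m = (g * r - a) / (2 * h) + r * v0 - v1
n = r * v1

# Characteristic roots and particular solution:
d = math.sqrt(r**2 / 4 + b / h)
s1, s2 = r / 2 - d, r / 2 + d
w1 = -(h / b) * n
w0 = h**2 * n * r / b**2 - h * m / b

# Boundary conditions x(0) = x0 and x(T) = xT, solved via Cramer's rule:
det = math.exp(s2 * T) - math.exp(s1 * T)
A1 = ((x0 - w0) * math.exp(s2 * T) - (xT - w0 - w1 * T)) / det
A2 = ((xT - w0 - w1 * T) - (x0 - w0) * math.exp(s1 * T)) / det

def x(t):
    """Optimal war front location, Equation (78)."""
    return A1 * math.exp(s1 * t) + A2 * math.exp(s2 * t) + w0 + w1 * t

def u(t):
    """Optimal arms support, Equation (95)."""
    return s1 * A1 * math.exp(s1 * t) + s2 * A2 * math.exp(s2 * t) + w1 + v0 + v1 * t

print(abs(x(0.0) - x0) < 1e-9)   # True: the front starts at x0 = 5
print(abs(x(T) - xT) < 1e-6)     # True: the front ends at xT = 10
print(abs(x(0.5) - 7.5) < 0.1)   # True: mid-war, the front is near 7.5
```

As an additional consistency check, the time derivative of x(t), approximated by a central finite difference, agrees with u(t) - v0 - v1*t, which confirms that Equations (95) and (96) describe the same control.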
In each of the dynamic cases 1 to 5, some parameter has been changed from its value in DYNAMIC CASE 0. All other parameters are the same as in DYNAMIC CASE 0. This way, it is possible to investigate how sensitive the optimal dynamic solutions are to different parameter values.
The dynamic cases contain many more parameters and parameter values than the static cases, since these are needed to define and handle several things that were not present in the static problem. In (40), we have the objective function of the dynamic problem, and in (41), we have the differential equation of the war front. Hence, several parameters in the dynamic optimal control problem can be found in (40) and (41).
More parameters are needed in the dynamic problem than in the static problem: the parameters of the cost function of the control u, namely g and h, the rate of interest in the capital market, r, the parameters of the RED attack level function, v0 and v1, the total time of the proxy war, T, and the initial and final locations of the war front, x0 and xT.
DYNAMIC CASE 0
This case may be viewed as a dynamic version of STATIC CASE A, since the values of a and b are the same. Compare Equation (27). In Figure 10, we observe that the location of the war front starts at the initial location, x0 = 5, and ends at the final value xT = 10, which means that GREEN controls the complete territory at the end of the war. Most of the time, during the war, the war front is very close to location 7.5, which also is the optimal value in STATIC CASE A.
In Figure 11, we see the time path of the optimal arms support from BLUE to GREEN, as a function of time. This level is very high in the beginning (t < 0.2), since the initial location of the war front, 5, is far below the optimal value of the war front, 7.5, according to the STATIC CASE A. Hence, it is important to rapidly move the front to a location close to the optimal location. That can be accomplished with massive arms support, a high value of u, for low values of t. The logic behind that is clear from (41). During most of the war, the war front should be close to the optimal static value, which means that the optimal level of arms support, u, should be almost the same as the level of attack from RED, which only changes very slowly. For that reason, the level of u is low and almost constant, for 0.2 < t < 0.8. When we reach the end of the war, it is important for BLUE to send large amounts of arms support to GREEN to rapidly move the war front to the original border between GREEN and RED, namely 10. This way, GREEN regains control of the complete GREEN territory exactly when the war ends.
DYNAMIC CASE 1
DYNAMIC CASE 1 is identical to DYNAMIC CASE 0 with respect to all parameters, except that x0 = 3. This means that the initial war front is located to the west of the corresponding war front in DYNAMIC CASE 0. Figure 12 shows how the time path of the war front develops over time. We see that the front is rapidly moved east, until t = 0.4, when it reaches almost the same level as in DYNAMIC CASE 0 and in STATIC CASE A. At the end of the conflict, DYNAMIC CASES 0 and 1 have almost identical war front developments.
In order to initially move the war front further east in DYNAMIC CASE 1 than in DYNAMIC CASE 0, a higher level of arms support is needed in the early period of the war. This is also graphically seen in Figure 13, before t = 0.4.
DYNAMIC CASE 2
DYNAMIC CASE 2 is identical to DYNAMIC CASE 0 with respect to all parameters, except that a = 170. This corresponds to STATIC CASE B. One possible interpretation is that the relative weight, in the objective function, of the value of the free GREEN territory increases. Compare Figure 6 and Figure 7.
Figure 14 shows how the time path of the war front develops over time. We see that the front is rapidly moved further east than in DYNAMIC CASE 0, until t = 0.4, when it reaches almost the same level as in STATIC CASE B. At the end of the conflict, the front moves to 10, and the complete GREEN territory is liberated from RED.
In order to initially move the war front further east in DYNAMIC CASE 2 than in DYNAMIC CASE 0, a higher level of arms support is needed in the early period of the war. At the end of the war, less arms support is needed than in DYNAMIC CASE 0, since the front does not have to move very far during the final period. Compare Figure 15.
DYNAMIC CASE 3
DYNAMIC CASE 3 is identical to DYNAMIC CASE 0 with respect to all parameters, except that b = 250. This corresponds to STATIC CASE C. One possible interpretation is that the relative weight, in the objective function, of the value of the net profit of BLUE, caused by RED losses, increases. Compare Figure 8 and Figure 9.
Figure 16 shows how the time path of the war front develops over time. We see that the front is initially moved east, to a position west of the corresponding position in DYNAMIC CASE 0. This position is almost the same as in STATIC CASE C. At the end of the conflict, the front moves to 10, and the complete GREEN territory is liberated from RED.
In order to initially move the war front less far east in DYNAMIC CASE 3 than in DYNAMIC CASE 0, a lower level of arms support is needed in the early period of the war. At the end of the war, more arms support is needed than in DYNAMIC CASE 0, since the front must move very far during this period. Compare Figure 17.
DYNAMIC CASE 4
DYNAMIC CASE 4 is identical to DYNAMIC CASE 0 with respect to all parameters, except that v0 = 5. This means that the RED attack level is considerably higher than in DYNAMIC CASE 0.
Figure 18 shows how the time path of the war front develops over time. We see that the front develops exactly as in DYNAMIC CASE 0. In order to handle the increased level of RED attack, keeping the front line at the same location as in DYNAMIC CASE 0, the level of arms support from BLUE to GREEN must be higher, during every time interval. This is clearly illustrated in Figure 19.
DYNAMIC CASE 5
DYNAMIC CASE 5 is identical to DYNAMIC CASE 0 with respect to all parameters, except that v1 = 1. This means that the level of RED attack is an increasing function of time. In DYNAMIC CASE 0, the RED level of attack was a slowly decreasing function of time. Figure 20 shows how the time path of the war front develops over time. We see that the front develops exactly as in DYNAMIC CASE 0. In order to handle the increasing level of RED attack, keeping the front line at the same location as in DYNAMIC CASE 0, the level of arms support from BLUE to GREEN must be higher than in DYNAMIC CASE 0, particularly in the later part of the war. This is also shown in Figure 21.

3. Conclusions

From the BLUE perspective, there is an optimal position of the war front. This optimal position is a function of the weights in the objective function and all other parameters.
The optimal arms support strategy for BLUE is to initially send a large volume of arms support to GREEN to rapidly move the front to the optimal position.
Then, the support should be almost constant during most of the war, keeping the war front location stationary.
In the final part of the conflict, when RED has almost no military resources left and tries to retire from the GREEN territory, BLUE should strongly increase the arms support and make sure that GREEN can rapidly regain the complete territory and end the war.

4. Discussion

A proxy war in country GREEN, between a coalition of countries, BLUE, and the attacking country, RED, has been analyzed, where RED wants to increase the size of the RED territory, and BLUE wants to involve more regions in trade and other types of cooperation. This type of conflict has considerable similarities to a real war in Europe that started in the year 2022. It is critical to the safety and stability of our world to understand and be able to manage this and similar proxy wars, in the optimal way. Hopefully the reader will be able to utilize and adapt the optimization approach developed in this paper, to help stabilize our world and to reduce the levels of future military conflicts.
In the model and in the real world, BLUE and RED both have large amounts of nuclear weapons and other weapons of mass destruction. It is urgent that we, the inhabitants of Earth, avoid using these, in order not to start a world war that would destroy most parts of our unique planet.

Funding

This research received no external funding.

Data Availability Statement

This analytical research has not used any empirical data.

Acknowledgments

The author is grateful to Carol Paraniak and Dan Magnusson for reading the manuscript and for discussing alternative specifications of the strategic problem under analysis. These officers also provided the author with highly appreciated military education and training at the Norrland Dragoon Regiment, K4. The author is also grateful to well-motivated comments from two anonymous reviewers that improved the structure of the paper.

Conflicts of Interest

The author declares no conflict of interest.

Figure 1. The war map of the country GREEN at time t. Explanations are given in the main text. The frontline x(t) = 6.8. Compare Figure 2, which represents another case, where the frontline at the same point in time has another location.
Figure 3. The value of the free GREEN region, to the west of the war front.
Figure 4. The value of the expected net profit of BLUE, caused by expected RED losses.
Figure 5. The optimized location of the war front based on the total objective function. The optimal war front is now located between the different solutions that were optimal with consideration of the objective functions f1(x) and f2(x).
Figure 6. The value per area unit of the free GREEN territory increases, by 40%.
Figure 7. If the value per area unit of the free GREEN territory increases, by 40%, the optimal location of the war front moves to the east.
Figure 8. The net profit of BLUE, caused by expected net losses of RED, increases by 100%, for every possible level of x.
Figure 9. If the net profit of BLUE, caused by expected net losses of RED, increases by 100%, for every possible level of x, then the optimal location of the war front moves west.
Figure 10. The optimal time path of x in Dynamic case 0.
Figure 11. The optimal time path of u in Dynamic case 0.
Figure 12. The optimal time path of x in Dynamic case 1.
Figure 13. The optimal time path of u in Dynamic case 1.
Figure 14. The optimal time path of x in Dynamic case 2.
Figure 15. The optimal time path of u in Dynamic case 2.
Figure 16. The optimal time path of x in Dynamic case 3.
Figure 17. The optimal time path of u in Dynamic case 3.
Figure 18. The optimal time path of x in Dynamic case 4.
Figure 19. The optimal time path of u in Dynamic case 4.
Figure 20. The optimal time path of x in Dynamic case 5.
Figure 21. The optimal time path of u in Dynamic case 5.