# A Space Decomposition-Based Deterministic Algorithm for Solving Linear Optimization Problems


## Abstract


## 1. Introduction

## 2. The Method

#### 2.1. General Approach and Definitions

#### 2.2. The Dominant Dimension

**Conjecture 1.** **Linear optimization problem bounding condition.** A linear optimization problem is bounded if and only if its dominant growth dimension is bounded.
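The boundedness condition can be illustrated numerically. The sketch below is a cross-check with `scipy.optimize.linprog`, not the paper's method, and the exact form of the paper's Criterion (2) for the dominant growth dimension is not reproduced here; the point is only that an LP is unbounded exactly when nothing caps growth along the objective direction.

```python
# Illustration (not the paper's algorithm): boundedness checked with an
# off-the-shelf LP solver.  linprog minimizes, so a maximization objective
# is negated.
from scipy.optimize import linprog

# max z = x1 + x2  <=>  min -x1 - x2
c = [-1.0, -1.0]

# Case 1: only lower-type constraints (x1 >= 1, x2 >= 1, written as -x <= -1)
# block the objective; growth along the objective direction is uncapped.
res1 = linprog(c, A_ub=[[-1.0, 0.0], [0.0, -1.0]], b_ub=[-1.0, -1.0],
               bounds=[(None, None)] * 2)

# Case 2: an upper constraint (x1 + x2 <= 4) caps the growth direction.
res2 = linprog(c, A_ub=[[1.0, 1.0]], b_ub=[4.0], bounds=[(0, None)] * 2)

print(res1.status)  # 3: the problem is unbounded
print(res2.status)  # 0: solved; optimum z = 4
```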

**Proof.**

#### 2.3. Problem Decomposition

**Conjecture 2.** **All projections of a bounded linear optimization problem are bounded in the sense of the dominant dimension.** Any projection, over a space with n − 1 or fewer dimensions, of an n-dimensional linear optimization problem that is bounded in the sense of the dominant dimension is also bounded in the sense of the dominant dimension.

**Proof.**

#### 2.4. Constraint Closest Point

#### 2.5. Upper and Lower Constraints

#### 2.6. Inward and Outward Constraints

**Conjecture 3.** **The extreme vertex is defined by at least one inward constraint.** For any bounded system of constraints and an objective function, the extreme vertex is defined by the intersection of constraints, of which at least one must be an inward constraint.

**Proof.**

#### 2.7. Principal Planes and Constraint Projections

#### 2.7.1. Identifying the 2D Projected Problem Active Upper-Constraints

#### 2.7.2. Identifying the 2D Projected Problem Active Lower-Constraints

#### 2.8. Selecting Active Constraints and Solving the Problem

## 3. The Algorithm

#### 3.1. Define and Dimension Set Variables and Arrays

- 3.1.1. Check dimensional coherence. Define variables and arrays. Allocating values for the matrices and vectors describing the problem takes $O\left(n\right)+O\left(mn\right)+2O\left(m\right)$. This includes allocating values for the objective function’s coefficients ${c}_{i}$, the constraints’ coefficients ${a}_{ji}$, the resources ${b}_{j}$, and the vector of constraint comparison operators (greater than or equal, less than or equal).
- 3.1.2. Determine the dominant dimension g, the fastest-growing dimension. Identifying g is performed by applying Criterion (2) to each dimensional component of each constraint; thus, Criterion (2) is evaluated $O\left(mn\right)$ times.
- 3.1.3. Determine the coordinate $xc{p}_{j}$ for each dimension of the closest point for all constraints $j$. In this segment, the code first computes the shortest distance from the origin to each constraint; then the coordinate value for each dimension is determined. This process requires $2\cdot O\left(mn\right)$ steps.
- 3.1.4. Determine the objective-function’s angles $APjctnOi\left[i\right]$ and the on-principal-planes-projected-constraint angles $APjctGji\left[j,i\right]$.
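Step 3.1.1 can be sketched as follows. This is a minimal illustration of the stated costs, not the paper's code; the function and variable names are hypothetical.

```python
import numpy as np

def load_problem(c, A, b, comparators):
    """Step 3.1.1 sketch: check dimensional coherence, define variables and
    arrays.  c: n objective coefficients; A: m x n constraint coefficients;
    b: m resources; comparators: one of "<=" or ">=" per constraint.
    Total cost O(n) + O(mn) + 2*O(m), as stated in the text."""
    c = np.asarray(c, dtype=float)          # O(n)
    A = np.asarray(A, dtype=float)          # O(mn)
    b = np.asarray(b, dtype=float)          # O(m)
    if A.shape != (len(b), len(c)):
        raise ValueError("incoherent dimensions: A must be m x n")
    if len(comparators) != len(b) or any(op not in ("<=", ">=")
                                         for op in comparators):
        raise ValueError("one comparator ('<=' or '>=') per constraint")
    return c, A, b, list(comparators)       # O(m) comparator scan

# First two constraints of Problem A.1 (Appendix A) as a small example.
c, A, b, ops = load_problem([1.0, 2.5, 1.0],
                            [[0.5, 0.5, 0.0], [1.0, 0.0, 0.0]],
                            [4.0, 8.0],
                            ["<=", "<="])
```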

#### 3.2. Study the 2D-Projected Problem’s Upper-Constraints

- 3.2.1. Recognize upper-constraints for all projections over the principal planes by applying Equation (3). The variable UpperBoundsDIMg[j, i] is set to true or false for each constraint and for each dimension except the dominant dimension g. The computational complexity of this segment is $O\left(m\left(n-1\right)\right)$.
- 3.2.2. Identify the upper-constraint with the shortest hypotenuse for each principal-plane projection by applying Equation (7). 3.2.2.a. Compute the projection factor for upper-constraints (Equation (6)). In this segment, the hypotenuse length is determined for all constraint-projections having a value UpperBoundsDIMg[j, i] = true. Since there are n − 1 constraint-projections for each constraint, the worst-case scenario leads to $O\left(m\left(n-1\right)\right)$ steps.
- 3.2.3. Build an array with the possible active upper-constraints in each projection over the principal planes. Rank upper-constraints according to the constraint-gradient criterion.
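Step 3.2.1 admits the following sketch. The exact algebraic form of Equation (3) is not reproduced in this excerpt; the code below is a geometric reading of Figure 2a, under which a projected constraint is an upper-constraint when it lies within an angle of $\pi /2$ of the projected objective vector. Function and variable names other than UpperBoundsDIMg are hypothetical.

```python
import numpy as np

def upper_constraint_flags(A, c, g):
    """Raw upper-constraint test of step 3.2.1 (geometric reading of
    Figure 2a).  Constraint j's projection on principal plane (i, g) is
    flagged when the projected normal makes an angle below pi/2 with the
    projected objective vector (positive dot product) and the projection
    actually involves dimension i.  Cost O(m(n - 1)): one test per
    constraint per non-dominant dimension."""
    m, n = A.shape
    flags = np.zeros((m, n), dtype=bool)            # UpperBoundsDIMg[j, i]
    for i in range(n):
        if i == g:                                  # skip the dominant dimension
            continue
        for j in range(m):
            proj_a = np.array([A[j, i], A[j, g]])   # constraint normal on plane (i, g)
            proj_c = np.array([c[i], c[g]])         # objective on plane (i, g)
            flags[j, i] = A[j, i] != 0.0 and proj_a @ proj_c > 0.0
    return flags

# Problem A.1 (Appendix A) in "<=" form, non-negativity rewritten as -x <= 0.
# The dominant dimension is x2 (index 1 in 0-based numbering).
A = np.array([[0.5, 0.5, 0.0], [1, 0, 0], [1, 0, 0], [0, 1, 0],
              [0, 0, 1], [-1, 0, 0], [0, -1, 0], [0, 0, -1]], dtype=float)
flags = upper_constraint_flags(A, np.array([1.0, 2.5, 1.0]), g=1)
# R0 limits dim 0 and R4 limits dim 2, as in Table A2.  R1 and R2 both pass
# this raw test; the ranking of step 3.2.3 later discards the dominated R1.
```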

#### 3.3. Study the 2D-Projected Problem’s Lower-Constraints

#### 3.4. Select Active Constraints

#### 3.5. Find the Extreme Vertex’s Coordinates by Intersecting Active Constraints

## 4. Computational Study

## 5. Discussion

- A better understanding of the effect each constraint exerts over the problem’s feasible region by classifying the role of each constraint.
- Returning lists of active, non-active and redundant constraints.
- Ranking non-active constraints according to their distance to the feasible extreme vertex found. This ranking provides a useful scope of the sequence of conditions bounding a real situation beyond the found extreme point.
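The ranking in the last bullet amounts to computing each non-active constraint's perpendicular distance from the found extreme vertex and sorting. A minimal sketch (function name hypothetical), using Problem A.1's data in $\le$ form and its optimum vertex (0, 8, 10), which is what a standard LP solver returns for that problem:

```python
import numpy as np

def rank_nonactive(A, b, x_star, active):
    """Rank non-active constraints by perpendicular distance from the
    extreme vertex x_star (closest first)."""
    dists = np.abs(A @ x_star - b) / np.linalg.norm(A, axis=1)
    order = np.argsort(dists, kind="stable")
    return [(j, dists[j]) for j in order if j not in active]

# Problem A.1 with non-negativity rewritten as -x <= 0; the active set at
# the vertex (0, 8, 10) is {R0, R3, R4, R5} in the tables' 0-based labels.
A = np.array([[0.5, 0.5, 0.0], [1, 0, 0], [1, 0, 0], [0, 1, 0],
              [0, 0, 1], [-1, 0, 0], [0, -1, 0], [0, 0, -1]], dtype=float)
b = np.array([4.0, 8.0, 5.2, 8.0, 10.0, 0.0, 0.0, 0.0])
ranked = rank_nonactive(A, b, np.array([0.0, 8.0, 10.0]), active={0, 3, 4, 5})
# Closest non-active constraint is row 2 (the tables' R2, x1 <= 5.2),
# at distance 5.2 from the vertex.
```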

## 6. Conclusions

## Funding

## Conflicts of Interest

## Appendix A

#### Appendix A.1. Example 1: Problem A.1: 3 Dims, 8 Constraints. Cradle

max z = $1.0\cdot {x}_{1}+2.5\cdot {x}_{2}+1.0\cdot {x}_{3}$,

subject to:

| Constraint | Expression | | Bound |
|---|---|---|---|
| R1 | $0.5\cdot {x}_{1}+0.5\cdot {x}_{2}$ | ≤ | 4.0 |
| R2 | $1.0\cdot {x}_{1}$ | ≤ | 8.0 |
| R3 | $1.0\cdot {x}_{1}$ | ≤ | 5.2 |
| R4 | $1.0\cdot {x}_{2}$ | ≤ | 8.0 |
| R5 | $1.0\cdot {x}_{3}$ | ≤ | 10.0 |
| R6 | $1.0\cdot {x}_{1}$ | ≥ | 0 |
| R7 | $1.0\cdot {x}_{2}$ | ≥ | 0 |
| R8 | $1.0\cdot {x}_{3}$ | ≥ | 0 |
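Problem A.1 can be cross-checked with an off-the-shelf solver (this is `scipy`, not the paper's algorithm). `linprog` minimizes, so the objective is negated, and the non-negativity conditions R6–R8 enter as variable bounds.

```python
from scipy.optimize import linprog

c = [-1.0, -2.5, -1.0]            # negated objective: max z <=> min -z
A_ub = [[0.5, 0.5, 0.0],          # R1
        [1.0, 0.0, 0.0],          # R2
        [1.0, 0.0, 0.0],          # R3
        [0.0, 1.0, 0.0],          # R4
        [0.0, 0.0, 1.0]]          # R5
b_ub = [4.0, 8.0, 5.2, 8.0, 10.0]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3)
print(res.x, -res.fun)            # extreme vertex (0, 8, 10), z = 30
```

At the optimum, R4 fixes ${x}_{2}=8$, R1 then forces ${x}_{1}=0$, and R5 caps ${x}_{3}=10$, giving z = 30.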

**Figure A1.** Spatial representation of Problem A.1. (**Left**) Objective function’s normal vector. (**Right**) The eight constraint boundaries. The planes representing these constraint boundaries are infinite; they are shown truncated to allow the viewing of hidden objects.

#### Appendix A.1.1. Define and Dimension Set Variables and Arrays

- A.1.1.1. Check dimensional coherence. Define variables and arrays.
- A.1.1.2. Determine the dominant dimension g, the fastest-growing dimension. Applying Equation (2), the dominant dimension ${x}_{g}$ is ${x}_{2}$; thus g = 2.
- A.1.1.3 and A.1.1.4. Determine the coordinate $xc{p}_{i}$ for each dimension of the closest point for all constraints j. Determine the objective-function’s angles $APjctnOi\left[i\right]$ and the on-principal-planes-projected-constraint angles $APjctGji\left[j,i\right]$.

**Table A1.** Closest-point coordinates and projection angles to the axes (rad) for the constraints of Problem A.1 (here indexed R0–R7).

| Constraint Rj | Distance to Origin (by Objective-Function Direction) | $xc{p}_{0}$ | $xc{p}_{1}$ | $xc{p}_{2}$ | ${\mathrm{APjctG}}_{j0}$ | ${\mathrm{APjctG}}_{j1}$ | ${\mathrm{APjctG}}_{j2}$ |
|---|---|---|---|---|---|---|---|
| R0 | 6.5652 | 2.296 | 5.714 | 2.296 | π/4 | 0 | 0 |
| R1 | 22.9783 | 8 | 20 | 8 | π/2 | 0 | ∞ |
| R2 | 14.9359 | 5.2 | 13 | 5.2 | π/2 | 0 | ∞ |
| R3 | 9.1913 | 3.2 | 8 | 3.2 | 0 | 0 | 0 |
| R4 | 28.7228 | 10 | 25 | 10 | ∞ | 0 | π/2 |
| R5 | NR | NR | NR | NR | −π/2 | 0 | ∞ |
| R6 | NR | NR | NR | NR | −π | 0 | −π |
| R7 | NR | NR | NR | NR | ∞ | 0 | −π/2 |
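The distance and $xcp$ entries above can be reproduced under the reading suggested by the numbers: the "closest point" of constraint j along the objective-function direction is the intersection of the ray $t\cdot c$ ($t\ge 0$) with the hyperplane ${a}_{j}\cdot x={b}_{j}$, and the distance is that point's norm. (Under this reading, the table's 2.296 entries for R0 appear to be 2.286; the NR rows, whose hyperplanes pass through the origin, are omitted.)

```python
import numpy as np

c = np.array([1.0, 2.5, 1.0])
A = np.array([[0.5, 0.5, 0.0],   # R0
              [1.0, 0.0, 0.0],   # R1
              [1.0, 0.0, 0.0],   # R2
              [0.0, 1.0, 0.0],   # R3
              [0.0, 0.0, 1.0]])  # R4
b = np.array([4.0, 8.0, 5.2, 8.0, 10.0])

t = b / (A @ c)                  # ray parameter at each hyperplane
xcp = t[:, None] * c             # closest-point coordinates, one row per constraint
dist = t * np.linalg.norm(c)     # distance from the origin along the ray
# dist -> approx. [6.5652, 22.9783, 14.9359, 9.1913, 28.7228], matching Table A1
# xcp[0] -> approx. [2.286, 5.714, 2.286]
```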

#### Appendix A.1.2. Study the 2D-Projected Problem’s Upper-Constraints

**Table A2.** Hypotenuse for projected upper-constraints and possibilities for being the active upper-constraint. The first three value columns give the over-principal-planes-projected extreme-point-movement hypotenuse length for dimension i; the last three are true if the projected upper-constraint limits dimension i, with the stack position (1 = top) indicating higher probability. (*) marks the dominant dimension.

| Constraint Rj | Hypotenuse, i = 0 | i = 1 (*) | i = 2 | Limits Dim, i = 0 | i = 1 (*) | i = 2 |
|---|---|---|---|---|---|---|
| R0 | 11.3137 | NR | DNUC | true (1) | NR | false |
| R1 | ∞ | NR | DNUC | false | NR | false |
| R2 | ∞ | NR | DNUC | true (2) | NR | false |
| R3 | DNUC | NR | DNUC | false | NR | false |
| R4 | DNUC | NR | ∞ | false | NR | true (1) |
| R5 | DNUC | NR | DNUC | false | NR | false |
| R6 | DNUC | NR | DNUC | false | NR | false |
| R7 | DNUC | NR | DNUC | false | NR | false |

#### Appendix A.1.3. Study the 2D-Projected Problem’s Lower-Constraints

**Table A3.** Values of the wCritDistG criterion for projected lower-constraints and possibilities for being the active lower-constraint. The first three value columns give the over-principal-planes criterion distance wCritDistG for dimension i; the last three are true if the projected lower-constraint limits dimension i. (*) marks the dominant dimension.

| Constraint Rj | wCritDistG, i = 0 | i = 1 (*) | i = 2 | Limits Dim, i = 0 | i = 1 (*) | i = 2 |
|---|---|---|---|---|---|---|
| R0 | DNWC | NR | 5.174 | false | NR | true |
| R1 | DNWC | NR | DNWC | false | NR | false |
| R2 | DNWC | NR | DNWC | false | NR | false |
| R3 | 8 | NR | 8 | true | NR | true |
| R4 | DNWC | NR | DNWC | false | NR | false |
| R5 | 8 | NR | DNWC | true | NR | false |
| R6 | DNWC | NR | DNWC | false | NR | false |
| R7 | DNWC | NR | DNWC | false | NR | false |

**Figure A2.** Constraints R0, R3, R4 and R5 projected over the principal planes to work as upper and lower constraints for two-dimensional problems. Projected upper-constraints are blue, and projected lower-constraints are red. Bright red is used for constraint R0 (wCritDistG = 5.174) projected over plane ${x}_{1}{x}_{2}$ to indicate it has priority over constraint R3 (wCritDistG = 8), shown in dark red.

#### Appendix A.1.4. Select Active Constraints

#### Appendix A.1.5. Find the Extreme Vertex’s Coordinates by Intersecting Active Constraints

## References


**Figure 1.** Possibilities for two constraints combined to limit the objective function of two-dimensional problems. (**a**) Two constraints blocking the objective from their upper regions (blue areas); the objective function is not bounded. (**b**) Two constraints blocking the objective from their lower regions (red areas); the objective function is not bounded. (**c**) A constraint blocks the objective from entering its upper region (blue area), and another constraint blocks the objective from entering its lower region (red area); the objective function is bounded. The figure also shows the closest points for several constraints and their corresponding coordinates $xc{p}_{Ri}$ and $xc{p}_{Rg}$ over the axes i and g.

**Figure 2.** Identifying upper- and lower-constraints for the 2D-projected problems. (**a**) A constraint projected on the principal plane ig is a 2D-upper-constraint if it is in the range of angles between zero and $\pi /2$ radians with respect to the objective-function’s vector. (**b**) For each 2D-upper-constraint identified, the projection on the principal plane ig is a 2D-lower-constraint if the projected angle is within the range from zero to −π radians with respect to the 2D-upper-constraint’s normal vector.

**Figure 3.** Graphic representation of the elements of a generic two-dimensional linear optimization problem. The shaded areas indicate the feasible region for each constraint. (**a**) m constraints defining a multidimensional feasible space. (**b**) Constraints 1 to k, representing inward constraints. (**c**) Constraints k + 1 to m, representing outward constraints.

**Figure 4.** Moving the extreme point in the vicinity of the intersection of the objective function vector and constraint ${R}_{u}$. (**a**) A perspective of the constraint ${R}_{u}$ and the direction of the closest point $\mathit{x}\mathit{c}\mathit{p}$. The movements, represented by the blue arrow, are limited to the surface of the generic constraint ${R}_{u}$, varying only the values of dimension i and the dominant dimension g; thus, these movements are parallel to plane ig. The violet arrow represents these movements projected on the plane ig. (**b**) These movements’ projections and the projection factor $P{F}_{u}$.

**Figure 5.** Identifying possibly active upper-constraints. Shaded (upper-constraint) areas indicate infeasible regions. The white area represents the region not restricted by upper-constraints. Constraints ${R}_{u}$ and ${R}_{uh}$ can both be the active upper-constraint because they cut within the quadrant ${x}_{g}$ > 0 and ${x}_{i}$ > 0. Constraint ${R}_{k}$ is a redundant constraint in this context, since it does not cut the shortest-hypotenuse constraint ${R}_{uh}$ within the quadrant ${x}_{g}$ > 0 and ${x}_{i}$ > 0.

**Figure 6.** Identifying the active lower-constraint. (**a**) The criterion to identify the active lower-constraint is the value $wCriterion$ of dim g at the intersection of the projected point-movement around the lower-constraint’s closest point and axis g. The smallest value of $wCriterion$ indicates the constraint to be selected as active. (**b**) For a lower-constraint passing through the origin, the value $wCriterion$ is taken equal to the value of dim g at the intersection of the upper-constraint and axis g.


## Share and Cite

**MDPI and ACS Style**

Febres, G.L.
A Space Decomposition-Based Deterministic Algorithm for Solving Linear Optimization Problems. *Axioms* **2019**, *8*, 92.
https://doi.org/10.3390/axioms8030092
