# Apex Method: A New Scalable Iterative Method for Linear Programming †

School of Electronic Engineering and Computer Science, South Ural State University (National Research University), 76, Lenin Prospekt, 454080 Chelyabinsk, Russia
* Author to whom correspondence should be addressed.
This paper is an extended version of our paper published in 2020 Global Smart Industry Conference (GloSIC), Chelyabinsk, Russian Federation, 17–19 November 2020; pp. 20–26.
These authors contributed equally to this work.
Mathematics 2023, 11(7), 1654; https://doi.org/10.3390/math11071654
Received: 13 March 2023 / Revised: 24 March 2023 / Accepted: 28 March 2023 / Published: 29 March 2023

## Abstract

The article presents a new scalable iterative method for linear programming called the “apex method”. The key feature of this method is constructing a path close to optimal on the surface of the feasible region from a certain starting point to the exact solution of a linear programming problem. The optimal path refers to a path of the minimum length according to the Euclidean metric. The apex method is based on the predictor–corrector framework and proceeds in two stages: quest (predictor) and target (corrector). The quest stage calculates a rough initial approximation of the linear programming problem. The target stage refines the initial approximation with a given precision. The main operation used in the apex method is an operation that calculates the pseudoprojection, which is a generalization of the metric projection to a convex closed set. This operation is used both in the quest stage and in the target stage. A parallel algorithm using a Fejér mapping to compute the pseudoprojection is presented. An analytical estimation of the parallelism degree of this algorithm is obtained. Additionally, an algorithm implementing the target stage is given. The convergence of this algorithm is proven. An experimental study of the scalability of the apex method on a cluster computing system is described. The results of applying the apex method to solve problems from the Netlib-LP repository are presented.
MSC:
90C05; 65K05; 49M20

## 1. Introduction

This article is an expanded and revised version of the conference paper . The following motivation encouraged us to delve into this subject. The rapid development of big data storage and processing technologies [2,3] has led to the emergence of optimization mathematical models in the form of multidimensional linear programming (LP) problems . Such LP problems arise in industry, economics, logistics, statistics, quantum physics, and other fields [5,6,7]. An important class of LP applications is non-stationary problems related to optimization in dynamic environments . For a non-stationary LP problem, the objective function and/or constraints change during the computational process. Examples of non-stationary problems are the following: decision support in high-frequency trading [9,10], hydro-gas-dynamics problems , optimal control of technological processes [12,13,14], transportation [15,16,17], and scheduling [18,19].
One of the standard approaches to solving non-stationary optimization problems is to consider each change as the appearance of a new optimization problem that needs to be solved from scratch . However, this approach is often impractical because solving a problem from scratch without reusing information from the past can take too much time. Thus, it is desirable to have an optimization algorithm capable of continuously adapting the solution to the changing environment, reusing information obtained in the past. This approach is applicable for real-time processes if the algorithm tracks the trajectory of the moving optimal point fast enough. In the case of large-scale LP problems, the last requirement necessitates the development of scalable methods and parallel algorithms of LP.
One of the most promising approaches to solving complex problems in real time is the use of neural network models . Artificial neural networks are a powerful universal tool that is applicable to solving problems in almost all areas. The most popular neural network model is the feedforward neural network. Training and operation of such networks can be implemented very efficiently on GPUs . An important property of a feedforward neural network is that the time to solve a problem is a known constant that does not depend on the problem parameters. This feature is necessary for real-time mode. The pioneering work on the use of neural networks to solve LP problems is the article by Tank and Hopfield . The article describes a two-layer recurrent neural network. The number of neurons in the first layer equals the number of variables of the LP problem. The number of neurons in the second layer coincides with the number of constraints of the LP problem. The first and second layers are fully connected. The weights and biases are uniquely determined by the coefficients and the right-hand sides of the linear inequalities defining the constraints, and by the coefficients of the linear objective function. Thus, this network does not require training. The state of the neural network is described by the differential equation $x ˙ ( t ) = − ∇ E ( x ( t ) )$, where $E ( x ( t ) )$ is an energy function of a special type. Initially, an arbitrary point of the feasible region is fed to the input of the neural network. Then, the signal of the second layer is recursively fed to the first layer. Such a process leads to convergence to a stable state in which the output remains constant. This state corresponds to the minimum of the energy function, and the output signal is a solution to the LP problem. The Tank and Hopfield approach has been expanded and improved in numerous works (see, for example, [23,24,25,26,27]).
The main disadvantage of this approach is the unpredictable number of work cycles of the neural network. Therefore, a recurrent network based on an energy function cannot be used to solve large-scale LP problems in real time.
In the recent paper , an n-dimensional mathematical model for visualizing LP problems was proposed. This model makes it possible to use feedforward neural networks, including convolutional networks , to solve multidimensional LP problems whose feasible region is a bounded nonempty set. However, there are practically no works in scientific periodicals devoted to the use of convolutional neural networks for solving LP problems . The reason for this is that convolutional neural networks focus on image processing, and there are no methods for constructing training datasets based on a visual representation of multidimensional LP problems.
This article describes a new scalable iterative method for solving multidimensional LP problems, called the “apex method”. The apex method makes it possible to generate training datasets for the development of feedforward neural networks capable of finding a solution to a multidimensional LP problem based on its visual representation. The apex method is based on the predictor–corrector framework. At the prediction step, a point belonging to the feasible region of the LP problem is calculated. The corrector step calculates a sequence of points converging to the exact solution of the LP problem. The rest of the paper is organized as follows. Section 2 provides a review of iterative projection-type methods and algorithms for solving linear feasibility problems and LP problems. Section 3 includes the theoretical basis of the apex method. Section 4 presents a formal description of the apex method. Section 4.1 considers the implementation of the pseudoprojection operation in the form of sequential and parallel algorithms and provides an analytical estimation of the scalability of the parallel implementation. Section 4.2 describes the quest stage. Section 4.3 describes the target stage. Section 5 presents information about the software implementation of the apex method and describes the results of large-scale computational experiments on a cluster computing system. Section 6 discusses the issues related to the main contribution of this article, the advantages and disadvantages of the proposed approach, possible applications, and some other aspects of using the apex method. In Section 7, we present our conclusions and comment on possible further studies of the apex method. The final section, Notations, contains the main symbols used for describing the apex method.

## 2. Related Work

This section provides an overview of works devoted to iterative projection-type methods used to solve linear feasibility and LP problems. The linear feasibility problem can be stated as follows. Consider the system of linear inequalities in matrix form
$A x ⩽ b ,$
where $A ∈ R m × n$, and $b ∈ R m$. To avoid triviality, we assume that $m > 1$. The linear feasibility problem consists of finding a point $x ˜ ∈ R n$ satisfying matrix inequality system (1). We assume from now on that such a point exists.
Projection-type methods rely on the following geometric interpretation of the linear feasibility problem. Let $a i ∈ R n$ be a vector formed by the elements of the ith row of the matrix A. Then, the matrix inequality $A x ⩽ b$ is represented as a system of inequalities
$a i , x ⩽ b i , i = 1 , … , m .$
Here and further on, $〈 · , · 〉$ stands for the dot product of vectors. We assume from now on that
$a i ≠ 0$
for all $i = 1 , … , m$. For each inequality $a i , x ⩽ b i$, define the closed half-space
$H ^ i = x ∈ R n | a i , x ⩽ b i ,$
and its bounding hyperplane
$H i = x ∈ R n | a i , x = b i .$
For any point $x ∈ R n$, the orthogonal projection $π i ( x )$ of point x onto the hyperplane $H i$ can be calculated by the equation
$π i ( x ) = x − a i , x − b i a i 2 a i .$
Here and below, $·$ denotes the Euclidean norm. Let us define the feasible polytope
$M = ⋂ i = 1 m H ^ i .$
that presents the set of feasible points of system (1). Note that in this case, the polytope M is a closed convex set. Here, we always assume that $M ≠ ∅$, i.e., the solution of system (1) exists. In geometric interpretation, the linear feasibility problem consists of finding a point $x ˜ ∈ M$.
The forefathers of the iterative projection-type methods for solving linear feasibility problems are Kaczmarz and Cimmino. In  (English translation in ), Kaczmarz presented a sequential projections method for solving a consistent system of linear equations
$a i , x = b i , i = 1 , … , m .$
His method, starting from an arbitrary point $x ( 0 , m ) ∈ R n$, calculates the following sequence of point groups:
$x ( k , 1 ) = π 1 x ( k − 1 , m ) , x ( k , 2 ) = π 2 x ( k , 1 ) , … , x ( k , m ) = π m x ( k , m − 1 )$
for $k = 1 , 2 , 3 , …$ Here, $π i$$( i = 1 , … , m )$ is the orthogonal projection onto the hyperplane $H i$ defined by Equation (6). This sequence converges to the solution of system (8). Geometrically, the method can be interpreted as follows. The initial point $x ( 0 , m )$ is projected orthogonally onto hyperplane $H 1$. The projection is the point $x ( 1 , 1 )$, which now is thrown onto $H 2$. The resulting point $x ( 1 , 2 )$ is then thrown onto $H 3$ and gives the point $x ( 1 , 3 )$, etc. As a result, we obtain the last point $x ( 1 , m )$ from the first point group. The second point group is constructed in the same way, starting from the point $x ( 1 , m )$. The process is repeated for $k = 2 , 3 , …$
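As an illustration, one cyclic Kaczmarz sweep per Equation (9) can be sketched in a few lines of Python (a minimal sketch; the function and variable names are ours, not from the original paper):

```python
def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def kaczmarz(x, A, b, sweeps=50):
    """Kaczmarz method: cyclically project onto each hyperplane <a_i, x> = b_i."""
    for _ in range(sweeps):
        for a_i, b_i in zip(A, b):
            t = (dot(a_i, x) - b_i) / dot(a_i, a_i)  # signed step along a_i
            x = [xj - t * aj for xj, aj in zip(x, a_i)]
    return x

# Consistent 2x2 system: x1 = 1, x2 = 2; the solution is (1, 2).
sol = kaczmarz([5.0, 5.0], [[1.0, 0.0], [0.0, 1.0]], [1.0, 2.0])
```

For this toy system the rows are orthogonal, so a single sweep already lands on the solution; in general, convergence is only asymptotic.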
Cimmino proposed in  (English description in ) a simultaneous projection method for the same problem. This method uses the following orthogonal reflection operation
$ρ i ( x ) = x − 2 a i , x − b i a i 2 a i ,$
which calculates the point $ρ i ( x )$ symmetric to the point x with respect to the hyperplane $H i$. For the current approximation $x ( k )$, the Cimmino method simultaneously calculates reflections with respect to all hyperplanes $H i$$( i = 1 , … , m )$, and then a convex combination of these reflections is used to form the next approximation:
$x ( k + 1 ) = ∑ i = 1 m w i ρ i x ( k ) ,$
where $w i > 0$$( i = 1 , … , m )$, $∑ i = 1 m w i = 1$. When $w i = 1 m$$( i = 1 , … , m )$, Equation (11) is transformed into the following equation:
$x ( k + 1 ) = 1 m ∑ i = 1 m ρ i x ( k ) .$
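One Cimmino step per Equations (10) and (12) can be sketched as follows (a minimal illustration with our own naming, using equal weights):

```python
def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def reflect(x, a, b):
    """Orthogonal reflection (10) of x with respect to the hyperplane <a, x> = b."""
    t = 2.0 * (dot(a, x) - b) / dot(a, a)
    return [xj - t * aj for xj, aj in zip(x, a)]

def cimmino_step(x, A, b):
    """Equation (12): the next approximation is the mean of all reflections."""
    m = len(b)
    nxt = [0.0] * len(x)
    for a_i, b_i in zip(A, b):
        r = reflect(x, a_i, b_i)
        nxt = [s + rj / m for s, rj in zip(nxt, r)]
    return nxt

# Hyperplanes x1 = 1 and x2 = 2; starting from (5, 5).
x_next = cimmino_step([5.0, 5.0], [[1.0, 0.0], [0.0, 1.0]], [1.0, 2.0])
```

Because all reflections are computed independently of one another, the inner loop parallelizes trivially, which is the property exploited later in the article.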
Agmon  and Motzkin and Schoenberg  generalized the projection method from equations to inequalities. To solve problem (1), they introduce the relaxed projection
$π i λ ( x ) = ( 1 − λ ) x + λ π i ( x ) ,$
where $0 < λ < 2$. It is obvious that $π i 1 ( x ) = π i ( x )$. To calculate the next approximation, the relaxed projection method uses the following equation:
$x ( k + 1 ) = π l λ x ( k ) ,$
where
$l = arg max i x ( k ) − π i x ( k ) x ( k ) ∉ H ^ i .$
Informally, the next approximation $x ( k + 1 )$ is a relaxed projection of the previous approximation $x ( k )$ with respect to the furthest hyperplane $H l$ bounding the half-space $H ^ l$ not containing $x ( k )$. Agmon in  showed that sequence $x ( k )$ converges, as $k → ∞$, to a point on the boundary of M.
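A step of this relaxation method, per Equations (13)–(15), might be sketched as follows (our own naming; the distance to a violated hyperplane $H i$ equals $( a i , x − b i ) / ∥ a i ∥$):

```python
import math

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def relaxed_projection_step(x, A, b, lam=1.0):
    """Equations (13)-(15): relaxed projection onto the furthest violated hyperplane."""
    dists = [(dot(a_i, x) - b_i) / math.sqrt(dot(a_i, a_i))
             for a_i, b_i in zip(A, b)]
    l = max(range(len(b)), key=lambda i: dists[i])
    if dists[l] <= 0:
        return x  # x already satisfies all inequalities
    t = lam * (dot(A[l], x) - b[l]) / dot(A[l], A[l])
    return [xj - t * aj for xj, aj in zip(x, A[l])]

# System x1 <= 1, x2 <= 1; the point (3, 2) violates both constraints.
A, b = [[1.0, 0.0], [0.0, 1.0]], [1.0, 1.0]
y = relaxed_projection_step([3.0, 2.0], A, b)  # most violated: x1 <= 1
y = relaxed_projection_step(y, A, b)           # then x2 <= 1
```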
Censor and Elfving, in , generalized the Cimmino method to the case of linear inequalities. They consider the relaxed projection onto the half-space $H ^ i$ defined as follows:
$π ^ i λ ( x ) = x − λ max 0 , a i , x − b i a i 2 a i$
that gives the equation
$x ( k + 1 ) = ∑ i = 1 m w i π ^ i λ x ( k ) .$
Here, $0 < λ < 2$, and $w i > 0$$( i = 1 , … , m )$, $∑ i = 1 m w i = 1$. In , De Pierro proposed an approach to convergence proof for this method, which differs from the approach of Censor and Elfving. De Pierro’s approach is also applicable to the case when the underlying system of linear inequalities is infeasible. In this case, for $λ = 1$, sequence (17) converges to the point that is the minimum of the function $f ( x ) = ∑ i = 1 m w i π ^ i x − x 2$, i.e., it is a weighted (with the weights $w i$) least-squares solution of system (1).
The Cimmino-like methods allow efficient parallelization, since orthogonal projections (reflections) can be calculated simultaneously and independently. The article  investigates the efficiency of parallelization of the Cimmino-like method on Xeon Phi manycore processors. In , the scalability of the Cimmino method for multiprocessor systems with distributed memory is evaluated. The applicability of the Cimmino-like method for solving non-stationary systems of linear inequalities on computing clusters is considered in .
As a recent work, we can mention article , which extends the Agmon-Motzkin-Schoenberg relaxation method for the case of semi-infinite inequality systems. The authors consider the system with an infinite number of inequalities in the finite-dimensional Euclidean space $R n$:
$a i , x ⩽ b i , i ∈ I ,$
where I is an arbitrary infinite index set. The main idea of the method is as follows. Let $H x ( ∞ )$ denote the hyperplane of the biggest violation with respect to x, where the biggest violation is $sup i ∈ I max a i , x − b i , 0$. Let $x ( 0 )$ be an arbitrary initial point. If the current iteration $x ( k )$ is not a solution of system (18), then let $x ( k + 1 )$ be the orthogonal projection of $x ( k )$ onto a hyperplane $H i$ ($i ∈ I$) near the biggest violation $H x ( k ) ( ∞ )$. If system (18) is consistent, then the sequence $x ( k ) k = 1 , 2 , …$ generated by the described method converges to the solution of this system.
Solving systems of linear inequalities is closely related to LP problems, so projection-type methods can be effectively used to solve this class of problems. The equivalence of the linear feasibility problem and the LP problem is based on the primal-dual LP problem. Consider the primal LP problem in the matrix form:
$x ¯ = arg max x c , x | A x ⩽ b , x ⩾ 0 ,$
where $c , x ∈ R n$, $b ∈ R m$, $A ∈ R m × n$, and $c ≠ 0$. Let us construct the dual problem with respect to problem (19):
$u ¯ = arg min u b , u | A T u ⩾ c , u ⩾ 0 ,$
where $u ∈ R m$. The following primal-dual equality holds:
$c , x ¯ = max A x ⩽ b , x ⩾ 0 c , x = min A T u ⩾ c , u ⩾ 0 b , u = b , u ¯ .$
In [43,44], Eremin proposed the following method based on the primal-dual approach. Let the inequality system
$A ′ x ⩽ b ′$
define the feasible region of primal problem (19). This system is obtained by adding to the system $A x ⩽ b$ the vector inequality $− x ⩽ 0$. In this case, $A ′ ∈ R ( m + n ) × n$, and $b ′ ∈ R m + n$. Let $a i ′$ stand for the ith row of the matrix $A ′$. For each inequality $a i ′ , x ⩽ b i ′$, define the closed half-space
$H ^ i ′ = x ∈ R n | a i ′ , x ⩽ b i ′ ,$
and its bounding hyperplane
$H i ′ = x ∈ R n | a i ′ , x = b i ′ .$
Let $π i ′ ( x )$ stand for the orthogonal projection of point x onto the hyperplane $H i ′$:
$π i ′ ( x ) = x − a i ′ , x − b i ′ a i ′ 2 a i ′ .$
Let us define the projection onto the half-space $H ^ i ′$:
$π ^ i ′ ( x ) = x − max 0 , a i ′ , x − b i ′ a i ′ 2 a i ′ .$
This projection has the following two properties:
$x ∉ H ^ i ′ ⇒ π ^ i ′ ( x ) = π i ′ ( x ) ;$
$x ∈ H ^ i ′ ⇒ π ^ i ′ ( x ) = x .$
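The half-space projection (26) and its properties (27) and (28) are easy to check numerically (a minimal sketch; the names are ours):

```python
def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def project_onto_halfspace(x, a, b):
    """Equation (26): moves x only if it violates <a, x> <= b,
    and then lands exactly on the bounding hyperplane."""
    t = max(0.0, dot(a, x) - b) / dot(a, a)
    return [xj - t * aj for xj, aj in zip(x, a)]

a, b = [1.0, 1.0], 1.0
inside = project_onto_halfspace([0.0, 0.0], a, b)   # property (28): point unchanged
outside = project_onto_halfspace([2.0, 2.0], a, b)  # property (27): lands on <a, x> = b
```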
Define $φ 1 : R n → R n$ as follows:
$φ 1 ( x ) = 1 m + n ∑ i = 1 m + n π ^ i ′ ( x ) .$
In the same way, define the feasible region of dual problem (20) as follows:
$D ′ x ⩾ c ′ ,$
where $D = A T ∈ R n × m$, $D ′ ∈ R ( m + n ) × m$, and $c ′ ∈ R n + m$. Denote
$η ^ j ′ ( u ) = u − max 0 , d j ′ , u − c j ′ d j ′ 2 d j ′ ,$
and define $φ 2 : R m → R m$ as follows:
$φ 2 ( u ) = 1 n + m ∑ j = 1 n + m η ^ j ′ ( u ) .$
Now, define $φ 3 : R n + m → R n + m$ as follows:
$φ 3 [ x , u ] = [ x , u ] − c , x − b , u c 2 + b 2 [ c , − b ] ,$
which corresponds to Equation (21). Here, $[ · , · ]$ stands for the concatenation of vectors.
Finally, define $φ : R n + m → R n + m$ as follows:
$φ x , u = φ 3 [ φ 1 ( x ) , φ 2 ( u ) ] .$
If the feasible region of the primal problem is a bounded and nonempty set, then the sequence
$x ( k + 1 ) , u ( k + 1 ) = φ x ( k ) , u ( k )$
converges to the point $[ x ¯ , u ¯ ]$, where $x ¯$ is the solution of primal problem (19), and $u ¯$ is the solution of dual problem (20).
Article  proposes a method for solving non-degenerate LP problems based on calculating the orthogonal projection of some special point, independent of the main part of the data describing the LP problem, onto a problem-dependent cone generated by the constraint inequalities. This method solves a symmetric positive definite system of linear equations of a special kind. The author demonstrates a finite algorithm of an active-set family that is capable of calculating orthogonal projections for problems with up to thousands of rows and columns. The main drawback of this method is a significant increase in the dimension of the primary problem.
In article , Censor proposes the linear superiorization (LinSup) method as a tool for solving LP problems. The LinSup method does not guarantee finding the minimum point of the LP problem, but it directs the linear feasibility-seeking algorithm that it uses toward a point with a decreasing value of the objective function. This process is not identical to that employed by LP solvers, but it is a possible alternative to the simplex method for problems of huge size. The basic idea of LinSup is to add an extra term, called the perturbation term, to the iterative equation of the projection method. The perturbation term steers the feasibility-seeking algorithm toward reduced objective function values. In the case of LP problem (19), the objective function is $f ( x ) = c , x$, and LinSup adds $− η c c$ as a perturbation term to iterative Equation (17):
$x ( k + 1 ) = − η c c + ∑ i = 1 m w i π ^ i λ x ( k ) .$
Here, $0 < η < 1$ is a perturbation parameter.
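A single LinSup iteration per Equation (37) might look as follows (a sketch under our own naming, with uniform weights $w i = 1 / m$ and the relaxed half-space projections written out explicitly):

```python
import math

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def linsup_step(x, A, b, c, eta=0.1, lam=1.0):
    """Equation (37): simultaneous relaxed half-space projections
    plus the perturbation term -eta * c / ||c||."""
    m = len(b)
    acc = [0.0] * len(x)
    for a_i, b_i in zip(A, b):                  # uniform weights w_i = 1/m
        t = lam * max(0.0, dot(a_i, x) - b_i) / dot(a_i, a_i)
        proj = [xj - t * aj for xj, aj in zip(x, a_i)]
        acc = [s + pj / m for s, pj in zip(acc, proj)]
    nc = math.sqrt(dot(c, c))
    return [s - eta * cj / nc for s, cj in zip(acc, c)]

# For a feasible point, the feasibility-seeking part is the identity,
# so only the perturbation acts: (0, 0) moves against c = (1, 0).
step = linsup_step([0.0, 0.0], [[1.0, 0.0], [0.0, 1.0]], [1.0, 1.0], [1.0, 0.0])
```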
Article  presents an enthusiastic artificial-free linear programming method based on a sequence of jumps and the simplex method. It is performed in three phases. Phase 1 guarantees the existence of a feasible point by relaxing all non-acute constraints. With this initial feasible point, phase 2 sequentially jumps to feasible points with improved objective values. The last phase reinstates the rest of the non-acute constraints and uses the dual simplex method to find the optimal point.
Article  proposes a mathematical model for the visual representation of multidimensional LP problems. To visualize a feasible LP problem, an objective hyperplane $H c$ is introduced, the normal to which is the gradient of the objective function $f ( x ) = c , x$. In the case of seeking the maximum, the objective hyperplane is positioned in such a way that the value of the objective function at all its points is greater than the value of the objective function at all points of the convex polytope M, which is the feasible region of the LP problem. For any point $g ∈ H c$, the objective projection $γ M ( g )$ onto M is defined as follows:
$γ M ( g ) = arg min x x − g x ∈ M , π H c ( x ) = g , if ∃ x ∈ M : π H c ( x ) = g ; + ∞ , if ¬ ∃ x ∈ M : π H c ( x ) = g .$
Here, $π H c ( x )$ denotes the orthogonal projection onto $H c$. On the objective hyperplane $H c$, a rectangular lattice of points $G ∈ R n × R K ( n − 1 )$ is constructed, where K is the number of lattice points in one dimension. Each point $g ∈ G$ is mapped to the real number $γ M ( g ) − g$. This mapping generates a matrix of dimension $( n − 1 )$, which is an image of the LP problem. This approach opens up the possibility of using feedforward artificial neural networks, including convolutional neural networks, to solve multidimensional LP problems. One of the main obstacles to the implementation of this approach is the problem of generating a training set. The literature review shows that there is no suitable method capable of constructing such a training set compatible with the described approach. In the next sections, we present such a method.

## 3. Theoretical Background

In this section, we present a theoretical background used to construct the apex method. Consider the LP problem in the following form:
$x ¯ = arg max x ∈ R n c , x | A x ⩽ b ,$
where $c ∈ R n$, $b ∈ R m$, $A ∈ R m × n$, $m > 1$, and $c ≠ 0$. We assume that the constraint $x ⩾ 0$ is also included in the system $A x ⩽ b$ in the form of the following inequalities:
$− x 1 ⩽ 0 ; ⋯ − x n ⩽ 0 .$
Let $P$ stand for the set of row indices in matrix A:
$P = 1 , ⋯ , m .$
Let $a i ∈ R n$ be a vector formed by the elements of the ith row of the matrix A, and $a i ≠ 0$ for all $i ∈ P$. We denote by $H ^ i$ the closed half-space defined by the inequality $a i , x ⩽ b i$, and by $H i$ the hyperplane bounding $H ^ i$:
$H ^ i = x ∈ R n | a i , x ⩽ b i ;$
$H i = x ∈ R n | a i , x = b i .$
Definition 1.
The half-space $H ^$ is called dominant with respect to the vector c, or briefly c-dominant, if
$∀ x ∈ H ^ , ∀ λ ∈ R > 0 : x + λ c ∈ H ^ .$
The geometric meaning of this definition is that a ray emanating from any point of the half-space in the direction of the vector c lies entirely within this half-space.
Definition 2.
The half-space $H ^$ is called recessive with respect to the vector c, or briefly c-recessive, if it is not c-dominant, i.e.,
$∀ x ∈ H ^ , ∃ λ ∈ R > 0 : x + λ c ∉ H ^ .$
The following proposition provides the necessary and sufficient condition for the c-recessivity of the half-space.
Proposition 1.
Let a half-space $H ^$ be defined by the following equation:
$H ^ = x ∈ R n | a , x ⩽ β .$
Then, the necessary and sufficient condition for the c-recessivity of the half-space $H ^$ is
$a , c > 0 .$
Proof.
Let us prove the necessity first. Let condition (43) hold. Denote
$x ′ = β a ∥ a ∥ 2 .$
It follows
$a , x ′ = a , β a ∥ a ∥ 2 = β a , a ∥ a ∥ 2 = β ,$
i.e., $x ′ ∈ H ^$. By virtue of (43), there is $λ ′ ∈ R > 0$ such that
$x ′ + λ ′ c ∉ H ^ ,$
i.e.,
$a , x ′ + λ ′ c > β .$
Substituting the right-hand side of Equation (46) instead of $x ′$, we obtain
$a , β a ∥ a ∥ 2 + λ ′ c > β .$
Since $λ ′ > 0$, it follows
$a , c > 0 .$
Thus, the necessity is proved.
Let us prove the sufficiency by contradiction. Assume that (45) holds, and $H ^$ is not c-recessive, i.e.,
$∀ x ∈ H ^ , ∀ λ ∈ R > 0 : x + λ c ∈ H ^ .$
Since $x ′$ defined by (46) belongs to $H ^$, it follows
$x ′ + λ c ∈ H ^$
for all $λ ∈ R > 0$, i.e.,
$a , x ′ + λ c ⩽ β .$
Substituting the right-hand side of Equation (46) instead of $x ′$, we obtain
$a , β a ∥ a ∥ 2 + λ c ⩽ β .$
Since $λ > 0$, it follows
$a , c ⩽ 0 .$
However, this contradicts (45).    □
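Proposition 1 gives a one-line computational test for c-recessivity (a minimal sketch; the names are ours):

```python
def is_c_recessive(a, c):
    """Proposition 1: the half-space {x : <a, x> <= beta} is c-recessive iff <a, c> > 0."""
    return sum(ai * ci for ai, ci in zip(a, c)) > 0

c = [0.0, 1.0]                        # objective gradient: maximize x2
rec = is_c_recessive([0.0, 1.0], c)   # constraint x2 <= 1: blocks movement along c
dom = is_c_recessive([-1.0, 0.0], c)  # constraint -x1 <= 0: does not
```

Note that the test does not depend on the right-hand side beta: recessivity is a property of the normal vector alone.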
Denote
$e c = c c ,$
i.e., $e c$ stands for the unit vector parallel to vector c.
Proposition 2.
Let the half-space $H ^ i$ be c-recessive. Then, for any point $x ′ ∈ R n$, and any number $η > 0$, the point
$z = x ′ + η + b i − a i , x ′ a i , e c e c$
does not belong to the half-space $H ^ i$, i.e.,
$a i , z > b i .$
Proof.
The half-space $H ^ i$ is c-recessive, therefore, according to Proposition 1, the following inequality holds:
$a i , c > 0 .$
Taking (58) into account, we have
$a i , z = a i , x ′ + η + b i − a i , x ′ a i , e c e c = η a i , e c + b i .$
Substituting the right-hand side of Equation (57) instead of $e c$ in (61), we obtain
$a i , z = η c a i , c + b i .$
Since $η > 0$, by virtue of (60), the inequality $η c a i , c > 0$ holds. It follows that $a i , z > b i$, i.e., $z ∉ H ^ i$.    □
Define
$I c = i ∈ P a i , c > 0 ,$
i.e., $I c$ is the set of indices for which the half-space $H ^ i$ is c-recessive. We assume from now on that
$I c ≠ ∅ .$
Corollary 1.
Let an arbitrary feasible point $x ′$ of LP problem (38) be given:
$∀ i ∈ P : a i , x ′ ⩽ b i .$
Then, for any positive number $η ∈ R > 0$, the point
$z = x ′ + η + max b i − a i , x ′ a i , e c i ∈ I c e c$
does not belong to any c-recessive half-space $H ^ i$, i.e.,
$∀ i ∈ I c : a i , z > b i .$
Proof.
From (65), we obtain
$∀ i ∈ I c : b i − a i , x ′ ⩾ 0 .$
According to (63) and (57), the following condition holds:
$∀ i ∈ I c : a i , e c > 0 .$
Hence,
$max b i − a i , x ′ a i , e c i ∈ I c ⩾ 0$
for any $i ∈ I c$. Fix any $j ∈ I c$, and define
$η ′ = η + max b i − a i , x ′ a i , e c i ∈ I c − b j − a j , x ′ a j , e c ,$
where $η > 0$. Taking into account (70), it follows that $η ′ > 0$. Using (66) and (71), we obtain
$z = x ′ + η + max b i − a i , x ′ a i , e c i ∈ I c e c = x ′ + η ′ + b j − a j , x ′ a j , e c e c .$
According to Proposition 2, it follows that $a j , z > b j$, i.e., the point z defined by (66) does not belong to the half-space $H ^ j$ for any $j ∈ I c$.    □
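The point z from Equation (66) can be computed directly from a feasible point (a sketch with our own naming; the example box constraints are ours):

```python
import math

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def apex_point(x, A, b, c, eta=0.1):
    """Point z of Equation (66), computed from a feasible point x.
    Returns z together with the index set I_c of c-recessive half-spaces."""
    nc = math.sqrt(dot(c, c))
    e_c = [cj / nc for cj in c]
    I_c = [i for i, a_i in enumerate(A) if dot(a_i, c) > 0]
    shift = eta + max((b[i] - dot(A[i], x)) / dot(A[i], e_c) for i in I_c)
    return [xj + shift * ej for xj, ej in zip(x, e_c)], I_c

# Box 0 <= x1 <= 2, 0 <= x2 <= 3 with c = (1, 1); x' = (0, 0) is feasible.
A = [[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0], [0.0, -1.0]]
b = [2.0, 3.0, 0.0, 0.0]
z, I_c = apex_point([0.0, 0.0], A, b, [1.0, 1.0])
# per (67), z violates every c-recessive constraint
```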
The following proposition specifies the region containing a solution of LP problem (38).
Proposition 3.
Let $x ¯$ be a solution to LP problem (38). Then, there is an index $i ′ ∈ I c$ such that
$x ¯ ∈ H i ′ ,$
i.e., there is a c-recessive half-space $H ^ i ′$ such that its bounding hyperplane $H i ′$ includes $x ¯$.
Proof.
Denote by $J c$ the set of indices for which the half-space $H ^ j$ is c-dominant:
$J c = P \ I c .$
Since $x ¯$ belongs to the feasible region of LP problem (38), then
$x ¯ ∈ ⋂ j ∈ J c H ^ j ,$
and
$x ¯ ∈ ⋂ i ∈ I c H ^ i .$
Define the ray Y as follows:
$Y = x ¯ + λ c λ ∈ R ⩾ 0 .$
By Definition 1, we have
$Y ⊂ ⋂ j ∈ J c H ^ j ,$
i.e., the ray Y belongs to all the c-dominant half-spaces. By virtue of Definition 2,
$∀ i ∈ I c , ∃ λ ∈ R > 0 : x ¯ + λ c ∉ H ^ i .$
Taking into account (76), it means that
$∀ i ∈ I c : Y ∩ H i = y i ∈ R n ,$
i.e., the intersection of the ray Y and any hyperplane $H i$ bounding the c-recessive half-space $H ^ i$ is a single point $y i ∈ R n$. Let
$i ′ = arg min i ∈ I c x ¯ − y i y i = Y ∩ H i ,$
i.e., $H i ′$ is the nearest hyperplane to the point $x ¯$ for all $i ∈ I c$. Denote by $y ¯$ the intersection of the ray Y and the hyperplane $H i ′$:
$y ¯ = Y ∩ H i ′ .$
According to (81),
$y ¯ ∈ ⋂ i ∈ I c H ^ i ,$
i.e., the point $y ¯$ belongs to all the c-recessive half-spaces. By (78), it follows that
$y ¯ ∈ ⋂ i ∈ P H ^ i ,$
i.e., $y ¯$ belongs to the feasible region of LP problem (38). Let
$λ ′ = x ¯ − y ¯ .$
Then, by virtue of (77),
$c , y ¯ = c , x ¯ + λ ′ e c = c , x ¯ + λ ′ c , c c = c , x ¯ + λ ′ c .$
Since $x ¯$ is a solution of LP problem (38), the following condition holds:
$∀ y ∈ ⋂ i ∈ P H ^ i : c , y ⩽ c , x ¯ .$
Comparing this with (84), we obtain that
$c , y ¯ ⩽ c , x ¯ .$
Taking into account that $λ ′ ⩾ 0$ and $c ≠ 0$, by virtue of (86) and (88), we obtain $λ ′ = 0$. By (85), it follows that $x ¯ = y ¯$. By (82), this means that $x ¯ ∈ H i ′$, where $H ^ i ′$ is a c-recessive half-space.    □
Definition 3.
Let $M ≠ ∅$ be a convex closed set. A single-valued mapping $φ : R n → R n$ is called M-Fejér mapping , if
$∀ x ∈ M : φ ( x ) = x ,$
and
$∀ x ∉ M , ∀ y ∈ R n : ∥ φ ( x ) − y ∥ < ∥ x − y ∥ .$
Proposition 4.
Let $x ( 0 ) ∈ R n$. If $φ ( · )$ is a continuous M-Fejér mapping and
$x ( k ) = φ k x ( 0 ) k = 1 ∞$
is the iterative process generated by this mapping, then
$x ( k ) → x ˜ ∈ M .$
Proof.
The convergence follows directly from Theorem 6.2 and Corollary 6.3 in .    □
Let $π i ( x )$ stand for the orthogonal projection of point x onto hyperplane $H i$:
$π i ( x ) = x − a i , x − b i a i 2 a i .$
The next proposition provides a continuous M-Fejér mapping, which will be used in the apex method.
Proposition 5.
Let $M ≠ ∅$ be the convex closed set representing the feasible region of LP problem (38):
$M = ⋂ i = 1 m H ^ i .$
For any point $x ∈ R n$, let us define
$J x = i a i , x > b i ; i ∈ P ,$
i.e., $J x$ is the set of indices for which the half-space $H ^ i$ does not contain the point x. Then, the single-valued mapping $ψ : R n → R n$ defined by the equation
$ψ ( x ) = x , if x ∈ M ; 1 | J x | ∑ i ∈ J x π i ( x ) , if x ∉ M .$
is a continuous M-Fejér mapping.
Proof.
Obviously, the mapping $ψ ( · )$ is continuous. Let us prove that condition (90) holds. Our proof is based on a general scheme presented in . Let $y ∈ M$, and $x ∉ M$. It follows that
$J x ≠ ∅ .$
By virtue of Equation (94), the following inequality holds
$π i ( x ) − x > 0$
for all $i ∈ J x$. According to Lemma 3.13 in , the following inequality holds for all $i ∈ J x$:
$π i ( x ) − y 2 ⩽ x − y 2 − π i ( x ) − x 2 .$
It follows that
$y − ψ ( x ) 2 = y − 1 | J x | ∑ i ∈ J x π i ( x ) 2 ⩽ 1 | J x | ∑ i ∈ J x y − π i ( x ) 2 ⩽ 1 | J x | ∑ i ∈ J x x − y 2 − π i ( x ) − x 2 = x − y 2 − 1 | J x | ∑ i ∈ J x π i ( x ) − x 2 .$
According to (96) and (97), the following inequality holds:
$1 | J x | ∑ i ∈ J x π i ( x ) − x 2 > 0 .$
Hence,
$∀ x ∉ M , ∀ y ∈ R n : ψ ( x ) − y < x − y .$
□
Definition 4.
Let $M ≠ ∅$ be the feasible region of LP problem (38), $ψ ( · )$ be the mapping defined by Equation (95). The pseudoprojection $ρ M ( x )$ of the point x onto the feasible polytope M is the limit point of the sequence $[ x , ψ ( x ) , ψ 2 ( x ) , … , ψ k ( x ) , … ]$:
$lim k → ∞ ρ M ( x ) − ψ k ( x ) = 0 .$
The correctness of this definition is ensured by Propositions 4 and 5.

## 4. Description of Apex Method

In this section, we describe a new scalable iterative method for solving LP problem (38), called the “apex method”. The apex method is based on the predictor–corrector framework and proceeds in two stages: quest (predictor) and target (corrector). The quest stage calculates a rough initial approximation of LP problem (38). The target stage refines the initial approximation with a given precision. The main operation used in the apex method is an operation that calculates a pseudoprojection according to Definition 4. This operation is used both in the quest stage and in the target stage. In the next section, we describe and investigate a parallel algorithm for calculating a pseudoprojection.

#### 4.1. Algorithm for Calculating Pseudoprojection

In this section, we consider the implementation of the pseudoprojection operation in the form of sequential and parallel algorithms. The pseudoprojection operation $ρ M ( · )$ maps an arbitrary point $x ∈ R n$ to a point $ρ M ( x )$ belonging to the feasible polytope M, which is the feasible region of LP problem (38). The calculation of $ρ M ( x )$ is organized as an iterative process using Equation (95). A sequential implementation of this process is presented by Algorithm 1.
Let us give brief comments on this implementation. The main iterative process of constructing a sequence of Fejér approximations is represented by the repeat/until loop implemented in Steps 4–20. Steps 5–10 calculate the set $J$ of indices of half-spaces $H ^ i$ violated by the point $x ( k )$ presenting the current approximation. In Steps 14–18, the next approximation $x ( k + 1 )$ is calculated by Equation (95). The process terminates when the distance between adjacent approximations becomes less than $ϵ$, where $ϵ$ is a small positive parameter. Computational experiments show that, in the case of large LP problems, the calculation of a pseudoprojection is a process with high computational complexity . Therefore, we developed a parallel implementation of Algorithm 1, presented by Algorithm 2.
Algorithm 1 Calculating the pseudoprojection $\rho_M(x)$.
Require: $\hat{H}_i = \{ x \in \mathbb{R}^n \mid \langle a_i, x \rangle \leqslant b_i \}$, $M = \bigcap_{i=1}^{m} \hat{H}_i$, $M \neq \emptyset$
1:  function $\rho_M(x)$
2:      $k := 0$
3:      $x^{(0)} := x$
4:      repeat
5:          $J := \emptyset$
6:          for $i = 1 \ldots m$ do
7:              if $\langle a_i, x^{(k)} \rangle > b_i$ then
8:                  $J := J \cup \{ i \}$
9:              end if
10:         end for
11:         if $J = \emptyset$ then
12:             return $x^{(k)}$
13:         end if
14:         $S := 0$
15:         for all $i \in J$ do
16:             $S := S + \bigl( \langle a_i, x^{(k)} \rangle - b_i \bigr) \, a_i / \| a_i \|^2$
17:         end for
18:         $x^{(k+1)} := x^{(k)} - S / |J|$
19:         $k := k + 1$
20:     until $\| x^{(k)} - x^{(k-1)} \| < \epsilon$
21:     return $x^{(k)}$
22: end function
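A sequential sketch of Algorithm 1 in Python may clarify the iteration (the authors' implementation is in C++; the function name and tolerances here are illustrative):

```python
import numpy as np

def pseudoprojection(A, b, x, eps=1e-10, max_iter=100_000):
    """Fejer iteration of Algorithm 1: replace x by the average of its
    orthogonal projections onto the half-spaces <a_i, x> <= b_i it violates."""
    x = np.asarray(x, dtype=float)
    norms2 = np.sum(A * A, axis=1)        # ||a_i||^2, computed in advance
    for _ in range(max_iter):
        r = A @ x - b                     # r_i > 0 means half-space i is violated
        J = np.where(r > 0)[0]            # the index set J of Steps 5-10
        if J.size == 0:
            return x                      # x lies in M: pseudoprojection reached
        S = ((r[J] / norms2[J])[:, None] * A[J]).sum(axis=0)
        x_next = x - S / J.size           # Step 18: x^(k+1) := x^(k) - S/|J|
        if np.linalg.norm(x_next - x) < eps:
            return x_next                 # stopping criterion of Step 20
        x = x_next
    return x
```

On the unit square $\{ x : 0 \leqslant x_1, x_2 \leqslant 1 \}$, for example, the exterior point $(2, 2)$ is driven to the vertex $(1, 1)$ by averaging the two violated-constraint projections at every step.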
Algorithm 2 Parallel calculation of a pseudoprojection. (The step numbering of the master and the worker is aligned: a number missing in one column corresponds to a step of the other process.)

Master
1:  input $n, x^{(0)}$
3:  $k := 0$
4:  repeat
5:      Bcast $x^{(k)}$
8:      Gather $L_{reduce}$
9:      $(u, \sigma) := Reduce(\oplus, L_{reduce})$
10:     $x^{(k+1)} := u / \sigma$
11:     $k := k + 1$
12:     $exit := \| x^{(k)} - x^{(k-1)} \| < \epsilon$
13:     Bcast $exit$
14: until $exit$
15: output $x^{(k)}$
16: stop

lth Worker $(l = 0, \ldots, L - 1)$
1:  input $n, m, A, b, c$
2:  $L := NumberOfWorkers$
3:  $L_{map}(l) := [\, lm/L, \ldots, ((l+1)m/L) - 1 \,]$
4:  repeat
5:      RecvFromMaster $x^{(k)}$
6:      $L_{reduce}(l) := Map(F_{x^{(k)}}, L_{map}(l))$
7:      $(u_l, \sigma_l) := Reduce(\oplus, L_{reduce}(l))$
8:      SendToMaster $(u_l, \sigma_l)$
13:     RecvFromMaster $exit$
14: until $exit$
16: stop
Algorithm 2 is based on the BSF parallel computation model  designed for a cluster computing system. The BSF model uses the master/worker paradigm and requires the representation of the algorithm in the form of operations on lists using the higher-order functions Map and Reduce. In Algorithm 2, we use the list $L_{map} = [1, \ldots, m]$ of ordinal numbers of constraints of LP problem (38) as the second parameter of the higher-order function Map. As the first parameter of the higher-order function Map, we use the parameterized function
$F x : P → R n × Z ⩾ 0$
defined as follows:
$F_x(i) = (u_i, \sigma_i); \qquad u_i = \begin{cases} \pi_i(x), & \text{if } \langle a_i, x \rangle > b_i; \\ 0, & \text{if } \langle a_i, x \rangle \leqslant b_i; \end{cases} \qquad \sigma_i = \begin{cases} 1, & \text{if } \langle a_i, x \rangle > b_i; \\ 0, & \text{if } \langle a_i, x \rangle \leqslant b_i. \end{cases}$
Thus, the higher-order function $M a p F x , L m a p$ transforms the list $L m a p$ of constraint numbers into a list of pairs $( u i , σ i )$. Here, $u i$ is the orthogonal projection of the point x onto the hyperplane $H i$ in the case $x ∉ H ^ i$, and the zero vector otherwise; $σ i$ is the indicator that x violates the half-space $H ^ i$ ($i = 1 , … , m$):
$M a p F x , L m a p = F x ( 1 ) , … , F x ( m ) = ( u 1 , σ 1 ) , … , ( u m , σ m ) .$
Denote $L r e d u c e = ( u 1 , σ 1 ) , … , ( u m , σ m )$. Define a binary associative operation
$\oplus : (\mathbb{R}^n \times \mathbb{Z}_{\geqslant 0}) \times (\mathbb{R}^n \times \mathbb{Z}_{\geqslant 0}) \to \mathbb{R}^n \times \mathbb{Z}_{\geqslant 0} ,$
which is the first parameter of the higher-order function Reduce, as follows:
$(u', \sigma') \oplus (u'', \sigma'') = (u' + u'', \sigma' + \sigma'') .$
The higher-order function $R e d u c e ⊕ , L r e d u c e$ folds the list $L r e d u c e$ into the single pair by sequentially applying the operation ⊕ to all elements of the list:
$R e d u c e ⊕ , L r e d u c e = ( u 1 , σ 1 ) ⊕ … ⊕ ( u m , σ m ) = ( u , σ ) ,$
where
$u = ∑ i = 1 m u i ;$
$σ = ∑ i = 1 m σ i .$
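The Map/Reduce pair described above can be sketched in Python (illustrative only; `F`, `reduce_plus`, and `map_reduce_step` are hypothetical names). One Map/Reduce round reproduces exactly one Fejér step $x^{(k+1)} = u / \sigma$:

```python
from functools import reduce
import numpy as np

def F(x, A, b, i):
    """F_x(i) of Equation (101): the pair (u_i, sigma_i). u_i is the projection
    of x onto the hyperplane H_i when half-space i is violated, else the zero vector."""
    a = A[i]
    if a @ x > b[i]:
        return x - (a @ x - b[i]) * a / (a @ a), 1
    return np.zeros_like(x), 0

def reduce_plus(p, q):
    """The associative operation (+): componentwise sum of pairs."""
    return p[0] + q[0], p[1] + q[1]

def map_reduce_step(x, A, b):
    pairs = [F(x, A, b, i) for i in range(len(b))]   # Map over constraint numbers
    u, sigma = reduce(reduce_plus, pairs)            # Reduce with (+)
    return u / sigma if sigma > 0 else x             # x^(k+1) := u / sigma
```

For the unit square and the point $(2, 2)$, both upper constraints are violated, so the result $u/\sigma$ is the average of the two hyperplane projections, i.e., $(1.5, 1.5)$, matching the first iterate of Algorithm 1.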
In Algorithm 2, a parallel execution of work is organized according to the master/worker scheme. The parallel algorithm includes $L + 1$ processes: one master process and L worker processes. The master manages the computations, distributes the work among the workers, gathers the results back from them, and summarizes all the results to obtain the final result. For the sake of simplicity, it is assumed that the number of constraints m of LP problem (38) is a multiple of the number of workers L. In Step 1, the master reads the space dimension n and the starting point $x ( 0 )$. Step 3 of the master assigns zero to the iteration counter k. Steps 4–14 implement the main repeat/until loop calculating the pseudoprojection. In Step 5, the master broadcasts the current approximation $x ( k )$ to all workers. Step 8 of the master gathers partial results from all workers. In Step 9, the master folds the partial results into the pair $( u , σ )$, which is used to calculate the next approximation $x ( k + 1 )$ in Step 10. Step 11 of the master increases the iteration counter k by 1. In Step 12, the master calculates the criterion for stopping the iterative process and assigns the result to the Boolean variable $e x i t$. In Step 13, the master broadcasts the value of the Boolean variable $e x i t$ to all workers. In Step 14, the repeat/until loop ends if the Boolean variable $e x i t$ takes the value $t r u e$. Step 15 of the master outputs the last approximation $x ( k )$ as a result of the pseudoprojection. Step 16 terminates the master process.
All workers execute the same program codes, but with different data. In Step 1, the lth worker reads problem data. In Steps 2 and 3, the lth worker defines its own sublist $L m a p ( l )$ for processing. For convenience, we number the constraints starting from zero. The sublists of different workers do not overlap, and their concatenation represents the entire list to be processed:
$L m a p = L m a p ( 0 ) + + … + + L m a p ( L − 1 ) .$
The repeat/until loop of the lth worker corresponds to the repeat/until loop of the master (Steps 4–14). In Step 5, the lth worker receives the current approximation $x ( k )$ from the master. Step 6 of the lth worker executes the higher-order function $M a p$, which applies the parameterized function $F x ( k )$ defined by (101) to all elements of the sublist $L m a p ( l )$, resulting in the sublist $L r e d u c e ( l )$. Step 7 of the lth worker executes the higher-order function $R e d u c e$, which applies the operation ⊕ defined by (103) to all elements of the list $L r e d u c e ( l )$, resulting in the pair $( u l , σ l )$. In Step 8, the lth worker sends its resulting pair $( u l , σ l )$ to the master. In Step 13, the lth worker receives the value of the Boolean variable $e x i t$ from the master. If $e x i t = t r u e$, then the worker process is terminated. Otherwise, the repeat/until loop continues its work. The exchange operators Bcast, Gather, RecvFromMaster, and SendToMaster provide synchronization of the master and worker processes.
Let us estimate the scalability boundary of the described parallel pseudoprojection algorithm using the cost metrics of the BSF model . Here, the scalability boundary refers to the number of worker processes at which the maximum speedup is achieved. The cost metric of the BSF model includes the following parameters.
m:
length of the list $L m a p$;
D:
latency (time taken by the master to send a one-byte message to a single worker);
$t c$:
time taken by the master to send the current approximation $x ( k )$ to a single worker and receive the pair $( u l , σ l )$ from it (including latency);
$t M a p$:
time taken by a single worker to process the higher-order function Map for the entire list $L m a p$;
$t a$:
time taken by computing the binary operation ⊕.
According to Equation (14) from , the scalability boundary $L m a x$ of a parallel algorithm can be estimated as follows:
$L_{max} = \frac{1}{2} \left( \sqrt{ \left( \frac{t_c}{t_a} \ln 2 \right)^2 + \frac{t_{Map}}{t_a} + 4m } - \frac{t_c}{t_a} \ln 2 \right) .$
Let us calculate the time parameters in Equation (108). To do this, we introduce the following notation for one iteration of the repeat/until loop implemented in Steps 4–14 of Algorithm 2:
$c c$:
quantity of numbers sent from the master to the lth worker and back within one iteration;
$c F$:
quantity of arithmetic and comparison operations required to compute the function $F x$ defined by Equation (101);
$c ⊕$:
quantity of arithmetic and comparison operations required to compute the binary operation ⊕ .
In Step 5, the master sends to the lth worker one vector of dimension n. Then, in Step 8, the master receives a pair consisting of a vector of dimension n and a single number from the lth worker. In addition, in Step 13, the master sends a single Boolean value to the lth worker. Hence,
$c c = 2 n + 1 .$
Taking into account Equations (92) and (101), and assuming that $a i 2$ is calculated in advance, we obtain
$c F = 3 n + 2 .$
According to (103), the following equation holds for $c ⊕$:
$c ⊕ = 2 n + 1 .$
Let us denote by $τ o p$ the execution time of one arithmetic or comparison operation, and by $τ t r$ the time of sending a single real number from one process to another (excluding latency). Then, using (109)–(111), we obtain
$t c = c c τ t r + 3 D = ( 2 n + 1 ) τ t r + 3 D ;$
$t M a p = c F m τ o p = ( 3 n + 2 ) m τ o p ;$
$t a = c ⊕ τ o p = ( 2 n + 1 ) τ o p .$
Recall that the parameter D denotes the latency. Substituting the right-hand sides of these equations into (108), we have
$L_{max} = \frac{1}{2} \left( \sqrt{ \left( \frac{(2n+1) \tau_{tr} + 3D}{(2n+1) \tau_{op}} \ln 2 \right)^2 + \left( \frac{n+1}{2n+1} + 5 \right) m } - \frac{(2n+1) \tau_{tr} + 3D}{(2n+1) \tau_{op}} \ln 2 \right) ,$
where n is the space dimension and m is the number of constraints. For large values of n and m, this is equivalent to
$L_{max} = O \left( \sqrt{m} \right) .$
This estimation shows that the scalability of Algorithm 2 is limited, with a boundary that grows with the number of constraints m.
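A numeric evaluation of this estimate can illustrate the square-root growth. The sketch below substitutes the time parameters (112)–(114) into the scalability-boundary formula (108); the values of $\tau_{op}$, $\tau_{tr}$, and D are illustrative defaults, not measured hardware constants:

```python
import math

def L_max(n, m, tau_op=1.0, tau_tr=1.0, D=100.0):
    """Scalability boundary of Algorithm 2: Equation (108) with the time
    parameters t_c, t_Map, t_a from (112)-(114); constants are made up."""
    t_c = (2 * n + 1) * tau_tr + 3 * D
    t_map = (3 * n + 2) * m * tau_op
    t_a = (2 * n + 1) * tau_op
    r = (t_c / t_a) * math.log(2.0)
    return 0.5 * (math.sqrt(r * r + t_map / t_a + 4 * m) - r)
```

Quadrupling the number of constraints m roughly doubles the boundary, in agreement with the square-root asymptotics.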

#### 4.2. Quest Stage

The quest stage of the apex method plays the role of a predictor and includes the following steps.
1.
Calculate a feasible point $x ˜ ∈ M$.
2.
Calculate the apex point z.
3.
Calculate the point $u ( 0 )$ that is the pseudoprojection of the apex point z onto the feasible polytope M.
The feasible point $x ˜$ in Step 1 can be calculated by the following equation:
$\tilde{x} = \begin{cases} 0, & \text{if } 0 \in M; \\ \rho_M(0), & \text{if } 0 \notin M, \end{cases}$
where $ρ M ( · )$ is the operation of pseudoprojection onto the feasible polytope M (see Definition 4).
Step 2 calculates the apex point z by the following equation:
$z = \tilde{x} + \left( \eta + \max_{i \in I_c} \frac{b_i - \langle a_i, \tilde{x} \rangle}{\langle a_i, e_c \rangle} \right) e_c ,$
where $I c$, defined by Equation (63), is the set of indices for which the half-space $H ^ i$ is c-recessive, and $η ∈ R > 0$ is a positive parameter. Corollary 1 guarantees that the point z chosen according to Equation (117) does not belong to any c-recessive half-space $H ^ i$. This choice is based on the intuition that the pseudoprojection from such a point will not be far from the exact solution of the LP problem. This intuition comes from Proposition 3, which states that the solution of LP problem (38) lies on some hyperplane $H i$ bounding a c-recessive half-space $H ^ i$. The parameter $η$ can significantly affect the proximity of the point $ρ M ( z )$ to the exact solution. The optimal value of $η$ can be obtained by seeking the maximum of the objective function using the successive dichotomy method.
Step 3 calculates the initial approximation $u ( 0 )$ for the target stage by the following equation:
$u ( 0 ) = ρ M ( z ) .$
Numerous computational experiments show that the process of calculating the pseudoprojection by Definition 4 starting from an exterior point always converges to a point on the boundary of the feasible polytope M. However, at the moment, we do not have a rigorous proof of this fact.
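The apex-point computation of Step 2 can be sketched as follows (illustrative Python; `apex_point` is a hypothetical name, and the sketch assumes the sign convention that a half-space $\hat H_i$ is c-recessive exactly when $\langle a_i, e_c \rangle > 0$):

```python
import numpy as np

def apex_point(A, b, c, x_feas, eta):
    """Equation (117), as reconstructed in the text: shift the feasible point
    x_feas along e_c far enough (the max ratio over I_c, plus eta) to leave
    every c-recessive half-space <a_i, x> <= b_i."""
    e_c = c / np.linalg.norm(c)
    dots = A @ e_c
    I_c = np.where(dots > 0)[0]                  # assumed c-recessivity test; must be nonempty
    t = np.max((b[I_c] - A[I_c] @ x_feas) / dots[I_c])
    return x_feas + (eta + t) * e_c
```

For the unit square with $c = (1, 1)$ and $\tilde{x} = 0$, the resulting point lies beyond both c-recessive constraints $x_1 \leqslant 1$ and $x_2 \leqslant 1$, as Corollary 1 requires.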

#### 4.3. Target Stage

The target stage of the apex method plays the role of a corrector and calculates a sequence of points
$u ( 0 ) , u ( 1 ) , … , u ( k ) , …$
that has the following properties for all $k ∈ { 0 , 1 , 2 , … }$:
$u ( k ) ∈ Γ M ;$
$\langle c, u^{(k)} \rangle < \langle c, u^{(k+1)} \rangle ;$
$\lim_{k \to \infty} \| u^{(k)} - \bar{x} \| = 0 .$
Here, $Γ M$ stands for the set of boundary points of the feasible polytope M. Condition (120) means that all points of sequence (119) lie on the boundary of the polytope M. Condition (121) states that the value of the objective function at each next point of sequence (119) is greater than at the previous one. According to condition (122), sequence (119) converges to the exact solution of LP problem (38). An implementation of the target stage is presented in Algorithm 3.
Let us give brief comments on the steps of Algorithm 3. Step 1 reads the initial approximation $u ( 0 )$ constructed at the quest stage. Step 2 assigns zero to the iteration counter k. Step 3 adds the vector $δ e c$ to $u ( k )$ and assigns the result to v. Here, $e c$ is a unit vector parallel to c, and $δ$ is a positive parameter. The parameter $δ$ must be small enough to ensure that $\{ x \in \mathbb{R}^n \mid x = (1 - \lambda) w + \lambda u^{(k)},\ 0 \leqslant \lambda \leqslant 1 \} \subset \Gamma_M$, i.e., that the segment connecting w and $u ( k )$ lies on the boundary of M. Recall that $Γ M$ denotes the set of boundary points of the feasible polytope M. Step 4 calculates the pseudoprojection $ρ M ( v )$ and assigns the result to w. Steps 5–19 implement the main loop. This loop is processed while the following condition holds:
$\langle c, w - u^{(k)} \rangle > \epsilon_f .$
Here, $ϵ f$ is a small positive parameter. Step 6 introduces the point u moving along the surface of the polytope M from the point $u ( k )$ to the next approximation $u ( k + 1 )$. Step 7 calculates the vector d, which defines the direction of movement of the point u. The loop in Steps 8–14 moves the point u along the surface of the polytope M in this direction as far as possible. To achieve this, the vector d is successively divided in half each time the next step moves u beyond the boundary of the polytope M. The movement stops when the length of the vector d becomes less than $ϵ d$. Here, $ϵ d$ is a small positive parameter. Step 15 sets the next approximation $u ( k + 1 )$ using the value of u. Step 16 increases the iteration counter k by 1. Steps 17 and 18 calculate new points v and w for the next iteration of the main loop. Step 20 outputs $u ( k )$ as the final approximation of the exact solution $x ¯$ of LP problem (38). Schematically, the work of Algorithm 3 is shown in Figure 1.
Algorithm 3 Target stage.
Require: $\hat{H}_i = \{ x \in \mathbb{R}^n \mid \langle a_i, x \rangle \leqslant b_i \}$, $M = \bigcap_{i=1}^{m} \hat{H}_i$, $M \neq \emptyset$
1:  input $u^{(0)}$
2:  $k := 0$
3:  $v := u^{(k)} + \delta e_c$
4:  $w := \rho_M(v)$
5:  while $\langle c, w - u^{(k)} \rangle > \epsilon_f$ do
6:      $u := u^{(k)}$
7:      $d := w - u^{(k)}$
8:      while $\| d \| > \epsilon_d$ do
9:          if $u + d \in M$ then
10:             $u := u + d$
11:         else
12:             $d := d / 2$
13:         end if
14:     end while
15:     $u^{(k+1)} := u$
16:     $k := k + 1$
17:     $v := u^{(k)} + \delta e_c$
18:     $w := \rho_M(v)$
19: end while
20: output $u^{(k)}$
21: stop
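A compact Python sketch of the target stage may be helpful (illustrative only: `proj` stands for any projection routine onto M, the membership test `in_M` and the tolerances are arbitrary choices, not the authors' C++ implementation):

```python
import numpy as np

def target_stage(proj, in_M, c, u0, delta=0.1, eps_f=1e-6, eps_d=1e-9):
    """Target stage (Algorithm 3): repeatedly project u + delta*e_c onto M
    and slide along the surface in the direction d = w - u as far as possible."""
    e_c = c / np.linalg.norm(c)
    u_k = np.asarray(u0, dtype=float)
    w = proj(u_k + delta * e_c)              # Steps 3-4
    while c @ (w - u_k) > eps_f:             # stopping criterion of Step 5
        u, d = u_k.copy(), w - u_k           # Steps 6-7
        while np.linalg.norm(d) > eps_d:     # Steps 8-14
            if in_M(u + d):
                u = u + d
            else:
                d = d / 2                    # halve d when the step leaves M
        u_k = u                              # Step 15: next approximation
        w = proj(u_k + delta * e_c)          # Steps 17-18
    return u_k
```

With the exact metric projection onto the unit square (the case covered by Proposition 8) and $c = (1, 1)$, starting from the vertex $(1, 0)$, the sketch climbs the face $x_1 = 1$ to the optimal vertex $(1, 1)$.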
The following proposition guarantees the convergence of Algorithm 3.
Proposition 6.
Let the feasible polytope M of LP problem (38) be a closed bounded set, and $M ≠ ∅$. Then, the sequence $u ( k )$ generated by Algorithm 3 terminates in a finite number of iterations $K ⩾ 0$, and
$\langle c, u^{(0)} \rangle < \langle c, u^{(1)} \rangle < \langle c, u^{(2)} \rangle < \ldots < \langle c, u^{(K)} \rangle .$
Proof.
The case when $K = 0$ is trivial. Let $K > 0$ or $K = ∞$. First, we show that for any $k < K$ the following inequality holds:
$\langle c, u^{(k)} \rangle < \langle c, u^{(k+1)} \rangle .$
Indeed, inequality (123) implies
$\langle c, u^{(k)} \rangle < \langle c, w \rangle .$
According to Step 7 in Algorithm 3, it follows that
$d ≠ 0 .$
Without loss of generality, we can assume that $\| w - u^{(k)} \| > \epsilon_d$. Then, according to Steps 8–15, we obtain
$u ( k + 1 ) = u ( k ) + μ d ,$
where $μ > 0$. Taking into account inequality (123) and Step 7 of Algorithm 3, it follows
$\langle c, u^{(k+1)} \rangle = \langle c, u^{(k)} + \mu d \rangle = \langle c, u^{(k)} + \mu ( w - u^{(k)} ) \rangle = \langle c, u^{(k)} \rangle + \mu \langle c, w - u^{(k)} \rangle > \langle c, u^{(k)} \rangle .$
Now, we show that $K < ∞$. Assume the opposite, i.e., Algorithm 3 generates the infinite sequence of points. In this case, we obtain the monotonically increasing numerical sequence
$\langle c, u^{(0)} \rangle < \langle c, u^{(1)} \rangle < \langle c, u^{(2)} \rangle < \ldots$
Since the feasible polytope M is bounded, sequence (129) is bounded from above. According to the monotone convergence theorem, a monotonically increasing numerical sequence bounded from above converges to its supremum. This means that there exists $K ′ ∈ N$ such that
$\forall k > K' : \langle c, u^{(k+1)} \rangle - \langle c, u^{(k)} \rangle < \epsilon_f .$
Since $w \in M$, the first step of the loop in Steps 8–14 is always feasible, so $\mu \geqslant 1$ and $\langle c, u^{(k+1)} \rangle \geqslant \langle c, w \rangle$. It follows that
$\forall k > K' : \langle c, w \rangle - \langle c, u^{(k)} \rangle < \epsilon_f ,$
which is equivalent to
$\forall k > K' : \langle c, w - u^{(k)} \rangle < \epsilon_f .$
Thus, we obtain a contradiction with the stopping criterion (123) used in Step 5 of Algorithm 3. □
The notion of pseudoprojection is a generalization of the notion of metric projection, which can be defined as follows .
Definition 5.
Let Q be a closed convex set in $R n$, and $Q ≠ ∅$. The metric projection $P Q ( x )$ of the point $x ∈ R n$ onto the set Q is defined by the equation
$P_Q(x) = \arg\min \{ \| x - q \| \mid q \in Q \} .$
For the metric projection, the following proposition, similar to Proposition 6 for the pseudoprojection, holds.
Proposition 7.
The sequence $u ( k )$ generated by Algorithm 3 with the metric projection $P M ( · )$ instead of the pseudoprojection $ρ M ( · )$ terminates in a finite number of iterations $K ⩾ 0$, and
$\langle c, u^{(0)} \rangle < \langle c, u^{(1)} \rangle < \langle c, u^{(2)} \rangle < \ldots < \langle c, u^{(K)} \rangle .$
Proof.
The proof follows the same scheme as the proof of Proposition 6. □
The following proposition states that Algorithm 3 with metric projection converges to the exact solution of the LP problem.
Proposition 8.
Let the pseudoprojection $ρ M ( · )$ be replaced by the metric projection $P M ( · )$ in Algorithm 3. Then, Algorithm 3 converges to the exact solution $x ¯$ of LP problem (38).
Proof.
Let $u ¯$ stand for the terminal point of the sequence $u ( k )$ generated by Algorithm 3 with the metric projection $P M ( · )$. This point exists according to Proposition 7. Assume the opposite, i.e., $u ¯ ≠ x ¯$. This is equivalent to
$\langle c, \bar{u} \rangle < \langle c, \bar{x} \rangle .$
Let $S δ ( v )$ designate the open n-ball of radius $δ$ and center v, where
$v = u ¯ + δ e c .$
According to (135), it follows that
$S δ ( v ) ∩ M ≠ ∅ .$
Let
$w = \arg\min \{ \| x - v \| \mid x \in S_\delta(v) \cap M \} .$
This is equivalent to
$w = P M ( v ) .$
It is easy to see that the following inequality holds:
$\langle c, w \rangle > \langle c, \bar{u} \rangle .$
Conditions (136), (139), and (140) imply that $u ¯$ is not the terminal point of the sequence $u ( k )$ generated by Algorithm 3. Thus, we obtain a contradiction, and the proposition is proved. □
The convergence of Algorithm 3 with the pseudoprojection to the exact solution is based on the intuition that $\rho_M(v) \to P_M(v)$ as $\delta \to 0$. However, a rigorous proof of this fact is beyond the scope of this article.

## 5. Implementation and Computational Experiments

We implemented a parallel version of the apex method in C++ using the BSF-skeleton , which is based on the BSF parallel computation model . The BSF-skeleton encapsulates all aspects related to the parallelization of a program using the MPI library. The source code of this implementation is freely available at https://github.com/leonid-sokolinsky/Apex-method (accessed on 1 March 2023). Using this program, we investigated the scalability of the apex method. The computational experiments were conducted on the “Tornado SUSU” computing cluster , whose specifications are shown in Table 1.
As test problems, we used random synthetic LP problems generated by the program FRaGenLP . A verification of solutions obtained by the apex method was performed using the program VaLiPro . We conducted a series of computational experiments in which we investigated the dependence of speedup and parallel efficiency on the number of worker nodes used. The results are presented in Figure 2.
Here, the speedup $α$ is defined as the ratio of the time $T ( 1 )$ required by the parallel algorithm using one master node and one worker node to solve a problem to the time $T ( L )$ required by the parallel algorithm using one master node and L worker nodes to solve the same problem:
$α = T ( 1 ) T ( L ) .$
The parallel efficiency $ϵ$ is calculated as the ratio of the speedup $α$ to the number L of worker nodes:
$ϵ = α L .$
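In code, Equations (141) and (142) are simply (hypothetical helper names, for illustration):

```python
def speedup(T1, TL):
    """Equation (141): alpha = T(1) / T(L)."""
    return T1 / TL

def parallel_efficiency(T1, TL, L):
    """Equation (142): efficiency = alpha / L."""
    return speedup(T1, TL) / L
```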
The computations were performed with the following dimensions: 5000, 7500, and 10,000. The number of inequalities was 10,002, 15,002, and 20,002, respectively. The experiments showed that the scalability boundary of the parallel apex algorithm depends significantly on the size of the LP problem. For $n = 5000$, the scalability boundary was approximately 55 worker nodes. For the problem of dimension $n = 7500$, this boundary increased to 80 nodes, and for the problem of dimension n = 10,000, it was close to 100 nodes. Further increasing the problem size caused the compiler error “insufficient memory”. It should be noted that the computations were performed in the double-precision floating-point format occupying 64 bits in computer memory. An attempt to use the single-precision floating-point format occupying 32 bits failed because the apex algorithm stopped converging. Parallel efficiency also significantly depends on the size of the LP problem. For $n = 5000$, the efficiency dropped below 50% at 70 worker nodes. For $n = 7500$ and n = 10,000, the 50% drop of the efficiency occurred at 110 and 130 worker nodes, respectively.
The experiments also showed that the parameter $η$ used in Equation (117) to calculate the apex point z at the quest stage has a negligible effect on the total time of solving the problem when this parameter takes large values (more than 100,000). If the apex point is not far enough away from the polytope, then its pseudoprojection can be an interior point of some polytope face. If the apex point is taken far enough away from the polytope (the value $η = 20 , 000 · n$ was used in the experiments), then the pseudoprojection always fell into one of the polytope vertices. Additionally, we would like to note that, in the case of synthetic LP problems generated by the program FRaGenLP, all computed points in the sequence $u ( k )$ were vertices of the polytope. The computational experiment showed that more than 99% of the time spent solving the LP problem by the apex method was taken up by the calculation of pseudoprojections (Step 18 of Algorithm 3). The calculation of one approximation $u ( k )$ for a problem of dimension n = 10,000 on 100 worker nodes took 44 min.
We also tested the apex method on a subset of LP problems from the Netlib-LP repository  available at https://netlib.org/lp/data (accessed on 1 March 2023). The Netlib suite of linear optimization problems includes many real-world applications, such as stochastic forestry problems, oil refinery problems, flap settings of aircraft, pilot models, audit staff scheduling, truss structure problems, airline schedule planning, industrial production and allocation models, image restoration problems, and multisector economic planning problems. It contains problems ranging in size from 32 variables and 27 constraints up to 15,695 variables and 16,675 constraints . The exact solutions (optimal values of the objective functions) of all problems were obtained from the paper . The results are presented in Table 2.
These experiments showed that the relative error of the rough solution calculated at the quest stage was less than or equal to 0.2, excluding the adlittle, blend, and fit1d problems. The relative error of the refined solution calculated at the target stage was less than $10 − 3$, excluding kb2 and sc105, for which the error was $0.035$ and $0.007$, respectively. The runtime varied from a few seconds for afiro to tens of hours for blend. One of the main parameters affecting the convergence rate of the apex method was the parameter $ϵ$ used in Step 12 of the parallel Algorithm 2 calculating the pseudoprojection. All runs are available at https://github.com/leonid-sokolinsky/Apex-method/tree/master/Runs (accessed on 1 March 2023).

## 6. Discussion

In this section of the article, we will discuss some issues related to the scientific contribution and applicability of the apex method in practice and give answers to the following questions.
1.
What is the main scientific contribution of the apex method?
2.
What is the practical significance of the apex method?
3.
What is our confidence that the apex method always converges to the exact solution of the LP problem?
4.
How can we speed up the convergence of Algorithm 1, which calculates a pseudoprojection onto the feasible polytope M?
The main scientific contribution of this article is the apex method, which, as far as we know, is the first method to construct a path close to optimal on the surface of a feasible polytope from a certain starting point to the exact solution of the LP problem. Here, the optimal path refers to a path of the minimum length according to the Euclidean metric. Intuitively, moving in the direction of the greatest increase in the value of the objective function gives the shortest path to the point of maximum of the objective function on the surface of the feasible polytope. We intend to present a formal proof of this fact in future work.
The practical significance of the apex method stems from the prospect of applying feedforward neural networks, including convolutional neural networks, to solve LP problems. In a recent paper , a method of visual representation of n-dimensional LP problems was proposed. This method constructs an image of the feasible polytope M in the form of a matrix I of dimension $( n − 1 )$ using the rasterization technique. A vector antiparallel to the gradient vector of the objective function is used as the view ray. Each pixel value in I is proportional to the value of the objective function at the corresponding point on the surface of the feasible polytope M. Such an image makes it possible to use a feedforward neural network to construct the optimal path on the surface of the feasible polytope to the solution of the LP problem. The feedforward neural network can directly calculate the vector d in Algorithm 3, making the calculation of the pseudoprojection redundant. The advantage of this approach is that the feedforward neural network works in real time, which is important for robotics. We are not aware of other methods that solve LP problems in real time. However, applying a feedforward neural network to solve LP problems involves the task of preparing a training dataset. The apex method provides the possibility to construct such a training dataset.
Proposition 6 states that Algorithm 3 converges to some point on the surface of the feasible polytope M in a finite number of steps but leaves open the question of whether the terminal point is a solution to the LP problem. According to Proposition 8, the answer to this question is positive if, in Algorithm 3, the pseudoprojection is replaced by the metric projection. However, there are no methods for constructing the metric projection onto an arbitrary bounded convex polytope. Therefore, we are forced to use the pseudoprojection. Numerous experiments show that the apex method converges to the solution of the LP problem, but this fact requires a formal proof. We plan to provide such a proof in our future work.
The main drawback of the apex method is the slow rate of convergence to the solution of the LP problem. An LP problem that takes several seconds to solve with standard linear programming solvers may take several hours to solve with the apex method. Computational experiments demonstrated that more than 99% of the time spent solving the LP problem by the apex method was taken by the calculation of pseudoprojections. Therefore, the issue of speeding up the process of calculating the pseudoprojection is urgent. In the apex method, the pseudoprojection is calculated by Algorithm 1, which belongs to the class of projection-type methods discussed in detail in Section 2. In the case of a closed bounded polytope $M ≠ ∅$, the projection-type methods have a low linear rate of convergence:
$\| x^{(k+1)} - \rho_M(x^{(0)}) \| \leqslant C q^k ,$
where $0 < C < ∞$ is some constant, and $q ∈ ( 0 , 1 )$ is a parameter that depends on the angles between the half-spaces corresponding to the faces of the polytope M . This means that the distance between adjacent approximations decreases at each iteration by a constant factor less than 1. For small angles, the convergence rate can decrease to values close to zero. This fundamental limitation of the projection-type methods cannot be overcome. However, we can reduce the number of half-spaces used to compute the pseudoprojection. According to Proposition 3, the solution of LP problem (38) belongs to some c-recessive half-space. Hence, in Algorithm 1 calculating the pseudoprojection, we can take into account only the c-recessive half-spaces. On average, this reduces the number of half-spaces by a factor of two. Another way to reduce the pseudoprojection calculation time is to parallelize Algorithm 1, as was done in Algorithm 2. However, the degree of parallelism in this case will be limited by the theoretical estimation (115).

## 7. Conclusions and Future Work

In this paper, we proposed a new scalable iterative method for linear programming called the “apex method”. The key feature of this method is constructing a path close to optimal on the surface of the feasible region from a certain starting point to the exact solution of the linear programming problem. The optimal path refers to a path of the minimum length according to the Euclidean metric. The main practical contribution of the apex method is that it opens the possibility of using feedforward neural networks to solve multidimensional LP problems.
The paper presents a theoretical basis used to construct the apex method. The half-spaces generated by the constraints of the LP problem are considered. These half-spaces form the feasible polytope M, which is a closed bounded set. These half-spaces are divided into two groups with respect to the gradient c of the linear objective function: c-neutral-dominant and c-recessive. The necessary and sufficient condition for c-recessivity is obtained. It is proved that the solution to the LP problem lies on the boundary of a c-recessive half-space. The equation defining the apex point, which does not belong to any c-recessive half-space, is derived. The apex point is used to calculate the initial approximation on the surface of the feasible polytope M. The apex method constructs a path close to optimal on the surface of the feasible polytope M from this initial approximation to the solution of the LP problem. To do this, it uses a parallel algorithm constructing the pseudoprojection, which is a generalization of the notion of metric projection. For this parallel algorithm, an analytical estimation of the scalability boundary is obtained. This estimation says that the scalability boundary of the parallel algorithm calculating the pseudoprojection on a cluster computing system does not exceed $O ( \sqrt{m} )$ processor nodes, where m is the number of constraints of the linear programming problem. The algorithm constructing a path close to optimal on the surface of the feasible polytope, from the initial approximation to the exact solution of the linear programming problem, is described. The convergence of this algorithm is proven.
The parallel version of the apex method is implemented in C++ using the BSF-skeleton based on the BSF parallel computation model. Large-scale computational experiments were conducted to investigate the scalability of the apex method on a cluster computing system. These experiments showed that, for a synthetic scalable linear programming problem with a dimension of 10,000 and 20,002 constraints, the scalability bound of the apex method is close to 100 processor nodes. At the same time, the experiments showed that more than 99% of the time spent solving the LP problem by the apex method was taken by the calculation of pseudoprojections.
In addition, the apex method was used to solve 10 problems from the Netlib-LP repository. These experiments showed that the relative error varied from $3.5 \times 10^{-3}$ to $8.6 \times 10^{-9}$. The runtime ranged from a few seconds to tens of hours. The main parameter affecting the convergence rate of the apex method was the precision of calculating the pseudoprojection.
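Assuming the errors in Table 2 denote the relative deviation of the computed objective value from the known exact optimum, they can be reproduced with a one-line helper (the function name is ours):

```python
def relative_error(exact, approx):
    """Relative deviation of the computed objective value from the
    known exact optimum, |approx - exact| / |exact|."""
    return abs(approx - exact) / abs(exact)

# 'afiro' row of Table 2: exact optimum vs. the quest-stage rough solution
err = relative_error(-4.64753142e2, -4.55961488e2)  # approximately 1.89e-2
```

The value agrees with the quest-stage error reported for afiro in Table 2.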
Our future research directions on this subject are as follows. We plan to develop a new method for calculating the pseudoprojection onto the feasible polytope of the LP problem. The basic idea is to reduce the number of half-spaces used in one iteration. At the same time, the number of these half-spaces should remain large enough to enable efficient parallelization. The new method should outperform Algorithm 2 in terms of convergence rate. We will also need to prove that the new method converges to a point that lies on the boundary of the feasible region. In addition, we plan to investigate the usefulness of using the linear superiorization technique [46] in the apex method.

## Author Contributions

All authors contributed equally to the main text. Conceptualization, L.B.S.; Methodology, L.B.S.; Software, L.B.S.; Validation, L.B.S.; Investigation, L.B.S. and I.M.S.; Writing—original draft, L.B.S. and I.M.S. All authors have read and agreed to the published version of the manuscript.

## Funding

This research was supported by RSF (project No. 23-21-00356).


## Conflicts of Interest

The authors declare no conflict of interest.

## Notations

- $\mathbb{R}^n$: real Euclidean space
- $\|\cdot\|$: Euclidean norm
- $\langle \cdot , \cdot \rangle$: dot product of two vectors
- $[\cdot , \cdot]$: concatenation of two vectors
- $f(x)$: linear objective function
- $c$: gradient of the objective function $f(x)$
- $e_c$: unit vector parallel to vector $c$
- $\bar{x}$: solution of the LP problem
- $M$: feasible polytope
- $\Gamma_M$: set of boundary points of the feasible polytope $M$
- $a_i$: $i$th row of matrix $A$
- $\hat{H}_i$: half-space defined by the inequality $\langle a_i , x \rangle \leqslant b_i$
- $H_i$: hyperplane defined by the equation $\langle a_i , x \rangle = b_i$
- $\mathcal{P}$: set of row indices in matrix $A$
- $\mathcal{I}_c$: set of indices for which the half-space $\hat{H}_i$ is $c$-recessive
- $\pi_i(\cdot)$: orthogonal projection onto the hyperplane $H_i$
- $\rho_M(\cdot)$: pseudoprojection onto the feasible polytope $M$
- $P_M(\cdot)$: metric projection onto the feasible polytope $M$

## References

1. Sokolinsky, L.B.; Sokolinskaya, I.M. Scalable Method for Linear Optimization of Industrial Processes. In Proceedings of the 2020 Global Smart Industry Conference, GloSIC, Chelyabinsk, Russia, 17–19 November 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 20–26. [Google Scholar] [CrossRef]
2. Jagadish, H.V.; Gehrke, J.; Labrinidis, A.; Papakonstantinou, Y.; Patel, J.M.; Ramakrishnan, R.; Shahabi, C. Big data and its technical challenges. Commun. ACM 2014, 57, 86–94. [Google Scholar] [CrossRef]
3. Hartung, T. Making Big Sense From Big Data. Front. Big Data 2018, 1, 5. [Google Scholar] [CrossRef] [PubMed][Green Version]
4. Sokolinskaya, I.; Sokolinsky, L.B. On the Solution of Linear Programming Problems in the Age of Big Data. In Proceedings of the Parallel Computational Technologies. PCT 2017. Communications in Computer and Information Science, Kazan, Russia, 3–7 April 2017; Sokolinsky, L., Zymbler, M., Eds.; Springer: Cham, Switzerland, 2017; Volume 753, pp. 86–100. [Google Scholar] [CrossRef][Green Version]
5. Chung, W. Applying large-scale linear programming in business analytics. In Proceedings of the 2015 IEEE International Conference on Industrial Engineering and Engineering Management (IEEM), Singapore, 6–9 December 2015; IEEE: Piscataway, NJ, USA, 2015; pp. 1860–1864. [Google Scholar] [CrossRef]
6. Gondzio, J.; Gruca, J.A.; Hall, J.A.J.; Laskowski, W.; Zukowski, M. Solving large-scale optimization problems related to Bell’s Theorem. J. Comput. Appl. Math. 2014, 263, 392–404. [Google Scholar] [CrossRef][Green Version]
7. Sodhi, M.S. LP modeling for asset-liability management: A survey of choices and simplifications. Oper. Res. 2005, 53, 181–196. [Google Scholar] [CrossRef][Green Version]
8. Branke, J. Optimization in Dynamic Environments. In Evolutionary Optimization in Dynamic Environments. Genetic Algorithms and Evolutionary Computation; Springer: Boston, MA, USA, 2002; Volume 3, pp. 13–29. [Google Scholar] [CrossRef]
9. Brogaard, J.; Hendershott, T.; Riordan, R. High-Frequency Trading and Price Discovery. Rev. Financ. Stud. 2014, 27, 2267–2306. [Google Scholar] [CrossRef][Green Version]
10. Deng, S.; Huang, X.; Wang, J.; Qin, Z.; Fu, Z.; Wang, A.; Yang, T. A Decision Support System for Trading in Apple Futures Market Using Predictions Fusion. IEEE Access 2021, 9, 1271–1285. [Google Scholar] [CrossRef]
11. Seregin, G. Lecture Notes on Regularity Theory for the Navier-Stokes Equations; World Scientific Publishing Company: Singapore, 2014; p. 268. [Google Scholar] [CrossRef]
12. Demin, D.A. Synthesis of optimal control of technological processes based on a multialternative parametric description of the final state. East. Eur. J. Enterp. Technol. 2017, 3, 51–63. [Google Scholar] [CrossRef][Green Version]
13. Kazarinov, L.S.; Shnayder, D.A.; Kolesnikova, O.V. Heat load control in steam boilers. In Proceedings of the 2017 International Conference on Industrial Engineering, Applications and Manufacturing, ICIEAM 2017—Proceedings, Saint Petersburg, Russia, 16–19 May 2017; IEEE: Piscataway, NJ, USA, 2017. [Google Scholar] [CrossRef]
14. Zagoskina, E.V.; Barbasova, T.A.; Shnaider, D.A. Intelligent Control System of Blast-furnace Melting Efficiency. In Proceedings of the SIBIRCON 2019—International Multi-Conference on Engineering, Computer and Information Sciences, Proceedings, Novosibirsk, Russia, 21–27 October 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 710–713. [Google Scholar] [CrossRef]
15. Fleming, J.; Yan, X.; Allison, C.; Stanton, N.; Lot, R. Real-time predictive eco-driving assistance considering road geometry and long-range radar measurements. IET Intell. Transp. Syst. 2021, 15, 573–583. [Google Scholar] [CrossRef]
16. Scholl, M.; Minnerup, K.; Reiter, C.; Bernhardt, B.; Weisbrodt, E.; Newiger, S. Optimization of a thermal management system for battery electric vehicles. In Proceedings of the 14th International Conference on Ecological Vehicles and Renewable Energies, EVER, Monte-Carlo, Monaco, 8–10 May 2019; IEEE: Piscataway, NJ, USA, 2019. [Google Scholar] [CrossRef]
17. Meisel, S. Dynamic Vehicle Routing. In Anticipatory Optimization for Dynamic Decision Making. Operations Research/Computer Science Interfaces Series; Springer: New York, NY, USA, 2011; Volume 51, pp. 77–96. [Google Scholar] [CrossRef]
18. Cheng, A.M.K. Real-Time Scheduling and Schedulability Analysis. In Real-Time Systems: Scheduling, Analysis, and Verification; John Wiley and Sons: Hoboken, NJ, USA, 2002; pp. 41–85. [Google Scholar] [CrossRef]
19. Kopetz, H. Real-Time Scheduling. In Real-Time Systems. Real-Time Systems Series; Springer: Boston, MA, USA, 2011; pp. 239–258. [Google Scholar] [CrossRef]
20. Prieto, A.; Prieto, B.; Ortigosa, E.M.; Ros, E.; Pelayo, F.; Ortega, J.; Rojas, I. Neural networks: An overview of early research, current frameworks and new challenges. Neurocomputing 2016, 214, 242–268. [Google Scholar] [CrossRef]
21. Raina, R.; Madhavan, A.; Ng, A.Y. Large-scale deep unsupervised learning using graphics processors. In Proceedings of the 26th Annual International Conference on Machine Learning (ICML’09), Montreal, QC, Canada, 14–18 June 2009; ACM Press: New York, NY, USA, 2009; pp. 873–880. [Google Scholar] [CrossRef][Green Version]
22. Tank, D.W.; Hopfield, J.J. Simple ‘neural’ optimization networks: An A/D converter, signal decision circuit, and a linear programming circuit. IEEE Trans. Circuits Syst. 1986, CAS-33, 533–541. [Google Scholar] [CrossRef][Green Version]
23. Kennedy, M.P.; Chua, L.O. Unifying the Tank and Hopfield Linear Programming Circuit and the Canonical Nonlinear Programming Circuit of Chua and Lin. IEEE Trans. Circuits Syst. 1987, 34, 210–214. [Google Scholar] [CrossRef]
24. Rodriguez-Vazquez, A.; Dominguez-Castro, R.; Rueda, A.; Huertas, J.L.; Sanchez-Sinencio, E. Nonlinear Switched-Capacitor “Neural” Networks for Optimization Problems. IEEE Trans. Circuits Syst. 1990, 37, 384–398. [Google Scholar] [CrossRef]
25. Zak, S.H.; Upatising, V. Solving Linear Programming Problems with Neural Networks: A Comparative Study. IEEE Trans. Neural Netw. 1995, 6, 94–104. [Google Scholar] [CrossRef] [PubMed]
26. Malek, A.; Yari, A. Primal–dual solution for the linear programming problems using neural networks. Appl. Math. Comput. 2005, 167, 198–211. [Google Scholar] [CrossRef]
27. Liu, X.; Zhou, M. A one-layer recurrent neural network for non-smooth convex optimization subject to linear inequality constraints. Chaos Solitons Fractals 2016, 87, 39–46. [Google Scholar] [CrossRef]
28. Olkhovsky, N.; Sokolinsky, L. Visualizing Multidimensional Linear Programming Problems. In Proceedings of the Parallel Computational Technologies. PCT 2022. Communications in Computer and Information Science, Dubna, Russia, 29–31 March 2022; Sokolinsky, L., Zymbler, M., Eds.; Springer: Cham, Switzerland, 2022; Volume 1618, pp. 172–196. [Google Scholar] [CrossRef]
29. LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444. [Google Scholar] [CrossRef]
30. Lachhwani, K. Application of Neural Network Models for Mathematical Programming Problems: A State of Art Review. Arch. Comput. Methods Eng. 2020, 27, 171–182. [Google Scholar] [CrossRef]
31. Kaczmarz, S. Angenäherte Auflösung von Systemen linearer Gleichungen. Bull. Int. L’Académie Pol. Sci. Lett. Cl. Sci. Mathématiques Nat. Série A Sci. Mathématiques 1937, 35, 355–357. [Google Scholar]
32. Kaczmarz, S. Approximate solution of systems of linear equations. Int. J. Control. 1993, 57, 1269–1271. [Google Scholar] [CrossRef]
33. Cimmino, G. Calcolo approssimato per le soluzioni dei sistemi di equazioni lineari. In La Ricerca Scientifica, XVI; Series II, Anno IX, 1; RicercaSci.: Roma, Italy, 1938; pp. 326–333. [Google Scholar]
34. Gastinel, N. Linear Numerical Analysis; Academic Press: New York, NY, USA, 1971; p. ix+341. [Google Scholar]
35. Agmon, S. The relaxation method for linear inequalities. Can. J. Math. 1954, 6, 382–392. [Google Scholar] [CrossRef]
36. Motzkin, T.S.; Schoenberg, I.J. The relaxation method for linear inequalities. Can. J. Math. 1954, 6, 393–404. [Google Scholar] [CrossRef][Green Version]
37. Censor, Y.; Elfving, T. New methods for linear inequalities. Linear Algebra Appl. 1982, 42, 199–211. [Google Scholar] [CrossRef][Green Version]
38. De Pierro, A.R.; Iusem, A.N. A simultaneous projections method for linear inequalities. Linear Algebra Appl. 1985, 64, 243–253. [Google Scholar] [CrossRef][Green Version]
39. Sokolinskaya, I.; Sokolinsky, L. Revised Pursuit Algorithm for Solving Non-stationary Linear Programming Problems on Modern Computing Clusters with Manycore Accelerators. In Supercomputing. RuSCDays 2016. Communications in Computer and Information Science; Voevodin, V., Sobolev, S., Eds.; Springer: Cham, Switzerland, 2016; Volume 687, pp. 212–223. [Google Scholar] [CrossRef]
40. Sokolinskaya, I.M.; Sokolinsky, L.B. Scalability Evaluation of Cimmino Algorithm for Solving Linear Inequality Systems on Multiprocessors with Distributed Memory. Supercomput. Front. Innov. 2018, 5, 11–22. [Google Scholar] [CrossRef][Green Version]
41. Sokolinsky, L.B.; Sokolinskaya, I.M. Scalable parallel algorithm for solving non-stationary systems of linear inequalities. Lobachevskii J. Math. 2020, 41, 1571–1580. [Google Scholar] [CrossRef]
42. Gonzalez-Gutierrez, E.; Todorov, M.I. A relaxation method for solving systems with infinitely many linear inequalities. Optim. Lett. 2012, 6, 291–298. [Google Scholar] [CrossRef]
43. Vasin, V.V.; Eremin, I.I. Operators and Iterative Processes of Fejér Type. Theory and Applications; Inverse and Ill-Posed Problems Series; Walter de Gruyter: Berlin, Germany; New York, NY, USA, 2009; p. 155. [Google Scholar] [CrossRef]
44. Eremin, I.I.; Popov, L.D. Fejer processes in theory and practice: Recent results. Russ. Math. 2009, 53, 36–55. [Google Scholar] [CrossRef]
45. Nurminski, E.A. Single-projection procedure for linear optimization. J. Glob. Optim. 2016, 66, 95–110. [Google Scholar] [CrossRef]
46. Censor, Y. Can linear superiorization be useful for linear optimization problems? Inverse Probl. 2017, 33, 044006. [Google Scholar] [CrossRef][Green Version]
47. Visuthirattanamanee, R.; Sinapiromsaran, K.; Boonperm, A.A. Self-Regulating Artificial-Free Linear Programming Solver Using a Jump and Simplex Method. Mathematics 2020, 8, 356. [Google Scholar] [CrossRef][Green Version]
48. Gould, N.I. How good are projection methods for convex feasibility problems? Comput. Optim. Appl. 2008, 40, 1–12. [Google Scholar] [CrossRef][Green Version]
49. Sokolinsky, L.B. BSF: A parallel computation model for scalability estimation of iterative numerical algorithms on cluster computing systems. J. Parallel Distrib. Comput. 2021, 149, 193–206. [Google Scholar] [CrossRef]
50. Sokolinsky, L.B. BSF-skeleton: A Template for Parallelization of Iterative Numerical Algorithms on Cluster Computing Systems. MethodsX 2021, 8, 101437. [Google Scholar] [CrossRef] [PubMed]
51. Dolganina, N.; Ivanova, E.; Bilenko, R.; Rekachinsky, A. HPC Resources of South Ural State University. In Proceedings of the Parallel Computational Technologies. PCT 2022. Communications in Computer and Information Science, Dubna, Russia, 29–31 March 2022; Sokolinsky, L., Zymbler, M., Eds.; Springer: Cham, Switzerland, 2022; Volume 1618, pp. 43–55. [Google Scholar] [CrossRef]
52. Sokolinsky, L.B.; Sokolinskaya, I.M. FRaGenLP: A Generator of Random Linear Programming Problems for Cluster Computing Systems. In Proceedings of the Parallel Computational Technologies. PCT 2021. Communications in Computer and Information Science, Volgograd, Russia, 30 March–1 April 2021; Sokolinsky, L., Zymbler, M., Eds.; Springer: Cham, Switzerland, 2021; Volume 1437, pp. 164–177. [Google Scholar] [CrossRef]
53. Sokolinsky, L.B.; Sokolinskaya, I.M. VaLiPro: Linear Programming Validator for Cluster Computing Systems. Supercomput. Front. Innov. 2021, 8, 51–61. [Google Scholar] [CrossRef]
54. Gay, D.M. Electronic mail distribution of linear programming test problems. Math. Program. Soc. COAL Bull. 1985, 13, 10–12. [Google Scholar]
55. Keil, C.; Jansson, C. Computational experience with rigorous error bounds for the netlib linear programming library. Reliab. Comput. 2006, 12, 303–321. [Google Scholar] [CrossRef][Green Version]
56. Koch, T. The final NETLIB-LP results. Oper. Res. Lett. 2004, 32, 138–142. [Google Scholar] [CrossRef]
57. Deutsch, F.; Hundal, H. The rate of convergence for the cyclic projections algorithm I: Angles between convex sets. J. Approx. Theory 2006, 142, 36–55. [Google Scholar] [CrossRef]
Figure 1. Iteration execution scheme of the target stage.
Figure 2. Speedup and efficiency dependency for various dimensions on the number of worker nodes.
Table 1. Specifications of the “Tornado SUSU” computing cluster.
| Parameter | Value |
| --- | --- |
| Number of processor nodes | 480 |
| Processor | Intel Xeon X5680 (6 cores, 3.33 GHz) |
| Processors per node | 2 |
| Memory per node | 24 GB DDR3 |
| Interconnect | InfiniBand QDR (40 Gbit/s) |
| Operating system | Linux CentOS |
Table 2. Applying the apex method to the Netlib-LP problems.
| No | Name | Exact Solution | Rough Solution (Quest Stage) | Error (Quest Stage) | Refined Solution (Target Stage) | Error (Target Stage) |
| --- | --- | --- | --- | --- | --- | --- |
| 1 | adlittle | $2.25494963 \times 10^{5}$ | $3.67140280 \times 10^{5}$ | $6.28 \times 10^{-1}$ | $2.2571324 \times 10^{5}$ | $9.68 \times 10^{-4}$ |
| 2 | afiro | $-4.64753142 \times 10^{2}$ | $-4.55961488 \times 10^{2}$ | $1.89 \times 10^{-2}$ | $-4.6475310 \times 10^{2}$ | $8.61 \times 10^{-9}$ |
| 3 | blend | $-3.08121498 \times 10^{1}$ | $-3.60232513 \times 10^{0}$ | $8.83 \times 10^{-1}$ | $-3.0811018 \times 10^{1}$ | $3.19 \times 10^{-5}$ |
| 4 | fit1d | $-9.14637809 \times 10^{3}$ | $-3.49931014 \times 10^{3}$ | $6.17 \times 10^{-1}$ | $-9.1463386 \times 10^{3}$ | $8.77 \times 10^{-7}$ |
| 5 | kb2 | $-1.74990012 \times 10^{3}$ | $-1.39603193 \times 10^{3}$ | $2.02 \times 10^{-1}$ | $-1.6879152 \times 10^{3}$ | $3.54 \times 10^{-2}$ |
| 6 | recipe | $-2.66616000 \times 10^{2}$ | $-2.66107349 \times 10^{2}$ | $1.91 \times 10^{-3}$ | $-2.6660404 \times 10^{2}$ | $2.23 \times 10^{-5}$ |
| 7 | sc50a | $-6.45750770 \times 10^{1}$ | $-5.58016335 \times 10^{1}$ | $1.36 \times 10^{-1}$ | $-6.4568167 \times 10^{1}$ | $1.06 \times 10^{-4}$ |
| 8 | sc50b | $-7.00000000 \times 10^{1}$ | $-6.92167246 \times 10^{1}$ | $1.12 \times 10^{-2}$ | $-6.9990792 \times 10^{1}$ | $1.32 \times 10^{-4}$ |
| 9 | sc105 | $-5.22020612 \times 10^{1}$ | $-4.28785710 \times 10^{1}$ | $1.79 \times 10^{-1}$ | $-5.1837995 \times 10^{1}$ | $6.97 \times 10^{-3}$ |
| 10 | share2b | $-4.15732240 \times 10^{2}$ | $-4.28792528 \times 10^{2}$ | $3.14 \times 10^{-2}$ | $-4.1572001 \times 10^{2}$ | $2.40 \times 10^{-5}$ |
 Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.