Article

Newton-like Polynomial-Coded Distributed Computing for Numerical Stability

Mingjun Dai, Xiong Lai, Yanli Tong and Bingchun Li
1 School of Computer Science and Technology, Kashi University, Kashi 844000, China
2 Guangdong Provincial Engineering Center for Ubiquitous Computing and Intelligent Networking, College of Electronic and Information Engineering, Shenzhen University, Shenzhen 518060, China
* Author to whom correspondence should be addressed.
Symmetry 2023, 15(7), 1372; https://doi.org/10.3390/sym15071372
Submission received: 11 May 2023 / Revised: 27 June 2023 / Accepted: 30 June 2023 / Published: 6 July 2023
(This article belongs to the Special Issue New Advances in New-Generation Communication and Symmetry)

Abstract

For coded distributed computing (CDC), polynomial coding is a prevalent encoding method (the resulting scheme is called Poly-CDC). It suffers from poor numerical stability because the coefficient matrix that must be inverted during decoding is a Vandermonde matrix, whose condition number increases exponentially with the size of the matrix, or equivalently with the number of parallel worker nodes. To improve the numerical stability, especially for large networks, we propose a Newton-like polynomial code (NLPC)-based CDC (NLPC-CDC), with designs dedicated to both matrix–vector and matrix–matrix multiplications. We provide an associated proof that the constructed code possesses the (n, k)-symmetrical combination property (CP), where symmetrical means the worker nodes have identical computation volume, and CP means that the k symmetrical original computing tasks are encoded into n (n ≥ k) symmetrically coded computing tasks such that any k results from the n coded computing tasks can recover the intended computing results. Extensive numerical studies verify the significant numerical stability improvement of our proposed NLPC-CDC over Poly-CDC.

1. Introduction

Matrix multiplications (especially large ones) are widely used in data mining, machine learning, etc. [1]. A single computing node may not be able to handle large or frequent matrix multiplications [2,3,4,5,6]. The development of 5G/6G communication techniques [7] and network virtualization [8] provides efficient and reliable data delivery, and, especially in future 6G communication networks, the strict requirements on data rate and computational complexity demand frequent processing of large matrices [9,10]; distributed computing (DC), which partitions computing tasks, can effectively alleviate this problem. By introducing proper redundant computation [11,12], coded distributed computing (CDC) effectively tackles the straggler problem of DC [13,14]. Prevalent CDC designs aim at obtaining the (n, k)-symmetrical combination property (CP): k original symmetrical computing tasks are encoded into n (n ≥ k) coded computing tasks, and any k results from the n coded computing tasks can recover the intended computing results [15], where symmetrical means the worker nodes have identical computation volume. Under this symmetrical assumption, the worker nodes can be viewed as having identical but independent capabilities, and the design focus of CDC can be directed towards achieving the (n, k) CP.
Prevalent encoding methods for CDC are based on polynomial encoding [16,17,18] and are therefore called Poly-CDC, where the work in [16] established the framework for Poly-CDC, while the works in [17,18] reported variants within this framework. More specifically, the authors of [16] designed the fundamental encoding coefficients that laid the actual framework, and the authors of [17,18] modified the encoding coefficients within this framework. The drawback lies in poor numerical stability during the associated decoding stage: the coefficient matrix of the polynomial method is a Vandermonde matrix whose condition number increases exponentially with the size of the matrix [19,20,21], and the decoding process inevitably involves polynomial interpolation, which eventually reduces to the inversion of this coefficient matrix. Note that the matrix may be large in networks built with network slicing techniques [22], and a large condition number means that a small perturbation before decoding causes a much larger perturbation after decoding [23,24], thus affecting the robustness of the decoding results. Consequently, large networks suffer from extremely poor numerical stability.
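To make the stability issue concrete, the following Python sketch (our own illustration, not code from the cited works) measures the condition number of a Vandermonde coefficient matrix built from random interpolation points in [0, 3], the same interval used later in Section 4, for a few matrix sizes k:

```python
import numpy as np

rng = np.random.default_rng(0)

for k in (5, 10, 15, 20):
    # k random interpolation points in [0, 3]
    points = rng.uniform(0.0, 3.0, size=k)
    # Vandermonde coefficient matrix of the kind inverted in Poly-CDC decoding
    V = np.vander(points, N=k, increasing=True)
    print(f"k = {k:2d}, condition number = {np.linalg.cond(V):.3e}")
```

The reported condition number typically grows by many orders of magnitude as k increases, which is the behavior analyzed in [19,20,21].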
To obtain higher numerical stability, we propose a CDC scheme based on a Newton-like polynomial code (NLPC), referred to as NLPC-CDC. The encoding is designed for both matrix–vector and matrix–matrix multiplications. An associated proof that the constructed code possesses the (n, k)-symmetrical combination property (CP) is provided. Extensive numerical studies verify that this method improves numerical stability significantly.
Preliminaries are given in Section 2. Afterwards, our proposed NLPC-CDC is elaborated upon in Section 3. Numerical comparisons are executed in Section 4. Finally, conclusions are drawn in Section 5.

2. Preliminaries

We first formulate the matrix–vector and matrix–matrix multiplication within the CDC framework. Afterwards, we briefly describe the Newton interpolation polynomial (NIP) encoding method.

2.1. Matrix–Vector Multiplication within the CDC Framework

The first task is to calculate the matrix–vector multiplication Ax, where matrix A ∈ R^{H×W}, column vector x = [x_1, x_2, ..., x_W]^T ∈ R^{W×1}, and R stands for the real number field. For this task, the computing system consists of a master node and N symmetrical worker nodes, as shown in Figure 1, where symmetrical means the worker nodes have identical computation capabilities. Under this symmetrical assumption, codes that have the symmetrical CP can be brought into play to tackle stragglers.
As H is very large, it is necessary to partition matrix A into m sub-matrices, A_0 ∈ R^{(H/m)×W} through A_{m-1} ∈ R^{(H/m)×W}. The m sub-matrices are then encoded into N matrices, denoted C_1 through C_N.
The master node delivers vector x to all worker nodes. Worker node i ∈ {1, 2, ..., N} calculates the matrix–vector multiplication C_i x and delivers the calculation result back to the master node. The master node can then execute decoding to recover the intended calculation result, provided that enough results have been collected.
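As a minimal illustration of the partitioning step (a toy example with sizes of our own choosing, not the paper's code), the sketch below splits A into m row blocks and checks that stacking the per-block products A_i x recovers Ax; in an actual CDC system, the C_i delivered to the workers are encoded combinations of these blocks rather than the blocks themselves:

```python
import numpy as np

rng = np.random.default_rng(1)
H, W, m = 12, 4, 3                 # toy sizes; H divisible by m
A = rng.standard_normal((H, W))
x = rng.standard_normal((W, 1))

# Partition A into m row blocks A_0, ..., A_{m-1}, each of size (H/m) x W
blocks = np.split(A, m, axis=0)

# Each block product A_i @ x could be computed by a different worker;
# stacking the results recovers the full product A @ x
partial = [Ai @ x for Ai in blocks]
assert np.allclose(np.vstack(partial), A @ x)
```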

2.2. Matrix–Matrix Multiplication within the CDC Framework

The next task is to calculate the matrix–matrix multiplication AB, where A ∈ R^{H×W} and B ∈ R^{W×L}. For this task, the computing system consists of a master node and N worker nodes, as shown in Figure 2. Matrix A is split horizontally into m sub-matrices, A_0 ∈ R^{(H/m)×W} through A_{m-1} ∈ R^{(H/m)×W}. Matrix B is split vertically into q sub-matrices, B_0 ∈ R^{W×(L/q)} through B_{q-1} ∈ R^{W×(L/q)}. We suppose H and L are divisible by m and q, respectively (otherwise, A and B are padded with zeros so that the number of rows of A and the number of columns of B are multiples of m and q).
The intended result matrix C is
$$C = AB = \begin{bmatrix} A_0 B_0 & A_0 B_1 & \cdots & A_0 B_{q-1} \\ A_1 B_0 & A_1 B_1 & \cdots & A_1 B_{q-1} \\ \vdots & \vdots & \ddots & \vdots \\ A_{m-1} B_0 & A_{m-1} B_1 & \cdots & A_{m-1} B_{q-1} \end{bmatrix} \triangleq \begin{bmatrix} C_0 & C_1 & \cdots & C_{q-1} \\ C_q & C_{q+1} & \cdots & C_{2q-1} \\ \vdots & \vdots & \ddots & \vdots \\ C_{(m-1)q} & C_{(m-1)q+1} & \cdots & C_{mq-1} \end{bmatrix}. \tag{1}$$
Calculating C is equivalent to calculating C_0 through C_{mq-1} simultaneously. If we set the recovery threshold K = mq and directly allocate C_0 through C_{mq-1} to K symmetrical worker nodes (worker node i calculates C_i), the calculation will encounter the straggler problem.
To tackle the straggler problem, an encoding strategy is applied to A_0 through A_{m-1} to obtain N (N > K) encoded sub-matrices, denoted Ã_0 through Ã_{N-1}. Similarly, B_0 through B_{q-1} are encoded into B̃_0 through B̃_{N-1}.
The master node delivers the encoded pair (Ã_i, B̃_i) to worker node i ∈ {1, 2, ..., N}. Worker node i then calculates the matrix–matrix multiplication C̃_i = Ã_i B̃_i and delivers the calculation result back to the master node, which will then execute decoding to recover the original intended calculation result.

2.3. Newton Interpolation Polynomial (NIP) Encoding

As illustrated previously, the symmetrical assumption leads the design to focus on encoding with the CP, which can be directly applied in CDC systems. NIP may serve as a candidate for this encoding design.
NIP improves upon the Lagrange interpolation method by reducing the number of multiplication and division operations. In addition, its basis is not as complex as the Lagrange basis functions, and the solving process is much simpler than that of monomial-based methods.
We suppose there are n + 1 real interpolation points x_0, ..., x_n in the interval [a, b], and the selected basis is
$$\pi_j(x) = \prod_{k=0}^{j-1} (x - x_k) = (x - x_0)(x - x_1)\cdots(x - x_{j-1}), \tag{2}$$
for j = 0, 1, ..., n. This basis has the property π_j(x_i) = 0 for i < j.
The polynomial obtained by linear combination of the π_j(x) is
$$\sum_{j=0}^{n} a_j \pi_j(x) = a_0 + a_1 (x - x_0) + a_2 (x - x_0)(x - x_1) + \cdots + a_n (x - x_0)(x - x_1)\cdots(x - x_{n-1}), \tag{3}$$
where the a_j are the undetermined coefficients. We let N_n(x) denote the above expression, which is called the Newton interpolation polynomial (NIP).
By substituting x_0, ..., x_n into the above Newton polynomial for interpolation and re-arranging into matrix form, we obtain the lower triangular system shown in (4).
$$\begin{bmatrix} \pi_0(x_0) & \pi_1(x_0) & \cdots & \pi_n(x_0) \\ \pi_0(x_1) & \pi_1(x_1) & \cdots & \pi_n(x_1) \\ \vdots & \vdots & & \vdots \\ \pi_0(x_n) & \pi_1(x_n) & \cdots & \pi_n(x_n) \end{bmatrix} \begin{bmatrix} a_0 \\ a_1 \\ \vdots \\ a_n \end{bmatrix} = \begin{bmatrix} 1 & 0 & \cdots & 0 \\ 1 & x_1 - x_0 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 1 & x_n - x_0 & \cdots & (x_n - x_0)(x_n - x_1)\cdots(x_n - x_{n-1}) \end{bmatrix} \begin{bmatrix} a_0 \\ a_1 \\ \vdots \\ a_n \end{bmatrix}. \tag{4}$$
The objective is to solve for the undetermined coefficients a_0, ..., a_n. It can be observed from the above formula that the interpolation coefficient matrix of the Newton polynomial is a lower triangular matrix.
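The sketch below (variable names are our own; it only illustrates the lower triangular structure of (4) and is not code from the paper) builds the Newton basis matrix for a small set of points, solves for the coefficients a_0, ..., a_n, and evaluates the resulting N_n(x) at a new point:

```python
import numpy as np

def newton_basis_matrix(points):
    """Lower triangular matrix with entries pi_j(x_i) = prod_{k<j} (x_i - x_k)."""
    n = len(points)
    P = np.zeros((n, n))
    for i, xi in enumerate(points):
        P[i, 0] = 1.0
        for j in range(1, i + 1):
            P[i, j] = P[i, j - 1] * (xi - points[j - 1])
    return P

# Fit a Newton interpolation polynomial to samples of f(x) = x**3
points = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
values = points ** 3
P = newton_basis_matrix(points)
a = np.linalg.solve(P, values)   # lower triangular system; forward substitution would also work

# Evaluate N_n(x) at a new point using the Newton basis
x_new, acc, basis = 0.75, 0.0, 1.0
for j, aj in enumerate(a):
    acc += aj * basis
    basis *= (x_new - points[j])
print(acc, x_new ** 3)           # the two values agree up to rounding
```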

3. Proposed NLPC-Based CDC (NLPC-CDC)

The processing is divided into matrix–vector and matrix–matrix multiplications.

3.1. NLPC-CDC for Matrix–Vector Multiplication

By applying NIP encoding to A_0 through A_{m-1}, we obtain
$$\tilde{C}(x) = \sum_{i=0}^{m-1} A_i \varphi_i(x), \tag{5}$$
where φ_0(x) = 1 and φ_i(x) = (x − x_{i−1}) φ_{i−1}(x). We expand C̃(x) to
$$\tilde{C}(x) = A_0 + A_1 (x - x_0) + A_2 (x - x_0)(x - x_1) + \cdots + A_{m-1} (x - x_0)(x - x_1)\cdots(x - x_{m-2}). \tag{6}$$
The master node delivers the encoded package C̃(x) and column vector x to the worker nodes. Worker node i evaluates C̃(x) at x_i and performs the multiplication C̃(x_i) x. The matrix form of the encoded results is shown in (7).
$$\begin{bmatrix} \tilde{C}(x_0) \\ \tilde{C}(x_1) \\ \tilde{C}(x_2) \\ \vdots \\ \tilde{C}(x_{m-1}) \\ \vdots \\ \tilde{C}(x_n) \end{bmatrix} x = \begin{bmatrix} 1 & 0 & 0 & \cdots & 0 \\ 1 & x_1 - x_0 & 0 & \cdots & 0 \\ 1 & x_2 - x_0 & (x_2 - x_1)(x_2 - x_0) & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 1 & x_{m-1} - x_0 & (x_{m-1} - x_1)(x_{m-1} - x_0) & \cdots & (x_{m-1} - x_0)\cdots(x_{m-1} - x_{m-2}) \\ \vdots & \vdots & \vdots & & \vdots \\ 1 & x_n - x_0 & (x_n - x_1)(x_n - x_0) & \cdots & (x_n - x_0)\cdots(x_n - x_{m-2}) \end{bmatrix} \begin{bmatrix} A_0 \\ A_1 \\ \vdots \\ A_{m-1} \end{bmatrix} x. \tag{7}$$
It can be seen from the above formula that, to solve Ax, only m of the returned values are needed: the corresponding m rows of the interpolation coefficient matrix are inverted and multiplied against the returned results to obtain the final intended result. The interpolation coefficient matrix over all n + 1 points is a sparse trapezoid whose first m rows form a lower triangular matrix, and the recovery threshold is K = m.
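A minimal end-to-end sketch of this matrix–vector scheme follows, with toy sizes and helper names of our own choosing (an illustration under our reading of the scheme, not the authors' implementation): each worker i holds the encoded block C̃(x_i), returns C̃(x_i) x, and the master recovers Ax from any m returned results by inverting the corresponding m × m sub-matrix of the interpolation coefficient matrix in (7):

```python
import numpy as np

rng = np.random.default_rng(2)
H, W, m, N = 12, 4, 3, 5                 # toy sizes; recovery threshold K = m, N workers
A = rng.standard_normal((H, W))
x = rng.standard_normal((W, 1))
points = rng.uniform(0.0, 3.0, size=N)   # one interpolation point per worker

blocks = np.split(A, m, axis=0)          # A_0, ..., A_{m-1}

def newton_basis_row(t, pts, m):
    """[phi_0(t), ..., phi_{m-1}(t)] with phi_0 = 1 and phi_i(t) = (t - x_{i-1}) phi_{i-1}(t)."""
    row = np.ones(m)
    for j in range(1, m):
        row[j] = row[j - 1] * (t - pts[j - 1])
    return row

# Encoding: worker i holds C~(x_i) = sum_j phi_j(x_i) A_j and computes C~(x_i) @ x
encoded = [sum(phi * Aj for phi, Aj in zip(newton_basis_row(t, points, m), blocks))
           for t in points]
results = {i: enc @ x for i, enc in enumerate(encoded)}

# Decoding from any m returned results (here workers 1, 3 and 4)
returned = [1, 3, 4]
G = np.vstack([newton_basis_row(points[i], points, m) for i in returned])   # m x m
R = np.vstack([results[i].T for i in returned])                             # m x (H/m)
S = np.linalg.solve(G, R)                                                   # rows are (A_j x)^T
assert np.allclose(S.reshape(-1, 1), A @ x)
```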

3.2. NLPC-CDC for Matrix–Matrix Multiplication

We suppose H = L; matrices A and B can both be partitioned into m blocks in the following manner:
$$A = \begin{bmatrix} A_0 \\ A_1 \\ \vdots \\ A_{m-1} \end{bmatrix}, \qquad B = \begin{bmatrix} B_0 & B_1 & \cdots & B_{m-1} \end{bmatrix}. \tag{8}$$
Applying encoding over these two matrices in the following manner gives:
$$\tilde{A} = \sum_{i=0}^{m-1} A_i \varphi_i(x), \tag{9}$$
$$\tilde{B} = \sum_{j=0}^{m-1} B_j \varphi_{jm}(x), \tag{10}$$
where φ_0(x) = 1 and φ_i(x) = (x − x_{i−1}) φ_{i−1}(x). Thus, the coded matrices have the following expansions:
$$\tilde{A} = \sum_{i=0}^{m-1} A_i \varphi_i(x) = A_0 + A_1 (x - x_0) + A_2 (x - x_0)(x - x_1) + \cdots + A_{m-1} (x - x_0)(x - x_1)\cdots(x - x_{m-2}), \tag{11}$$
$$\tilde{B} = \sum_{j=0}^{m-1} B_j \varphi_{jm}(x) = B_0 + B_1 (x - x_0)(x - x_1)\cdots(x - x_{m-1}) + B_2 (x - x_0)(x - x_1)\cdots(x - x_{2m-1}) + \cdots + B_{m-1} (x - x_0)(x - x_1)\cdots(x - x_{m(m-1)-1}). \tag{12}$$
The master node delivers the encoded pair ( A ˜ , B ˜ ) to the worker nodes. Each worker node calculates
$$\tilde{C} = \tilde{A}\tilde{B} = \left( \sum_{i=0}^{m-1} A_i \varphi_i(x) \right) \left( \sum_{j=0}^{m-1} B_j \varphi_{jm}(x) \right) = \sum_{i=0}^{m-1} \sum_{j=0}^{m-1} A_i B_j \varphi_i(x) \varphi_{jm}(x). \tag{13}$$
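Before turning to decoding, the sketch below (toy sizes and names of our own choosing, not the authors' code) illustrates the encoding and the per-worker computation; as a sanity check, evaluating at x_0 gives C̃(x_0) = A_0 B_0, matching the first line of (16) below:

```python
import numpy as np

rng = np.random.default_rng(3)
H, W, m = 6, 4, 2                      # toy sizes; A split into m row blocks, B into m column blocks
A = rng.standard_normal((H, W))
B = rng.standard_normal((W, H))        # H = L in this toy example
points = rng.uniform(0.0, 3.0, size=m * m + 2)

A_blocks = np.split(A, m, axis=0)      # A_0, ..., A_{m-1}
B_blocks = np.split(B, m, axis=1)      # B_0, ..., B_{m-1}

def phi(i, t, pts):
    """phi_0 = 1, phi_i(t) = (t - x_0)(t - x_1)...(t - x_{i-1})."""
    out = 1.0
    for k in range(i):
        out *= (t - pts[k])
    return out

def encode_pair(t, pts):
    A_tilde = sum(phi(i, t, pts) * A_blocks[i] for i in range(m))
    B_tilde = sum(phi(j * m, t, pts) * B_blocks[j] for j in range(m))
    return A_tilde, B_tilde

# The worker assigned interpolation point x_0 computes C~ = A~ B~
A_t, B_t = encode_pair(points[0], points)
C_tilde = A_t @ B_t

# Sanity check: at x_0 every phi_i with i >= 1 vanishes, so C~(x_0) = A_0 B_0
assert np.allclose(C_tilde, A_blocks[0] @ B_blocks[0])
```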
We divide (13) into m parts; for example, the first part can be written as shown in (14).
$$\begin{bmatrix} 1 & (x - x_0) & (x - x_0)(x - x_1) & \cdots & (x - x_0)(x - x_1)\cdots(x - x_{m-2}) \end{bmatrix} \begin{bmatrix} A_0 B_0 \\ A_1 B_0 \\ A_2 B_0 \\ \vdots \\ A_{m-1} B_0 \end{bmatrix} = M_1 c_1. \tag{14}$$
Note that (13) can be re-expressed as
$$\tilde{C} = M c = \begin{bmatrix} M_1 & M_2 & M_3 & \cdots & M_m \end{bmatrix} \begin{bmatrix} c_1 \\ c_2 \\ c_3 \\ \vdots \\ c_m \end{bmatrix}, \tag{15}$$
where M_1 denotes the interpolation coefficient term of the first m terms of C̃, and c_1 denotes the coefficients to be calculated for the first m terms of C̃. The remaining parts are processed similarly.
We can thus re-write (15) as C̃ = Σ_{i=1}^{m} M_i c_i. By interpolating C̃ at x_0, ..., x_n, we obtain
$$\begin{aligned} \tilde{C}(x_0) &= A_0 B_0, \\ \tilde{C}(x_1) &= A_0 B_0 + A_1 B_0 (x_1 - x_0), \\ &\;\;\vdots \\ \tilde{C}(x_n) &= A_0 B_0 + A_1 B_0 (x_n - x_0) + A_2 B_0 (x_n - x_0)(x_n - x_1) + \cdots \\ &\quad + A_{m-1} B_{m-1} (x_n - x_0)^2 (x_n - x_1)^2 \cdots (x_n - x_{m-2})^2 \cdots (x_n - x_{m(m-1)-1}). \end{aligned} \tag{16}$$
We divide the n + 1 interpolation points x_0, ..., x_n into m + 1 sets X_j, j ∈ {1, 2, ..., m, m̄}. Each of X_1, ..., X_m contains m points: the points of X_1 are x_0, ..., x_{m-1}, and so on, up to X_m, whose points are x_{m(m-1)}, ..., x_{m²-1}; the remaining n − m² + 1 points (x_{m²}, ..., x_n) are placed in the last set X_{m̄}. Let M_{ij} denote the result of interpolating M_i at the points of X_j.
After all interpolated results are processed in the above manner, they can be written in matrix form as shown in (17). Thus, similar to the Newton polynomial, the matrix for solving the interpolation coefficients can also be treated as a trapezoid; we call the resulting code the Newton-like interpolation polynomial (NLIP). We have the following theorem, whose proof is deferred to Appendix A, as it is lengthy and is not needed to follow the overall idea.
$$\begin{bmatrix} \tilde{C}(x_0) \\ \vdots \\ \tilde{C}(x_{m-1}) \\ \vdots \\ \tilde{C}(x_{m^2-1}) \\ \vdots \\ \tilde{C}(x_n) \end{bmatrix} = \begin{bmatrix} \tilde{C}(X_1) \\ \tilde{C}(X_2) \\ \tilde{C}(X_3) \\ \vdots \\ \tilde{C}(X_{m-1}) \\ \tilde{C}(X_m) \\ \tilde{C}(X_{\bar{m}}) \end{bmatrix} = \begin{bmatrix} M_{11} & 0 & \cdots & & & 0 \\ M_{12} & M_{22} & 0 & \cdots & & 0 \\ M_{13} & M_{23} & M_{33} & 0 & \cdots & 0 \\ \vdots & & & \ddots & & \vdots \\ M_{1(m-1)} & M_{2(m-1)} & M_{3(m-1)} & \cdots & M_{(m-1)(m-1)} & 0 \\ M_{1m} & M_{2m} & M_{3m} & \cdots & M_{(m-1)m} & M_{mm} \\ M_{1\bar{m}} & M_{2\bar{m}} & M_{3\bar{m}} & \cdots & M_{(m-1)\bar{m}} & M_{m\bar{m}} \end{bmatrix} \begin{bmatrix} c_1 \\ c_2 \\ c_3 \\ \vdots \\ c_{m-1} \\ c_m \end{bmatrix}. \tag{17}$$
Theorem 1.
For any n (n ≥ 2), consider the NIP matrix with n rows and k columns shown in (18). Randomly selecting k rows from this matrix forms a k × k sub-matrix. This sub-matrix is invertible, meaning the Newton interpolation matrix has the (n, k)-symmetrical CP.
$$\begin{bmatrix} 1 & 0 & 0 & \cdots & 0 \\ 1 & x_1 - x_0 & 0 & \cdots & 0 \\ 1 & x_2 - x_0 & (x_2 - x_1)(x_2 - x_0) & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 1 & x_{k-1} - x_0 & (x_{k-1} - x_1)(x_{k-1} - x_0) & \cdots & (x_{k-1} - x_0)\cdots(x_{k-1} - x_{k-2}) \\ \vdots & \vdots & \vdots & & \vdots \\ 1 & x_n - x_0 & (x_n - x_1)(x_n - x_0) & \cdots & (x_n - x_0)\cdots(x_n - x_{k-2}) \end{bmatrix}. \tag{18}$$
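Theorem 1 can also be checked numerically. The sketch below (our own check with arbitrary toy parameters) builds the n × k NIP matrix of (18) from random points and verifies that the determinant of a randomly selected k × k sub-matrix remains non-zero:

```python
import numpy as np

rng = np.random.default_rng(4)
n, k = 12, 5
points = rng.uniform(0.0, 3.0, size=n)     # distinct interpolation points x_0, ..., x_{n-1}

# Build the n x k NIP matrix of (18): entry (i, j) is phi_j(x_i)
P = np.ones((n, k))
for j in range(1, k):
    P[:, j] = P[:, j - 1] * (points - points[j - 1])

# Randomly pick k rows many times and record the determinant of the k x k sub-matrix
dets = []
for _ in range(1000):
    rows = rng.choice(n, size=k, replace=False)
    dets.append(abs(np.linalg.det(P[rows])))
print("smallest |det| over 1000 random row selections:", min(dets))   # strictly positive
```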

4. Numerical Study

Experiments were divided into matrix–vector and matrix–matrix multiplication, and both the condition number and the relative error were compared between the prevalent Poly-CDC and our proposed NLPC-CDC. Since the work in [16] is the fundamental Poly-CDC scheme and our proposed NLPC-CDC is likewise a fundamental scheme, we compare these two; this lays the foundation for future comparisons between their respective variants.

4.1. Matrix–Vector Multiplication

Condition number: The sizes of matrix A and column vector x are 5040 × 20 and 20 × 1, respectively. We set the number of blocks m from 8 to 20. Each interpolation point is a random number in [0, 3]. Averaging over 50 realizations, the condition numbers of Poly-CDC and NLPC-CDC versus m are plotted in Figure 3 for N = 2K and N = 10K, respectively. The results show that, compared to Poly-CDC, NLPC-CDC reduces the condition number by over 10^10 and 10^8 for the scenarios N = 2K and N = 10K, respectively.
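For reference, the sketch below shows how such a comparison could be reproduced, under our reading that the Poly-CDC decoding matrix is the Vandermonde matrix evaluated at the K returned points and that the NLPC-CDC decoding matrix is the corresponding K × K sub-matrix of the interpolation coefficient matrix in (7); this is an assumption made for illustration, not the authors' code, and only the decoding matrices are formed since the condition number does not depend on A or x:

```python
import numpy as np

rng = np.random.default_rng(5)

def newton_matrix(eval_pts, base_pts, k):
    """Rows [phi_0(t), ..., phi_{k-1}(t)] for each t in eval_pts."""
    P = np.ones((len(eval_pts), k))
    for j in range(1, k):
        P[:, j] = P[:, j - 1] * (eval_pts - base_pts[j - 1])
    return P

for m in (8, 12, 16, 20):
    K = m                          # recovery threshold for matrix-vector multiplication
    N = 2 * K                      # number of worker nodes (the N = 2K scenario)
    cond_poly, cond_nlpc = [], []
    for _ in range(50):            # average over 50 realizations, as in the paper
        pts = rng.uniform(0.0, 3.0, size=N)
        returned = rng.choice(N, size=K, replace=False)     # indices of the K fastest workers
        V = np.vander(pts[returned], N=K, increasing=True)  # Poly-CDC decoding matrix (assumed)
        G = newton_matrix(pts[returned], pts, K)            # NLPC-CDC decoding matrix (assumed)
        cond_poly.append(np.linalg.cond(V))
        cond_nlpc.append(np.linalg.cond(G))
    print(f"m = {m:2d}: Poly-CDC {np.mean(cond_poly):.2e}, NLPC-CDC {np.mean(cond_nlpc):.2e}")
```

The averaged condition numbers produced by this sketch can then be compared against Figure 3.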
Relative error: The sizes of matrix A and column vector x are 900 × 20 and 20 × 1, respectively. We set m to range from 10 to 30. The relative errors of the results are plotted in Figure 4. The results show that NLPC-CDC significantly outperforms Poly-CDC.

4.2. Matrix–Matrix Multiplication

Condition number: The sizes of matrices A and B are 2520 × 10 and 10 × 2520, respectively. We set the number of blocks m from four to nine. Each interpolation point is a random number in [0, 3]. Averaging over 50 realizations, the condition numbers of Poly-CDC and NLPC-CDC versus m are plotted in Figure 5 for N = 2K and N = 10K, respectively. The results show that the condition number of Poly-CDC increases rapidly once the number of blocks exceeds six, while the condition number of NLPC-CDC grows very slowly in both scenarios, N = 2K and N = 10K, indicating a significant improvement of NLPC-CDC over Poly-CDC.
Relative error: The relative errors of the results are plotted in Figure 6. The results show that NLPC-CDC significantly outperforms Poly-CDC.

5. Conclusions

To improve numerical stability, a novel NLPC-based CDC was proposed. Detailed designs were presented for both matrix–vector and matrix–matrix multiplications. An associated proof that the constructed code possesses the (n, k)-symmetrical combination property (CP) was provided. A series of numerical studies verified that our proposed NLPC-CDC significantly outperforms Poly-CDC in terms of the achieved condition number and relative error, and the improvement was over 10^4 in all cases. This work adopted random interpolation points, without careful selection of the real interpolation points. Future work may therefore consider selecting appropriate interpolation points to obtain the best numerical results; such variants may outperform the Poly-CDC variants [17,18]. In addition, as the encoding procedure is complicated and tedious, we may consider designing a simpler encoding method that maintains numerical stability.

Author Contributions

Conceptualization, M.D., Y.T. and B.L.; Methodology, M.D. and X.L.; Validation, X.L. and Y.T.; Investigation, X.L.; Writing—original draft, M.D.; Writing—review & editing, B.L.; Supervision, M.D. and B.L.; Project administration, M.D. and B.L.; Funding acquisition, M.D. and B.L. All authors have read and agreed to the published version of the manuscript.

Funding

The research was jointly supported by research grants from the Natural Science Foundation of China (62071304), the Guangdong Basic and Applied Basic Research Foundation (2020A1515010381), the Basic Research Foundation of Shenzhen City (20200826152915001, 20220809155455002), the Natural Science Foundation of Shenzhen University (00002501), the Program of the Science and Technology Bureau of Kashi (No. KS2022083), and the Education Department of Xinjiang Uygur Autonomous Region (XJEDU2022P080).

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Proof of Theorem 1.
A matrix is invertible if and only if its determinant is non-zero, so it suffices to show that the determinant of an arbitrary square sub-matrix formed by the selected rows is non-zero. We prove this by mathematical induction.
(1)
We first consider the case k = 2. For any two rows corresponding to distinct points x_i and x_j, the determinant is
$$\begin{vmatrix} 1 & x_i - x_0 \\ 1 & x_j - x_0 \end{vmatrix} = x_j - x_i \neq 0, \quad i \neq j.$$
In particular,
$$\begin{vmatrix} 1 & 0 \\ 1 & x_1 - x_0 \end{vmatrix} = x_1 - x_0 \neq 0,$$
and hence the determinant is non-zero for k = 2.
(2)
We assume that the determinant is non-zero for an arbitrary (k − 1) × (k − 1) Newton-interpolation matrix, namely
$$\begin{vmatrix} 1 & x_i - x_0 & (x_i - x_0)(x_i - x_1) & \cdots & (x_i - x_0)(x_i - x_1)\cdots(x_i - x_{k-3}) \\ 1 & x_j - x_0 & (x_j - x_0)(x_j - x_1) & \cdots & (x_j - x_0)(x_j - x_1)\cdots(x_j - x_{k-3}) \\ \vdots & & & & \vdots \\ 1 & x_k - x_0 & (x_k - x_0)(x_k - x_1) & \cdots & (x_k - x_0)(x_k - x_1)\cdots(x_k - x_{k-3}) \\ 1 & x_p - x_0 & (x_p - x_0)(x_p - x_1) & \cdots & (x_p - x_0)(x_p - x_1)\cdots(x_p - x_{k-3}) \end{vmatrix} = D_{k-1} \neq 0,$$
where x_i, x_j, x_k, ..., x_p take values at random from x_0, ..., x_n. We let Q denote the matrix inside the above determinant.
(3)
We prove that the determinant of an arbitrary k × k Newton-interpolation matrix is non-zero.
We randomly choose an interpolation point x' from x_0, ..., x_n, distinct from the points x_i, x_j, x_k, ..., x_p of Q. We then add one row and one column to Q in the following manner:
$$\begin{bmatrix} 1 & x' - x_0 & (x' - x_0)(x' - x_1) & \cdots & (x' - x_0)\cdots(x' - x_{k-3}) & (x' - x_0)\cdots(x' - x_{k-2}) \\ & & & & & (x_i - x_0)(x_i - x_1)\cdots(x_i - x_{k-2}) \\ & & & & & (x_j - x_0)(x_j - x_1)\cdots(x_j - x_{k-2}) \\ & & Q & & & \vdots \\ & & & & & (x_k - x_0)(x_k - x_1)\cdots(x_k - x_{k-2}) \\ & & & & & (x_p - x_0)(x_p - x_1)\cdots(x_p - x_{k-2}) \end{bmatrix}$$
$$= \begin{bmatrix} 1 & x' - x_0 & (x' - x_0)(x' - x_1) & \cdots & (x' - x_0)\cdots(x' - x_{k-3}) & (x' - x_0)\cdots(x' - x_{k-2}) \\ 1 & x_i - x_0 & (x_i - x_0)(x_i - x_1) & \cdots & (x_i - x_0)\cdots(x_i - x_{k-3}) & (x_i - x_0)\cdots(x_i - x_{k-2}) \\ 1 & x_j - x_0 & (x_j - x_0)(x_j - x_1) & \cdots & (x_j - x_0)\cdots(x_j - x_{k-3}) & (x_j - x_0)\cdots(x_j - x_{k-2}) \\ \vdots & & & & & \vdots \\ 1 & x_k - x_0 & (x_k - x_0)(x_k - x_1) & \cdots & (x_k - x_0)\cdots(x_k - x_{k-3}) & (x_k - x_0)\cdots(x_k - x_{k-2}) \\ 1 & x_p - x_0 & (x_p - x_0)(x_p - x_1) & \cdots & (x_p - x_0)\cdots(x_p - x_{k-3}) & (x_p - x_0)\cdots(x_p - x_{k-2}) \end{bmatrix}.$$
Through elementary column operations on this matrix, which leave the determinant unchanged, we can eliminate every entry of the first row except for the first element, obtaining
$$\begin{bmatrix} 1 & 0 & 0 & \cdots & 0 \\ 1 & x_i - x' & (x_i - x')(x_i + x' - x_1 - x_0) & \cdots & (x_i^{k-1} - x'^{k-1}) - (x_i^{k-2} - x'^{k-2})L + (x_i^{k-3} - x'^{k-3})M - \cdots \pm (x_i - x')N \\ 1 & x_j - x' & (x_j - x')(x_j + x' - x_1 - x_0) & \cdots & (x_j^{k-1} - x'^{k-1}) - (x_j^{k-2} - x'^{k-2})L + (x_j^{k-3} - x'^{k-3})M - \cdots \pm (x_j - x')N \\ \vdots & & & & \vdots \\ 1 & x_k - x' & (x_k - x')(x_k + x' - x_1 - x_0) & \cdots & (x_k^{k-1} - x'^{k-1}) - (x_k^{k-2} - x'^{k-2})L + (x_k^{k-3} - x'^{k-3})M - \cdots \pm (x_k - x')N \\ 1 & x_p - x' & (x_p - x')(x_p + x' - x_1 - x_0) & \cdots & (x_p^{k-1} - x'^{k-1}) - (x_p^{k-2} - x'^{k-2})L + (x_p^{k-3} - x'^{k-3})M - \cdots \pm (x_p - x')N \end{bmatrix}.$$
Afterwards, we reduce the dimension to (k − 1) × (k − 1), obtaining
$$\begin{bmatrix} x_i - x' & (x_i - x')(x_i + x' - x_1 - x_0) & \cdots & (x_i^{k-1} - x'^{k-1}) - (x_i^{k-2} - x'^{k-2})L + (x_i^{k-3} - x'^{k-3})M - \cdots \pm (x_i - x')N \\ x_j - x' & (x_j - x')(x_j + x' - x_1 - x_0) & \cdots & (x_j^{k-1} - x'^{k-1}) - (x_j^{k-2} - x'^{k-2})L + (x_j^{k-3} - x'^{k-3})M - \cdots \pm (x_j - x')N \\ \vdots & & & \vdots \\ x_k - x' & (x_k - x')(x_k + x' - x_1 - x_0) & \cdots & (x_k^{k-1} - x'^{k-1}) - (x_k^{k-2} - x'^{k-2})L + (x_k^{k-3} - x'^{k-3})M - \cdots \pm (x_k - x')N \\ x_p - x' & (x_p - x')(x_p + x' - x_1 - x_0) & \cdots & (x_p^{k-1} - x'^{k-1}) - (x_p^{k-2} - x'^{k-2})L + (x_p^{k-3} - x'^{k-3})M - \cdots \pm (x_p - x')N \end{bmatrix}.$$
By extracting the common factor of each row, and letting T denote the product (x_i − x')(x_j − x')⋯(x_p − x'), we obtain
$$T \begin{bmatrix} 1 & x_i + x' - x_1 - x_0 & \cdots & (x_i^{k-2} + x_i^{k-3}x' + \cdots + x'^{k-2}) - (x_i^{k-3} + x_i^{k-4}x' + \cdots + x'^{k-3})L + \cdots \pm N \\ 1 & x_j + x' - x_1 - x_0 & \cdots & (x_j^{k-2} + x_j^{k-3}x' + \cdots + x'^{k-2}) - (x_j^{k-3} + x_j^{k-4}x' + \cdots + x'^{k-3})L + \cdots \pm N \\ \vdots & & & \vdots \\ 1 & x_k + x' - x_1 - x_0 & \cdots & (x_k^{k-2} + x_k^{k-3}x' + \cdots + x'^{k-2}) - (x_k^{k-3} + x_k^{k-4}x' + \cdots + x'^{k-3})L + \cdots \pm N \\ 1 & x_p + x' - x_1 - x_0 & \cdots & (x_p^{k-2} + x_p^{k-3}x' + \cdots + x'^{k-2}) - (x_p^{k-3} + x_p^{k-4}x' + \cdots + x'^{k-3})L + \cdots \pm N \end{bmatrix},$$
where L, M, and N are constants determined by the interpolation points x_0, ..., x_n. Through elementary transformations that eliminate these constants, we obtain
$$(x_i - x')(x_j - x')\cdots(x_p - x') \begin{vmatrix} 1 & x_i - x_0 & (x_i - x_0)(x_i - x_1) & \cdots & (x_i - x_0)(x_i - x_1)\cdots(x_i - x_{k-3}) \\ 1 & x_j - x_0 & (x_j - x_0)(x_j - x_1) & \cdots & (x_j - x_0)(x_j - x_1)\cdots(x_j - x_{k-3}) \\ \vdots & & & & \vdots \\ 1 & x_k - x_0 & (x_k - x_0)(x_k - x_1) & \cdots & (x_k - x_0)(x_k - x_1)\cdots(x_k - x_{k-3}) \\ 1 & x_p - x_0 & (x_p - x_0)(x_p - x_1) & \cdots & (x_p - x_0)(x_p - x_1)\cdots(x_p - x_{k-3}) \end{vmatrix} = (x_i - x')(x_j - x')\cdots(x_p - x') D_{k-1} \neq 0,$$
which completes the proof. □

References

  1. Fan, M.; Hong, C.; Yingke, L. Blind Recognition of Forward Error Correction Codes Based on a Depth Distribution Algorithm. Symmetry 2021, 13, 1094. [Google Scholar] [CrossRef]
  2. Abbas, N.; Zhang, Y.; Taherkordi, A. Mobile edge computing: A survey. IEEE Internet Things J. 2017, 5, 450–465. [Google Scholar] [CrossRef] [Green Version]
  3. Shi, W.; Cao, J.; Zhang, Q. Edge Computing: Vision and Challenges. IEEE Internet Things J. 2016, 3, 637–646. [Google Scholar] [CrossRef]
  4. Zhu, H.; Luan, T.H.; Dong, M. Guest editorial: Fog computing on wheels. Peer-Peer Netw. Appl. 2018, 11, 735–737. [Google Scholar] [CrossRef] [Green Version]
  5. Wang, T.; Zhou, J.; Liu, A. Fog-Based Computing and Storage Offloading for Data Synchronization in IoT. IEEE Internet Things J. 2018, 6, 4272–4282. [Google Scholar] [CrossRef]
  6. Sisinni, E.; Saifullah, A.; Han, S. Industrial internet of things: Challenges, opportunities, and directions. IEEE Internet Things J. 2018, 14, 4724–4734. [Google Scholar] [CrossRef]
  7. Pokamestov, D. Adaptation of Signal with NOMA and Polar Codes to the Rayleigh Channel. Symmetry 2022, 14, 2103. [Google Scholar] [CrossRef]
  8. Shen, X.; Gao, J.; Wu, W.; Li, M.; Zhou, C.; Zhuang, W. Holistic network intelligence for 6G. IEEE Commun. Surv. Tutorials 2022, 24, 1–30. [Google Scholar] [CrossRef]
  9. Li, J.; Dang, S.; Huang, Y.; Chen, P.; Qi, X.; Wen, M. Composite multiple-mode orthogonal frequency division multiplexing with index modulation. IEEE Trans. Wirel. Commun. 2022, 22, 3748–3761. [Google Scholar] [CrossRef]
  10. Li, J.; Dang, S.; Wen, M.; Li, Q.; Chen, Y.; Huang, Y. Index Modulation Multiple Access for 6G Communications: Principles, Applications, and Challenges. IEEE Netw. 2023, 37, 52–60. [Google Scholar] [CrossRef]
  11. Liu, N.; Li, K.; Tao, M. Code design and latency analysis of distributed matrix multiplication with straggling servers in fading channels. China Commun. 2021, 18, 15–29. [Google Scholar] [CrossRef]
  12. Shin, D.-J.; Kim, J.-J. Cache-Based Matrix Technology for Efficient Write and Recovery in Erasure Coding Distributed File Systems. Symmetry 2023, 15, 872. [Google Scholar] [CrossRef]
  13. Dean, J.; Barroso, L.A. The tail at scale. Commun. ACM 2013, 56, 74–80. [Google Scholar] [CrossRef]
  14. Herault, T.; Hoarau, W. FAIL-MPI: How fault-tolerant is fault-tolerant MPI? In Proceedings of the IEEE International Conference on Cluster Computing, Barcelona, Spain, 25–28 September 2006. [Google Scholar]
  15. Dai, M. SAZD: A low computational load coded distributed computing framework for IoT systems. IEEE Internet Things J. 2020, 7, 3640–3649. [Google Scholar] [CrossRef]
  16. Yu, Q.; Maddah-Ali, M.A.; Avestimehr, A.S. Polynomial Codes: An Optimal Design for High-Dimensional Coded Matrix Multiplication. Adv. Neural Inf. Process. Syst. 2017, 30, 1–11. [Google Scholar]
  17. Hasırcıoğlu, B.; Gómez-Vilardebó, J.; Gündüz, D. Bivariate Hermitian Polynomial Coding for Efficient Distributed Matrix Multiplication. In Proceedings of the IEEE Global Communications Conference, Taipei, Taiwan, 7–11 December 2020; pp. 1–6. [Google Scholar]
  18. Hasırcıoğlu, B.; Gómez-Vilardebó, J.; Gündüz, D. Bivariate Polynomial Coding for Efficient Distributed Matrix Multiplication. IEEE J. Sel. Areas Inf. Theory 2021, 2, 814–829. [Google Scholar] [CrossRef]
  19. Gautschi, W.; Inglese, G. Lower bounds for the condition number of Vandermonde matrices. Numer. Math. 1987, 52, 241–250. [Google Scholar] [CrossRef]
  20. Gautschi, W. How (un)stable are Vandermonde systems? Asymptot. Comput. Anal. 1990, 124, 193–210. [Google Scholar]
  21. Reichel, L.; Opfer, G. Chebyshev–Vandermonde systems. Math. Comput. 1991, 57, 703–721. [Google Scholar] [CrossRef]
  22. Shen, X.; Gao, J.; Wu, W. AI-assisted network-slicing based next-generation wireless networks. IEEE Open J. Veh. Technol. 2020, 1, 45–66. [Google Scholar] [CrossRef]
  23. Quarteroni, A.; Sacco, R.; Saleri, F. Numerical Mathematics; Springer: Berlin/Heidelberg, Germany, 2010; Volume 37. [Google Scholar]
  24. Trefethen, L.N. Approximation Theory and Approximation Practice; Society for Industrial and Applied Mathematics: Philadelphia, PA, USA, 2013; Volume 128. [Google Scholar]
Figure 1. Matrix–vector multiplication within the CDC framework.
Figure 2. Matrix–matrix multiplication within the CDC framework.
Figure 3. Condition number of matrix–vector multiplication. (a) N = 2K. (b) N = 10K.
Figure 4. Relative error of matrix–vector multiplication. (a) N = 2K. (b) N = 10K.
Figure 5. Condition number of matrix–matrix multiplication. (a) N = 2K. (b) N = 10K.
Figure 6. Relative error of matrix–matrix multiplication. (a) N = 2K. (b) N = 10K.