Article

The Sufficient Conditions for Orthogonal Matching Pursuit to Exactly Reconstruct Sparse Polynomials

School of Mathematical Sciences, Beihang University, Beijing 102200, China
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Mathematics 2022, 10(19), 3703; https://doi.org/10.3390/math10193703
Submission received: 5 September 2022 / Revised: 27 September 2022 / Accepted: 4 October 2022 / Published: 10 October 2022
(This article belongs to the Section Computational and Applied Mathematics)

Abstract

Orthogonal matching pursuit (OMP for short) is a classical method for sparse signal recovery in compressed sensing. In this paper, we consider the application of OMP to the reconstruction of sparse polynomials generated by uniformly bounded orthonormal systems, which extends the existing work on OMP for reconstructing sparse trigonometric polynomials. Firstly, for sampled data both with and without noise, sufficient conditions for OMP to recover the coefficient vector of a sparse polynomial are given, which are looser than the existing results. Then, based on a more accurate estimation of the mutual coherence of a structured random matrix, the recovery guarantees and success probabilities for OMP to reconstruct sparse polynomials are obtained with the help of those sufficient conditions. In addition, an error estimate for the recovered coefficient vector is obtained for the case in which the sampled data contain noise. Finally, the validity and correctness of the theoretical conclusions are verified by numerical experiments.

1. Introduction

It is well known that smooth functions have approximately sparse expansions in certain orthogonal systems. Therefore, one of the fundamental research problems in function approximation is the theoretical and algorithmic study of the exact reconstruction of sparse polynomials [1,2,3,4,5,6]. In this paper, we consider polynomials of the form
$$g(x) = \sum_{j \in \Lambda} c_j \phi_j(x),$$
where $\{\phi_j(x)\}_{j \in \Lambda}$ is a set of uniformly bounded orthonormal basis functions defined on $\Omega \subseteq \mathbb{R}^d$, and $\Lambda$ is the index set with $|\Lambda| = n$, where $|\Lambda|$ denotes the number of elements in the set $\Lambda$; $n$ can be finite or infinite. If the coefficient vector $c = [c_1, \dots, c_n]^{\top} \in \mathbb{R}^{n \times 1}$ has at most $s$ nonzero elements, where $2s \le n$, we call $g(x)$ an $s$-sparse polynomial, and $s$ is the sparsity of the polynomial $g(x)$ and of the coefficient vector $c$. Obviously, if the sparse coefficient vector $c$ can be recovered exactly, the sparse polynomial $g(x)$ can be reconstructed. Therefore, we transform the problem of reconstructing the sparse polynomial $g(x)$ into the problem of recovering the sparse coefficient vector $c$. Only the case where $n$ is finite and $d = 1$ is studied in this paper, but the results can be generalized to higher dimensions.
The commonly used recovery method is interpolation, which requires that the coefficient vector $c$ of the undetermined interpolation polynomial $\tilde{g}(x)$ satisfy the following system of linear equations:
$$\Phi c = b, \qquad (1)$$
where $\Phi \in \mathbb{R}^{n \times n}$ is the interpolation matrix generated by the values of the basis functions at the sampling points $\{x_i\}_{i=1}^{n}$, $b = [g(x_1), \dots, g(x_n)]^{\top} \in \mathbb{R}^{n \times 1}$ is the vector of sampled data, and $c \in \mathbb{R}^{n \times 1}$ is the coefficient vector.
When the number of basis functions $n$ is large, the system (1) is often ill-conditioned and cannot give a good recovery of the coefficient vector $c$. Moreover, in practical applications, sampling is often expensive. Therefore, determining how to obtain a good recovery of the sparse coefficient vector $c$ from a small number of samples is a key issue in the reconstruction of the sparse polynomial $g(x)$.

1.1. Compressed Sensing and the Reconstruction of Sparse Polynomial

In recent years, compressed sensing has developed rapidly [7]. Its main idea is to use nonlinear optimization to recover a sparse signal from as few observations as possible [8]. The original model for sparse signal recovery is
$$\min_{c \in \mathbb{R}^n} \|c\|_0 \quad \text{s.t.} \quad \Phi c = b, \qquad (2)$$
where $\|c\|_0$ denotes the number of nonzero elements in the vector $c$, $\Phi \in \mathbb{R}^{m \times n}$ is the measurement matrix, and $b \in \mathbb{R}^{m \times 1}$ is the observation vector. It is not difficult to see that the constraints in the model (2) have the same form as the interpolation conditions (1). Hence, one considers applying compressed sensing to recover the sparse coefficient vector from a small number of samples and then reconstruct the sparse polynomial.
Unfortunately, the model (2) is an NP-hard problem. If we know in advance that the sparsity of the signal to be reconstructed is $s$, then we convert the model (2) into the following $\ell_2$-norm model with inequality constraints:
$$\min_{c \in \mathbb{R}^n} \|\Phi c - b\|_2 \quad \text{s.t.} \quad \|c\|_0 \le s. \qquad (3)$$
Greedy algorithms are among the methods commonly used for solving the model (3). The orthogonal matching pursuit (OMP for short) algorithm is one of the most classic and popular greedy algorithms, owing to its efficiency and accuracy [9,10,11,12].

1.2. The Recovery Guarantee for OMP to Recover Sparse Signals

The recovery guarantees for OMP to recover sparse signals are sufficient conditions under which OMP recovers sparse signals accurately; they are often stated in terms of the restricted isometry constant or the mutual coherence of the measurement matrix $\Phi$. Formal definitions of some of the terms used in this section are given in Section 2.
The restricted isometry constant [13] (RIC for short) $\delta_s$ of the measurement matrix is one of its important characteristic quantities; it is the smallest value in $(0, 1)$ such that
$$(1 - \delta_s)\|c\|_2^2 \le \|\Phi c\|_2^2 \le (1 + \delta_s)\|c\|_2^2$$
holds for every $s$-sparse vector $c \in \mathbb{R}^n$. In 2012, Mo and Shen [14] gave a sufficient condition for OMP to accurately recover $s$-sparse signals within $s$ iterations, namely $\delta_{s+1} < 1/(\sqrt{s} + 1)$. Wang and Shim [15] gave the same result in the same year. In 2015, Mo [16,17] and other scholars further improved the above sufficient condition to $\delta_{s+1} < 1/\sqrt{s+1}$ and showed that OMP may fail to recover some $s$-sparse signals when $\delta_{s+1} = 1/\sqrt{s+1}$.
Another way to give a recovery guarantee for OMP to recover $s$-sparse signals is to analyze the selection mechanism of OMP directly and give the minimum number of sampling points. Tropp et al. [18] showed that if the measurement matrix is an admissible measurement matrix, such as a Gaussian or Bernoulli random matrix, then OMP can recover any $s$-sparse signal with high probability when the number of noiseless sampling points satisfies $m \gtrsim s \ln n$. However, since an admissible measurement matrix requires strong independence among its entries, and the measurement matrices generated in practical applications usually cannot satisfy this independence requirement, the results and the analysis method in [18] are difficult to generalize further.
Other scholars have given sufficient conditions for OMP in terms of the mutual coherence of the measurement matrix. For the cases of sampled data without and with noise, the sufficient conditions for OMP to recover $s$-sparse vectors are shown in Proposition 1 [9] and Proposition 2 [19], respectively.
Proposition 1.
If the mutual coherence $\mu(\Phi)$ satisfies
$$\mu(\Phi) < \frac{1}{2s - 1},$$
 then OMP can accurately reconstruct arbitrary s-sparse vectors when sampling is free of noise.
Proposition 2.
Let the noise vector $\epsilon$ satisfy $\|\epsilon\|_2 < \varepsilon$, where $\varepsilon > 0$ is the noise bound, and let the mutual coherence of the measurement matrix $\Phi$ satisfy $\mu(\Phi) < 1/(2s - 1)$. Suppose the stopping rule of OMP is $\|r^l\|_2 \le \varepsilon$ and all the nonzero elements of the sparse vector $c \in \mathbb{R}^n$ satisfy
$$|c_j| \ge \frac{2\varepsilon}{1 - (2s - 1)\mu(\Phi)}, \quad j \in \operatorname{supp}(c),$$
where $r^l$ denotes the residual vector generated in the $l$th iteration of OMP and $\operatorname{supp}(c)$ denotes the support set of the vector $c$. Then, OMP can find the exact positions of the nonzero elements of $c$.
Tropp [9] and Cai et al. [20] showed by constructing counter-examples that the sufficient conditions in Proposition 1 and Proposition 2 are sharp. However, in most practical applications, the measurement matrix often has some good properties, such as randomness and column orthogonality. Therefore, we hope to relax the sufficient conditions in Proposition 1 and Proposition 2 with the help of those good properties, so that OMP can still recover sparse vectors with high probability in both cases of sampled data with and without noise.

1.3. The Recovery Guarantee for OMP to Reconstruct Sparse Polynomials

The recovery guarantee for OMP to reconstruct sparse polynomials is often given by the relationship among the number of basis functions n, sparsity s, and the number of sampling points m.
Applications of OMP to the reconstruction of sparse polynomials have mostly focused on reconstructing sparse trigonometric polynomials, owing to the convenient form of trigonometric polynomials. In 2008, Kunis and Rauhut [21] proved that an $s$-sparse trigonometric polynomial can be reconstructed exactly with high probability by using OMP under random sampling when the number of sampling points $m$ satisfies $m \gtrsim s^2 \ln(n)$. In 2011, Xu [4] constructed a set of deterministic samples for reconstructing sparse trigonometric polynomials and provided the recovery guarantee for OMP under this deterministic sampling. However, it is difficult to generalize this type of analysis to the study of reconstructing general sparse polynomials, and the above studies were all performed in the case of sampled data without noise.
Huang et al. [11] applied OMP to reconstruct general sparse polynomials generated by a uniformly bounded orthonormal system. With the help of the greedy selection ratio and tools from probability theory, they gave the recovery guarantee and success probability for OMP to reconstruct general sparse polynomials. Their results showed that the recovery guarantee for OMP to reconstruct general sparse polynomials is also $m \gtrsim s^2 \ln(n)$. However, their analytical method relies on exact sampled data and thus cannot be extended to the case of sampled data with noise.
In addition, most scholars have treated the reconstruction of general sparse polynomials by solving the $\ell_1$-minimization problem [5,22,23] and have given the recovery guarantee through the RIC of the measurement matrix. However, RIC-type recovery guarantees often contain constants that are difficult to estimate, and solving the $\ell_1$-minimization problem costs more time than OMP when the problem size is large [11].
Therefore, we consider applying OMP to reconstruct general sparse polynomials under both cases of sampled data with and without noise, and hope to give some more specific recovery guarantees and success probability for OMP. Moreover, for the case of sampled data with noise, estimating the reconstruction error is necessary.

1.4. Contributions

In this paper, we apply OMP to the reconstruction problem of the sparse polynomial $g(x)$ generated by a uniformly bounded orthonormal system $\{\phi_j(x)\}_{j \in \Lambda}$. Here, the measurement matrix $\Phi \in \mathbb{R}^{m \times n}$ in (3) is a structured random matrix generated by the values of the system $\{\phi_j(x)\}_{j \in \Lambda}$ at the independent random sampling points $\{x_i\}_{i=1}^{m}$.
Firstly, although Feng et al. [10] and Rauhut et al. [13] have estimated the upper bound of the mutual coherence of the structured random matrix $\Phi$, we use tools from probability theory to further sharpen this upper bound to
$$\mu(\Phi) \le 2K\sqrt{\frac{2(p + 2)\ln(n)}{3m}} \qquad (4)$$
with probability at least $1 - n^{-p}$, where $p > 0$ is a fixed number.
Secondly, combining the selection mechanism of OMP with the condition that the measurement matrix is a structured random matrix of the form $\Phi$, for both cases of sampled data with and without noise, we give more relaxed sufficient conditions for OMP to recover the sparse coefficient vector of the sparse polynomial $g(x)$. In addition, we also prove that the $\ell_2$-norm of the recovery error can be controlled by the noise bound $\varepsilon$ when the sampled data contain noise.
Finally, by (4) and those sufficient conditions, we show that the recovery guarantee for OMP to reconstruct sparse polynomials is $m \gtrsim s^2 \ln n$ regardless of whether the sampled data contain noise, which is consistent with the recovery guarantee for OMP to reconstruct sparse trigonometric polynomials given in [21] for the case of sampled data without noise.
The rest of this paper is organized as follows: Section 2 introduces some preliminary knowledge required for this paper. Section 3 gives the recovery guarantees for OMP to reconstruct sparse polynomials in both cases of sampled data with and without noise, and gives the error estimation of the recovered coefficient vector by OMP when the sampled data contain noise. Section 4 contains the numerical experiments. Section 5 contains the conclusion.

2. Preliminaries

In this section, we will introduce some knowledge required for this paper.

2.1. Uniformly Bounded Orthonormal System

Let $\Omega \subseteq \mathbb{R}$ be endowed with a probability measure $\omega(x)$. Suppose that $\{\phi_j(x)\}_{j \in \Lambda}$ is a set of basis functions on $\Omega$ with $\Lambda = \{1, 2, \dots, n\}$, which are orthonormal with respect to the probability measure $\omega(x)$ [13], i.e.,
$$\int_{\Omega} \phi_j(x)\phi_k(x)\,\omega(x)\,dx = \delta_{j,k} = \begin{cases} 0, & j \ne k, \\ 1, & j = k. \end{cases}$$
If $\{\phi_j(x)\}_{j \in \Lambda}$ has a uniform bound $K > 0$ on $\Omega$, i.e.,
$$\max_{j \in \Lambda} \|\phi_j(x)\|_{\infty} \le K,$$
then we say that the system $\{\phi_j(x)\}_{j \in \Lambda}$ is a uniformly bounded orthonormal system on $\Omega$. When we apply compressed sensing to recover the sparse coefficient vector $c$ of the sparse polynomial $g(x)$, the measurement matrix $\Phi \in \mathbb{R}^{m \times n}$ in model (2) is a structured random matrix generated by taking the values of $\{\phi_j(x)\}_{j \in \Lambda}$ at the sampling points $\{x_i\}_{i=1}^{m}$, i.e.,
$$\Phi = \begin{bmatrix} \phi_1(x_1) & \phi_2(x_1) & \cdots & \phi_n(x_1) \\ \phi_1(x_2) & \phi_2(x_2) & \cdots & \phi_n(x_2) \\ \vdots & \vdots & & \vdots \\ \phi_1(x_m) & \phi_2(x_m) & \cdots & \phi_n(x_m) \end{bmatrix} = [\Phi_1\ \Phi_2\ \cdots\ \Phi_n], \qquad (5)$$
where $\{x_i\}_{i=1}^{m}$ are sampled independently according to the probability measure $\omega(x)$, and $\Phi_j$, $j = 1, 2, \dots, n$, denotes the $j$th column of the matrix $\Phi$. The observation vector is
$$b = [g(x_1), \dots, g(x_m)]^{\top} \in \mathbb{R}^{m}.$$
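As a concrete illustration of this construction, the following Python sketch (our own illustration, not code from the paper; the function name and the vectorized basis callables are assumptions) assembles $\Phi$ and the noiseless observation vector $b$ from a list of basis functions and a sampler for the measure $\omega$:

```python
import numpy as np

def build_measurement_system(basis, sample_omega, coeff, m, seed=0):
    """Assemble the structured random matrix (5) and the observations b_i = g(x_i).

    basis        : list of vectorized callables [phi_1, ..., phi_n]
    sample_omega : callable(rng, m) drawing m i.i.d. points from the measure omega
    coeff        : length-n coefficient vector c of the sparse polynomial g
    """
    rng = np.random.default_rng(seed)
    x = sample_omega(rng, m)                          # sampling points x_1, ..., x_m
    Phi = np.column_stack([phi(x) for phi in basis])  # Phi[i, j] = phi_j(x_i)
    b = Phi @ coeff                                   # b_i = g(x_i) = sum_j c_j phi_j(x_i)
    return Phi, b, x
```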

2.2. Orthogonal Matching Pursuit Algorithm

Greedy algorithms are an important class of methods for solving the model (3) [10,24], of which the orthogonal matching pursuit (OMP for short) algorithm is one of the most commonly used [9]. As shown in Algorithm 1, in each iteration OMP computes the residual of $b$ with respect to the space spanned by the currently selected columns of $\Phi$ (the second line of the 'update' step), computes the absolute values of the inner products between this residual and the columns of $\Phi$ (the 'match' step), and then selects the column that maximizes the absolute value of the inner product (the 'identify' step) [8]. A minimal runnable sketch of Algorithm 1 is given after the pseudocode.
Algorithm 1. Orthogonal matching pursuit algorithm.
Input: measurement matrix $\Phi$, observation vector $b$, sparsity $s$, tolerance $\varepsilon$
Output: recovered vector $c^{*}$
Initialization: $r^{0} = b$, $a^{0} = 0$, $\Lambda^{0} = \emptyset$, $l = 0$
while $l \le s$ and $\|r^{l}\|_2 > \varepsilon$ do
  match: $h^{l} = \Phi^{\top} r^{l}$
  identify: $\Lambda^{l+1} = \Lambda^{l} \cup \{\operatorname{argmax}_{j} |h^{l}(j)|\}$
  update: $a^{l+1} = \operatorname{argmin}_{z:\,\operatorname{supp}(z) \subseteq \Lambda^{l+1}} \|b - \Phi z\|_2$
          $r^{l+1} = b - \Phi a^{l+1}$
          $l = l + 1$
end while
$c^{*} = a^{l}$
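The following NumPy sketch mirrors Algorithm 1; it is our own minimal illustration (the function name and the details of the stopping rule are assumptions), not the authors' implementation:

```python
import numpy as np

def omp(Phi, b, s, tol=1e-12):
    """Orthogonal matching pursuit: greedy sketch of Algorithm 1."""
    m, n = Phi.shape
    r = b.copy()                          # r^0 = b
    support = []                          # Lambda^0 = empty set
    a = np.zeros(n)
    for _ in range(s):
        if np.linalg.norm(r) <= tol:      # residual already below the tolerance
            break
        h = Phi.T @ r                     # match: h = Phi^T r
        j = int(np.argmax(np.abs(h)))     # identify: index of the largest |h(j)|
        support.append(j)
        # update: least-squares fit of b on the currently selected columns
        z, *_ = np.linalg.lstsq(Phi[:, support], b, rcond=None)
        a = np.zeros(n)
        a[support] = z
        r = b - Phi @ a                   # new residual
    return a
```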
In the case of sampled data without noise, based on the selection mechanism of OMP, Tropp et al. [9] gave a sufficient condition for the exact recovery of $s$-sparse vectors when the columns of the measurement matrix have unit $\ell_2$-norm.
Proposition 3.
Let $\Phi_{\mathrm{opt}} \in \mathbb{R}^{m \times s}$ be the matrix consisting of the columns of the measurement matrix $\Phi$ whose indices lie in the support set of the $s$-sparse vector $c$. Then, OMP can recover the $s$-sparse vector $c$ exactly when
$$\max_{\Psi_j} \|\Phi_{\mathrm{opt}}^{\dagger} \Psi_j\|_1 < 1, \qquad (7)$$
where $\Psi_j$ runs over the columns of the measurement matrix whose indices are not in the support set of the vector $c$, $\Phi_{\mathrm{opt}}^{\dagger} = (\Phi_{\mathrm{opt}}^{\top}\Phi_{\mathrm{opt}})^{-1}\Phi_{\mathrm{opt}}^{\top}$ is the pseudo-inverse of the matrix $\Phi_{\mathrm{opt}}$, and $\Phi_{\mathrm{opt}}^{\top}$ is the transpose of the matrix $\Phi_{\mathrm{opt}}$.
Equation (7) is often referred to as the ERC (exact recovery condition) of OMP.
Remark 1.
It is not difficult to verify that when the columns of the measurement matrix do not have unit $\ell_2$-norm, the ERC still guarantees that OMP can accurately recover the $s$-sparse vector $c$ when the sampled data do not contain noise.

2.3. Mutual Coherence and Cumulative Coherence Function

The mutual coherence and the cumulative coherence function of a matrix are important parameters that measure how close its columns are to being orthogonal. They are defined as follows [13].
Definition 1.
The mutual coherence of a matrix $\Phi \in \mathbb{R}^{m \times n}$ is defined as
$$\mu(\Phi) := \max_{j \ne k} \frac{|\langle \Phi_j, \Phi_k \rangle|}{\|\Phi_j\|_2 \cdot \|\Phi_k\|_2},$$
where $\Phi_j$ and $\Phi_k$ denote the $j$th and $k$th columns of the matrix, respectively.
Definition 2.
For a positive integer $s \le n$, the cumulative coherence function $\mu_1(s)$ of the matrix $\Phi \in \mathbb{R}^{m \times n}$ is defined as
$$\mu_1(s) := \max_{|\Lambda| = s} \max_{\Psi_j} \sum_{\lambda \in \Lambda} \frac{|\langle \Phi_{\lambda}, \Psi_j \rangle|}{\|\Phi_{\lambda}\|_2 \cdot \|\Psi_j\|_2},$$
where $\Lambda$ is an index set with cardinality $s$, and $\Phi_{\lambda}$ and $\Psi_j$ denote columns of $\Phi$ whose indices are in and not in $\Lambda$, respectively.
Remark 2.
According to the Cauchy–Schwarz inequality, the mutual coherence $\mu(\Phi)$ obviously satisfies $\mu(\Phi) \in [0, 1]$.
Mutual coherence μ ( Φ ) and cumulative coherence function μ 1 ( s ) of a matrix have the following relationship [13].
Proposition 4.
Suppose that the mutual coherence of a matrix Φ is μ ( Φ ) , then
$$\mu_1(s) \le s\,\mu(\Phi)$$
 holds for any natural number s.
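The two quantities in Definitions 1 and 2 are easy to evaluate numerically. The sketch below is our own illustration (the function names are not from the paper) and computes $\mu(\Phi)$ and $\mu_1(s)$ directly from a given matrix:

```python
import numpy as np

def mutual_coherence(Phi):
    """mu(Phi): largest normalized inner product between two distinct columns."""
    Q = Phi / np.linalg.norm(Phi, axis=0)   # column-normalized matrix
    G = np.abs(Q.T @ Q)                     # absolute Gram matrix
    np.fill_diagonal(G, 0.0)
    return G.max()

def cumulative_coherence(Phi, s):
    """mu_1(s): for each column j, the worst index set of size s (not containing j)
    collects the s largest entries of column j of the Gram matrix."""
    Q = Phi / np.linalg.norm(Phi, axis=0)
    G = np.abs(Q.T @ Q)
    np.fill_diagonal(G, 0.0)
    return np.sort(G, axis=0)[-s:, :].sum(axis=0).max()
```

Proposition 4 can then be checked numerically: cumulative_coherence(Phi, s) never exceeds s * mutual_coherence(Phi).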

2.4. Bernstein Inequality

The Bernstein inequality is a classical tail bound for sums of independent bounded or unbounded random variables that exploits variance and higher-order moment information. Its form for bounded random variables is as follows.
Proposition 5.
Suppose $X_1, \dots, X_m$ are independent random variables with zero mean such that, for some constant $M > 0$, $|X_i| \le M$ holds almost surely for every $i \in \{1, \dots, m\}$. Suppose further that $\mathbb{E}|X_i|^2 \le \gamma_i^2$ for constants $\gamma_i > 0$, $i \in \{1, \dots, m\}$. Then, for all $\varepsilon > 0$, it holds that
$$\operatorname{Prob}\left(\left|\sum_{i=1}^{m} X_i\right| \ge \varepsilon\right) \le 2\exp\left(-\frac{\varepsilon^2/2}{\gamma^2 + M\varepsilon/3}\right),$$
where $\gamma^2 := \sum_{i=1}^{m} \gamma_i^2$.
For other forms of Bernstein inequality, see [25].

3. The Recovery Guarantee and Reconstruction Error for OMP to Reconstruct Sparse Polynomials

In this section, we first give a more accurate estimation of the upper bound for the mutual coherence of the structured random matrix. Then, for both cases of sampled data with and without noise, we obtain the sufficient conditions for OMP to recover sparse vectors when the measurement matrix is a structured random matrix. Finally, combining the above results, we derive the recovery guarantees and the success probability for OMP to reconstruct sparse polynomials. Moreover, for the case of sampled data with noise, we also show that the reconstruction error between the recovered and original coefficient vector can be controlled by the noise bound.

3.1. The Estimation for the Upper Bound of the Mutual Coherence

In this subsection, we give an upper bound estimation for the mutual coherence of the structured random measurement matrix (5) with the help of the definition of mutual coherence, the law of large numbers, and Proposition 5.
Lemma 1.
Suppose that the matrix $\Phi \in \mathbb{R}^{m \times n}$ is the structured random matrix generated by the values of $\{\phi_j(x)\}_{j=1}^{n}$ at the sampling points $\{x_i\}_{i=1}^{m}$, as shown in (5), where $\{x_i\}_{i=1}^{m}$ are independently sampled according to the corresponding probability measure $\omega(x)$. When $m$ is sufficiently large, the mutual coherence $\mu(\Phi)$ satisfies
$$\mu(\Phi) \le 2K\sqrt{\frac{2(p + 2)\ln(n)}{3m}}$$
with probability at least $1 - n^{-p}$; here $p > 0$ is a fixed number, and $K > 0$ is the uniform upper bound of the basis functions $\{\phi_j(x)\}_{j=1}^{n}$.
Proof. 
According to the orthonormality of the basis functions, for any $j = 1, 2, \dots, n$, we have
$$\mathbb{E}\,\phi_j^2(x) = \int_{\Omega} \phi_j^2(x)\,\omega(x)\,dx = 1.$$
Then, by the law of large numbers, when $m \to \infty$, it holds that
$$\frac{1}{m}\|\Phi_j\|_2^2 = \frac{1}{m}\sum_{i=1}^{m} \phi_j^2(x_i) \to \mathbb{E}\,\phi_j^2(x) = 1,$$
where $\Phi_j$ represents the $j$th column of the matrix $\Phi$. Therefore, when $m \to \infty$, we have $\|\Phi_j\|_2 \to \sqrt{m}$. From the definition of the mutual coherence, it is clear that as $m$ increases, it holds that
$$\mu(\Phi) = \max_{j \ne k} \frac{|\langle \Phi_j, \Phi_k \rangle|}{\|\Phi_j\|_2 \cdot \|\Phi_k\|_2} \approx \max_{j \ne k} \frac{1}{\sqrt{m}\cdot\sqrt{m}}|\langle \Phi_j, \Phi_k \rangle| = \max_{j \ne k} \frac{1}{m}|\langle \Phi_j, \Phi_k \rangle|.$$
Let $X_i = \phi_j(x_i)\phi_k(x_i)$, $i = 1, \dots, m$, $j \ne k$; then $\frac{1}{m}\langle \Phi_j, \Phi_k \rangle = \frac{1}{m}\sum_{i=1}^{m} X_i$, and by the orthonormality of $\{\phi_j(x)\}_{j=1}^{n}$, we have
$$\mathbb{E}[X_i] = \int_{\Omega} \phi_j(x)\phi_k(x)\,\omega(x)\,dx = 0, \quad j \ne k,$$
that is, for any $i = 1, \dots, m$, it holds that $\mathbb{E}[X_i] = 0$. In addition, since $\{\phi_j(x)\}_{j \in \Lambda}$ are uniformly bounded and orthonormal,
$$\mathbb{E}\,X_i^2 = \mathbb{E}\big[\phi_j(x_i)\phi_k(x_i)\big]^2 \le K^2\,\mathbb{E}\,\phi_k^2(x_i) = K^2$$
holds. Because of the independence of $\{x_i\}_{i=1}^{m}$, the $\{X_i\}_{i=1}^{m}$ are also independent of each other and have the uniform bound $|X_i| \le K^2$. Therefore, according to Proposition 5 and Remark 2, for any $\delta \in [0, 1]$, we have
$$\operatorname{Prob}\left(\left|\frac{1}{m}\sum_{i=1}^{m} X_i\right| \ge \delta\right) = \operatorname{Prob}\left(\left|\sum_{i=1}^{m} X_i\right| \ge m\delta\right) \le 2\exp\left(-\frac{m^2\delta^2/2}{mK^2 + mK^2\delta/3}\right) = 2\exp\left(-\frac{3}{2}\cdot\frac{m\delta^2}{3K^2 + K^2\delta}\right) \le 2\exp\left(-\frac{3m}{8K^2}\cdot\delta^2\right).$$
Based on Boole's inequality [13], we have
$$\operatorname{Prob}\left(\max_{j \ne k} \frac{1}{m}|\langle \Phi_j, \Phi_k \rangle| \ge \delta\right) \le \frac{1}{2}n(n-1) \cdot 2\exp\left(-\frac{3m}{8K^2}\cdot\delta^2\right) \le n^2\exp\left(-\frac{3m}{8K^2}\cdot\delta^2\right).$$
Let $\delta = 2K\sqrt{\frac{2(p + 2)\ln(n)}{3m}}$, where $p > 0$ is a fixed number; then we have
$$\operatorname{Prob}\left(\max_{j \ne k} \frac{1}{m}|\langle \Phi_j, \Phi_k \rangle| \ge \delta\right) \le n^{-p},$$
which means
$$\operatorname{Prob}\big(\mu(\Phi) \ge \delta\big) \le n^{-p}.$$
Furthermore, it holds that
$$\operatorname{Prob}\big(\mu(\Phi) \le \delta\big) \ge 1 - n^{-p}.$$
In summary, the mutual coherence $\mu(\Phi)$ of $\Phi$ satisfies
$$\mu(\Phi) \le 2K\sqrt{\frac{2(p + 2)\ln(n)}{3m}}$$
with probability at least $1 - n^{-p}$. □
Remark 3.
From Definition 1, it is easy to find that the column normalization of the matrix does not affect its mutual coherence and the value of its cumulative coherence function.

3.2. The Recovery Guarantee for OMP under the Noiseless Condition

In this subsection, we give a sufficient condition for OMP to exactly recover $s$-sparse vectors in terms of the mutual coherence $\mu(\Phi)$ of the measurement matrix (5). Firstly, some assumptions and notation are introduced. Without loss of generality, assume that the first $s$ elements of the original coefficient vector $c \in \mathbb{R}^n$ are its nonzero elements, and use the sets $\Lambda_s$ and $\Lambda_{n-s}$ to denote the support and nonsupport sets of the vector $c$, respectively. Clearly, $|\Lambda_s| = s$ and $|\Lambda_{n-s}| = n - s$. Then, partition the measurement matrix $\Phi$ as
$$\Phi = [\Phi_{\mathrm{opt}}\,|\,\Psi] = [\Phi_{\mathrm{opt}}\,|\,\Psi_1, \dots, \Psi_{n-s}], \qquad (9)$$
where $\Phi_{\mathrm{opt}} \in \mathbb{R}^{m \times s}$ and $\Psi \in \mathbb{R}^{m \times (n-s)}$ are the submatrices consisting of the first $s$ columns and the last $n - s$ columns of the measurement matrix $\Phi$, respectively, and $\Psi_j$, $j \in \Lambda_{n-s}$, denotes the $j$th column of the submatrix $\Psi$.
Theorem 1.
If the measurement matrix $\Phi \in \mathbb{R}^{m \times n}$ is the structured random matrix given in Lemma 1 and the observation vector is $b = [g(x_1), \dots, g(x_m)]^{\top} \in \mathbb{R}^{m \times 1}$, then, when $m$ is sufficiently large and the mutual coherence $\mu(\Phi)$ satisfies
$$\mu(\Phi) < \frac{1}{s},$$
solving model (3) by OMP can exactly recover an arbitrary $s$-sparse vector $c$ with high probability.
Remark 4.
Since the law of large numbers is used in the proof of Theorem 1, the result in Theorem 1 holds with high probability.
Proof. 
Starting from the ERC condition in Proposition 3 and expanding the pseudo-inverse by its definition, we have
$$\max_{j \in \Lambda_{n-s}} \|\Phi_{\mathrm{opt}}^{\dagger}\Psi_j\|_1 = \max_{j \in \Lambda_{n-s}} \|(\Phi_{\mathrm{opt}}^{\top}\Phi_{\mathrm{opt}})^{-1}\Phi_{\mathrm{opt}}^{\top}\Psi_j\|_1 \le \|(\Phi_{\mathrm{opt}}^{\top}\Phi_{\mathrm{opt}})^{-1}\|_1 \cdot \max_{j \in \Lambda_{n-s}} \|\Phi_{\mathrm{opt}}^{\top}\Psi_j\|_1. \qquad (10)$$
Firstly, consider the first term on the right-hand side of Equation (10). Since $\{\phi_j(x)\}_{j=1}^{n}$ is a uniformly bounded orthonormal system, it follows from the law of large numbers that, when $m \to \infty$,
$$\frac{1}{m}\Phi_{\mathrm{opt}}^{\top}\Phi_{\mathrm{opt}} = \begin{bmatrix} \frac{1}{m}\sum_{i=1}^{m}\phi_1^2(x_i) & \cdots & \frac{1}{m}\sum_{i=1}^{m}\phi_s(x_i)\phi_1(x_i) \\ \vdots & \ddots & \vdots \\ \frac{1}{m}\sum_{i=1}^{m}\phi_1(x_i)\phi_s(x_i) & \cdots & \frac{1}{m}\sum_{i=1}^{m}\phi_s^2(x_i) \end{bmatrix} \to \begin{bmatrix} \mathbb{E}[\phi_1^2(x)] & \cdots & \mathbb{E}[\phi_s(x)\phi_1(x)] \\ \vdots & \ddots & \vdots \\ \mathbb{E}[\phi_1(x)\phi_s(x)] & \cdots & \mathbb{E}[\phi_s^2(x)] \end{bmatrix} = I,$$
that is,
$$\Phi_{\mathrm{opt}}^{\top}\Phi_{\mathrm{opt}} \approx mI.$$
Thus,
$$(\Phi_{\mathrm{opt}}^{\top}\Phi_{\mathrm{opt}})^{-1} \approx (mI)^{-1} = \frac{1}{m}I.$$
Then, it holds that
$$\|(\Phi_{\mathrm{opt}}^{\top}\Phi_{\mathrm{opt}})^{-1}\|_1 \approx \left\|\frac{1}{m}I\right\|_1 = \frac{1}{m}\|I\|_1 = \frac{1}{m}. \qquad (11)$$
Secondly, we analyze the second term on the right-hand side of Equation (10). Similarly, by the law of large numbers, when $m \to \infty$, we have
$$\|\Phi_j\|_2 \to \sqrt{m}, \quad j = 1, \dots, n.$$
Then, it holds that
$$\max_{j \in \Lambda_{n-s}} \sum_{\lambda \in \Lambda_s} \left|\left\langle \frac{\Phi_{\lambda}}{\|\Phi_{\lambda}\|_2}, \frac{\Psi_j}{\|\Psi_j\|_2} \right\rangle\right| = \max_{j \in \Lambda_{n-s}} \sum_{\lambda \in \Lambda_s} \frac{1}{\|\Phi_{\lambda}\|_2 \cdot \|\Psi_j\|_2}|\langle \Phi_{\lambda}, \Psi_j \rangle| \approx \frac{1}{m} \max_{j \in \Lambda_{n-s}} \sum_{\lambda \in \Lambda_s} |\langle \Phi_{\lambda}, \Psi_j \rangle|.$$
Since $|\Lambda_s| = s$, it is clear from the definition of the cumulative coherence function that
$$\mu_1(s) \ge \frac{1}{m} \max_{j \in \Lambda_{n-s}} \sum_{\lambda \in \Lambda_s} |\langle \Phi_{\lambda}, \Psi_j \rangle| = \frac{1}{m} \max_{j \in \Lambda_{n-s}} \|\Phi_{\mathrm{opt}}^{\top}\Psi_j\|_1,$$
that is,
$$\max_{j \in \Lambda_{n-s}} \|\Phi_{\mathrm{opt}}^{\top}\Psi_j\|_1 \le m\,\mu_1(s). \qquad (12)$$
In summary, substituting (11) and (12) into (10) yields
$$\max_{j \in \Lambda_{n-s}} \|\Phi_{\mathrm{opt}}^{\dagger}\Psi_j\|_1 \le \|(\Phi_{\mathrm{opt}}^{\top}\Phi_{\mathrm{opt}})^{-1}\|_1 \cdot \max_{j \in \Lambda_{n-s}} \|\Phi_{\mathrm{opt}}^{\top}\Psi_j\|_1 \le \mu_1(s).$$
Finally, by the assumption $\mu(\Phi) < 1/s$ of this theorem and Proposition 4, it follows that
$$\mu_1(s) \le s \cdot \mu(\Phi) < 1$$
holds. Hence, the ERC condition (7) holds, and by Proposition 3 the conclusion of this theorem is proved. □
In the case of sampled data without noise, Theorem 1 gives the sufficient condition for OMP to exactly recover s-sparse vectors with high probability through the mutual coherence of the structured random matrix. Then, with the help of Lemma 1 and Theorem 1, the recovery guarantee and success probability for OMP to reconstruct sparse polynomials is given in the following theorem.
Theorem 2.
Suppose that $g(x) = \sum_{j \in \Lambda,\,|\Lambda| = n} c_j \phi_j(x)$ is an $s$-sparse polynomial, where $\{\phi_j(x)\}_{j \in \Lambda}$ is a uniformly bounded orthonormal system defined on $\Omega$ with probability measure $\omega(x)$ and a uniform upper bound $K > 0$. Suppose that the measurement matrix $\Phi \in \mathbb{R}^{m \times n}$ in the model (3) is a structured random matrix generated by the values of $\{\phi_j(x)\}_{j \in \Lambda}$ at the sampling points $\{x_i\}_{i=1}^{m}$, where $\{x_i\}_{i=1}^{m}$ are independently sampled according to the probability measure $\omega(x)$, and that the observation vector is $b = [g(x_1), \dots, g(x_m)]^{\top} \in \mathbb{R}^{m \times 1}$. Then, when
$$m > \frac{8}{3}K^2(p + 2)s^2\ln(n),$$
solving model (3) by OMP can reconstruct the $s$-sparse polynomial $g(x)$ with probability at least $1 - n^{-p}$; here, $p > 0$ is a fixed number.
Proof. 
Based on the assumptions, it is easy to verify that the measurement matrix $\Phi$ satisfies the conditions in Lemma 1, so the mutual coherence $\mu(\Phi)$ satisfies
$$\mu(\Phi) \le 2K\sqrt{\frac{2(p + 2)\ln(n)}{3m}} \qquad (13)$$
with probability at least $1 - n^{-p}$. Then, when the number of sampling points satisfies
$$m > \frac{8}{3}K^2(p + 2)s^2\ln(n),$$
it can be verified that the right-hand side of Equation (13) is less than $1/s$. Therefore, by Theorem 1, solving model (3) by OMP can exactly recover the $s$-sparse coefficient vector $c$ with probability at least $1 - n^{-p}$, and hence reconstruct the $s$-sparse polynomial $g(x)$. □
Remark 5.
From Theorem 2, it is easy to see that if the constant p is taken too large, the recovery guarantee m becomes too large and may even exceed the number of basis functions n, so the sparse polynomial g(x) can no longer be reconstructed from a small number of sampling points. On the other hand, if p is too small, the lower bound on the theoretical success probability becomes too small to be useful. Therefore, to balance the recovery guarantee against the lower bound of the theoretical success probability, the constant p is usually taken as $p \in [0.2, 0.4]$.
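The sampling guarantee of Theorem 2 is straightforward to evaluate. The following sketch is our own illustration (the function name is not from the paper); for the three systems used later in Section 4.1 (with $K = \sqrt{3}, \sqrt{2}, 1$) it reproduces the first row of Table 1:

```python
import math

def omp_sampling_guarantee(K, p, s, n):
    """Smallest integer m satisfying m > (8/3) K^2 (p + 2) s^2 ln(n) (Theorems 2 and 4)."""
    return math.floor(8.0 / 3.0 * K ** 2 * (p + 2) * s ** 2 * math.log(n)) + 1

# s = 5, p = 0.2, n = 5000 for preconditioned Legendre, Chebyshev, trigonometric:
for K in (math.sqrt(3), math.sqrt(2), 1.0):
    print(omp_sampling_guarantee(K, p=0.2, s=5, n=5000))   # 3748, 2499, 1250
```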

3.3. The Recovery Guarantee and the Reconstruction Error for OMP under the Noisy Condition

In this subsection, we discuss the recovery guarantee, the success probability, and the reconstruction error for OMP to reconstruct $s$-sparse polynomials in the case of sampled data with noise. Before that, we first perform a column normalization of the measurement matrix $\Phi$, i.e.,
$$\tilde{\Phi} = \left[\frac{\Phi_1}{\|\Phi_1\|_2}\ \frac{\Phi_2}{\|\Phi_2\|_2}\ \cdots\ \frac{\Phi_n}{\|\Phi_n\|_2}\right]. \qquad (14)$$
Similar to (9), we partition the measurement matrix $\tilde{\Phi}$ as
$$\tilde{\Phi} = \big[\tilde{\Phi}_{\mathrm{opt}}\,|\,\tilde{\Psi}_1, \dots, \tilde{\Psi}_{n-s}\big].$$
By the law of large numbers, when $m \to \infty$, we have $\|\Phi_j\|_2 \to \sqrt{m}$, $j = 1, 2, \dots, n$. Thus, as $m$ increases, $\tilde{\Phi} \approx \frac{1}{\sqrt{m}}\Phi$ and
$$\tilde{g}(x_i) = \sum_{j=1}^{n} c_j \frac{\phi_j(x_i)}{\|\Phi_j\|_2} \approx \frac{1}{\sqrt{m}}\sum_{j=1}^{n} c_j \phi_j(x_i) = \frac{1}{\sqrt{m}}g(x_i), \quad i = 1, 2, \dots, m.$$
Therefore, the corresponding observation vector in the noisy condition is
$$\tilde{b} = \frac{1}{\sqrt{m}}b + \epsilon \approx \tilde{\Phi}c + \epsilon, \qquad (15)$$
where $\epsilon = [\epsilon_1, \epsilon_2, \dots, \epsilon_m]^{\top} \in \mathbb{R}^{m}$ is the noise vector and satisfies $\|\epsilon\|_2 \le \varepsilon$, and $\varepsilon > 0$ is the noise bound. Based on the above analysis, the model (3) can be rewritten as
$$\min_{\tilde{c} \in \mathbb{R}^n} \|\tilde{\Phi}\tilde{c} - \tilde{b}\|_2 \quad \text{s.t.} \quad \|\tilde{c}\|_0 \le s, \qquad (16)$$
where the measurement matrix $\tilde{\Phi} \in \mathbb{R}^{m \times n}$ and the observation vector $\tilde{b} \in \mathbb{R}^{m \times 1}$ are shown in (14) and (15), respectively. Solving model (16) by OMP recovers the sparse coefficient vector $\tilde{c}$ and thereby reconstructs the $s$-sparse polynomial $g(x)$.
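A small sketch of this preprocessing step (our own illustration of (14) and (15), not the authors' code; here the noise direction is drawn as Gaussian and then rescaled to the bound ε, which is one possible choice):

```python
import numpy as np

def normalize_and_perturb(Phi, b, eps, seed=0):
    """Build the column-normalized matrix (14) and the noisy observation vector (15)."""
    rng = np.random.default_rng(seed)
    Phi_tilde = Phi / np.linalg.norm(Phi, axis=0)   # tilde{Phi}_j = Phi_j / ||Phi_j||_2
    noise = rng.standard_normal(b.shape)
    noise *= eps / np.linalg.norm(noise)            # rescale so that ||epsilon||_2 = eps
    b_tilde = b / np.sqrt(len(b)) + noise           # (1/sqrt(m)) b + epsilon
    return Phi_tilde, b_tilde
```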
We next analyze OMP. For this purpose, we introduce some notation. Suppose that OMP has selected $k$ indices located in the support set after $k$ iterations, and let the set formed by these indices be $\Lambda_k$. Since $\Lambda_k \subseteq \Lambda_s$, we denote $\Lambda_{s-k} = \Lambda_s \setminus \Lambda_k$. Let $\tilde{\Phi}_{(k)} \in \mathbb{R}^{m \times k}$ be the submatrix consisting of the columns of the measurement matrix $\tilde{\Phi}$ whose indices lie in $\Lambda_k$. Let
$$\operatorname{Proj}(\tilde{b}) = \tilde{\Phi}_{(k)}\tilde{\Phi}_{(k)}^{\dagger}\cdot\tilde{b}$$
be the projection of the observation $\tilde{b}$ onto the space spanned by the columns of $\tilde{\Phi}_{(k)}$, and denote the corresponding projection operator by $P_k$. Thus, the residual vector $r^k$ of OMP in the $k$th iteration is
$$r^k = \tilde{b} - \operatorname{Proj}(\tilde{b}) = (I - P_k)\tilde{b} = (I - P_k)\tilde{\Phi}c + (I - P_k)\epsilon.$$
For convenience, let $\alpha_k = (I - P_k)\tilde{\Phi}c$ and $\beta_k = (I - P_k)\epsilon$, and introduce the following notation from reference [19]:
$$M = \max_{j \in \Lambda_{n-s}} \|\tilde{\Phi}_{\mathrm{opt}}^{\dagger}\tilde{\Psi}_j\|_1, \quad M_{k,1} = \max_{\lambda \in \Lambda_s} |\tilde{\Phi}_{\lambda}^{\top}\alpha_k|, \quad M_{k,2} = \max_{j \in \Lambda_{n-s}} |\tilde{\Psi}_j^{\top}\alpha_k|, \quad N_k = \max_{j \in \Lambda} |\tilde{\Phi}_j^{\top}\beta_k|.$$
Reference [19] showed that OMP selects an element of the index set $\Lambda_{s-k}$ in the current iteration if
$$M_{k,1} > \frac{2N_k}{1 - M}.$$
Furthermore, according to the ERC condition (7), when $\mu(\tilde{\Phi}) < 1/s$, combining Equation (12) and Proposition 4, we obtain that
$$M_{k,1} > \frac{2N_k}{1 - s\mu(\tilde{\Phi})} \qquad (17)$$
is also a sufficient condition for OMP to select an element of the index set $\Lambda_{s-k}$ in the current iteration. We then give the following theorem.
Theorem 3.
Suppose that the measurement matrix $\tilde{\Phi}$ and the observation vector $\tilde{b}$ are given in (14) and (15), respectively, and that the noise vector $\epsilon$ satisfies $\|\epsilon\|_2 \le \varepsilon$. Then, when $m$ is sufficiently large and $\mu(\tilde{\Phi}) < 1/s$, if the nonzero elements of the $s$-sparse vector $c$ satisfy
$$|c_j| \ge \frac{2\varepsilon}{(1 - s\mu(\tilde{\Phi}))(1 - (s-1)\mu(\tilde{\Phi}))}, \quad j \in \operatorname{supp}(c), \qquad (18)$$
then solving model (16) by OMP with the stopping rule $\|r\|_2 \le \varepsilon$ can accurately find the positions of the nonzero elements of the vector $c$.
Before proving Theorem 3, we first give the following proposition [19] which is important in the proof.
Proposition 6.
If $\mu(\tilde{\Phi}) < \frac{1}{s-1}$, then all the eigenvalues of the matrix $(\tilde{\Phi}_{(s-k)})^{\top}(I - P_k)\tilde{\Phi}_{(s-k)}$ are located in the interval
$$\big[1 - (s-1)\mu(\tilde{\Phi}),\ 1 + (s-1)\mu(\tilde{\Phi})\big],$$
where $\tilde{\Phi}_{(s-k)}$ denotes the submatrix consisting of the columns of the measurement matrix $\tilde{\Phi}$ whose indices lie in $\Lambda_{s-k}$.
With the help of Proposition 6, we give the proof of Theorem 3.
Proof. 
Let $c_{(s-k)}$ be the vector consisting of the elements of the vector $c$ whose indices lie in $\Lambda_{s-k}$. By the definition of $M_{k,1}$, the relationship between the $\ell_\infty$-norm and the $\ell_2$-norm, and the properties of eigenvalues, we have
$$M_{k,1} = \big\|(\tilde{\Phi}_{(s-k)})^{\top}(I - P_k)\tilde{\Phi}_{(s-k)}c_{(s-k)}\big\|_{\infty} \ge (s-k)^{-\frac{1}{2}}\big\|(\tilde{\Phi}_{(s-k)})^{\top}(I - P_k)\tilde{\Phi}_{(s-k)}c_{(s-k)}\big\|_2 \ge (s-k)^{-\frac{1}{2}}\lambda_{\min}\|c_{(s-k)}\|_2,$$
where $\lambda_{\min}$ is the smallest eigenvalue of the matrix $(\tilde{\Phi}_{(s-k)})^{\top}(I - P_k)\tilde{\Phi}_{(s-k)}$. According to Proposition 6, it holds that
$$M_{k,1} \ge (s-k)^{-\frac{1}{2}}\big(1 - (s-1)\mu(\tilde{\Phi})\big)\|c_{(s-k)}\|_2.$$
Combining this with (17), if
$$\|c_{(s-k)}\|_2 > \frac{2\sqrt{s-k}\,N_k}{(1 - s\mu(\tilde{\Phi}))(1 - (s-1)\mu(\tilde{\Phi}))}, \qquad (19)$$
then OMP selects an element of the index set $\Lambda_{s-k}$ in the current iteration.
Furthermore, for any $j \in \Lambda$, according to $\|\epsilon\|_2 \le \varepsilon$ and the Cauchy–Schwarz inequality, we have
$$|\tilde{\Phi}_j^{\top}\beta_k| \le \|\tilde{\Phi}_j\|_2\|\beta_k\|_2 = \|(I - P_k)\epsilon\|_2 \le \|\epsilon\|_2 \le \varepsilon,$$
that is, $N_k \le \varepsilon$. Thus, (19) holds when $|c_j|$, $j \in \operatorname{supp}(c)$, satisfies (18); i.e., OMP selects an index located in $\Lambda_{s-k}$ in the current iteration.
We next consider the stopping rule $\|r^k\|_2 \le \varepsilon$. Here, we show that OMP does not stop when $k < s$ under this rule. Recall that $r^k$ is the residual vector at the $k$th step of OMP; then, from the triangle inequality and the definition of $r^k$, we have
$$\|r^k\|_2 = \|(I - P_k)\tilde{\Phi}c + (I - P_k)\epsilon\|_2 \ge \|(I - P_k)\tilde{\Phi}c\|_2 - \|(I - P_k)\epsilon\|_2 \ge \|(I - P_k)\tilde{\Phi}_{(s-k)}c_{(s-k)}\|_2 - \varepsilon.$$
Furthermore, by Proposition 6 and the assumption of this theorem, it holds that
$$\|(I - P_k)\tilde{\Phi}_{(s-k)}c_{(s-k)}\|_2 \ge \lambda_{\min}\|c_{(s-k)}\|_2 \ge \frac{2\varepsilon}{1 - s\mu(\tilde{\Phi})} > 2\varepsilon.$$
Therefore,
$$\|r^k\|_2 > 2\varepsilon - \varepsilon = \varepsilon,$$
that is, when $k < s$, the $\ell_2$-norm of the residual vector $r^k$ does not satisfy the stopping rule, and OMP does not stop at the current iteration. □
Remark 6.
Compared with the requirement on the column orthogonality of the measurement matrix imposed through the upper bound on the mutual coherence in Proposition 1 and Proposition 2, the requirement in Theorem 1 and Theorem 3 is significantly relaxed, so that more matrices satisfy the requirement and can be used as measurement matrices.
Based on the above theorem, similar to Theorem 2, we can derive the following theorem.
Theorem 4.
Suppose that $g(x) = \sum_{j \in \Lambda,\,|\Lambda| = n} c_j \phi_j(x)$ is an $s$-sparse polynomial, where $\{\phi_j(x)\}_{j \in \Lambda}$ is a uniformly bounded orthonormal system defined on $\Omega$ with probability measure $\omega(x)$ and a uniform upper bound $K > 0$. Let $\tilde{\Phi} \in \mathbb{R}^{m \times n}$ be the measurement matrix with normalized columns, as shown in (14), generated by the values of $\{\phi_j(x)\}_{j \in \Lambda}$ at the sampling points $\{x_i\}_{i=1}^{m}$, where $\{x_i\}_{i=1}^{m}$ are independently sampled according to the probability measure $\omega(x)$. The observation vector is $\tilde{b} = (1/\sqrt{m})[g(x_1), \dots, g(x_m)]^{\top} + \epsilon$, where $\epsilon = [\epsilon_1, \dots, \epsilon_m]^{\top}$ is the noise vector with $\|\epsilon\|_2 \le \varepsilon$. Further, assume that the coefficients of $g(x)$ satisfy
$$|c_j| \ge \frac{2\varepsilon}{(1 - s\mu(\tilde{\Phi}))(1 - (s-1)\mu(\tilde{\Phi}))}, \quad j \in \operatorname{supp}(c). \qquad (20)$$
Then, when
$$m > \frac{8}{3}K^2(p + 2)s^2\ln(n), \qquad (21)$$
solving model (16) by OMP with the stopping rule $\|r\|_2 \le \varepsilon$ can accurately find the positions of the nonzero terms of the sparse polynomial $g(x)$ with probability at least $1 - n^{-p}$, where $p > 0$ is a fixed number. Furthermore, when OMP succeeds, the reconstruction error between the original coefficient vector $c$ and the recovered vector $\tilde{c}$ satisfies
$$\|c - \tilde{c}\|_2 \le C \cdot \varepsilon, \qquad (22)$$
where $C > 0$ is a constant independent of $\varepsilon$.
Proof. 
The proof of Theorem 4 is divided into two parts: first, we give the recovery guarantee and the success probability for OMP to accurately find the positions of nonzero terms of g ( x ) ; second, we estimate the reconstruction error between the original and recovered coefficient vector.
The first part of the proof is similar to that of Theorem 2. Since the column normalization of the matrix does not affect the mutual coherence of the matrix, the estimation of the recovery guarantee and the success probability can be derived directly with the help of Lemma 1 and Theorem 3.
The second part is proved below. Without loss of generality, assume that $\Lambda = \{1, \dots, n\}$ and $\Lambda_s = \operatorname{supp}(c) = \{1, \dots, s\}$, i.e., the first $s$ elements of the original coefficient vector $c$ are its nonzero elements. The analysis in the first part shows that OMP finds exactly all the elements of the set $\Lambda_s$ with probability at least $1 - n^{-p}$ if the assumptions of this theorem hold.
Decompose the original coefficient vector $c$ and the recovered coefficient vector $\tilde{c}$ into
$$c = \begin{bmatrix} c_1 \\ 0 \end{bmatrix}, \qquad \tilde{c} = \begin{bmatrix} \tilde{c}_1 \\ 0 \end{bmatrix},$$
where $c_1$ and $\tilde{c}_1$ denote the first $s$ elements of the vectors $c$ and $\tilde{c}$, respectively. At this point, the nonzero part of the recovered vector is
$$\tilde{c}_1 = \tilde{\Phi}_{\mathrm{opt}}^{\dagger}\tilde{b},$$
where $\tilde{\Phi}_{\mathrm{opt}}^{\dagger}$ denotes the pseudo-inverse of the matrix $\tilde{\Phi}_{\mathrm{opt}}$. Since $\tilde{b} = \tilde{\Phi}c + \epsilon = \tilde{\Phi}_{\mathrm{opt}}c_1 + \epsilon$, we have
$$\tilde{c}_1 = \tilde{\Phi}_{\mathrm{opt}}^{\dagger}\tilde{b} = \tilde{\Phi}_{\mathrm{opt}}^{\dagger}(\tilde{\Phi}_{\mathrm{opt}}c_1 + \epsilon) = c_1 + \tilde{\Phi}_{\mathrm{opt}}^{\dagger}\epsilon.$$
Finally, let $C = \|\tilde{\Phi}_{\mathrm{opt}}^{\dagger}\|_2$; then the reconstruction error of the coefficient vector is
$$\|c - \tilde{c}\|_2 = \|c_1 - \tilde{c}_1\|_2 = \|\tilde{\Phi}_{\mathrm{opt}}^{\dagger}\epsilon\|_2 \le \|\tilde{\Phi}_{\mathrm{opt}}^{\dagger}\|_2 \cdot \|\epsilon\|_2 \le C \cdot \varepsilon.$$
Combining the above two parts, the conclusion of the theorem is proved. □
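The final step of this proof translates directly into a numerical check. The sketch below is our own illustration (the function name is not from the paper); it recomputes the least-squares coefficients on the identified support and compares the actual error with the bound $C \cdot \varepsilon$ used in Section 4.3:

```python
import numpy as np

def error_and_bound(Phi_tilde, support, b_tilde, c_true, eps):
    """Return (||c - c~||_2, C * eps) with C = ||pinv(Phi~_opt)||_2, as in Theorem 4."""
    A = Phi_tilde[:, support]                       # columns on the recovered support
    sol, *_ = np.linalg.lstsq(A, b_tilde, rcond=None)
    c_hat = np.zeros(Phi_tilde.shape[1])
    c_hat[support] = sol                            # c~_1 = pinv(Phi~_opt) b~
    C = np.linalg.norm(np.linalg.pinv(A), 2)        # spectral norm of the pseudo-inverse
    return np.linalg.norm(c_true - c_hat), C * eps
```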
Remark 7.
Equation (22) shows that if (20) and (21) hold, then OMP can recover the coefficient vector $c$ exactly with probability at least $1 - n^{-p}$ when $\varepsilon = 0$.
Remark 8.
Through an analysis similar to that in Remark 5, the constant p is still taken as $p \in [0.2, 0.4]$ when the sampled data contain noise.
Remark 9.
The conclusions of Theorem 2 and Theorem 4 show that, regardless of whether the sampled data contain noise, the recovery guarantee for OMP to reconstruct sparse polynomials generated by uniformly bounded orthonormal systems is consistent with that for OMP to reconstruct sparse trigonometric polynomials.

4. Numerical Experiments

In this section, we first introduce three commonly used uniformly bounded orthonormal systems and then apply OMP to reconstruct sparse polynomials generated by these three systems. The first set of experiments verifies the validity of the recovery guarantees and the lower bounds of the success probability given in Theorem 2 and Theorem 4. The second set verifies the accuracy of the estimate (22) of the reconstruction error between the recovered and original coefficient vectors. The last set shows that even if the coefficient vector has a small perturbation, OMP can still recover it well.

4.1. Commonly Used Uniformly Bounded Orthonormal Systems

In this subsection, we introduce three commonly used uniformly bounded orthonormal systems: preconditioned Legendre polynomial system, Chebyshev polynomial system, and trigonometric polynomial system.
Preconditioned Legendre polynomial system: The standard univariate Legendre polynomials [26] are
$$L_0(x) = 1, \quad L_j(x) = \frac{\sqrt{2j + 1}}{2^j j!}\frac{d^j}{dx^j}\big(x^2 - 1\big)^j, \quad x \in [-1, 1],\ j = 1, 2, \dots.$$
They are orthonormal with respect to the uniform measure $\omega(x) = 1/2$ on $[-1, 1]$, and their $L_{\infty}$-norms are
$$\|L_j(x)\|_{\infty} = |L_j(1)| = |L_j(-1)| = \sqrt{2j + 1}, \quad j = 0, 1, \dots, n.$$
It is obvious that the standard Legendre polynomials are not uniformly bounded on $[-1, 1]$. Therefore, we consider the following function system [13]:
$$Q_j(x) = \sqrt{\frac{\pi}{2}}\big(1 - x^2\big)^{\frac{1}{4}}L_j(x), \quad j = 0, 1, \dots, n.$$
The $\{Q_j(x)\}_{j \in \mathbb{N}_0}$ are orthonormal with respect to the Chebyshev measure $\omega(x) = \pi^{-1}(1 - x^2)^{-1/2}$ on $[-1, 1]$ and have the uniform upper bound $K = \sqrt{3}$. The function system $\{Q_j(x)\}_{j \in \mathbb{N}_0}$ is called the preconditioned Legendre polynomial system.
Chebyshev polynomial system: The standard univariate Chebyshev polynomials are [27]
$$T_0(x) = 1, \quad T_j(x) = \sqrt{2}\cos(j \arccos x), \quad j = 1, 2, \dots, \quad x \in [-1, 1].$$
They are orthonormal with respect to the Chebyshev measure $\omega(x) = \pi^{-1}(1 - x^2)^{-1/2}$ on $[-1, 1]$ and have the uniform upper bound $K = \sqrt{2}$.
Trigonometric polynomial system: The real univariate trigonometric polynomials are [28]
$$F_0(x) = 1, \quad F_n(x) = \cos\Big(\frac{n + 1}{2}x\Big),\ n = 1, 3, 5, \dots, \quad F_n(x) = \sin\Big(\frac{n}{2}x\Big),\ n = 2, 4, 6, \dots, \quad x \in [-\pi, \pi].$$
They are orthonormal with respect to the uniform measure $\omega(x) = 1/\pi$ on $[-\pi, \pi]$ and obviously have the uniform upper bound $K = 1$.
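For reference, the three systems and their sampling measures can be generated numerically as in the following sketch (our own illustration; scipy's eval_legendre is assumed available, and the helper names are ours; the first two systems are sampled from the Chebyshev measure and the trigonometric system from the uniform measure):

```python
import numpy as np
from scipy.special import eval_legendre

def preconditioned_legendre(j, x):
    """Q_j(x) = sqrt(pi/2) (1 - x^2)^{1/4} sqrt(2j + 1) P_j(x)."""
    return np.sqrt(np.pi / 2) * (1 - x**2) ** 0.25 * np.sqrt(2 * j + 1) * eval_legendre(j, x)

def chebyshev(j, x):
    """T_0 = 1, T_j = sqrt(2) cos(j arccos x) for j >= 1."""
    return np.ones_like(x) if j == 0 else np.sqrt(2) * np.cos(j * np.arccos(x))

def trigonometric(j, x):
    """F_0 = 1; F_j = cos(((j + 1)/2) x) for odd j, sin((j/2) x) for even j >= 2."""
    if j == 0:
        return np.ones_like(x)
    return np.cos((j + 1) / 2 * x) if j % 2 == 1 else np.sin(j / 2 * x)

def sample_chebyshev(rng, m):
    """m i.i.d. points from the Chebyshev measure on [-1, 1]."""
    return np.cos(np.pi * rng.random(m))

def sample_uniform(rng, m):
    """m i.i.d. points from the uniform measure on [-pi, pi]."""
    return rng.uniform(-np.pi, np.pi, m)
```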

4.2. The Verification of Recovery Guarantee and Success Probability of OMP Algorithm

In this subsection, for the two cases of sampled data with and without noise, we take the uniformly bounded orthonormal systems in Section 4.1 as examples to verify the validity of the recovery guarantees and the lower bounds of the success probability given by Theorem 2 and Theorem 4, respectively. Here, the sparsity is taken as $s = 5$ and $s = 10$, the parameter as $p = 0.2$, $0.3$, and $0.4$, and the noise bound as $\varepsilon = 10^{-5}$. The main steps of the experiment are as follows (a minimal sketch of one trial is given after the list):
Step 1: Randomly generate an $n$-dimensional $s$-sparse coefficient vector $c \in \mathbb{R}^n$ with support set $\Lambda_s$.
Step 2: Taking the three types of uniformly bounded orthonormal systems from Section 4.1 as examples, randomly select $m$ sampling points $\{x_i\}_{i=1}^{m}$ on the corresponding domain according to the corresponding probability measure.
Step 3: Generate $b_i = g(x_i) = \sum_{j \in \Lambda} c_j \phi_j(x_i)$ or $\tilde{b}_i = (1/\sqrt{m})\,g(x_i) + \epsilon_i$.
Step 4: Use OMP to solve model (3) or (16).
Step 5: Compare the obtained results with the original coefficient vector and polynomial.
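Putting the pieces together, one noiseless trial of Steps 1–5 might look like the following sketch. It is our own illustration and reuses the illustrative helpers omp, preconditioned_legendre/chebyshev/trigonometric, and the samplers introduced above, so all names are assumptions rather than the authors' code:

```python
import numpy as np

def one_trial(basis, sample_omega, n, m, s, rng):
    """One repetition of Steps 1-5 in the noiseless case; returns True on exact recovery."""
    # Step 1: random s-sparse coefficient vector
    c = np.zeros(n)
    support = rng.choice(n, size=s, replace=False)
    c[support] = rng.standard_normal(s)
    # Steps 2-3: sampling points, structured random matrix (5), noiseless data
    x = sample_omega(rng, m)
    Phi = np.column_stack([basis(j, x) for j in range(n)])
    b = Phi @ c
    # Step 4: recover the coefficient vector by OMP (see the sketch after Algorithm 1)
    c_hat = omp(Phi, b, s)
    # Step 5: success if the recovered vector matches the original one
    return np.linalg.norm(c_hat - c) <= 1e-8 * np.linalg.norm(c)
```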
According to Theorem 2 and Theorem 4, we calculate the lower bounds of the number of sampling points m required for the three types of uniformly bounded orthonormal systems with different numbers of basis functions n and parameters p, which are the recovery guarantees for OMP to exactly reconstruct sparse polynomials. The recovery guarantees for the cases of s = 5 and s = 10 are shown in Table 1 and Table 2, respectively.
The first column of Table 1 and Table 2 gives the number of basis functions n, and the remaining columns give the lower bound on the number of sampling points m required for different constants p at sparsity s = 5 and s = 10 for the three types of uniformly bounded orthonormal systems. From Table 1 and Table 2, it is easy to see that the lower bound on the number of sampling points m grows as the number of basis functions n grows, but much more slowly; moreover, for the same n and s, the larger the parameter p, the larger the lower bound on m. In addition, for the same number of basis functions n, sparsity s, and parameter p, the recovery guarantee of the trigonometric polynomial system is the smallest.
Next, for the sparse polynomials generated by the three types of uniformly bounded orthonormal systems mentioned above, we take the numbers of basis functions n and sampling points m given in Table 1 and Table 2, solve model (3) by OMP in 1000 independent repeated experiments, and record the frequency of exact reconstruction as the actual success probability of OMP. For different numbers of basis functions n and parameters p, the theoretical success probability is $1 - n^{-p}$. For the two cases of sampled data with and without noise, Figures 1–6 show the comparison between the theoretical and actual success probability for OMP to exactly reconstruct the three types of sparse polynomials for different numbers of basis functions n, parameters p, and sparsities s.
From Figures 1–6, it is easy to see that the theoretical success probability (red line) of OMP for reconstructing the three types of sparse polynomials with different parameters p gradually increases and tends toward the actual success probability as the number of basis functions n increases, and the actual success probability is higher than the theoretical one in both the noisy and noiseless cases. For the same number of basis functions n, the theoretical success probability also increases as the parameter p increases, which illustrates the correctness and validity of the conclusions of Theorem 2 and Theorem 4. In addition, Figures 1–6 also show that when the sampled data contain noise of magnitude $\varepsilon = 10^{-5}$, the success probability of OMP is comparable to that in the noiseless case, which illustrates the robustness of OMP.

4.3. Verification of the Accuracy of the Reconstruction Error Estimation When Sampled Data Contain Noise

In this subsection, for sampled data containing noise, we verify the accuracy of the reconstruction error estimate for the sparse coefficient vector. The numbers of basis functions n and sampling points m and the noise bound are taken as in Table 1, Table 2, and $\varepsilon = 10^{-5}$, respectively. The experimental procedure is similar to that in Section 4.2, and we again perform 1000 independent repeated experiments using OMP to reconstruct the three types of sparse polynomials. Differently from the experiments in Section 4.2, we randomly generate a noise vector $\epsilon$ with $\|\epsilon\|_2 \le \varepsilon$ and record the values of the constant C and the actual noise bound $\tilde{\varepsilon} = \|\epsilon\|_2$ when OMP succeeds. For the two cases of sparsity s = 5 and s = 10, the average reconstruction errors and their upper bounds when OMP succeeds are shown in Table 3 and Table 4 for the parameters p = 0.2, 0.3, and 0.4.
The 'upper bound' columns of Table 3 and Table 4 give the estimated upper bound of the reconstruction error, i.e., $C \cdot \tilde{\varepsilon}$, and the 'error' columns give the average reconstruction error when OMP succeeds, i.e., $\|c - \tilde{c}\|_2$. From Table 3 and Table 4, it is easy to see that for the three types of sparse polynomials, with different numbers of basis functions n and parameters p, all the upper bounds on the reconstruction error are larger than the average reconstruction errors, which confirms the conclusion of Theorem 4.

4.4. Verification of the Accuracy of OMP to Recover Coefficient Vectors When Sampled Data Contain Noise

The case of sampled data $g(x_i)$, $i = 1, 2, \dots, m$, with noise can be regarded as the case in which the coefficient vector contains noise, which can be expressed as
$$\tilde{g}(x_i) = \sum_{j=1}^{n} c_j\frac{\phi_j(x_i)}{\|\Phi_j\|_2} + \epsilon_i = \sum_{j=1}^{n} \hat{c}_j\frac{\phi_j(x_i)}{\|\Phi_j\|_2}, \quad i = 1, 2, \dots, m.$$
This can be written in matrix-vector form as
$$\hat{b} = \tilde{\Phi}\hat{c},$$
where $\tilde{\Phi}$ and $\hat{b} = \tilde{b}$ are given as in (14) and (15), respectively, and the vector $\hat{c} = [\hat{c}_1, \dots, \hat{c}_n]^{\top} \in \mathbb{R}^{n \times 1}$. At this point, only the s main terms of the vector $\hat{c}$ have large absolute values, and the absolute values of the remaining terms are small. Therefore, in this subsection, for the case of a coefficient vector with noise, we apply OMP to recover the noisy coefficient vector $\hat{c}$ and then verify the recovery effect of OMP. In this experiment, the noise bound is still taken as $\varepsilon = 10^{-5}$, and the numbers of basis functions n and sampling points m are taken as in Table 1 and Table 2 for s = 5 and s = 10, respectively. We solve model (16) by OMP in 1000 independent repeated experiments and record the actual error $\|\hat{c} - \tilde{c}\|_2$ when OMP succeeds. The average actual errors for s = 5 and s = 10 are shown in Table 5 and Table 6, respectively.
The first column of Table 5 and Table 6 indicates the number of basis functions, the second column indicates the value of the parameter p, and the remaining columns give the average actual error when OMP succeeds. From Table 5 and Table 6, it is easy to see that when OMP succeeds, the actual recovery error is smaller than the noise bound $\varepsilon$, which indicates the accuracy of OMP in recovering the noisy coefficient vector and also shows the noise resistance of OMP.

5. Conclusions

The main work of this paper is to give sufficient conditions for OMP to reconstruct sparse polynomials generated by uniformly bounded orthonormal systems in both cases of sampled data with and without noise, and to give the reconstruction error of the sparse coefficient vector when the sampled data contain noise. The work in this paper can be regarded as a generalization of the study of OMP for reconstructing sparse trigonometric polynomials in [21].
For the structured random matrix generated by a uniformly bounded orthonormal system and random sampling points, a more accurate estimate of the upper bound of its mutual coherence is given first. Then, for both cases of sampled data with and without noise, when the measurement matrix is such a structured random matrix, more relaxed sufficient conditions for OMP to recover sparse coefficient vectors with high probability are given in terms of the mutual coherence of the measurement matrix, which further relax the condition $\mu(\Phi) < 1/(2s - 1)$ given in [9,19]. Meanwhile, the requirement on the sparse coefficient vector for OMP to exactly find the positions of the nonzero elements is given when the sampled data contain noise. Finally, combining the results of the above two parts, it is proved that, regardless of whether the sampled data contain noise, when the number of sampling points satisfies $m \gtrsim s^2\ln n$, OMP can reconstruct general sparse polynomials with probability at least $1 - n^{-p}$. Furthermore, with a simple calculation, it is shown that the reconstruction error between the original and recovered coefficient vectors can be controlled by the noise bound. In addition, the research methods and conclusions of this paper can be extended to the study of multivariate sparse polynomial reconstruction problems.
However, it is easy to see from the experiments in Section 4.2 that the actual success probability of OMP is higher than the theoretical success probability, which indicates that the recovery guarantee given in this paper is not optimal. Perhaps more refined probabilistic tools can be used to give a more accurate upper bound estimate for the mutual coherence of the measurement matrix, which in turn would further improve the recovery guarantee and the success probability for OMP.

Author Contributions

Conceptualization, R.F. and A.H.; methodology, R.F. and A.H.; software, A.H. and A.W.; validation, A.H. and A.W.; formal analysis, A.H.; investigation, A.H.; resources, R.F., A.H., and A.W.; data curation, A.H. and A.W.; writing—original draft preparation, A.H.; writing—review and editing, R.F. and A.W.; visualization, A.H.; supervision, R.F.; project administration, R.F.; funding acquisition, R.F. All authors have read and agreed to the published version of the manuscript.

Funding

The work is supported by the National Natural Science Foundation of China (Grant No. 12071019).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Foucart, S.; Lai, M.-J. Sparsest solutions of underdetermined linear systems via ℓq-minimization for 0 < q ≤ 1. Appl. Comput. Harmon. Anal. 2009, 26, 395–407.
  2. Rauhut, H.; Ward, R. Sparse Legendre expansions via ℓ1-minimization. J. Approx. Theory 2012, 164, 517–533.
  3. Rauhut, H.; Ward, R. Interpolation via weighted ℓ1 minimization. Appl. Comput. Harmon. Anal. 2016, 40, 321–351.
  4. Xu, Z. Deterministic sampling of sparse trigonometric polynomials. J. Complex. 2011, 27, 133–140.
  5. Xu, Z.; Zhou, T. On sparse interpolation and the design of deterministic interpolation points. SIAM J. Sci. Comput. 2014, 36, 1752–1769.
  6. Ho, L.; Schaeffer, H.; Tran, G.; Ward, R. Recovery guarantees for polynomial coefficients from weakly dependent data with outliers. J. Approx. Theory 2020, 259, 105472.
  7. Candes, E.J.; Romberg, J.; Tao, T. Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information. IEEE Trans. Inf. Theory 2006, 52, 489–509.
  8. Xu, Z. Compressed sensing: A survey. Sci. Sin. Math. 2012, 42, 865–877. (In Chinese)
  9. Tropp, J.A. Greed is good: Algorithmic results for sparse approximation. IEEE Trans. Inf. Theory 2004, 50, 2231–2242.
  10. Feng, R.; Huang, A.; Lai, M.-J.; Shen, Z. Reconstruction of sparse polynomials via quasi-orthogonal matching pursuit method. J. Comput. Math. 2021.
  11. Huang, A.; Feng, R.; Zheng, S. Recovery guarantee for orthogonal matching pursuit method to reconstruct sparse polynomials. Numer. Math. Theor. Meth. Appl. 2022, 15, 793–818.
  12. Lin, J.; Li, S. Nonuniform support recovery from noisy random measurements by orthogonal matching pursuit. J. Approx. Theory 2013, 165, 20–40.
  13. Fornasier, M. Theoretical Foundations and Numerical Methods for Sparse Recovery; De Gruyter: Berlin, Germany, 2010.
  14. Mo, Q.; Shen, Y. A remark on the restricted isometry property in orthogonal matching pursuit. IEEE Trans. Inf. Theory 2012, 58, 3654–3656.
  15. Wang, J.; Shim, B. On the recovery limit of sparse signals using orthogonal matching pursuit. IEEE Trans. Signal Process. 2012, 60, 4973–4976.
  16. Mo, Q. A sharp restricted isometry constant bound of orthogonal matching pursuit. arXiv 2015, arXiv:1501.01708.
  17. Wen, J.; Zhou, Z.; Wang, J.; Tang, X.; Mo, Q. A sharp condition for exact support recovery with orthogonal matching pursuit. IEEE Trans. Signal Process. 2017, 65, 1370–1382.
  18. Tropp, J.A.; Gilbert, A.C. Signal recovery from random measurements via orthogonal matching pursuit. IEEE Trans. Inf. Theory 2007, 53, 4655–4666.
  19. Cai, T.T.; Wang, L. Orthogonal matching pursuit for sparse signal recovery with noise. IEEE Trans. Inf. Theory 2011, 57, 4680–4688.
  20. Cai, T.T.; Wang, L.; Xu, G. Stable recovery of sparse signals and an oracle inequality. IEEE Trans. Inf. Theory 2010, 56, 3516–3522.
  21. Kunis, S.; Rauhut, H. Random sampling of sparse trigonometric polynomials II: Orthogonal matching pursuit versus basis pursuit. Found. Comput. Math. 2008, 6, 737–763.
  22. Rauhut, H. Random sampling of sparse trigonometric polynomials. Appl. Comput. Harmon. Anal. 2007, 22, 16–42.
  23. Xu, Z.; Zhou, T. A gradient-enhanced ℓ1 approach for the recovery of sparse trigonometric polynomials. Commun. Comput. Phys. 2018, 24, 286–308.
  24. Davis, G.; Mallat, S.; Avellaneda, M. Adaptive greedy approximations. Constr. Approx. 1997, 13, 57–98.
  25. Foucart, S.; Rauhut, H. A Mathematical Introduction to Compressive Sensing; Springer: New York, NY, USA, 2013.
  26. Dunkl, C.F.; Xu, Y. Orthogonal Polynomials of Several Variables; Cambridge University Press: Cambridge, UK, 2001.
  27. Rivlin, T.J. The Chebyshev Polynomials; Dover: Mineola, NY, USA, 1974.
  28. Zygmund, A. Trigonometric Series; Cambridge University Press: Cambridge, UK, 1959.
Figure 1. Comparison between the theoretical and actual success probability of OMP to reconstruct three types of sparse polynomials when the sampled data are noiseless and noisy, s = 5, p = 0.2.
Figure 2. Comparison between the theoretical and actual success probability of OMP to reconstruct three types of sparse polynomials when the sampled data are noiseless and noisy, s = 5, p = 0.3.
Figure 3. Comparison between the theoretical and actual success probability of OMP to reconstruct three types of sparse polynomials when the sampled data are noiseless and noisy, s = 5, p = 0.4.
Figure 4. Comparison between the theoretical and actual success probability of OMP to reconstruct three types of sparse polynomials when the sampled data are noiseless and noisy, s = 10, p = 0.2.
Figure 5. Comparison between the theoretical and actual success probability of OMP to reconstruct three types of sparse polynomials when the sampled data are noiseless and noisy, s = 10, p = 0.3.
Figure 6. Comparison between the theoretical and actual success probability of OMP to reconstruct three types of sparse polynomials when the sampled data are noiseless and noisy, s = 10, p = 0.4.
Table 1. The lower bound on the number of sampling points m for the three types of uniformly bounded orthonormal systems when the sparsity s = 5.

n        Preconditioned Legendre    Chebyshev                  Trigonometric
         p=0.2   p=0.3   p=0.4      p=0.2   p=0.3   p=0.4      p=0.2   p=0.3   p=0.4
5000     3748    3918    4089       2499    2612    2726       1250    1306    1363
10,000   4053    4237    4421       2702    2825    2948       1351    1413    1474
15,000   4231    4424    4616       2821    2949    3078       1411    1475    1539
20,000   4358    4556    4754       2906    3038    3170       1453    1519    1585
25,000   4456    4659    4861       2971    3106    3241       1486    1553    1621
30,000   4536    4743    4949       3024    3162    3299       1512    1581    1650
35,000   4604    4814    5023       3070    3209    3349       1535    1605    1675
40,000   4663    4875    5087       3109    3250    3391       1555    1625    1696
45,000   4715    4929    5143       3143    3286    3429       1572    1643    1715
50,000   4761    4978    5194       3174    3319    3463       1587    1660    1731
Table 2. The lower bound on the number of sampling points m for the three types of uniformly bounded orthonormal systems when the sparsity s = 10.

n        Preconditioned Legendre       Chebyshev                     Trigonometric
         p=0.2    p=0.3    p=0.4       p=0.2    p=0.3    p=0.4       p=0.2   p=0.3   p=0.4
20,000   17,431   18,223   19,015      11,621   12,149   12,677      5811    6075    6339
25,000   17,823   18,634   19,444      11,882   12,423   12,963      5941    6212    6482
30,000   18,144   18,969   19,794      12,096   12,646   13,196      6048    6323    6589
35,000   18,416   19,253   20,090      12,277   12,835   13,393      6139    6418    6697
40,000   18,615   19,498   20,346      12,434   12,999   13,564      6217    6500    6782
45,000   18,858   19,715   20,572      12,572   13,144   13,715      6286    6572    6858
50,000   19,043   19,909   20,774      12,696   13,273   13,850      6348    6637    6925
Table 3. The actual reconstruction error and the upper bound of the reconstruction error for OMP to recover the coefficient vectors of the three types of sparse polynomials, s = 5.

n       p    Preconditioned Legendre      Chebyshev                    Trigonometric
             Upper bound    Error         Upper bound    Error         Upper bound    Error
20,000  0.2  5.9420e-6      1.8612e-7     5.9226e-6      2.3162e-7     5.9950e-6      3.3156e-7
        0.3  5.9346e-6      1.8189e-7     5.9223e-6      2.2176e-7     5.9860e-6      3.2614e-7
        0.4  5.9347e-6      1.7854e-7     5.9195e-6      2.1832e-7     5.9863e-6      3.1439e-7
40,000  0.2  5.9415e-6      1.7713e-7     5.9179e-6      2.2273e-7     5.9893e-6      3.2066e-7
        0.3  5.9288e-6      1.7402e-7     5.9160e-6      2.1478e-7     5.9808e-6      3.1279e-7
        0.4  5.9308e-6      1.7416e-7     5.9150e-6      2.1179e-7     5.9810e-6      3.0817e-7
50,000  0.2  5.9307e-6      1.7897e-7     5.9201e-6      2.2058e-7     5.9869e-6      3.1696e-7
        0.3  5.9301e-6      1.7361e-7     5.9149e-6      2.1592e-7     5.9777e-6      3.0936e-7
        0.4  5.9258e-6      1.7054e-7     5.9106e-6      2.0830e-7     5.9787e-6      3.0729e-7
Table 4. The actual reconstruction error and the upper bound of the reconstruction error for OMP to recover the coefficient vectors of the three types of sparse polynomials, s = 10.

n       p    Preconditioned Legendre      Chebyshev                    Trigonometric
             Upper bound    Error         Upper bound    Error         Upper bound    Error
20,000  0.2  5.9530e-6      1.3369e-7     5.8490e-6      1.1562e-7     5.9638e-6      2.2914e-7
        0.3  5.9462e-6      1.3542e-7     5.8489e-6      1.1564e-7     5.9590e-6      2.3235e-7
        0.4  5.9176e-6      1.2626e-7     5.8494e-6      1.0587e-7     5.9622e-6      2.2498e-7
40,000  0.2  5.9412e-6      1.2278e-7     5.8480e-6      1.0481e-7     5.9567e-6      2.2734e-7
        0.3  5.9063e-6      1.2800e-7     5.8443e-6      1.1113e-7     5.9534e-6      2.1706e-7
        0.4  5.9037e-6      1.2962e-7     5.8419e-6      9.9783e-8     5.9500e-6      2.1759e-7
50,000  0.2  5.9082e-6      1.3328e-7     5.8454e-6      1.0770e-7     5.9634e-6      2.2618e-7
        0.3  5.9097e-6      1.2813e-7     5.8436e-6      1.0537e-7     5.9548e-6      2.1822e-7
        0.4  5.9085e-6      1.3133e-7     5.8403e-6      1.0581e-7     5.9489e-6      2.1657e-7
Table 5. The average actual error for OMP to recover the coefficients of the three types of sparse polynomials, s = 5.

n       p    Preconditioned Legendre   Chebyshev     Trigonometric
20,000  0.2  5.7726e-6                 5.7793e-6     5.7755e-6
        0.3  5.7735e-6                 5.7718e-6     5.7818e-6
        0.4  5.7751e-6                 5.7731e-6     5.7799e-6
40,000  0.2  5.7864e-6                 5.7733e-6     5.7849e-6
        0.3  5.7796e-6                 5.7784e-6     5.7782e-6
        0.4  5.7814e-6                 5.7792e-6     5.7792e-6
50,000  0.2  5.7791e-6                 5.7772e-6     5.7792e-6
        0.3  5.7802e-6                 5.7801e-6     5.7798e-6
        0.4  5.7773e-6                 5.7774e-6     5.7840e-6
Table 6. The average actual error for OMP to recover the coefficients of the three types of sparse polynomials, s = 10.

n       p    Preconditioned Legendre   Chebyshev     Trigonometric
20,000  0.2  5.7782e-6                 5.7756e-6     5.7753e-6
        0.3  5.7681e-6                 5.7793e-6     5.7703e-6
        0.4  5.7734e-6                 5.7900e-6     5.7753e-6
40,000  0.2  5.7788e-6                 5.7762e-6     5.7760e-6
        0.3  5.7771e-6                 5.7810e-6     5.7816e-6
        0.4  5.7781e-6                 5.7659e-6     5.7760e-6
50,000  0.2  5.7765e-6                 5.7721e-6     5.7711e-6
        0.3  5.7734e-6                 5.7763e-6     5.7856e-6
        0.4  5.7765e-6                 5.7808e-6     5.7711e-6
