Article

Precise Channel Estimation Approach for a mmWave MIMO System

by
Prateek Saurabh Srivastav
1,2,
Lan Chen
1,* and
Arfan Haider Wahla
1,2
1
Institute of Microelectronics of Chinese Academy of Sciences, Beijing 100029, China
2
School of Electronic, Electrical and Communication, University of Chinese Academy of Sciences, Beijing 100049, China
*
Author to whom correspondence should be addressed.
Appl. Sci. 2020, 10(12), 4397; https://doi.org/10.3390/app10124397
Submission received: 7 May 2020 / Revised: 18 June 2020 / Accepted: 24 June 2020 / Published: 26 June 2020
(This article belongs to the Section Electrical, Electronics and Communications Engineering)

Abstract

Channel estimation is a formidable challenge in mmWave Multiple Input Multiple Output (MIMO) systems due to the large number of antennas. Therefore, compressed sensing (CS) techniques are used to exploit channel sparsity at mmWave frequencies, where only a few dominant paths remain in the channel. However, conventional CS techniques require a high training overhead for efficient recovery. In this paper, an efficient extended alternating direction method of multipliers (Ex-ADMM) is proposed for mmWave channel estimation. In the proposed scheme, a joint optimization problem is formulated to exploit the low rank and the sparsity of the channel individually in the antenna domain. Moreover, a relaxation factor is introduced, which improves the proposed algorithm's convergence. Simulation experiments illustrate that the proposed algorithm converges at a lower Normalized Mean Squared Error (NMSE) with improved spectral efficiency. The proposed algorithm also improves NMSE performance at low, mid and high Signal to Noise Ratio (SNR) ranges.

1. Introduction

In accordance with recent research trends, millimeter-wave (mmWave) communication has been found to be a potential candidate for next-generation Wireless Local Area Network (WLAN) and 5G cellular systems [1]. mmWave bands offer a large amount of unused bandwidth, which can provide higher data rates, higher spectral efficiency, a substantial rise in channel capacity, and support for connecting millions of devices with high reliability and low latency [2,3,4,5,6,7]. However, the poor scattering nature of the mmWave channel causes path loss problems; as a result, fewer dominant paths remain in the mmWave channel [8,9,10]. Therefore, the channel matrix can be reconstructed by exploiting information from the remaining paths. In order to reconstruct the mmWave channel, the few dominant paths remaining in the channel matrix have to be estimated at the receiver's end. Conventional channel estimators, such as least-square (LS) estimators and maximum likelihood (ML) estimators, require a high training overhead, which is not feasible for any hybrid mmWave MIMO system. Considering the hardware architecture complexities of mmWave MIMO systems, channel estimation becomes a difficult task. In the literature, two types of approaches are used for mmWave channel estimation. In the first approach, the mmWave channel is estimated by exploiting the sparsity of the channel matrix in the virtual beamspace domain, whereas, in the second approach, the estimation is performed by exploiting the low-rank properties of the channel matrix in the antenna domain. In [11,12,13,14], a compressive sensing (CS)-based channel estimation approach was used for a mmWave MIMO system. The basic idea behind this approach is that the estimator searches for angle pairs in a predefined dictionary matrix, which further depends upon the training information. Such estimators exploit sparsity in a predefined dictionary matrix.
For high resolution, another asymmetric approach was proposed in [15]. Under this method, the atomic norm is used to formulate the angle of arrival (AoA)/angle of departure (AoD) finding problem. A beam codebook design-based approach was proposed in [16], in which a codebook is designed based on the information in static dictionaries. All the aforementioned techniques are CS-based approaches. For the case of high-dimensional training information, CS-based estimators suffer from a high computational load, which can further reduce the spectral efficiency. A matrix completion (MC)-based approach was proposed in [17,18], in which the estimator exploits the sparsity and low-rank properties of mmWave channels through two independent procedures.
In this paper, an extended alternating direction method of multipliers (Ex-ADMM) mmWave massive MIMO channel estimation technique is proposed for more accurate estimation. mmWave exhibits some important and peculiar effects over the wireless channel, i.e., the channel experiences an unspecified amount of spread over the angular domain due to its poor scattering nature [19]. As a result, a jointly sparse and low-rank channel matrix in the angular domain can be obtained. This is the main motivation behind the formulation of a joint optimization problem for efficient recovery of the channel matrix proposed in this paper. ADMM, recently proposed in [20], has received attention for its easy implementation. Fundamentally, ADMM can exploit both the sparsity and low-rank properties of any data matrix. Researchers have found that, in many real-world applications, additional information about a data matrix (known as side information) helps complete the matrix from few entries and obtain more accurate estimates. Therefore, the use of side information for matrix completion and factorization has been introduced in various research areas, like statistical signal processing, image processing, statistical learning, computer vision and so on [21,22,23,24]. Side information is very useful for describing the row and column entries of any matrix, and it has been further shown to reduce the complexity of completing a matrix [25,26]. In this work, side information theory is used to obtain the optimal solution for a joint optimization problem. The main contributions of this work are as follows:
  • A joint optimization estimation problem for a mmWave massive MIMO system based on the Ex-ADMM algorithm is formulated. With the help of side information in matrix completion theory, a training procedure compatible with the hybrid beamforming (HBF) structure, leveraging the low rank and sparsity in angular domains, is designed. In Ex-ADMM, the joint optimization problem is further divided into several subproblems, which are then solved individually.
  • An Ex-ADMM is proposed for better estimation of the channel matrix. This algorithm is originally derived from [18,26,27,28], and it is more efficient than the other iterative algorithms described in the results section. In the proposed Ex-ADMM algorithm, the nuclear norm is used for the low-rank approximation of the channel matrix, and the $\ell_1$-norm is used to enforce sparsity. In addition, a relaxation factor is introduced to enhance the system performance.
Simulation results demonstrate the performance of the proposed Ex-ADMM algorithm, along with orthogonal matching pursuit (OMP) [16], two-stage sparse representation (TSSR) [29], vector approximate message passing (VAMP) [30] and the alternating direction method of multipliers (ADMM) [18] in terms of normalized mean squared error (NMSE), achievable spectral efficiency (ASE) and convergence.
This paper is organized as follows. The mmWave channel model is presented in Section 2. In Section 3, a mmWave channel estimation problem based on Ex-ADMM is formulated. Section 4 provides the training procedure details for the proposed Ex-ADMM. A detailed description of the proposed algorithm is provided in Section 5. Section 6 illustrates the computational complexity of the proposed algorithm with respect to OMP, TSSR, VAMP and ADMM. Simulation results and discussion, describing the superiority of the proposed algorithm over the benchmark algorithms in terms of NMSE, ASE and convergence, are provided in Section 7. Finally, the conclusion is presented in Section 8.
Notation: We use the following notation throughout the paper. $\alpha$, $\mathbf{a}$ and $\mathbf{A}$ denote a scalar, a vector and a matrix, respectively. $\mathbf{A}^T$, $\mathbf{A}^H$ and $\mathbf{A}^*$ indicate $\mathbf{A}$'s transpose, conjugate transpose and conjugate, respectively. $\|\cdot\|_F$, $\|\cdot\|_*$ and $\|\cdot\|_1$ indicate the Frobenius norm, nuclear norm (sum of all singular values) and $\ell_1$-norm, respectively. The operators $\circ$ and $\otimes$ represent the matrix Hadamard and Kronecker products. $\mathrm{vec}(\cdot)$ and $\mathrm{unvec}(\cdot)$ represent the concatenation of the columns of a matrix into a vector and the reverse operation, respectively. The expected value is represented by $E\{\cdot\}$. $\mathbf{A}\in\{0,1\}^{M\times N}$ indicates that $\mathbf{A}$'s elements are drawn independently with equal probability from the binary set $\{0,1\}$.
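For readers who wish to experiment numerically, the notation above maps directly onto standard NumPy operations. The following sketch is illustrative only (small hypothetical matrices, not part of the paper) and shows the correspondence.

```python
import numpy as np

# Illustrative mapping of the paper's notation to NumPy (assumed toy matrix).
A = np.array([[1 + 2j, 3 + 0j],
              [4 + 0j, 5 - 1j]])

A_T = A.T                       # transpose A^T
A_H = A.conj().T                # conjugate transpose A^H
A_conj = A.conj()               # conjugate A^*

fro = np.linalg.norm(A, 'fro')  # Frobenius norm ||A||_F
nuc = np.linalg.norm(A, 'nuc')  # nuclear norm ||A||_* (sum of singular values)
l1 = np.abs(A).sum()            # entrywise l1 norm ||A||_1

B = np.ones((2, 2))
had = A * B                     # Hadamard (elementwise) product
kron = np.kron(A, B)            # Kronecker product

a = A.reshape(-1, order='F')            # vec(A): stack the columns
A_back = a.reshape(2, 2, order='F')     # unvec: the inverse operation
assert np.allclose(A_back, A)
```

The `order='F'` argument makes `reshape` stack and unstack columns, matching the column-wise definition of vec(·) used in Section 4.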

2. mmWave Channel Model

Let us consider an $N_R \times N_T$ mmWave massive MIMO system based on the recent models described in [5,6,7]. This system is equipped with $N_T$ transmitters at the base station (BS), $N_R$ receivers at the mobile station (MS), $N_S$ parallel data streams, and radio frequency (RF) chains such that $N_{RF} \le \min(N_T, N_R)$ at the transmitter and receiver. At the transmitter, $N_{RF}$ chains are present, such that $N_S < N_{RF} < N_T$. The mmWave MIMO system shown in Figure 1 is a combination of two consecutive joint segments: a digital MIMO baseband precoder $\mathbf{F}_{BB} \in \mathbb{C}^{N_{RF}\times N_S}$ and an analog RF precoder $\mathbf{F}_{RF} \in \mathbb{C}^{N_T\times N_{RF}}$. Similarly, at the receiver, the signal is processed by the consecutive joint segments of the RF combiner $\mathbf{W}_{RF} \in \mathbb{C}^{N_R\times N_{RF}}$ and the baseband combiner $\mathbf{W}_{BB} \in \mathbb{C}^{N_{RF}\times N_S}$, respectively. In order to obtain the closest estimate, the transmitter employs $N_T^{Beam} \le N_T$ pilot beam patterns, denoted $\{\mathbf{f}_a \in \mathbb{C}^{N_T\times 1} : \|\mathbf{f}_a\|_2^2 = 1\}$. Meanwhile, at the receiver end, the receiver employs $N_R^{Beam} \le N_R$ pilot beam patterns, denoted $\{\mathbf{w}_b \in \mathbb{C}^{N_R\times 1} : \|\mathbf{w}_b\|_2^2 = 1\}$ [16], where $a$ and $b$ index the transmitter's training precoding vectors and the receiver's training combining vectors, respectively. To enable communication, analog HBF [11] is used, which allows the transmitter to apply a baseband precoder by steering the beam to the receiver. The $b$-th received vector for the $a$-th transmitted beam is determined by
$y_{b,a} = \mathbf{w}_b^H \mathbf{H} \mathbf{f}_a x_a + \mathbf{w}_b^H \tilde{\mathbf{v}}$ (1)
Here, $x_a$ is the pilot symbol emitted from the transmitter, $\tilde{\mathbf{v}}$ is the observation noise with zero mean and variance $\sigma_{\tilde{v}}^2 \mathbf{K}_{N_R}$, i.e., $\mathcal{CN}(0, \sigma_{\tilde{v}}^2 \mathbf{K}_{N_R})$, and $\mathbf{H} \in \mathbb{C}^{N_R\times N_T}$ is the channel matrix of the mmWave MIMO system. The received vector $\mathbf{y}$ in Equation (1), expressed as $\mathbf{y} \triangleq [y_{1,b} \dots y_{N_T^{Beam},b}]^T \in \mathbb{C}^{N_R^{Beam}\times 1}$, can be written in generalized form as follows:
$\mathbf{y} = \mathbf{w}^H \mathbf{H} \mathbf{f} x + \mathbf{q}$ (2)
Here, the combiner noise vector $\mathbf{q} = \mathbf{w}_b^H \tilde{\mathbf{v}} \in \mathbb{C}^{N_R^{Beam}\times 1}$.
Therefore, the received signal matrix Y can be expressed as
$\mathbf{Y} = \mathbf{W}^H \mathbf{H} \mathbf{F} \mathbf{X} + \mathbf{Q}$ (3)
Here, the received signal matrix $\mathbf{Y} \triangleq [\mathbf{y}_1,\dots,\mathbf{y}_{N_T^{Beam}}] \in \mathbb{C}^{N_R^{Beam}\times N_T^{Beam}}$, the complex combining matrix $\mathbf{W} \triangleq [\mathbf{w}_1,\dots,\mathbf{w}_{N_R^{Beam}}] \in \mathbb{C}^{N_R\times N_R^{Beam}}$, and the precoding matrix $\mathbf{F} \triangleq [\mathbf{f}_1,\dots,\mathbf{f}_{N_T^{Beam}}] \in \mathbb{C}^{N_T\times N_T^{Beam}}$. $\mathbf{X} \in \mathbb{C}^{N_T^{Beam}\times N_T^{Beam}}$ is the pilot symbol matrix, and $\mathbf{Q} \in \mathbb{C}^{N_R^{Beam}\times N_T^{Beam}}$ is independent and identically distributed (i.i.d.) complex additive white Gaussian noise (AWGN) with zero mean and variance $\sigma_q^2$, i.e., $\mathcal{CN}(0,\sigma_q^2)$. In this paper, we assume the pilot symbols are identical, so $\mathbf{X} = \sqrt{P_t}\, \mathbf{K}_{N_T^{Beam}}$, where $P_t$ is the average transmitted pilot power [12,16].
According to the hybrid mmWave MIMO structure, the precoding and combining matrices are decomposed at the transmitter and receiver ends, such that $\mathbf{F} = \mathbf{F}_{RF}\mathbf{F}_{BB}$ and $\mathbf{W} = \mathbf{W}_{RF}\mathbf{W}_{BB}$, respectively.
Hence, Equation (3) for the received signal matrix $\mathbf{Y}$ can be re-written as
$\mathbf{Y} \approx \sqrt{P_t}\, \mathbf{W}_{BB}^H \mathbf{W}_{RF}^H \mathbf{H} \mathbf{F}_{RF} \mathbf{F}_{BB} + \mathbf{Q} \;\Rightarrow\; \mathbf{Y} \approx \sqrt{P_t}\, \mathbf{W}^H \mathbf{H} \mathbf{F} + \mathbf{Q}$ (4)
Here, $\mathbf{F}_{RF} \in \mathbb{C}^{N_T\times N_T}$ and $\mathbf{W}_{RF} \in \mathbb{C}^{N_R\times N_R}$ are the transmit and receive beamforming matrices, respectively. $\mathbf{F}_{BB} \in \mathbb{C}^{N_T\times N_T^{Beam}}$ and $\mathbf{W}_{BB} \in \mathbb{C}^{N_R\times N_R^{Beam}}$ are the transmit and receive baseband processing matrices, respectively. $\mathbf{W}$ is the combiner, such that $\mathbf{W} \in \{0,1\}^{N_R}$, and $\mathbf{F}$ is the precoder, such that $\mathbf{F} \in \{0,1\}^{N_T}$.
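The training model of Equations (3) and (4) can be sketched numerically as follows. This is an illustrative NumPy sketch, not the paper's implementation: the sizes ($N_T = N_R = 16$, $N_T^{Beam} = N_R^{Beam} = 8$) and the random unit-norm beams are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
N_T, N_R = 16, 16        # transmit / receive antennas (assumed sizes)
N_Tb, N_Rb = 8, 8        # numbers of pilot beam patterns (assumed)
P_t = 1.0                # average transmitted pilot power

# Random channel placeholder (the geometric model of Eq. (5) would go here).
H = (rng.standard_normal((N_R, N_T)) + 1j * rng.standard_normal((N_R, N_T))) / np.sqrt(2)

# Unit-norm training precoding and combining vectors stacked into F and W.
F = rng.standard_normal((N_T, N_Tb)) + 1j * rng.standard_normal((N_T, N_Tb))
F /= np.linalg.norm(F, axis=0)
W = rng.standard_normal((N_R, N_Rb)) + 1j * rng.standard_normal((N_R, N_Rb))
W /= np.linalg.norm(W, axis=0)

# i.i.d. complex AWGN Q with variance sigma_q^2.
sigma_q = 0.01
Q = sigma_q * (rng.standard_normal((N_Rb, N_Tb))
               + 1j * rng.standard_normal((N_Rb, N_Tb))) / np.sqrt(2)

# Equation (4): Y ~ sqrt(P_t) W^H H F + Q
Y = np.sqrt(P_t) * W.conj().T @ H @ F + Q
```

Each column of `Y` collects the $N_R^{Beam}$ combined measurements for one transmitted pilot beam, matching the per-beam measurement of Equation (1).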
We adopted the geometric virtual (GV) model of the mmWave MIMO system described in [12,17]. According to the GV model, the channel $\mathbf{H}$ is described as
$\mathbf{H} \triangleq \sum_{l=1}^{L_p} \alpha_l\, \mathbf{a}_R(\Phi_R^{(l)},\theta_R^{(l)})\, \mathbf{a}_T^H(\Phi_T^{(l)},\theta_T^{(l)})$ (5)
Here, $L_p$ is the total number of propagation paths and $\alpha_l$ is the complex channel gain of the $l$-th path, drawn from the random complex Gaussian distribution $\mathcal{CN}(0,\frac{1}{2})$. $\mathbf{a}_T(\Phi_T^{(l)},\theta_T^{(l)}) \in \mathbb{C}^{N_T}$ and $\mathbf{a}_R(\Phi_R^{(l)},\theta_R^{(l)}) \in \mathbb{C}^{N_R}$ are the array response vectors (ARVs) at the transmitter and receiver, respectively. $\Phi_T^{(l)},\theta_T^{(l)}$ and $\Phi_R^{(l)},\theta_R^{(l)}$ are the elevation and azimuth AoD and AoA angles at the transmitter and receiver, respectively. These angles are generated by Laplacian distributions whose means are uniformly distributed over $(0,2\pi)$. The ARV of a uniform linear array (ULA) [31,32] is described by $\mathbf{a}(\theta) = \frac{1}{\sqrt{N}}\left[1,\, e^{j\frac{2\pi}{\lambda}d\cos(\theta)},\, \dots,\, e^{j\frac{2\pi}{\lambda}(N-1)d\cos(\theta)}\right]^T$. Here, $\lambda$ is the wavelength, $d$ is the antenna spacing and $\theta$ is the steering angle of the ARV. The channel model described in Equation (5) can be further written in matrix form (described in a virtual beamspace representation model [33,34]) as follows:
$\mathbf{H} = \mathbf{A}_R \mathbf{Z} \mathbf{A}_T^H$ (6)
Here, $\mathbf{A}_R \in \mathbb{C}^{N_R\times N_R}$ and $\mathbf{A}_T \in \mathbb{C}^{N_T\times N_T}$ are unitary matrices containing the ARVs of the receiver and transmitter [33], respectively, where $\mathbf{A}_R \triangleq [\mathbf{a}_R(\Phi_1,\theta_1), \mathbf{a}_R(\Phi_2,\theta_2), \dots, \mathbf{a}_R(\Phi_{L_p},\theta_{L_p})]$ and $\mathbf{A}_T \triangleq [\mathbf{a}_T(\Phi_1,\theta_1), \mathbf{a}_T(\Phi_2,\theta_2), \dots, \mathbf{a}_T(\Phi_{L_p},\theta_{L_p})]$. By using matrix properties, we can say that $\mathbf{A}_R^H\mathbf{A}_R = \mathbf{K}_{N_R}$ and $\mathbf{A}_T^H\mathbf{A}_T = \mathbf{K}_{N_T}$, with $\mathbf{K}_N$ being the $N\times N$ identity matrix. In Equation (6), $\mathbf{Z} \in \mathbb{C}^{N_R\times N_T}$ is a sparse matrix containing the few virtual channel gains of large amplitude.
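A minimal sketch of the geometric model of Equations (5) and (6) follows. It is illustrative only: the azimuth-only ULA response, half-wavelength spacing and two-path channel are assumptions for the example, and the final assertion confirms the low-rank structure that the estimator later exploits.

```python
import numpy as np

def ula_response(theta, N, d_over_lambda=0.5):
    """ULA array response a(theta), azimuth only (assumed d = lambda/2)."""
    n = np.arange(N)
    return np.exp(1j * 2 * np.pi * d_over_lambda * n * np.cos(theta)) / np.sqrt(N)

rng = np.random.default_rng(1)
N_T, N_R, L_p = 32, 32, 2   # two dominant paths, as in the paper's simulations

# Complex path gains alpha_l ~ CN(0, 1/2) and random AoA/AoD angles.
alphas = (rng.standard_normal(L_p) + 1j * rng.standard_normal(L_p)) / np.sqrt(2)
aoa = rng.uniform(0, 2 * np.pi, L_p)
aod = rng.uniform(0, 2 * np.pi, L_p)

# Equation (5): H = sum_l alpha_l a_R(phi_R) a_T^H(phi_T)
H = sum(alphas[l] * np.outer(ula_response(aoa[l], N_R),
                             ula_response(aod[l], N_T).conj())
        for l in range(L_p))

assert H.shape == (N_R, N_T)
# Low rank: H is a sum of L_p rank-one terms, so rank(H) <= L_p.
assert np.linalg.matrix_rank(H) <= L_p
```

The rank bound is exactly the property that motivates the nuclear-norm term in Section 3, while the beamspace representation $\mathbf{H} = \mathbf{A}_R \mathbf{Z} \mathbf{A}_T^H$ concentrates the same information into a sparse $\mathbf{Z}$.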

3. mmWave Channel Estimation Problem Formulation

In this section, a channel estimation problem for a mmWave MIMO system is formulated. Without loss of generality, matrix decomposition methods are used for the completion of a low-rank matrix with the help of partially observed data [35,36]. By following the given conditions, Equation (6) can be written in decomposed form as $\mathbf{H} = \mathbf{A}_R \mathbf{D} \mathbf{A}_T^H$, where $\mathbf{D}$ is a submatrix of $\mathbf{Z}$. The optimization problem for the joint recovery of the channel state information (CSI) matrix $\mathbf{H}$ and the unknown sparse matrix $\mathbf{D}$ of its beamspace decomposition can be described as
$\underset{\mathbf{H},\mathbf{D}}{\text{minimize}}\;\; \Gamma_H\|\mathbf{H}\|_* + \Gamma_D\|\mathbf{D}\|_1 \quad \text{subject to} \quad \boldsymbol{\Psi}\circ\mathbf{H} = \mathbf{H}_\Psi \;\;\text{and}\;\; \mathbf{H} = \mathbf{A}_R\mathbf{D}\mathbf{A}_T^H$ (7)
Here, the side information $\mathbf{A}_T$ and $\mathbf{A}_R$ is used to find the missing values of matrix $\mathbf{H}$; $\mathbf{H} = \mathbf{A}_R\mathbf{D}\mathbf{A}_T^H$ is the decomposed form of Equation (6). The nuclear norm yields the sum of the singular values of a matrix and is the tightest convex relaxation of the rank function. In this way, the low-rank property of matrix $\mathbf{H}$ is captured by the nuclear norm, while the $\ell_1$-norm on $\mathbf{D}$ enforces sparsity. The weighting factors $\Gamma_H$ and $\Gamma_D$, which depend upon the number of propagation paths, are always positive [35].
The positions of the non-zero (unit) values in the mask $\boldsymbol{\Psi}$ of Equation (7) are chosen uniformly at random from the index set $\{1, 2, 3, \dots, N_R N_T\}$ [28,37]. Thus, the matrix $\boldsymbol{\Psi}$ has $M$ ones and $(N_R N_T - M)$ zeros. The positions of the non-zero values in $\mathbf{H}_\Psi$ coincide with those in $\boldsymbol{\Psi}$, where $\mathbf{H}_\Psi$ is the subsampled estimated matrix induced by $\boldsymbol{\Psi}$. The estimation error of $\mathbf{H}$ depends upon the estimation accuracy of the $M$ non-zero entries of $\mathbf{H}_\Psi$.
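The sampling mask $\boldsymbol{\Psi}$ and the subsampled matrix $\mathbf{H}_\Psi$ can be sketched as follows. This is an illustrative NumPy sketch with assumed sizes, not the paper's code.

```python
import numpy as np

rng = np.random.default_rng(2)
N_R, N_T, M = 16, 16, 100    # M observed entries out of N_R * N_T (assumed)

# Binary mask Psi: M ones placed uniformly at random, the rest zeros.
Psi = np.zeros(N_R * N_T)
Psi[rng.choice(N_R * N_T, size=M, replace=False)] = 1
Psi = Psi.reshape(N_R, N_T)

# Placeholder channel; in the paper this is the geometric channel of Eq. (5).
H = rng.standard_normal((N_R, N_T)) + 1j * rng.standard_normal((N_R, N_T))

# Subsampled matrix: Hadamard product with the mask (Psi o H = H_Psi).
H_Psi = Psi * H

assert int(Psi.sum()) == M
assert np.allclose(H_Psi[Psi == 0], 0)   # unobserved entries are zero
```

The constraint $\boldsymbol{\Psi}\circ\mathbf{H} = \mathbf{H}_\Psi$ in Equation (7) simply states that any candidate $\mathbf{H}$ must agree with the observed entries.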

4. Proposed Extended ADMM (Ex-ADMM)-Based Channel Estimation Scheme

In the following section, the proposed Ex-ADMM is described in detail. From Section 3, Equation (7) is clearly a convex problem with two objectives, so there are many possible ways to reach the global optimal solution. First-order methods, which require only the first-order information of the optimization problem, are natural candidates; however, they can still be computationally expensive. Alternating optimization techniques (AOTs) [26] are therefore used to obtain the optimal solution of a convex optimization problem, due to their less complex structure and easy handling. ADMM [20] is a very popular and efficient AOT. To solve the convex optimization problem described in Equation (7), an extension of ADMM [28], known as extended ADMM (Ex-ADMM), is proposed for the channel estimation of a mmWave MIMO system.
In order to solve the optimization problem described in Equation (7) with Ex-ADMM, we reformulate it by introducing two auxiliary matrices, $\mathbf{I} \in \mathbb{C}^{N_R\times N_T}$ and $\mathbf{J} \triangleq \mathbf{I} - \mathbf{A}_R\mathbf{D}\mathbf{A}_T^H$. As a result, the reformulated optimization problem can be re-written as
$\underset{\mathbf{H},\mathbf{I},\mathbf{D},\mathbf{J}}{\text{minimize}}\;\; \Gamma_H\|\mathbf{H}\|_* + \Gamma_D\|\mathbf{D}\|_1 + \frac{1}{2}\|\mathbf{J}\|_F^2 + \frac{1}{2}\|\boldsymbol{\Psi}\circ\mathbf{I} - \mathbf{H}_\Psi\|_F^2 \quad \text{subject to} \quad \mathbf{H} = \mathbf{I} \;\;\text{and}\;\; \mathbf{J} = \mathbf{I} - \mathbf{A}_R\mathbf{D}\mathbf{A}_T^H$ (8)
The first term in the optimization problem of Equation (8) carries the side information of the matrices $\mathbf{A}_R$ and $\mathbf{A}_T$, along with the virtual channel gains in matrix $\mathbf{Z}$. The second term is the information on the subsampled virtual channel gains. The third term accounts for the discretization error, and the fourth term for the AWGN noise. Afterwards, Equation (8) can be re-written in terms of the augmented Lagrangian function (ALF) as follows:
$\mathcal{L}_A(\mathbf{H},\mathbf{I},\mathbf{D},\mathbf{J},\mathbf{P}_1,\mathbf{P}_2) \triangleq \Gamma_H\|\mathbf{H}\|_* + \Gamma_D\|\mathbf{D}\|_1 + \frac{1}{2}\|\mathbf{J}\|_F^2 + \frac{1}{2}\|\boldsymbol{\Psi}\circ\mathbf{I} - \mathbf{H}_\Psi\|_F^2 + \mathrm{tr}(\mathbf{P}_1^H(\mathbf{H}-\mathbf{I})) + \frac{\beta}{2}\|\mathbf{H}-\mathbf{I}\|_F^2 + \mathrm{tr}(\mathbf{P}_2^H(\mathbf{I}-\mathbf{A}_R\mathbf{D}\mathbf{A}_T^H-\mathbf{J})) + \frac{\beta}{2}\|\mathbf{I}-\mathbf{A}_R\mathbf{D}\mathbf{A}_T^H-\mathbf{J}\|_F^2$ (9)
Here, $\mathbf{P}_1, \mathbf{P}_2 \in \mathbb{C}^{N_R\times N_T}$ are Lagrange multipliers, and $\beta > 0$ is the step size of Ex-ADMM. For the next iteration, Ex-ADMM splits Equation (9) into six subproblems and solves them alternately, i.e.,
$\mathbf{H}^{(l+1)} = \underset{\mathbf{H}}{\mathrm{argmin}}\; \mathcal{L}_A(\mathbf{H},\mathbf{P}_1^{(l)},\mathbf{P}_2^{(l)},\mathbf{I}^{(l)},\mathbf{D}^{(l)},\mathbf{J}^{(l)})$ (10)
$\mathbf{P}_1^{(l+1)} = \mathbf{P}_1^{(l)} + \beta(\mathbf{H}^{(l+1)} - \mathbf{I}^{(l)})$ (11)
$\mathbf{P}_2^{(l+1)} = \mathbf{P}_2^{(l)} + \beta(\mathbf{I}^{(l)} - \mathbf{A}_R\mathbf{D}^{(l)}\mathbf{A}_T^H - \mathbf{J}^{(l)})$ (12)
$\mathbf{I}^{(l+1)} = \underset{\mathbf{I}}{\mathrm{argmin}}\; \mathcal{L}_A(\mathbf{H}^{(l+1)},\mathbf{P}_1^{(l+1)},\mathbf{P}_2^{(l+1)},\mathbf{I},\mathbf{D}^{(l)},\mathbf{J}^{(l)})$ (13)
$\mathbf{D}^{(l+1)} = \underset{\mathbf{D}}{\mathrm{argmin}}\; \mathcal{L}_A(\mathbf{H}^{(l+1)},\mathbf{P}_1^{(l+1)},\mathbf{P}_2^{(l+1)},\mathbf{I}^{(l+1)},\mathbf{D},\mathbf{J}^{(l)})$ (14)
$\mathbf{J}^{(l+1)} = \underset{\mathbf{J}}{\mathrm{argmin}}\; \mathcal{L}_A(\mathbf{H}^{(l+1)},\mathbf{P}_1^{(l+1)},\mathbf{P}_2^{(l+1)},\mathbf{I}^{(l+1)},\mathbf{D}^{(l+1)},\mathbf{J})$ (15)
In Equations (10)–(15), $\mathbf{H}^{(l+1)}$ is involved in every step; hence, $\mathbf{H}^{(l+1)}$ is known as an intermediate variable. In contrast, the variables $\mathbf{I}$, $\mathbf{D}$ and $\mathbf{J}$ are known as essential variables, and the Lagrange multipliers $\mathbf{P}_1$ and $\mathbf{P}_2$ are recognized as dual variables. The ADMM algorithm mentioned in [18] is also able to solve the optimization problem in Equation (9), but the proposed Ex-ADMM algorithm converges at a lower NMSE than ADMM and provides better ASE performance as well.
Basically, the Ex-ADMM algorithm is a combination of cyclical ADMM and a relaxation factor [28,38,39,40]. In general, ADMM updates its variables in an arbitrary order, i.e., $\mathbf{H} \to \mathbf{I} \to \mathbf{D} \to \mathbf{J} \to \mathbf{P}_1 \to \mathbf{P}_2$. In contrast, Ex-ADMM updates them in a cyclical order, i.e., $\mathbf{H} \to \mathbf{P}_1 \to \mathbf{P}_2 \to \mathbf{I} \to \mathbf{D} \to \mathbf{J}$. In the proposed Ex-ADMM, the reordering of ADMM is done only to ensure that the channel matrix $\mathbf{H}$ satisfies the first-order optimality conditions [39]; therefore, right after updating $\mathbf{H}^{(l+1)}$, the dual variables $\mathbf{P}_1$ and $\mathbf{P}_2$ are updated, and finally the essential variables $\mathbf{I}$, $\mathbf{D}$ and $\mathbf{J}$ are updated. A relaxation technique, along with a relaxation parameter, is used to relax the essential variables [41,42]. Consequently, for the relaxation of the essential variables, we can treat the cyclically ordered dual and essential variables of Equations (11)–(15) as one block variable, i.e., $\tilde{\mathbf{z}}^{(l)} = (\tilde{\mathbf{P}}_1^{(l)}, \tilde{\mathbf{P}}_2^{(l)}, \tilde{\mathbf{I}}^{(l)}, \tilde{\mathbf{D}}^{(l)}, \tilde{\mathbf{J}}^{(l)})$. Thus, the final relaxed variable, $\mathbf{z}^{(l+1)} = (\mathbf{P}_1^{(l+1)}, \mathbf{P}_2^{(l+1)}, \mathbf{I}^{(l+1)}, \mathbf{D}^{(l+1)}, \mathbf{J}^{(l+1)})$, can be generated as
$\mathbf{z}^{(l+1)} = \mathbf{z}^{(l)} - \gamma(\mathbf{z}^{(l)} - \tilde{\mathbf{z}}^{(l)})$
Here, roughly speaking, the tilde variables (i.e., $\tilde{\mathbf{P}}_1^{(l+1)}, \tilde{\mathbf{P}}_2^{(l+1)}, \tilde{\mathbf{I}}^{(l+1)}, \tilde{\mathbf{D}}^{(l+1)}$ and $\tilde{\mathbf{J}}^{(l+1)}$) are auxiliary variables; they are computed from the main variables (i.e., $\mathbf{P}_1^{(l)}, \mathbf{P}_2^{(l)}, \mathbf{I}^{(l)}, \mathbf{D}^{(l)}$ and $\mathbf{J}^{(l)}$) via Equations (11)–(15), as follows:
$\tilde{\mathbf{P}}_1^{(l+1)} = \mathbf{P}_1^{(l)} + \beta(\mathbf{H}^{(l+1)} - \mathbf{I}^{(l)})$ (16)
$\tilde{\mathbf{P}}_2^{(l+1)} = \mathbf{P}_2^{(l)} + \beta(\mathbf{I}^{(l)} - \mathbf{A}_R\mathbf{D}^{(l)}\mathbf{A}_T^H - \mathbf{J}^{(l)})$ (17)
$\tilde{\mathbf{I}}^{(l+1)} = \underset{\mathbf{I}}{\mathrm{argmin}}\; \mathcal{L}_A(\mathbf{H}^{(l+1)},\mathbf{P}_1^{(l+1)},\mathbf{P}_2^{(l+1)},\mathbf{I},\mathbf{D}^{(l)},\mathbf{J}^{(l)})$ (18)
$\tilde{\mathbf{D}}^{(l+1)} = \underset{\mathbf{D}}{\mathrm{argmin}}\; \mathcal{L}_A(\mathbf{H}^{(l+1)},\mathbf{P}_1^{(l+1)},\mathbf{P}_2^{(l+1)},\mathbf{I}^{(l+1)},\mathbf{D},\mathbf{J}^{(l)})$ (19)
$\tilde{\mathbf{J}}^{(l+1)} = \underset{\mathbf{J}}{\mathrm{argmin}}\; \mathcal{L}_A(\mathbf{H}^{(l+1)},\mathbf{P}_1^{(l+1)},\mathbf{P}_2^{(l+1)},\mathbf{I}^{(l+1)},\mathbf{D}^{(l+1)},\mathbf{J})$ (20)
Therefore, the final relaxed variable can be expressed as
$\begin{pmatrix} \mathbf{P}_1^{(l+1)} \\ \mathbf{P}_2^{(l+1)} \\ \mathbf{I}^{(l+1)} \\ \mathbf{D}^{(l+1)} \\ \mathbf{J}^{(l+1)} \end{pmatrix} = \begin{pmatrix} \mathbf{P}_1^{(l)} \\ \mathbf{P}_2^{(l)} \\ \mathbf{I}^{(l)} \\ \mathbf{D}^{(l)} \\ \mathbf{J}^{(l)} \end{pmatrix} - \gamma\left[\begin{pmatrix} \mathbf{P}_1^{(l)} \\ \mathbf{P}_2^{(l)} \\ \mathbf{I}^{(l)} \\ \mathbf{D}^{(l)} \\ \mathbf{J}^{(l)} \end{pmatrix} - \begin{pmatrix} \tilde{\mathbf{P}}_1^{(l)} \\ \tilde{\mathbf{P}}_2^{(l)} \\ \tilde{\mathbf{I}}^{(l)} \\ \tilde{\mathbf{D}}^{(l)} \\ \tilde{\mathbf{J}}^{(l)} \end{pmatrix}\right]$ (21)
From the above discussion, it is clear that Equations (10)–(21) have several advantages: the optimization problem described in Equation (9) splits into six individual subproblems, which can be solved without any strict conditions. Hence, the global optimal solution can be derived effortlessly, which helps reduce the computational complexity and storage requirements. For the next iteration, the values of the subproblems described in Equations (10)–(21) have to be updated; to do so, the closed-form solutions of Equations (10)–(15) must first be obtained. The advantages of Ex-ADMM over ADMM are described further in Section 5.
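The relaxation step of Equation (21) is a one-line blockwise operation. The sketch below is illustrative only (hypothetical small matrices standing in for the five blocks); setting $\gamma = 1$ recovers the plain cyclical update $\mathbf{z}^{(l+1)} = \tilde{\mathbf{z}}^{(l)}$.

```python
import numpy as np

def relax(z_prev, z_tilde, gamma):
    """Blockwise relaxation: z^{(l+1)} = z^{(l)} - gamma * (z^{(l)} - z~^{(l)}),
    applied to the tuple of blocks (P1, P2, I, D, J) as in Equation (21)."""
    return tuple(zp - gamma * (zp - zt) for zp, zt in zip(z_prev, z_tilde))

# Toy blocks (two shown for brevity; the real algorithm uses five).
z_prev = (np.ones((2, 2)), np.zeros((2, 2)))
z_tilde = (np.full((2, 2), 3.0), np.full((2, 2), 2.0))

# gamma = 1: the relaxed variable equals the auxiliary (tilde) variable.
z_next = relax(z_prev, z_tilde, gamma=1.0)
assert np.allclose(z_next[0], 3.0) and np.allclose(z_next[1], 2.0)
```

Intermediate values of $\gamma$ blend the previous iterate with the auxiliary update, which is what yields the improved convergence reported in Section 7.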

4.1. Procedure for Updating H ( l + 1 )

In order to obtain the closed-form solution for $\mathbf{H}$, reduce $\mathcal{L}_A$ to $\mathcal{L}_1$ by keeping only the terms of Equation (9) that involve $\mathbf{H}$:
$\mathcal{L}_1 \triangleq \underset{\mathbf{H}}{\mathrm{argmin}}\; \Gamma_H\|\mathbf{H}\|_* + \mathrm{tr}(\mathbf{P}_1^H(\mathbf{H}-\mathbf{I})) + \frac{\beta}{2}\|\mathbf{H}-\mathbf{I}\|_F^2 = \Gamma_H\|\mathbf{H}\|_* + \frac{\beta}{2}\left\|\mathbf{H} - \left(\mathbf{I}^{(l)} - \frac{1}{\beta}\mathbf{P}_1^{(l)}\right)\right\|_F^2$ (22)
Here, Equation (22) is the solution of Equation (10). To obtain the closed-form solution, the singular value thresholding (SVT) operator [43] is applied to Equation (22):
$\mathbf{H}^{(l+1)} = \mathbf{U}\,\mathrm{diag}\left(\{\mathrm{sign}(h_i)\max(h_i,0)\}_{1\le i\le r}\right)\mathbf{V}^H$ (23)
Here, $\mathbf{U} \in \mathbb{C}^{N_R\times r}$ and $\mathbf{V} \in \mathbb{C}^{N_T\times r}$ are the left and right singular vector matrices of $\left(\mathbf{I}^{(l)} - \frac{1}{\beta}\mathbf{P}_1^{(l)}\right)$, and $h_i \triangleq \mu_i - \frac{\Gamma}{\beta}$, where $\Gamma$ is the SVT threshold and $\mu_i$ denotes the $i$-th of the $r$ singular values.
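The SVT operator of Equation (23) can be sketched as follows; this is an illustrative implementation (the parameter `tau` plays the role of $\Gamma/\beta$), not the paper's code.

```python
import numpy as np

def svt(X, tau):
    """Singular value thresholding: shrink every singular value of X by tau,
    keeping only the positive parts (cf. Equation (23))."""
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)
    return U @ np.diag(s_shrunk) @ Vh

X = np.diag([5.0, 2.0, 0.5])
Y = svt(X, tau=1.0)
# Singular values 5, 2, 0.5 become 4, 1, 0: the smallest is zeroed out,
# which is how the nuclear-norm term promotes a low-rank estimate.
assert np.allclose(np.linalg.svd(Y, compute_uv=False), [4.0, 1.0, 0.0])
```

Because small singular values are set exactly to zero, each SVT application reduces (or preserves) the rank of the iterate, steering $\mathbf{H}^{(l+1)}$ toward the $L_p$-path structure of Equation (5).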

4.2. Procedure for Updating I ( l + 1 )

To update $\mathbf{I}^{(l+1)}$, reduce $\mathcal{L}_A$ to $\mathcal{L}_2$ by keeping only the terms of Equation (9) that involve $\mathbf{I}$, and differentiate with respect to $\mathbf{I}$:
$\mathcal{L}_2 \triangleq \underset{\mathbf{I}}{\mathrm{argmin}}\; \frac{1}{2}\|\boldsymbol{\Psi}\circ\mathbf{I} - \mathbf{H}_\Psi\|_F^2 + \mathrm{tr}(\mathbf{P}_1^H(\mathbf{H}-\mathbf{I})) + \frac{\beta}{2}\|\mathbf{H}-\mathbf{I}\|_F^2 + \mathrm{tr}(\mathbf{P}_2^H(\mathbf{I}-\mathbf{A}_R\mathbf{D}\mathbf{A}_T^H-\mathbf{J})) + \frac{\beta}{2}\|\mathbf{I}-\mathbf{A}_R\mathbf{D}\mathbf{A}_T^H-\mathbf{J}\|_F^2$
$\nabla\mathcal{L}_2 = \frac{\partial\mathcal{L}_A}{\partial\mathbf{I}} = \frac{\partial}{\partial\mathbf{I}}\left(\frac{1}{2}\|\boldsymbol{\Psi}\circ\mathbf{I} - \mathbf{H}_\Psi\|_F^2 + \mathrm{tr}(\mathbf{P}_1^H(\mathbf{H}-\mathbf{I})) + \frac{\beta}{2}\|\mathbf{H}-\mathbf{I}\|_F^2 + \mathrm{tr}(\mathbf{P}_2^H(\mathbf{I}-\mathbf{A}_R\mathbf{D}\mathbf{A}_T^H-\mathbf{J})) + \frac{\beta}{2}\|\mathbf{I}-\mathbf{A}_R\mathbf{D}\mathbf{A}_T^H-\mathbf{J}\|_F^2\right)$
$\nabla\mathcal{L}_2 = \boldsymbol{\Psi}\circ\mathbf{I} - \mathbf{H}_\Psi - \mathbf{P}_1 - \beta(\mathbf{H}-\mathbf{I}) - \mathbf{P}_2 - \beta(\mathbf{J} - \mathbf{I} + \mathbf{A}_R\mathbf{D}\mathbf{A}_T^H)$
Setting $\nabla\mathcal{L}_2$ to zero, i.e., $\nabla\mathcal{L}_2 = 0$,
$\boldsymbol{\Psi}\circ\mathbf{I} - \mathbf{H}_\Psi - \mathbf{P}_1 - \beta(\mathbf{H}-\mathbf{I}) - \mathbf{P}_2 - \beta(\mathbf{J} - \mathbf{I} + \mathbf{A}_R\mathbf{D}\mathbf{A}_T^H) = 0$
$\boldsymbol{\Psi}\circ\mathbf{I} + 2\beta\mathbf{I} = \mathbf{H}_\Psi + \mathbf{P}_1 + \beta\mathbf{H} + \mathbf{P}_2 + \beta\mathbf{J} + \beta\mathbf{A}_R\mathbf{D}\mathbf{A}_T^H$
Vectorizing both sides (with $\mathbf{i} = \mathrm{vec}(\mathbf{I})$, and similarly for the other matrices) gives
$\mathbf{i} = (\mathbf{A}^H\mathbf{A} + 2\beta\mathbf{K})^{-1}(\mathbf{p}_1 + \beta\mathbf{h} + \mathbf{A}^H\mathbf{h}_\Psi + \mathbf{p}_2 + \beta\mathbf{j} + \beta\mathbf{B}\mathbf{d})$
where $\mathbf{K}$ is the identity matrix, $\mathbf{A} \triangleq \sum_{k=1}^{N_R} \mathrm{diag}([\boldsymbol{\Psi}]_k)^T \otimes \mathbf{E}_{kk}$, with $[\boldsymbol{\Psi}]_k$ denoting the $k$-th row of $\boldsymbol{\Psi}$, $\mathbf{E}_{kk}$ is obtained by inserting a unit value at the $(k,k)$-th position of the $N_R\times N_R$ zero matrix, and $\mathbf{B} \triangleq \mathbf{A}_T^* \otimes \mathbf{A}_R$.
Therefore, the closed-form solution of $\mathbf{i}$ for the $(l+1)$-th iteration is
$\mathbf{i}^{(l+1)} = (\mathbf{A}^H\mathbf{A} + 2\beta\mathbf{K})^{-1}\left(\mathbf{p}_1^{(l)} + \beta\mathbf{h}^{(l+1)} + \mathbf{A}^H\mathbf{h}_\Psi + \mathbf{p}_2^{(l)} + \beta\mathbf{j}^{(l)} + \beta\mathbf{B}\mathbf{d}^{(l)}\right)$ (26)
To obtain the final solution, unvectorize $\mathbf{i}^{(l+1)}$, i.e.,
$\mathbf{I}^{(l+1)} = \mathrm{unvec}(\mathbf{i}^{(l+1)})$ (27)

4.3. Procedure for Updating D ( l + 1 )

To obtain the solution for $\mathbf{D}$, reduce $\mathcal{L}_A$ to $\mathcal{L}_3$ by keeping only the terms of Equation (9) that involve $\mathbf{D}$:
$\mathcal{L}_3 \triangleq \underset{\mathbf{D}}{\mathrm{argmin}}\; \Gamma_D\|\mathbf{D}\|_1 + \mathrm{tr}(\mathbf{P}_2^H(\mathbf{I}-\mathbf{A}_R\mathbf{D}\mathbf{A}_T^H-\mathbf{J})) + \frac{\beta}{2}\|\mathbf{I}-\mathbf{A}_R\mathbf{D}\mathbf{A}_T^H-\mathbf{J}\|_F^2 = \Gamma_D\|\mathbf{D}\|_1 + \frac{\beta}{2}\left\|\mathbf{D} - \mathbf{A}_R^H\left(\frac{1}{\beta}\mathbf{P}_2^{(l+1)} - \mathbf{J}^{(l)} + \mathbf{I}^{(l+1)}\right)\mathbf{A}_T\right\|_F^2$ (28)
Here, $\mathbf{A}_R$ and $\mathbf{A}_T$ are unitary matrices. To obtain the closed-form solution of $\mathcal{L}_3$, Equation (28) is transformed into the standard least absolute shrinkage and selection operator (LASSO) problem [44] by vectorizing Equation (28):
$\underset{\mathbf{D}}{\mathrm{argmin}}\; \Gamma_D\|\mathbf{D}\|_1 + \frac{\beta}{2}\left\|\mathbf{D} - \mathbf{A}_R^H\left(\frac{1}{\beta}\mathbf{P}_2^{(l+1)} - \mathbf{J}^{(l)} + \mathbf{I}^{(l+1)}\right)\mathbf{A}_T\right\|_F^2$ (29)
Let us define $\mathbf{V}^{(l+1)} = \mathbf{A}_R^H\left(\frac{1}{\beta}\mathbf{P}_2^{(l+1)} - \mathbf{J}^{(l)} + \mathbf{I}^{(l+1)}\right)\mathbf{A}_T$ and $\mathbf{v}^{(l+1)} = \mathrm{vec}(\mathbf{V}^{(l+1)})$. Therefore, Equation (29) can be written as
$\underset{\mathbf{d}}{\mathrm{argmin}}\; \Gamma_D\|\mathbf{d}\|_1 + \frac{\beta}{2}\|\mathbf{d} - \mathbf{v}^{(l+1)}\|_2^2$ (30)
Thus, to obtain the estimate of $\mathbf{d}$ in Equation (30), a soft thresholding operator is applied at iteration $(l+1)$:
$\mathbf{d}^{(l+1)} = \mathrm{sign}(\mathrm{Re}(\mathbf{v}^{(l+1)}))\circ\max(|\mathrm{Re}(\mathbf{v}^{(l+1)})| - \Gamma_d', 0) + j\,\mathrm{sign}(\mathrm{Im}(\mathbf{v}^{(l+1)}))\circ\max(|\mathrm{Im}(\mathbf{v}^{(l+1)})| - \Gamma_d', 0)$ (31)
Here, $\Gamma_d'$ is the scaled version of $\Gamma_D$, with $\Gamma_d' \triangleq \frac{\Gamma_D}{\beta}$. Hence, the value of $\mathbf{D}^{(l+1)}$ is obtained by
$\mathbf{D}^{(l+1)} = \mathrm{unvec}(\mathbf{d}^{(l+1)})$ (32)
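The complex soft-thresholding step of Equation (31) can be sketched as below; this is an illustrative implementation in which `thr` stands for $\Gamma_d'$.

```python
import numpy as np

def complex_soft_threshold(v, thr):
    """Soft thresholding applied separately to the real and imaginary parts,
    as in Equation (31)."""
    shrink = lambda x: np.sign(x) * np.maximum(np.abs(x) - thr, 0.0)
    return shrink(v.real) + 1j * shrink(v.imag)

v = np.array([0.05 + 0.3j, -0.5 - 0.02j, 1.0 + 1.0j])
d = complex_soft_threshold(v, thr=0.1)
# Components with magnitude <= thr are zeroed, enforcing sparsity in d,
# while larger components are shrunk toward zero by thr.
assert d[0].real == 0.0 and d[1].imag == 0.0
```

This is the proximal operator of the $\ell_1$-norm, so it is exactly the closed-form minimizer of the LASSO subproblem in Equation (30).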

4.4. Procedure for Updating J ( l + 1 )

To update $\mathbf{J}^{(l+1)}$, reduce $\mathcal{L}_A$ to $\mathcal{L}_4$ by keeping only the terms of Equation (9) that involve $\mathbf{J}$, and differentiate with respect to $\mathbf{J}$:
$\mathcal{L}_4 \triangleq \underset{\mathbf{J}}{\mathrm{argmin}}\; \frac{1}{2}\|\mathbf{J}\|_F^2 + \mathrm{tr}(\mathbf{P}_2^H(\mathbf{I}-\mathbf{A}_R\mathbf{D}\mathbf{A}_T^H-\mathbf{J})) + \frac{\beta}{2}\|\mathbf{I}-\mathbf{A}_R\mathbf{D}\mathbf{A}_T^H-\mathbf{J}\|_F^2$
$\nabla\mathcal{L}_4 = \frac{\partial\mathcal{L}_A}{\partial\mathbf{J}} = \frac{\partial}{\partial\mathbf{J}}\left(\frac{1}{2}\|\mathbf{J}\|_F^2 + \mathrm{tr}(\mathbf{P}_2^H(\mathbf{I}-\mathbf{A}_R\mathbf{D}\mathbf{A}_T^H-\mathbf{J})) + \frac{\beta}{2}\|\mathbf{I}-\mathbf{A}_R\mathbf{D}\mathbf{A}_T^H-\mathbf{J}\|_F^2\right)$
$\nabla\mathcal{L}_4 = (1+\beta)\mathbf{J} - \beta\left(\mathbf{I}-\mathbf{A}_R\mathbf{D}\mathbf{A}_T^H + \frac{\mathbf{P}_2}{\beta}\right)$
By setting $\nabla\mathcal{L}_4$ to zero, i.e., $\nabla\mathcal{L}_4 = 0$,
$\mathbf{J} = \frac{\beta}{1+\beta}\left(\mathbf{I}-\mathbf{A}_R\mathbf{D}\mathbf{A}_T^H + \frac{\mathbf{P}_2}{\beta}\right)$
Hence, for iteration $(l+1)$,
$\mathbf{J}^{(l+1)} = \frac{\beta}{\beta+1}\left(\mathbf{I}^{(l+1)} - \mathbf{A}_R\mathbf{D}^{(l+1)}\mathbf{A}_T^H + \frac{1}{\beta}\mathbf{P}_2^{(l)}\right)$ (34)
The dual variables $\mathbf{P}_1$ and $\mathbf{P}_2$ can be directly updated using the $\mathbf{H}$, $\mathbf{I}$, $\mathbf{D}$ and $\mathbf{J}$ variables.
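The J-update is a single closed-form expression. The sketch below is illustrative (real-valued random matrices for simplicity, not the paper's code) and numerically verifies the stationarity condition $\mathbf{J} - \mathbf{P}_2 - \beta(\mathbf{I}-\mathbf{A}_R\mathbf{D}\mathbf{A}_T^H - \mathbf{J}) = 0$ from which the update was derived.

```python
import numpy as np

def update_J(I_m, D, A_R, A_T, P2, beta):
    """Closed-form J-update (cf. Equation (34)):
    J = beta/(beta+1) * (I - A_R D A_T^H + P2/beta)."""
    return (beta / (beta + 1.0)) * (I_m - A_R @ D @ A_T.conj().T + P2 / beta)

rng = np.random.default_rng(3)
I_m = rng.standard_normal((4, 4))
D = rng.standard_normal((4, 4))
A_R = np.linalg.qr(rng.standard_normal((4, 4)))[0]   # unitary side information
A_T = np.linalg.qr(rng.standard_normal((4, 4)))[0]
P2 = rng.standard_normal((4, 4))
beta = 0.7

J = update_J(I_m, D, A_R, A_T, P2, beta)
R = I_m - A_R @ D @ A_T.conj().T
# Stationarity of the J-subproblem: J - P2 - beta * (R - J) = 0
assert np.allclose(J - P2 - beta * (R - J), 0)
```

Because the J-subproblem is an unconstrained quadratic, the update is exact in one step, which is why no inner iteration is needed for $\mathbf{J}$.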

5. Algorithm Description

Algorithm 1 summarizes the proposed Ex-ADMM-based mmWave channel estimation scheme, which is originally derived from the ADMM described in [18,28]. In the proposed Ex-ADMM algorithm, the parameters are first initialized as $\mathbf{H}^{(0)} = \mathbf{P}_1^{(0)} = \mathbf{P}_2^{(0)} = \mathbf{I}^{(0)} = \mathbf{D}^{(0)} = \mathbf{J}^{(0)} = \tilde{\mathbf{P}}_1^{(0)} = \tilde{\mathbf{P}}_2^{(0)} = \tilde{\mathbf{I}}^{(0)} = \tilde{\mathbf{D}}^{(0)} = \tilde{\mathbf{J}}^{(0)} = \mathbf{0}$ [18].
The main objective of the proposed algorithm is to update $\mathbf{H}^{(l+1)}$ (in step 2). This is the most crucial task; it is done by deriving the Lagrangian $\mathcal{L}_1$ from $\mathcal{L}_A$ and then applying the SVT operator to Equation (22). In order to efficiently update $\mathbf{H}^{(l+1)}$, the SVT operator computes the singular values $\mu_i$. At every instant, the proposed algorithm requires the subsampled version of $\mathbf{H}$, known as $\mathbf{H}_\Psi$, as an input. For the training procedure, the mmWave MIMO model detailed in Section 2, inspired by [12,45], has been adopted. For the general case, let us assume that only a single pair of transmit and receive antennas is operational at each training instant $t$. According to the adopted hardware structure described in Section 2, the transmitter's training precoding vectors and the receiver's training combining vectors have non-zero (unit) values at their respective $ij$-th positions in matrix $\boldsymbol{\Psi}$. At any $t$-th training instant, following the matrix $\boldsymbol{\Psi}$, the subsampled matrix $\mathbf{H}_\Psi$ also has estimated non-zero values at the $ij$-th positions. Consequently, the length of the training sequence equals the number of non-zero elements, i.e., $T = M$ with $M \ll N_R N_T$; this factor serves as the stopping criterion of the proposed Ex-ADMM algorithm. In step 3 of the proposed Ex-ADMM algorithm, to maintain the cyclical order, the dual variables $\tilde{\mathbf{P}}_1^{(l+1)}$ and $\tilde{\mathbf{P}}_2^{(l+1)}$ are updated first, with the help of Equations (16) and (17). The values of the dual variables $\mathbf{P}_1$ and $\mathbf{P}_2$ can be directly calculated using Equations (23), (26), (31) and (34). In step 4, the first subproblem, $\tilde{\mathbf{I}}^{(l+1)}$, is updated using Equation (18). Steps 5 and 6 update the second and third subproblems, $\tilde{\mathbf{D}}^{(l+1)}$ and $\tilde{\mathbf{J}}^{(l+1)}$, using Equations (19) and (20), respectively.
The relaxed variables $(\mathbf{P}_1^{(l+1)}, \mathbf{P}_2^{(l+1)}, \mathbf{I}^{(l+1)}, \mathbf{D}^{(l+1)}$ and $\mathbf{J}^{(l+1)})$ are updated in step 7 using Equation (21). The relaxation factor $\gamma$ provides better NMSE performance and a slightly improved convergence rate for all training lengths. Although Ex-ADMM outperforms the other benchmark algorithms described in this paper, a drawback arises during its implementation: when the proposed Ex-ADMM algorithm solves the sparse optimization problems for $\mathbf{I}^{(l+1)}, \mathbf{D}^{(l+1)}, \mathbf{J}^{(l+1)}, \mathbf{P}_1^{(l+1)}$ and $\mathbf{P}_2^{(l+1)}$, these parameters start losing their sparse nature. The reason is that $\mathbf{I}^{(l+1)}, \mathbf{D}^{(l+1)}, \mathbf{J}^{(l+1)}, \mathbf{P}_1^{(l+1)}$ and $\mathbf{P}_2^{(l+1)}$ are algebraic combinations of $(\mathbf{I}^{(l)}, \tilde{\mathbf{I}}^{(l)})$, $(\mathbf{D}^{(l)}, \tilde{\mathbf{D}}^{(l)})$, $(\mathbf{J}^{(l)}, \tilde{\mathbf{J}}^{(l)})$, $(\mathbf{P}_1^{(l)}, \tilde{\mathbf{P}}_1^{(l)})$ and $(\mathbf{P}_2^{(l)}, \tilde{\mathbf{P}}_2^{(l)})$, respectively. The most expedient way to overcome this problem is, once the algorithm reaches the stopping criterion, to run only the cyclical part of the algorithm for one final iteration to ensure sparsity (i.e., step 8).
Algorithm 1. Ex-ADMM-based mmWave MIMO Channel Estimation Scheme
Require: Subsampled matrix $\mathbf{H}_\Psi$, side information matrices $\mathbf{A}_R$ and $\mathbf{A}_T$, and the set of indices of observed entries in $\boldsymbol{\Psi}$
Input: $\mathbf{H}_\Psi$, $\boldsymbol{\Psi}$, $\mathbf{A}_R$, $\mathbf{A}_T$, $\rho$, $\gamma$, $\Gamma_H$, $\Gamma_Z$ and $I_{max}$
Output: Estimated output channel matrix $\hat{\mathbf{H}} = \mathbf{H}^{(I_{max})}$
Initialization: $\mathbf{H}^{(0)} = \mathbf{P}_1^{(0)} = \mathbf{P}_2^{(0)} = \mathbf{I}^{(0)} = \mathbf{D}^{(0)} = \mathbf{J}^{(0)} = \tilde{\mathbf{P}}_1^{(0)} = \tilde{\mathbf{P}}_2^{(0)} = \tilde{\mathbf{I}}^{(0)} = \tilde{\mathbf{D}}^{(0)} = \tilde{\mathbf{J}}^{(0)} = \mathbf{0}$
Step 1: for $l = 0, 1, 2, \dots, I_{max}-1$ do
Step 2:   Update $\mathbf{H}^{(l+1)}$ using Equation (23).
Step 3:   Update $\tilde{\mathbf{P}}_1^{(l+1)}$ and $\tilde{\mathbf{P}}_2^{(l+1)}$ using Equations (16) and (17).
Step 4:   Update $\tilde{\mathbf{I}}^{(l+1)}$ using Equation (18).
Step 5:   Update $\tilde{\mathbf{D}}^{(l+1)}$ using Equation (19).
Step 6:   Update $\tilde{\mathbf{J}}^{(l+1)}$ using Equation (20).
Step 7:   Update $(\mathbf{P}_1^{(l+1)}, \mathbf{P}_2^{(l+1)}, \mathbf{I}^{(l+1)}, \mathbf{D}^{(l+1)}$ and $\mathbf{J}^{(l+1)})$ using Equation (21), where $\gamma$ is the relaxation factor.
Step 8:   if the stopping criterion is met do
            $i = l + 1$
            repeat steps 2 to 6.
Step 9:   end do
Step 10: end for

6. Computational Complexity

In this section, four benchmark algorithms, namely, TSSR [46], VAMP [47], OMP [16] and ADMM [18], are compared with the proposed Ex-ADMM algorithm in terms of complexity.
For the general case, TSSR is much faster than any one-stage method. The computational complexity of TSSR depends upon the calculation of the maximum diagonal elements and the smaller number of off-diagonal elements of its sparse matrix. The complexity order of the TSSR algorithm is $\mathcal{O}(n\log(n))$ [46], where $n$ is the full column rank of the targeted sparse matrix.
In VAMP, the complexity is dominated by the matrix-vector multiplication. The computational complexity of VAMP is of the order $\mathcal{O}(N_R N_T L\log(N_R N_T L))$ [47], where $N_R$ and $N_T$ are the numbers of receive and transmit antennas, respectively, and $L$ is the number of channel paths, i.e., the sparsity level of the channel.
In OMP, the computational complexity depends upon the sparsity of the dictionary matrix and the number of grid points. Mathematically, this can be expressed as $\mathcal{O}(L\ln G^2)$. Here, $L$ is the number of paths (the sparsity level of the channel) and $G$ is the dictionary grid over which the AoAs and AoDs are distributed uniformly [16]. The computational complexity of ADMM reflects the number of iterations, $I_{max}$, and the numbers of transmit and receive antennas, $N_T$ and $N_R$. In ADMM, the matrix factorization can appropriately be performed offline, and only the matrix-vector product has to be computed online, which reduces the computational complexity significantly. The complexity of ADMM can be expressed as $\mathcal{O}(N_R^2 N_T)$ [18]. Since the proposed Ex-ADMM algorithm is derived from ADMM, its complexity remains the same as that of ADMM, i.e., $\mathcal{O}(N_R^2 N_T)$.

7. Simulation Results and Discussion

In this section, numerical results are presented to showcase the performance of the proposed Ex-ADMM algorithm against several standard algorithms.

7.1. System Model

The HBF system architecture was adopted as described in [45]. A total of 64 transmit and receive antennas, i.e., N_T = N_R = 64, were considered at the BS and MS. The antennas were assumed to be in a ULA configuration. Laplacian distributions with a 55° standard deviation were used to produce the azimuth angles of the AoAs and AoDs. The phase shifters had quantized phases, and the antenna spacing was d = λ/2.

7.2. Channel Model

A mmWave channel with two paths and two clusters is considered here. The AoAs and AoDs were uniformly distributed over the range [0, 2π]. The noise is complex additive white Gaussian noise (AWGN) with zero mean and variance σ_q^2. A signal-to-noise ratio (SNR) of 30 dB, defined relative to the noise variance σ_q^2, was used for the simulations. The carrier frequency of the mmWave channel model used for simulation was 90 GHz.
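As a rough illustration of this channel model, the sketch below draws L = 2 path gains and uniformly distributed AoAs/AoDs and builds H from λ/2-spaced ULA steering vectors. The complex-Gaussian path gains and the normalization are common modeling assumptions, not values taken from the paper.

```python
import numpy as np

def ula_steering(N, theta):
    """Steering vector of an N-element ULA with lambda/2 spacing."""
    n = np.arange(N)
    return np.exp(1j * np.pi * n * np.sin(theta)) / np.sqrt(N)

def mmwave_channel(N_R=64, N_T=64, L=2, seed=None):
    """Geometric mmWave channel: sum of L rank-one path contributions."""
    rng = np.random.default_rng(seed)
    H = np.zeros((N_R, N_T), dtype=complex)
    for _ in range(L):
        # complex-Gaussian path gain (assumed); AoA/AoD uniform over [0, 2*pi)
        alpha = (rng.standard_normal() + 1j * rng.standard_normal()) / np.sqrt(2)
        aoa = rng.uniform(0.0, 2.0 * np.pi)
        aod = rng.uniform(0.0, 2.0 * np.pi)
        H += alpha * np.outer(ula_steering(N_R, aoa), ula_steering(N_T, aod).conj())
    return np.sqrt(N_R * N_T / L) * H
```

With only L = 2 paths, the resulting 64 × 64 matrix has rank at most 2, which is the low-rank structure that the estimators below exploit.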

7.3. Simulation Environment

In this section, we interpret the results of a comparison between the proposed Ex-ADMM algorithm and the TSSR, OMP, VAMP and ADMM algorithms in terms of ASE and NMSE, the convergence of Ex-ADMM with respect to ADMM and SVT, and the effect of the number of paths on the NMSE of the Ex-ADMM, TSSR, OMP, VAMP and ADMM algorithms. An average of 100 independent iterations and 100 Monte Carlo realizations was considered for the simulations [48]. Three training symbol lengths were considered for training purposes, i.e., T = 400, 800 and 1200.

7.4. Results and Discussions

OMP exploits the sparsity of the channel through a dictionary matrix; VAMP uses the same concept with a few differences. TSSR, on the other hand, exploits the low-rank property of the channel matrix and its sparsity individually. First, an SVT operator was implemented to recover the channel matrix H. The SVT threshold operator Γ = β H_ψ, with β = 3/(M N_R N_T), was fixed to exploit the low-rank property. Channel sparsity depends upon the number of paths in the mmWave channel, i.e., the sparsity of the channel is fixed by the number of propagation paths. For the Ex-ADMM algorithm, the weighting factor of the channel matrix is Γ_H = β H_ψ, where β is set to 0.005, and the weighting factor with respect to the sparse matrix D is Γ_D = 0.1 (1/(10 log(σ_q^2))). Under the aforementioned constraints, an analysis was conducted, and the results are explained as follows.
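The SVT step used here can be sketched as follows: soft-threshold the singular values of H at a level τ, which zeroes the small ones and thereby promotes low rank [43]. This is a generic SVT sketch, not the paper's full Ex-ADMM update.

```python
import numpy as np

def svt(H, tau):
    """Singular value thresholding: shrink the singular values of H by tau."""
    U, s, Vh = np.linalg.svd(H, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)   # soft-threshold; small values -> 0
    return (U * s_shrunk) @ Vh            # low-rank reconstruction
```

For example, thresholding a matrix with singular values (5, 1) at τ = 2 keeps only the dominant direction, with its singular value reduced to 3.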

7.4.1. Comparison of ASE

In Figure 2a–c, the ASE (in bits/sec/Hz) at the SNR points for OMP, TSSR, VAMP, ADMM and the proposed Ex-ADMM algorithm with perfect CSI is shown. The following expression is used to calculate the ASE [49,50],
ASE = E{ log2 det( I_{N_R} + ( N_R N_T ( σ_q^2 + NMSE ) )^{-1} H H^H ) }
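For a single channel realization, the ASE expression (with the expectation dropped) can be evaluated as in the sketch below. The scaling 1/(N_R N_T(σ_q^2 + NMSE)) is one reading of the expression and should be treated as an assumption.

```python
import numpy as np

def ase_single(H, sigma_q2, nmse):
    """One realization of the ASE (bits/s/Hz); expectation omitted."""
    N_R, N_T = H.shape
    scale = 1.0 / (N_R * N_T * (sigma_q2 + nmse))  # assumed scaling term
    M = np.eye(N_R) + scale * (H @ H.conj().T)
    sign, logdet = np.linalg.slogdet(M)            # numerically stable log-det
    return logdet / np.log(2.0)                    # natural log -> log2
```

The NMSE term inside the scaling makes the achievable rate degrade with estimation error, which is why the curves of the better estimators sit closer to the perfect-CSI bound.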
The performance of OMP in terms of ASE is moderate at low-to-mid SNR points and worsens at mid-to-high SNR points for the shortest training length, i.e., T = 400. As is clear from Figure 2a, VAMP performs very poorly at all SNR points for small training symbol lengths, i.e., T = 400. When the training symbol length is increased to T ≥ 800, the performance of VAMP improves significantly, as depicted in Figure 2b,c. For all values of T, the performance of TSSR at almost every SNR point is the worst among OMP, VAMP, ADMM and Ex-ADMM. ADMM performed better than OMP, TSSR and VAMP at all SNR points for all training symbol lengths. For T = 400, Ex-ADMM outperformed the OMP, TSSR, VAMP and ADMM algorithms at all SNR points. For T = 800 and 1200, ADMM and Ex-ADMM performed similarly at low-to-mid SNR points, but, as the SNR increased from mid to high, Ex-ADMM outperformed ADMM. Moreover, for long training symbol lengths, Ex-ADMM came very close to the perfect-CSI performance.

7.4.2. Comparison of NMSE

The performance of OMP, TSSR, VAMP, ADMM and the proposed Ex-ADMM algorithm in terms of NMSE, at different SNR points with respect to different training symbol lengths T, is depicted in Figure 3a–c. The following relation is used to calculate NMSE:
NMSE = E{ 10 log10( ||Ĥ − H||_F^2 / ||H||_F^2 ) }
where Ĥ is the estimated channel matrix and H is the true channel matrix; Ĥ is taken as H(I_max), the estimate at the final iteration. The performance of OMP is almost constant across SNR points for all training symbol lengths and, owing to its huge dictionary matrix, does not change for any increase of T; thus, OMP may suffer from a discretization problem. One can therefore say that OMP is not capable of recovering the channel from small training symbol lengths.
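A minimal sketch of this NMSE computation for one realization (in the paper, the expectation is taken over the Monte Carlo runs):

```python
import numpy as np

def nmse_db(H_hat, H):
    """NMSE in dB for one realization: 10*log10(||H_hat - H||_F^2 / ||H||_F^2)."""
    err = np.linalg.norm(H_hat - H, 'fro') ** 2
    ref = np.linalg.norm(H, 'fro') ** 2
    return 10.0 * np.log10(err / ref)
```

For instance, an estimate with a uniform 10% amplitude error gives a relative Frobenius error of 0.01, i.e., an NMSE of −20 dB.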
On the other hand, it was found that VAMP exhibited poor performance at T = 400, but, as depicted in Figure 3b,c, its performance improves as the training symbol length increases. VAMP relies on statistical information about the sparse signal, which cannot be obtained from small training symbol lengths; at low-to-mid SNR points, once the training symbol length increases to T ≥ 800, VAMP shows a significant improvement in NMSE performance compared to the other algorithms. TSSR performs two-stage estimation; in the simulation results, it is not capable of recovering the estimated values for either low or high training lengths, because it exploits the low rank and the sparsity of the channel matrix only individually. In contrast, ADMM has the capacity to exploit low rank and sparsity jointly in the channel matrix for any training symbol length.
In the proposed Ex-ADMM algorithm, estimation is done by SVT applied to H, which exploits the low-rank property of H, together with an ℓ1 norm enforced on the submatrix to ensure sparsity. The estimate of H_ψ becomes noisier because both the proposed Ex-ADMM algorithm and ADMM lack array gain, but the noise is not severe enough to have a major effect on the estimated values. The proposed Ex-ADMM algorithm performed better than OMP, TSSR, VAMP and ADMM in terms of NMSE at all SNR points for all training symbol lengths.

7.4.3. Comparison of Convergence

In Figure 4a–c, three values of γ, i.e., γ = 0.5, 1 and 1.5, are used in the simulation to demonstrate the effect of the relaxation factor on the convergence of the proposed Ex-ADMM algorithm with respect to NMSE. As the value of γ increases from 0.5 to 1 and then to 1.5, the convergence of the proposed algorithm improves slightly, and it converges to smaller NMSE values for all training symbol lengths (T = 400, 800 and 1200) compared to ADMM and SVT. The effect of multiple paths on the NMSE of the different algorithms is shown in Figure 4d. Due to the poor scattering nature of mmWave channels, the performance of all algorithms worsens as the number of paths, L_P, grows. However, the performance of the proposed algorithm remains better than that of the others.
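The role of a relaxation factor γ can be illustrated on generic over-relaxed ADMM. The sketch below solves a toy lasso problem (not the paper's Ex-ADMM updates): γ ∈ (0, 2) blends the fresh x-update with the previous z before the z- and dual-variable updates, and the matrix factorization is done once offline, as the complexity discussion notes.

```python
import numpy as np

def relaxed_admm_lasso(A, b, lam=0.1, rho=1.0, gamma=1.5, iters=200):
    """Over-relaxed ADMM for min 0.5*||Ax - b||^2 + lam*||z||_1 s.t. x = z."""
    m, n = A.shape
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
    AtA, Atb = A.T @ A, A.T @ b
    # factor once offline; only triangular solves are needed per iteration
    L = np.linalg.cholesky(AtA + rho * np.eye(n))
    for _ in range(iters):
        rhs = Atb + rho * (z - u)
        x = np.linalg.solve(L.T, np.linalg.solve(L, rhs))
        x_hat = gamma * x + (1.0 - gamma) * z   # relaxation step, gamma in (0, 2)
        # soft-thresholding (proximal operator of the l1 norm)
        v = x_hat + u
        z = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)
        u = u + x_hat - z
    return z
```

Values of γ slightly above 1 (over-relaxation) typically speed up convergence, which mirrors the trend observed in Figure 4a–c.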

8. Conclusions

In this paper, an extended version of ADMM (Ex-ADMM) was proposed for mmWave channel estimation. In the proposed scheme, a joint optimization problem was formulated that exploits the sparsity and low-rank properties of the channel matrix. The proposed Ex-ADMM algorithm exploits both properties of the targeted optimization problem by breaking it into several subproblems, which are then solved efficiently by deriving their closed-form solutions independently. A relaxation factor was introduced so that the proposed algorithm converges to smaller NMSE values for all training symbol lengths. Comprehensive simulation experiments were performed to validate the performance of Ex-ADMM. The proposed Ex-ADMM algorithm outperformed the other benchmark algorithms in terms of ASE, NMSE and convergence rate.

Author Contributions

Conceptualization, P.S.S.; methodology, P.S.S. and A.H.W.; validation, P.S.S. and A.H.W.; writing, review and editing, L.C., A.H.W. and P.S.S.; funding acquisition, L.C.; supervision, L.C. and A.H.W. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by National Major Special Funding (NMSF) under grant 2018ZX03001006-002.

Acknowledgments

The first author, P.S.S., hereby acknowledges the University of the Chinese Academy of Sciences (CAS) and the World Academy of Sciences (TWAS) for financial support for Ph.D. studies under the CAS-TWAS Fellowship. The first author would also like to acknowledge Pankaj Chaturvedi, Post-Doctoral Fellow, Y.M.C., Tsinghua University, Beijing, China for his help in mathematical expressions.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Cheng, X.; Tang, C.; Zhang, Z. Accurate Channel Estimation for Millimeter-Wave MIMO Systems. IEEE Trans. Veh. Technol. 2019, 68, 5159–5163.
  2. Kolawole, O.Y.; Biswas, S.; Singh, K.; Ratnarajah, T. Transceiver Design for Energy-Efficiency Maximization in mmWave MIMO IoT Networks. IEEE Trans. Green Commun. Netw. 2020, 4, 109–123.
  3. Prasad, R.; Vandendorpe, L. An overview of millimeter wave indoor wireless communication systems. In Proceedings of the 2nd IEEE International Conference on Universal Personal Communications, Ottawa, ON, Canada, 12–15 October 1993; Volume 2, pp. 885–889.
  4. Luo, F.-L.; Zhang, C. 5G Millimeter-wave Communication Channel and Technology Overview. In Signal Processing for 5G: Algorithms and Implementations; IEEE: Piscataway, NJ, USA, 2016; pp. 354–371. ISBN 9781119116479.
  5. Torkildson, E.; Madhow, U.; Rodwell, M. Indoor Millimeter Wave MIMO: Feasibility and Performance. IEEE Trans. Wirel. Commun. 2011, 10, 4150–4160.
  6. Akoum, S.; El Ayach, O.; Heath, R.W. Coverage and capacity in mmWave cellular systems. In Proceedings of the 2012 Conference Record of the Forty Sixth Asilomar Conference on Signals, Systems and Computers (ASILOMAR), Pacific Grove, CA, USA, 4–7 November 2012; pp. 688–692.
  7. Smolders, A.B.; Reniers, A.C.F.; Johannsen, U.; Herben, M.H.A.J. Measurement and calibration challenges of microwave and millimeter-wave phased-arrays. In Proceedings of the 2013 International Workshop on Antenna Technology (iWAT), Karlsruhe, Germany, 4–6 March 2013; pp. 358–361.
  8. Alkhateeb, A.; Mo, J.; Gonzalez-Prelcic, N.; Heath, R.W. MIMO Precoding and Combining Solutions for Millimeter-Wave Systems. IEEE Commun. Mag. 2014, 52, 122–131.
  9. Wei, L.; Hu, R.Q.; Qian, Y.; Wu, G. Key elements to enable millimeter wave communications for 5G wireless systems. IEEE Wirel. Commun. 2014, 21, 136–143.
  10. Akdeniz, M.R.; Liu, Y.; Samimi, M.K.; Sun, S.; Rangan, S.; Rappaport, T.S.; Erkip, E. Millimeter Wave Channel Modeling and Cellular Capacity Evaluation. IEEE J. Sel. Areas Commun. 2014, 32, 1164–1179.
  11. Méndez-Rial, R.; Rusu, C.; González-Prelcic, N.; Alkhateeb, A.; Heath, R.W. Hybrid MIMO Architectures for Millimeter Wave Communications: Phase Shifters or Switches? IEEE Access 2016, 4, 247–267.
  12. Alkhateeb, A.; El Ayach, O.; Leus, G.; Heath, R.W. Channel Estimation and Hybrid Precoding for Millimeter Wave Cellular Systems. IEEE J. Sel. Top. Signal Process. 2014, 8, 831–846.
  13. Cao, Z.; Geng, H.; Chen, Z.; Chen, P. Sparse-Based Millimeter Wave Channel Estimation With Mutual Coupling Effect. Electronics 2019, 8, 358.
  14. Lu, X.; Yang, W.; Cai, Y.; Guan, X. Comparison of CS-Based Channel Estimation for Millimeter Wave Massive MIMO Systems. Appl. Sci. 2019, 9, 4346.
  15. Han, Y.; Lee, J. Asymmetric channel estimation for multi-user millimeter wave communications. In Proceedings of the 2016 International Conference on Information and Communication Technology Convergence (ICTC), Jeju, Korea, 19–21 October 2016; pp. 4–6.
  16. Lee, J.; Gil, G.; Lee, Y.H. Channel Estimation via Orthogonal Matching Pursuit for Hybrid MIMO Systems in Millimeter Wave Communications. IEEE Trans. Commun. 2016, 64, 2370–2386.
  17. Li, X.; Fang, J.; Li, H.; Wang, P. Millimeter Wave Channel Estimation via Exploiting Joint Sparse and Low-Rank Structures. IEEE Trans. Wirel. Commun. 2018, 17, 1123–1133.
  18. Vlachos, E.; Alexandropoulos, G.C.; Thompson, J. Massive MIMO Channel Estimation for Millimeter Wave Systems via Matrix Completion. IEEE Signal Process. Lett. 2018, 25, 1675–1679.
  19. Schniter, P.; Sayeed, A. Channel estimation and precoder design for millimeter-wave communications: The sparse way. In Proceedings of the 2014 48th Asilomar Conference on Signals, Systems and Computers, Pacific Grove, CA, USA, 2–5 November 2014; pp. 273–277.
  20. Boyd, S.; Parikh, N.; Chu, E.; Peleato, B.; Eckstein, J. Distributed optimization and statistical learning via the alternating direction method of multipliers. Found. Trends Mach. Learn. 2011, 3, 1–122.
  21. Abernethy, J.; Bach, F.; Evgeniou, T.; Vert, J.-P. A new approach to collaborative filtering: Operator estimation with spectral regularization. J. Mach. Learn. Res. 2009, 10, 803–826.
  22. Natarajan, N.; Dhillon, I.S. Inductive matrix completion for predicting gene–disease associations. Bioinformatics 2014, 30, i60–i68.
  23. Chen, T.; Zhang, W.; Lu, Q.; Chen, K.; Zheng, Z.; Yu, Y. SVDFeature: A toolkit for feature-based collaborative filtering. J. Mach. Learn. Res. 2012, 13, 3619–3622.
  24. Menon, A.K.; Chitrapura, K.-P.; Garg, S.; Agarwal, D.; Kota, N. Response prediction using collaborative filtering with hierarchies and side-information. In Proceedings of the 17th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Diego, CA, USA, 21–24 August 2011; pp. 141–149.
  25. Chiang, K.-Y.; Hsieh, C.-J.; Dhillon, I. Robust principal component analysis with side information. In Proceedings of the 33rd International Conference on Machine Learning (ICML 2016), New York, NY, USA, 19–24 June 2016; pp. 2291–2299.
  26. Lu, J.; Liang, G.; Sun, J.; Bi, J. A sparse interactive model for matrix completion with side information. In Proceedings of the 30th Conference on Neural Information Processing Systems, Barcelona, Spain, 5–10 December 2016; pp. 4071–4079.
  27. Du, J.; Li, J.; He, J.; Guan, Y.; Lin, H. Low-Complexity Joint Channel Estimation for Multi-User mmWave Massive MIMO Systems. Electronics 2020, 9, 301.
  28. Ma, F.; Ni, M.; Zhang, X.; Yu, Z. Solving Lasso: Extended ADMM is more efficient than ADMM. In Proceedings of the 2015 Chinese Automation Congress (CAC), Wuhan, China, 27–29 November 2015; pp. 55–58.
  29. Peng, C.; Cheng, H.; Ko, M. An efficient two-stage sparse representation method. Int. J. Pattern Recognit. Artif. Intell. 2016, 30, 1651001.
  30. Schniter, P.; Rangan, S.; Fletcher, A.K. Vector approximate message passing for the generalized linear model. In Proceedings of the 2016 50th Asilomar Conference on Signals, Systems and Computers, Pacific Grove, CA, USA, 6–9 November 2016; pp. 1525–1529.
  31. Ayach, O.E.; Rajagopal, S.; Abu-Surra, S.; Pi, Z.; Heath, R.W. Spatially Sparse Precoding in Millimeter Wave MIMO Systems. IEEE Trans. Wirel. Commun. 2014, 13, 1499–1513.
  32. Forenza, A.; Love, D.J.; Heath, R.W. Simplified Spatial Correlation Models for Clustered MIMO Channels With Different Array Configurations. IEEE Trans. Veh. Technol. 2007, 56, 1924–1934.
  33. Sayeed, A.M. Deconstructing multiantenna fading channels. IEEE Trans. Signal Process. 2002, 50, 2563–2579.
  34. Brady, J.; Behdad, N.; Sayeed, A.M. Beamspace MIMO for Millimeter-Wave Communications: System Architecture, Modeling, Analysis, and Measurements. IEEE Trans. Antennas Propag. 2013, 61, 3814–3827.
  35. Keshavan, R.H.; Montanari, A.; Oh, S. Matrix completion from a few entries. IEEE Trans. Inf. Theory 2010, 56, 2980–2998.
  36. Lin, Z.; Chen, M.; Ma, Y. The augmented Lagrange multiplier method for exact recovery of corrupted low-rank matrices. arXiv 2010, arXiv:1009.5055.
  37. Chiang, K.-Y.; Hsieh, C.-J.; Dhillon, I.S. Matrix completion with noisy side information. In Proceedings of the Advances in Neural Information Processing Systems, December 2015; pp. 3447–3455.
  38. Nishihara, R.; Lessard, L.; Recht, B.; Packard, A.; Jordan, M.I. A general analysis of the convergence of ADMM. arXiv 2015, arXiv:1502.02009.
  39. Cai, X.; Gu, G.; He, B.; Yuan, X. A proximal point algorithm revisit on the alternating direction method of multipliers. Sci. China Math. 2013, 56, 2179–2186.
  40. Fortin, M.; Glowinski, R. Augmented Lagrangian Methods: Applications to the Numerical Solution of Boundary-Value Problems; Elsevier: Amsterdam, The Netherlands, 2000; ISBN 008087536X.
  41. Xu, Z.; Figueiredo, M.A.T.; Yuan, X.; Studer, C.; Goldstein, T. Adaptive Relaxed ADMM: Convergence Theory and Practical Implementation. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 7234–7243.
  42. He, B.-S.; Xu, M.-H.; Yuan, X.-M. Block-Wise ADMM with a Relaxation Factor for Multiple-Block Convex Programming. J. Oper. Res. Soc. China 2018, 6, 485–505.
  43. Cai, J.-F.; Candes, E.; Shen, Z. A Singular Value Thresholding Algorithm for Matrix Completion. SIAM J. Optim. 2010, 20, 1956–1982.
  44. Tibshirani, R. Regression shrinkage and selection via the lasso. J. R. Stat. Soc. Ser. B 1996, 58, 267–288.
  45. Molisch, A.F.; Ratnam, V.V.; Han, S.; Li, Z.; Nguyen, S.L.H.; Li, L.; Haneda, K. Hybrid Beamforming for Massive MIMO: A Survey. IEEE Commun. Mag. 2017, 55, 134–141.
  46. He, R.; Hu, B.; Zheng, W.-S.; Guo, Y. Two-stage sparse representation for robust recognition on large-scale database. In Proceedings of the Twenty-Fourth AAAI Conference on Artificial Intelligence, Atlanta, GA, USA, 11–15 July 2010.
  47. Rangan, S.; Schniter, P.; Fletcher, A.K. Vector approximate message passing. IEEE Trans. Inf. Theory 2019, 65, 6664–6684.
  48. Bucklew, J.A.; Radeke, R. On the Monte Carlo simulation of digital communication systems in Gaussian noise. IEEE Trans. Commun. 2003, 51, 267–274.
  49. Berriche, L.; Abed-Meraim, K.; Belfiore, J. Investigation of the channel estimation error on MIMO system performance. In Proceedings of the 2005 13th European Signal Processing Conference, Antalya, Turkey, 4–8 September 2005; pp. 1–4.
  50. Yoo, T.; Goldsmith, A. Capacity and power allocation for fading MIMO channels with channel estimation error. IEEE Trans. Inf. Theory 2006, 52, 2203–2214.
Figure 1. Block structure of a typical mmWave Multiple Input Multiple Output (MIMO) system.
Figure 2. (a–c) ASE for different transmit SNRs for a 64 × 64 mmWave MIMO channel at T = 400, T = 800 and T = 1200.
Figure 3. (a–c) NMSE for different transmit SNRs for a 64 × 64 mmWave MIMO channel at T = 400, T = 800 and T = 1200.
Figure 4. (a–c) NMSE at a 30-dB transmit SNR for a 64 × 64 mmWave MIMO channel at T = 400, T = 800 and T = 1200, with respect to algorithmic iteration for different γ values. (d) The effect on NMSE of the number of paths N_p at T = 2000.
