Article

Performance Analysis of Regularized Convex Relaxation for Complex-Valued Data Detection

Ayed M. Alrashdi 1,* and Houssem Sifaou 2
1 Department of Electrical Engineering, College of Engineering, University of Ha’il, P.O. Box 2440, Ha’il 81441, Saudi Arabia
2 Department of Electrical and Electronic Engineering, Imperial College London, London SW7 2AZ, UK
* Author to whom correspondence should be addressed.
Mathematics 2022, 10(9), 1585; https://doi.org/10.3390/math10091585
Submission received: 31 March 2022 / Revised: 26 April 2022 / Accepted: 30 April 2022 / Published: 7 May 2022

Abstract: In this work, we study complex-valued data detection performance in massive multiple-input multiple-output (MIMO) systems. We focus on the problem of recovering an n-dimensional signal whose entries are drawn from an arbitrary constellation $\mathcal{K} \subset \mathbb{C}$ from m noisy linear measurements, with an independent and identically distributed (i.i.d.) complex Gaussian channel. Since the optimal maximum likelihood (ML) detector is computationally prohibitive for large dimensions, many convex relaxation heuristic methods have been proposed to solve the detection problem. In this paper, we consider a regularized version of this convex relaxation that we call the regularized convex relaxation (RCR) detector and sharply derive asymptotic expressions for its mean square error and symbol error probability. Monte-Carlo simulations are provided to validate the derived analytical results.

1. Introduction

Detection of complex-valued data from noisy linear measurements appears often in many communication applications, such as massive multiple-input multiple-output (MIMO) signal detection [1], multiuser detection [2], and decoding of space-time codes [3]. In this work, we consider the problem of recovering an n-dimensional complex-valued signal whose entries are drawn from an arbitrary constellation $\mathcal{K} \subset \mathbb{C}$ from m noisy linear measurements in a massive MIMO application. Although the maximum likelihood (ML) detector achieves excellent performance, its computational complexity becomes prohibitive as the problem size increases [2]. To achieve acceptable performance with low computational complexity, various convex optimization-based heuristics have been developed. One popular convex relaxation of the ML detector is the box-relaxation used in [4,5]. Regularization-based techniques have been employed in [6,7] to further improve the performance of the box-relaxation in massive MIMO applications. Sparsity-aware regularization methods based on ideas from compressed sensing were used in [8,9,10]. Sum-of-absolute-values (SOAV) optimization [11] uses a similar sparsity-based approach and has been applied to many wireless communication problems [12,13,14].
There are a few theoretical approaches to analyzing convex optimization-based signal reconstruction problems. One of the main technical tools used in the sharp asymptotic analysis of such problems is the convex Gaussian min-max theorem (CGMT) framework [15,16]. The CGMT has been used to analyze the performance of various optimization problems [4,6,7,8,12,15,17]. However, prior performance analyses using the CGMT were only established for real-valued constellations such as binary phase-shift keying (BPSK) and pulse amplitude modulation (PAM) using the box-relaxation method [4]. For complex-valued constellations, to the best of our knowledge, the CGMT was only applied in one work [5], which analyzes the performance of the convex relaxation, but without regularization.
The heuristic method proposed in this work is to relax the discrete set $\mathcal{K}$ to a convex and continuous set $\mathcal{V}$, and then solve the detection problem using a regularized convex optimization program followed by hard-thresholding. We call this method the regularized convex relaxation (RCR) detector, and we sharply analyze its asymptotic performance in terms of its mean square error (MSE) and symbol error probability (SEP) in the limit of $m, n \to \infty$ at a fixed ratio. Furthermore, we show the additional performance gains attained by adding the convex relaxation constraint. We assume an independent and identically distributed (i.i.d.) complex Gaussian channel matrix and additive white Gaussian noise. Our results also enable us to select the optimal regularization factor to further improve the detection performance. As a concrete example, we focus our attention on studying the performance of the RCR detector for phase-shift keying (PSK) and quadrature amplitude modulation (QAM) constellations. Monte-Carlo simulations are provided to validate our analytical expressions.

2. Problem Formulation

2.1. Notation

The basic notation used throughout this article is gathered here. We use $\mathbb{R}$ and $\mathbb{C}$ to represent the sets of real and complex numbers, respectively. In addition, $\mathbb{Z}_{+}$ is used to indicate the set of positive integers. For a complex scalar $z \in \mathbb{C}$, $z_R$ and $z_I$ represent the real and imaginary parts of $z$, respectively, and $|z| = \sqrt{z_R^2 + z_I^2}$. We use the letter $j$ to denote the imaginary unit, i.e., $j^2 = -1$. A real Gaussian distribution with mean $\mu$ and variance $\sigma^2$ is indicated by $\mathcal{N}(\mu, \sigma^2)$. Similarly, $\mathcal{CN}(\mu, \sigma^2)$ indicates a complex Gaussian distribution with real and imaginary parts drawn independently from $\mathcal{N}(\mu_R, \frac{\sigma^2}{2})$ and $\mathcal{N}(\mu_I, \frac{\sigma^2}{2})$, respectively. $X \sim p_X$ implies that the random variable $X$ has a density $p_X$. Bold lower-case letters are reserved for vectors, e.g., $\mathbf{x}$, with $x_i$ denoting its $i$-th entry. The Euclidean norm of a vector is denoted by $\|\cdot\|$. Matrices are represented by bold upper-case letters, e.g., $\mathbf{A}$, while $(\cdot)^{\mathsf{T}}$ represents the transpose operator. We reserve the letters $G$ and $Z$ to denote independent real standard Gaussian random variables. Similarly, $G_c$ is reserved to denote a complex $\mathcal{CN}(0, 2)$ Gaussian random variable. The notations $\mathbb{E}[\cdot]$ and $\mathbb{P}(\cdot)$ indicate the expectation and probability operators, respectively. The symbols “$\overset{d}{=}$” and “$\overset{P}{\longrightarrow}$” are used to designate equality in distribution and convergence in probability, respectively. Finally, for a closed and nonempty convex set $\mathcal{V} \subseteq \mathbb{C}$, and for any vector $\mathbf{x} \in \mathbb{C}^n$, we define its distance and projection functions, respectively, as follows:
$$\mathcal{D}(\mathbf{x}; \mathcal{V}) = \min_{\mathbf{a} \in \mathcal{V}^n} \|\mathbf{x} - \mathbf{a}\|, \qquad (1)$$
$$\Pi(\mathbf{x}; \mathcal{V}) = \arg\min_{\mathbf{a} \in \mathcal{V}^n} \|\mathbf{x} - \mathbf{a}\|. \qquad (2)$$
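As a concrete (hypothetical) illustration of these two maps, the short NumPy sketch below evaluates them entry-wise, assuming the caller supplies a scalar projection onto $\mathcal{V}$; the function names are ours and do not appear in the paper.

```python
import numpy as np

def project(x, proj_scalar):
    """Entry-wise projection Pi(x; V) of a complex vector x onto V^n,
    given a scalar projection proj_scalar: C -> V."""
    return np.array([proj_scalar(xi) for xi in np.asarray(x, dtype=complex)])

def distance(x, proj_scalar):
    """Distance D(x; V) = ||x - Pi(x; V)|| for a closed convex set V."""
    x = np.asarray(x, dtype=complex)
    return np.linalg.norm(x - project(x, proj_scalar))
```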

2.2. Problem Setup

We need to recover an n-dimensional complex-valued transmit vector $\mathbf{s}_0 \in \mathcal{K}^n \subset \mathbb{C}^n$, where $\mathcal{K}$ is the discrete transmit modulation constellation (e.g., PSK, QAM, etc.). The received signal vector $\mathbf{r} \in \mathbb{C}^m$ is given by
$$\mathbf{r} = \mathbf{H}\mathbf{s}_0 + \mathbf{v}, \qquad (3)$$
where $\mathbf{H} \in \mathbb{C}^{m \times n}$ is the MIMO channel matrix that has i.i.d. $\mathcal{CN}(0, \frac{1}{n})$ entries and $\mathbf{v} \in \mathbb{C}^m$ is the noise vector with i.i.d. $\mathcal{CN}(0, \sigma^2)$ entries. Under the current setup, the signal-to-noise ratio (SNR) is $\mathrm{SNR} = 1/\sigma^2$.
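For reference, the following minimal NumPy sketch draws one realization of this measurement model; the helper name and the way the SNR is passed in are our own choices, not part of the paper.

```python
import numpy as np

def generate_measurements(s0, m, snr):
    """Return (H, v, r) with r = H s0 + v, where H has i.i.d. CN(0, 1/n)
    entries and v has i.i.d. CN(0, sigma^2) entries, sigma^2 = 1/SNR."""
    n = s0.size
    sigma2 = 1.0 / snr
    H = (np.random.randn(m, n) + 1j * np.random.randn(m, n)) / np.sqrt(2 * n)
    v = np.sqrt(sigma2 / 2) * (np.random.randn(m) + 1j * np.random.randn(m))
    return H, v, H @ s0 + v
```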
Detector: The optimum detector that minimizes the error rate is proven to be the maximum likelihood (ML) detector [2], which solves the following optimization problem:
$$\hat{\mathbf{s}}_{\mathrm{ML}} := \arg\min_{\mathbf{s} \in \mathcal{K}^n}\ \frac{1}{2}\left\| \mathbf{H}\mathbf{s} - \mathbf{r} \right\|^2. \qquad (4)$$
This is computationally prohibitive in the considered massive MIMO setup, due to the discrete nature of the constraint set $\mathcal{K}$. Instead, in this paper, we consider the regularized convex relaxation (RCR) detector. The RCR recovers $\mathbf{s}^{\star}$ in the following two steps:
$$\hat{\mathbf{s}} := \arg\min_{\mathbf{s} \in \mathcal{V}^n}\ \frac{1}{2}\left\| \mathbf{H}\mathbf{s} - \mathbf{r} \right\|^2 + \frac{\zeta}{2}\left\| \mathbf{s} \right\|^2, \qquad (5a)$$
$$s^{\star}_i := \arg\min_{c \in \mathcal{K}}\ |c - \hat{s}_i|, \quad i = 1, 2, \ldots, n, \qquad (5b)$$
where in the first step (5a), the discrete set $\mathcal{K}$ is relaxed to a convex set $\mathcal{V}$, and we then solve a regularized version of this relaxed problem with $\zeta > 0$ being the regularization factor. In the second step (5b), each entry of $\hat{\mathbf{s}}$ is mapped to its closest point in $\mathcal{K}$ to produce the final estimate $\mathbf{s}^{\star}$.
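The paper does not prescribe a particular solver for the convex step (5a). As one illustrative possibility (a sketch under our own assumptions, not the authors' implementation), the convex step can be solved by projected gradient descent, followed by the nearest-point hard decision of (5b):

```python
import numpy as np

def rcr_detect(H, r, zeta, proj_scalar, constellation, n_iter=500):
    """Regularized convex relaxation (RCR) detector:
    projected gradient descent on (5a), then hard decision (5b)."""
    m, n = H.shape
    s = np.zeros(n, dtype=complex)
    # A step size below 1 / (largest eigenvalue of H^H H + zeta) ensures convergence.
    mu = 1.0 / (np.linalg.norm(H, 2) ** 2 + zeta)
    for _ in range(n_iter):
        grad = H.conj().T @ (H @ s - r) + zeta * s               # gradient of (5a)
        s = np.array([proj_scalar(si) for si in s - mu * grad])  # project onto V^n
    # Hard decision: map every entry to its closest constellation point.
    s_star = np.array([constellation[np.argmin(np.abs(constellation - si))]
                       for si in s])
    return s, s_star
```

Any convex solver could replace the gradient loop; the projection step is what encodes the relaxed set $\mathcal{V}$.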

2.3. Performance Measures

In this paper, we provide a sharp performance analysis of the RCR detector in terms of the problem parameters such as the $\mathrm{SNR}$, $\zeta$, $m$, $n$, $\mathcal{K}$, and $\mathcal{V}$. We consider two different performance measures, namely the MSE and the SEP, discussed next.
Mean Square Error (MSE): this metric is used to quantify the performance of the estimation step in (5a). It is defined as follows:
$$\mathrm{MSE} := \frac{1}{n}\left\| \mathbf{s}_0 - \hat{\mathbf{s}} \right\|^2. \qquad (6)$$
The other performance metrics concern the detection step.
Symbol Error Rate (SER): the SER analyzes the performance of the second (detection) step in (5b), and it is defined as:
$$\mathrm{SER} := \frac{1}{n}\sum_{i=1}^{n} \mathbb{1}\{ s^{\star}_i \neq s_{0,i} \}, \qquad (7)$$
where $\mathbb{1}\{\cdot\}$ represents the indicator function.
A closely related measure is the symbol error probability (SEP), which is given as
$$\mathrm{SEP} := \mathbb{E}[\mathrm{SER}] = \frac{1}{n}\sum_{i=1}^{n} \mathbb{P}\left( s^{\star}_i \neq s_{0,i} \right), \qquad (8)$$
where the expectation is taken over the channel, the noise and the signal constellation.
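For a single channel realization, both quantities can be estimated empirically straight from the definitions (a small sketch; the variable names are ours):

```python
import numpy as np

def empirical_metrics(s0, s_hat, s_star):
    """MSE of the relaxed estimate, as in (6), and SER of the hard decisions, as in (7)."""
    mse = np.sum(np.abs(s0 - s_hat) ** 2) / s0.size
    ser = np.mean(s_star != s0)
    return mse, ser
```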
Next, we introduce the notation $\mathcal{V}_x$, for $x \in \mathcal{K}$, as the set of all points in $\mathcal{V}$ that will be mapped to $x$ in (5b). Equivalently,
$$\mathcal{V}_x := \left\{ b \in \mathcal{V} : |b - x| < |b - a|, \ \forall a \in \mathcal{K}, a \neq x \right\}. \qquad (9)$$
With this notation at hand, we can rewrite the SEP in (8) as
$$\mathrm{SEP} = \frac{1}{n}\sum_{i=1}^{n} \mathbb{P}\left( \hat{s}_i \notin \mathcal{V}_{s_{0,i}} \right), \qquad (10)$$
where $\hat{s}_i$ is a minimizer of (5a).

2.4. Technical Assumptions

We assume that the entries of $\mathbf{s}_0$ are sampled i.i.d. from a probability density function $p_{s_0}$, with $p_{s_0}(b_1 + j b_2) = p_{s_0}(b_2 + j b_1)$ for all $b_1, b_2 \in \mathbb{R}$. Furthermore, we assume that $\mathbf{s}_0$ is normalized to have zero mean and unit variance, i.e., $\mathbb{E}[S_0] = 0$ and $\mathbb{E}[|S_0|^2] = 1$. The convex set $\mathcal{V}$ is assumed to be symmetric, i.e., if $(b_1 + j b_2) \in \mathcal{V}$, then $(b_2 + j b_1) \in \mathcal{V}$ as well. Finally, we assume a high-dimensional regime in which $m, n \to \infty$ at a proportional rate $\kappa := \frac{m}{n} \in (0, \infty)$.

3. Asymptotic Performance Analysis

3.1. Main Asymptotic Results

This section provides our main results on the performance evaluation of the RCR detector in the considered high dimensional setting.
Theorem 1
(Asymptotics of the RCR). Let MSE and SEP be the mean square error and symbol error probability of the RCR detector in (5), respectively, for an unknown signal $\mathbf{s}_0 \in \mathcal{K}^n$ with entries sampled i.i.d. from a distribution $p_{s_0}$. Let $\mathcal{V}$ be a convex relaxation of $\mathcal{K}$ that satisfies the assumptions of Section 2.4. For fixed $\zeta \geq 0$ and $\kappa > 0$, if the following optimization problem
$$\min_{\alpha > 0} \max_{\beta > 0}\ \kappa\alpha\beta - \frac{\beta^2}{2} - \frac{\alpha\beta^2}{\beta + 2\zeta\alpha} + \frac{\beta}{2\alpha}\left( \sigma^2 + 1 \right) - \frac{\beta^2}{2\alpha\beta + 4\zeta\alpha^2} + \left( \frac{\beta}{2\alpha} + \zeta \right) \mathbb{E}\left[ \mathcal{D}^2\!\left( \frac{\beta}{\beta + 2\zeta\alpha}\left( S_0 - \alpha G_c \right);\ \mathcal{V} \right) \right] \qquad (11)$$
has a unique solution $(\alpha^*, \beta^*)$, then it holds in probability that
$$\lim_{n \to \infty} \mathrm{MSE} = 2\kappa\alpha^{*2} - \sigma^2, \qquad (12)$$
and
$$\lim_{n \to \infty} \mathrm{SEP} = \mathbb{P}\left( \Pi\!\left( \frac{\beta^*}{\beta^* + 2\zeta\alpha^*}\left( S_0 - \alpha^* G_c \right);\ \mathcal{V} \right) \notin \mathcal{V}_{S_0} \right), \qquad (13)$$
where the expectation and probability in the above expressions are taken over $S_0 \sim p_{s_0}$ and $G_c \sim \mathcal{CN}(0, 2)$. Here, $\mathcal{D}(\cdot)$ and $\Pi(\cdot)$ are the distance and projection functions defined in (1) and (2), respectively. In addition, the set $\mathcal{V}_{S_0}$ is as defined in (9).
Proof. 
See Section 5. □
Theorem 1 provides high dimensional asymptotic expressions to calculate the MSE and SEP of the RCR detector under an arbitrary complex-valued constellation.
Remark 1.
The objective function in (11) is convex-concave and only includes scalar variables. Thus, first-order optimality conditions may be used to efficiently calculate α * and β * numerically.
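As an illustration of Remark 1 (a sketch of one possible numerical strategy, not the authors' procedure), the expectation in (11) can be approximated by Monte Carlo sampling and the saddle point located by nested one-dimensional searches; `rcr_objective(alpha, beta)` below is a hypothetical helper assumed to return the (approximated) objective value of (11).

```python
from scipy.optimize import minimize_scalar

def saddle_point(rcr_objective, alpha_max=10.0, beta_max=10.0):
    """Locate (alpha*, beta*) of the convex-concave scalar problem (11)
    by nested bounded 1-D searches."""
    def inner_max(alpha):
        # Maximize over beta for this alpha (concave in beta).
        res = minimize_scalar(lambda b: -rcr_objective(alpha, b),
                              bounds=(1e-6, beta_max), method="bounded")
        return -res.fun

    outer = minimize_scalar(inner_max, bounds=(1e-6, alpha_max), method="bounded")
    alpha_star = outer.x
    beta_star = minimize_scalar(lambda b: -rcr_objective(alpha_star, b),
                                bounds=(1e-6, beta_max), method="bounded").x
    return alpha_star, beta_star
```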

3.2. Modulation Schemes

Using Theorem 1, we can sharply characterize the performance of the RCR detector for a general complex-valued constellation $\mathcal{K}$, which can be relaxed to an arbitrary convex set $\mathcal{V}$. To better illustrate our result and demonstrate how to adapt it to different schemes, we concentrate on two conventional modulation schemes, the PSK and QAM constellations, which are addressed next.

3.2.1. M-PSK Constellation

In an M-PSK constellation, where $M = 2^k$ for some $k \in \mathbb{Z}_{+}$, each entry of $\mathbf{s}_0$ is randomly drawn from the set
$$\mathcal{K} = \left\{ \exp\!\left( j\,\frac{2\pi i}{M} \right) : i = 0, 1, \ldots, M - 1 \right\}, \quad M \geq 4.$$
Note that this work focuses on complex constellations, but the case $M = 2$ corresponds to the real-valued BPSK constellation, which has already been studied in [6]. The elements of $\mathcal{K}$ are uniformly distributed over the unit circle in the complex plane, and therefore we suggest the use of the so-called circular-relaxation (CR), where we choose the set
$$\mathcal{V}_{\mathrm{CR}} = \{ x \in \mathbb{C} : |x| \leq 1 \}$$
as the convex relaxation set in (5a). The projection function on this set has the following form:
$$\Pi(x; \mathcal{V}_{\mathrm{CR}}) = \begin{cases} x, & \text{if } |x| \leq 1, \\ \dfrac{x}{|x|}, & \text{otherwise}. \end{cases}$$
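In code, the circular relaxation and its projection are straightforward (a short sketch; the names are ours):

```python
import numpy as np

def psk_constellation(M):
    """Unit-circle M-PSK alphabet: exp(j*2*pi*i/M), i = 0, ..., M-1."""
    return np.exp(2j * np.pi * np.arange(M) / M)

def proj_circular(x):
    """Entry-wise projection onto V_CR = {x in C : |x| <= 1}."""
    x = np.asarray(x, dtype=complex)
    return x / np.maximum(np.abs(x), 1.0)
```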
Due to the symmetric nature of the M-PSK constellation, the asymptotic SEP can be derived in the following closed-form:
$$\lim_{n \to \infty} \mathrm{SEP} = \mathbb{P}\left( |Z| \geq \left( G + \frac{1}{\alpha^*} \right) \tan\!\left( \frac{\pi}{M} \right) \right),$$
where $Z$ and $G$ are i.i.d. $\mathcal{N}(0, 1)$ random variables.

3.2.2. M-QAM Constellation

In this paper, we only consider square QAM constellations, where $M = 2^{2k}$, such as 4-QAM, 16-QAM, 64-QAM, etc. The constellation set is then given by
$$\mathcal{K} = \left\{ (a + j b) \in \mathbb{C} : a, b \in \left\{ \frac{-(\sqrt{M} - 1)}{\sqrt{E_{\mathrm{avg}}}}, \frac{-(\sqrt{M} - 3)}{\sqrt{E_{\mathrm{avg}}}}, \ldots, \frac{\sqrt{M} - 1}{\sqrt{E_{\mathrm{avg}}}} \right\} \right\},$$
where we normalize the constellation points by $\sqrt{E_{\mathrm{avg}}}$, with $E_{\mathrm{avg}} := \frac{2(M - 1)}{3}$, to have unit average power. ($E_{\mathrm{avg}}$ represents the average power of the non-normalized M-QAM symbols.) The convex relaxation that is often used for this modulation is known as the box-relaxation (BR) [4], which is given as
$$\mathcal{V}_{\mathrm{BR}} = \left\{ (a + j b) \in \mathbb{C} : |a| \leq \frac{\sqrt{M} - 1}{\sqrt{E_{\mathrm{avg}}}}, \ |b| \leq \frac{\sqrt{M} - 1}{\sqrt{E_{\mathrm{avg}}}} \right\}.$$
Similar to the preceding subsection, to apply Theorem 1, we must first construct the projection and distance functions of $\mathcal{V}_{\mathrm{BR}}$, which is straightforward for a box set. Then, the SEP can be calculated using (13). In this scenario, unlike in the M-PSK case, the SEP of the detector is not the same for all symbols in $\mathcal{K}$; this is due to the fact that an M-QAM constellation includes distinct types of points, namely inner, edge, and corner points.
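For completeness, a short sketch (ours) of the normalized square M-QAM alphabet and the entry-wise projection onto $\mathcal{V}_{\mathrm{BR}}$, which simply clips the real and imaginary parts:

```python
import numpy as np

def qam_constellation(M):
    """Unit-average-power square M-QAM alphabet (M = 4, 16, 64, ...)."""
    E_avg = 2.0 * (M - 1) / 3.0
    levels = np.arange(-(np.sqrt(M) - 1), np.sqrt(M), 2) / np.sqrt(E_avg)
    return (levels[:, None] + 1j * levels[None, :]).ravel()

def proj_box(x, M):
    """Entry-wise projection onto V_BR: clip real/imaginary parts to
    +/- (sqrt(M) - 1) / sqrt(E_avg)."""
    E_avg = 2.0 * (M - 1) / 3.0
    c = (np.sqrt(M) - 1.0) / np.sqrt(E_avg)
    x = np.asarray(x, dtype=complex)
    return np.clip(x.real, -c, c) + 1j * np.clip(x.imag, -c, c)
```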

4. Numerical Simulation Results

In Figure 1, we plot the MSE and SEP performances as functions of the regularizer $\zeta$ for a 16-QAM constellation with BR. These figures verify the accuracy of the predictions of Theorem 1 when compared to Monte-Carlo (MC) empirical simulations. In addition, from these figures we can see a clear value of $\zeta$ that gives the best MSE and SEP performance. Thus, Theorem 1 can be used to select the optimal value of the regularizer.
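To make this concrete, a minimal sketch (ours) of such a selection by grid search is given below; `theoretical_mse(zeta, kappa, snr)` is a hypothetical helper assumed to evaluate the asymptotic MSE of (12) at the saddle point of (11).

```python
import numpy as np

def best_regularizer(theoretical_mse, kappa, snr, zeta_grid=None):
    """Pick the regularizer that minimizes the MSE predicted by Theorem 1."""
    if zeta_grid is None:
        zeta_grid = np.linspace(0.01, 2.0, 200)
    mse_curve = [theoretical_mse(z, kappa, snr) for z in zeta_grid]
    return zeta_grid[int(np.argmin(mse_curve))]
```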
Furthermore, Figure 2 verifies the accuracy of the MSE and SEP predictions of Theorem 1 as functions of the SNR, for a 16-PSK modulation scheme with circular relaxation. It is worth noting that, while the theorem requires $m, n \to \infty$, the theoretical predictions are already accurate for moderate problem sizes; in this simulation, $n = 128$. In this figure, we also plot the unconstrained regularized least-squares (RLS) detector (without convex relaxation, i.e., $\mathcal{V} = \mathbb{C}$). It can be seen that the RCR detector outperforms the RLS.
Furthermore, under the box-relaxation, we apply Theorem 1 to sharply characterize the MSE and SEP of a 16-QAM modulated system as functions of the SNR. The result is shown in Figure 3, which again illustrates the high accuracy of our results, as well as the fact that the RCR detector outperforms the RLS.
Finally, we should point out that in all of the numerical simulations illustrated above, we considered overdetermined systems, i.e., κ > 1 . We conclude this section by presenting another simulation example for an underdetermined system, with κ = 0.8 , while all other parameters are the same as in the previous example. The MSE performance is summarized in Table 1. This table confirms the sharpness of our predictions for underdetermined systems as well.

5. Sketch of the Proof

In this section, we provide a sketch of the proof of Theorem 1. For the reader’s convenience, we summarize the main tool used in our analysis, namely the CGMT, in the next subsection.

5.1. CGMT: An Analysis Tool

We must first introduce the key component of the analysis, which is the CGMT. We only summarize the theorem’s formulation here, and we refer the reader to [4,15] for the full technical prerequisites. In particular, let us consider the following (stochastic) optimization problems, which are called the primal optimization (PO) problem and the auxiliary optimization (AO) problem, respectively:
$$\Psi(\mathbf{X}) := \min_{\mathbf{a} \in S_a} \max_{\mathbf{b} \in S_b}\ \mathbf{b}^{\mathsf{T}} \mathbf{X} \mathbf{a} + T(\mathbf{a}, \mathbf{b}), \qquad (19a)$$
$$\psi(\mathbf{g}_1, \mathbf{g}_2) := \min_{\mathbf{a} \in S_a} \max_{\mathbf{b} \in S_b}\ \|\mathbf{a}\|\, \mathbf{g}_1^{\mathsf{T}} \mathbf{b} + \|\mathbf{b}\|\, \mathbf{g}_2^{\mathsf{T}} \mathbf{a} + T(\mathbf{a}, \mathbf{b}), \qquad (19b)$$
where $\mathbf{X} \in \mathbb{R}^{\tilde{m} \times \tilde{n}}$, $\mathbf{g}_1 \in \mathbb{R}^{\tilde{m}}$, and $\mathbf{g}_2 \in \mathbb{R}^{\tilde{n}}$ all have i.i.d. standard Gaussian entries. The sets $S_a \subset \mathbb{R}^{\tilde{n}}$ and $S_b \subset \mathbb{R}^{\tilde{m}}$ are assumed to be convex and compact, and $T: \mathbb{R}^{\tilde{n}} \times \mathbb{R}^{\tilde{m}} \to \mathbb{R}$. Moreover, we assume that the function $T$ is independent of $\mathbf{X}$. Let $\mathbf{a}_{\Psi} := \mathbf{a}_{\Psi}(\mathbf{X})$ and $\mathbf{a}_{\psi} := \mathbf{a}_{\psi}(\mathbf{g}_1, \mathbf{g}_2)$ be any optimizers of (19a) and (19b), respectively. In addition, let $T(\mathbf{a}, \mathbf{b})$ be convex-concave and continuous on $S_a \times S_b$.
Based on the assumptions stated above, the CGMT shows an asymptotic equivalence between the PO and AO problems, which is explicitly expressed in the following theorem. This theorem’s proof may be found in [15].
Theorem 2
(CGMT [4]). Suppose $S$ is an arbitrary open subset of $S_a$, and $S^c = S_a \setminus S$. Let $\psi_{S^c}(\mathbf{g}_1, \mathbf{g}_2)$ be the optimal objective of the optimization in (19b) when the minimization over $\mathbf{a}$ is constrained to $\mathbf{a} \in S^c$. Suppose that there exist positive constants $\eta < \delta$ such that, in the limit $\tilde{n} \to \infty$, $\psi(\mathbf{g}_1, \mathbf{g}_2) \overset{P}{\longrightarrow} \eta$ and $\psi_{S^c}(\mathbf{g}_1, \mathbf{g}_2) \overset{P}{\longrightarrow} \delta$. Then, it holds that
$$\lim_{\tilde{n} \to \infty} \mathbb{P}\left( \mathbf{a}_{\Psi} \in S \right) = 1. \qquad (20)$$
It also holds that $\lim_{\tilde{n} \to \infty} \mathbb{P}\left( \mathbf{a}_{\psi} \in S \right) = 1$.

5.2. Asymptotic Analysis

In this part, we provide an outline of the ideas used to prove our main results based on the CGMT framework. We start by rewriting (5a) through the simple change of variables from $\mathbf{s}$ to the error vector $\mathbf{w} := \mathbf{s} - \mathbf{s}_0$, to get:
$$\hat{\mathbf{w}} := \arg\min_{\mathbf{w} \in \mathcal{V}^n - \mathbf{s}_0}\ \frac{1}{2}\left\| \mathbf{H}\mathbf{w} - \mathbf{v} \right\|^2 + \frac{\zeta}{2}\left\| \mathbf{w} + \mathbf{s}_0 \right\|^2. \qquad (21)$$
Next, let $\tilde{\mathbf{H}} := \begin{bmatrix} \mathbf{H}_R & -\mathbf{H}_I \\ \mathbf{H}_I & \mathbf{H}_R \end{bmatrix} \in \mathbb{R}^{2m \times 2n}$ and $\tilde{\mathbf{v}} := \begin{bmatrix} \mathbf{v}_R \\ \mathbf{v}_I \end{bmatrix} \in \mathbb{R}^{2m}$, where $\mathbf{H}_R, \mathbf{v}_R$ ($\mathbf{H}_I, \mathbf{v}_I$) are the real (imaginary) parts of $\mathbf{H}$ and $\mathbf{v}$, respectively. With this, and normalizing (21) by $\frac{1}{n}$, we have
$$\hat{\mathbf{w}} := \arg\min_{\substack{\tilde{\mathbf{w}} \in \mathbb{R}^{2n}:\\ \tilde{w}_i + j \tilde{w}_{i+n} \in \mathcal{V} - s_{0,i}}}\ \frac{1}{2n}\left\| \tilde{\mathbf{H}}\tilde{\mathbf{w}} - \tilde{\mathbf{v}} \right\|^2 + \frac{\zeta}{2n}\left\| \tilde{\mathbf{w}} + \tilde{\mathbf{s}}_0 \right\|^2, \qquad (22)$$
where $\tilde{\mathbf{s}}_0 := \begin{bmatrix} \mathbf{s}_{0,R} \\ \mathbf{s}_{0,I} \end{bmatrix} \in \mathbb{R}^{2n}$. Because of the dependence between the entries of $\tilde{\mathbf{H}}$, the above optimization is difficult to analyze and the CGMT framework cannot be used directly here. However, as discussed in [5], one can use the Lindeberg method as in [18] to replace $\tilde{\mathbf{H}}$ with a Gaussian matrix that has i.i.d. entries without affecting the asymptotic performance; we then obtain
$$\hat{\mathbf{w}} = \arg\min_{\substack{\tilde{\mathbf{w}} \in \mathbb{R}^{2n}:\\ \tilde{w}_i + j \tilde{w}_{i+n} \in \mathcal{V} - s_{0,i}}}\ \frac{1}{2n}\left\| \frac{1}{\sqrt{2n}}\mathbf{A}\tilde{\mathbf{w}} - \tilde{\mathbf{v}} \right\|^2 + \frac{\zeta}{2n}\left\| \tilde{\mathbf{w}} + \tilde{\mathbf{s}}_0 \right\|^2, \qquad (23)$$
where $\mathbf{A} \in \mathbb{R}^{2m \times 2n}$ has i.i.d. $\mathcal{N}(0, 1)$ components, and $\tilde{\mathbf{v}}$ has i.i.d. $\mathcal{N}(0, \frac{\sigma^2}{2})$ elements. Next, we proceed to apply the CGMT by rewriting (23) as the following min-max optimization:
$$\min_{\substack{\tilde{\mathbf{w}} \in \mathbb{R}^{2n}:\\ \tilde{w}_i + j \tilde{w}_{i+n} \in \mathcal{V} - s_{0,i}}} \max_{\mathbf{u} \in \mathbb{R}^{2m}}\ \frac{\mathbf{u}^{\mathsf{T}} \mathbf{A} \tilde{\mathbf{w}}}{2n\sqrt{2n}} - \frac{\mathbf{u}^{\mathsf{T}} \tilde{\mathbf{v}}}{2n} - \frac{\|\mathbf{u}\|^2}{8n} + \frac{\zeta}{2n}\left\| \tilde{\mathbf{w}} + \tilde{\mathbf{s}}_0 \right\|^2. \qquad (24)$$
A remaining technical caveat is that the maximization over $\mathbf{u}$ appears unconstrained, i.e., the feasibility set of $\mathbf{u}$ is not compact. For this, we can follow the approach in [15] (Appendix A) by assuming that the maximizer of (24) satisfies $\|\hat{\mathbf{u}}\| \leq B_u$ for a sufficiently large constant $B_u > 0$ that is independent of $n$. This will not affect the optimization problem with high probability. Thus, we get the following problem:
$$\min_{\substack{\tilde{\mathbf{w}} \in \mathbb{R}^{2n}:\\ \tilde{w}_i + j \tilde{w}_{i+n} \in \mathcal{V} - s_{0,i}}} \max_{\|\mathbf{u}\| \leq B_u}\ \frac{\mathbf{u}^{\mathsf{T}} \mathbf{A} \tilde{\mathbf{w}}}{2n\sqrt{2n}} - \frac{\mathbf{u}^{\mathsf{T}} \tilde{\mathbf{v}}}{2n} - \frac{\|\mathbf{u}\|^2}{8n} + \frac{\zeta}{2n}\left\| \tilde{\mathbf{w}} + \tilde{\mathbf{s}}_0 \right\|^2. \qquad (25)$$
The above problem is in the form of a PO of the CGMT. Thus, the associated simplified AO problem is given as:
$$\min_{\substack{\tilde{\mathbf{w}} \in \mathbb{R}^{2n}:\\ \tilde{w}_i + j \tilde{w}_{i+n} \in \mathcal{V} - s_{0,i}}} \max_{\|\mathbf{u}\| \leq B_u}\ \frac{\|\tilde{\mathbf{w}}\|\, \mathbf{g}_1^{\mathsf{T}} \mathbf{u}}{2n\sqrt{2n}} + \frac{\|\mathbf{u}\|\, \mathbf{g}_2^{\mathsf{T}} \tilde{\mathbf{w}}}{2n\sqrt{2n}} - \frac{\mathbf{u}^{\mathsf{T}} \tilde{\mathbf{v}}}{2n} - \frac{\|\mathbf{u}\|^2}{8n} + \frac{\zeta}{2n}\left\| \tilde{\mathbf{w}} + \tilde{\mathbf{s}}_0 \right\|^2, \qquad (26)$$
where $\mathbf{g}_1 \in \mathbb{R}^{2m}$ and $\mathbf{g}_2 \in \mathbb{R}^{2n}$ have i.i.d. $\mathcal{N}(0, 1)$ entries. With some abuse of notation on $\mathbf{g}_1$, we can see that
$$\left( \frac{\|\tilde{\mathbf{w}}\|}{\sqrt{2n}}\, \mathbf{g}_1 - \tilde{\mathbf{v}} \right)^{\mathsf{T}} \mathbf{u} \overset{d}{=} \mathbf{g}_1^{\mathsf{T}} \mathbf{u}\, \sqrt{ \frac{\|\tilde{\mathbf{w}}\|^2}{2n} + \frac{\sigma^2}{2} }. \qquad (27)$$
Hence, (26) becomes
$$\min_{\substack{\tilde{\mathbf{w}} \in \mathbb{R}^{2n}:\\ \tilde{w}_i + j \tilde{w}_{i+n} \in \mathcal{V} - s_{0,i}}} \max_{\|\mathbf{u}\| \leq B_u}\ \frac{\mathbf{g}_1^{\mathsf{T}} \mathbf{u}}{2n} \sqrt{ \frac{\|\tilde{\mathbf{w}}\|^2}{2n} + \frac{\sigma^2}{2} } + \frac{\|\mathbf{u}\|\, \mathbf{g}_2^{\mathsf{T}} \tilde{\mathbf{w}}}{2n\sqrt{2n}} - \frac{\|\mathbf{u}\|^2}{8n} + \frac{\zeta}{2n}\left\| \tilde{\mathbf{w}} + \tilde{\mathbf{s}}_0 \right\|^2. \qquad (28)$$
Fixing $\beta := \frac{\|\mathbf{u}\|}{\sqrt{2n}}$, the optimization over $\mathbf{u}$ simplifies to
$$\min_{\substack{\tilde{\mathbf{w}} \in \mathbb{R}^{2n}:\\ \tilde{w}_i + j \tilde{w}_{i+n} \in \mathcal{V} - s_{0,i}}} \max_{\beta > 0}\ \frac{\beta \|\mathbf{g}_1\|}{\sqrt{2n}} \sqrt{ \frac{\|\tilde{\mathbf{w}}\|^2}{2n} + \frac{\sigma^2}{2} } + \frac{\beta\, \mathbf{g}_2^{\mathsf{T}} \tilde{\mathbf{w}}}{2n} - \frac{\beta^2}{4} + \frac{\zeta}{2n}\left\| \tilde{\mathbf{w}} + \tilde{\mathbf{s}}_0 \right\|^2. \qquad (29)$$
The square root in the above problem can be expressed using the following identity (note that, at optimality, $\tau^* = \chi$):
$$\chi = \min_{\tau > 0}\ \frac{1}{2}\left( \frac{\chi^2}{\tau} + \tau \right), \quad \text{for } \chi > 0, \qquad (30)$$
which yields the following optimization problem
$$\min_{\tau > 0} \max_{\beta > 0}\ \frac{\tau \beta \|\mathbf{g}_1\|}{2\sqrt{2n}} + \frac{\sigma^2 \beta \|\mathbf{g}_1\|}{4\tau\sqrt{2n}} - \frac{\beta^2}{4} + \min_{\substack{\tilde{\mathbf{w}} \in \mathbb{R}^{2n}:\\ \tilde{w}_i + j \tilde{w}_{i+n} \in \mathcal{V} - s_{0,i}}} \left[ \frac{\beta \|\mathbf{g}_1\|}{2\tau\sqrt{2n}}\, \frac{\|\tilde{\mathbf{w}}\|^2}{2n} + \frac{\beta\, \mathbf{g}_2^{\mathsf{T}} \tilde{\mathbf{w}}}{2n} + \frac{\zeta}{2n}\left\| \tilde{\mathbf{w}} + \tilde{\mathbf{s}}_0 \right\|^2 \right]. \qquad (31)$$
Using the weak law of large numbers (WLLN), $\frac{\|\mathbf{g}_1\|}{\sqrt{2n}} \overset{P}{\longrightarrow} \sqrt{\kappa}$, and the previous problem reduces to
$$\min_{\tau > 0} \max_{\beta > 0}\ \frac{\tau \beta \sqrt{\kappa}}{2} + \frac{\sigma^2 \beta \sqrt{\kappa}}{4\tau} - \frac{\beta^2}{4} + \min_{\substack{\tilde{\mathbf{w}} \in \mathbb{R}^{2n}:\\ \tilde{w}_i + j \tilde{w}_{i+n} \in \mathcal{V} - s_{0,i}}} \left[ \frac{\beta \sqrt{\kappa}}{2\tau}\, \frac{\|\tilde{\mathbf{w}}\|^2}{2n} + \frac{\beta\, \mathbf{g}_2^{\mathsf{T}} \tilde{\mathbf{w}}}{2n} + \frac{\zeta}{2n}\left\| \tilde{\mathbf{w}} + \tilde{\mathbf{s}}_0 \right\|^2 \right]. \qquad (32)$$
Defining $\alpha := \frac{\tau}{\sqrt{\kappa}}$, completing the squares in the minimization over $\tilde{\mathbf{w}}$, and using the WLLN, we obtain the following scalar (deterministic) optimization problem:
$$\min_{\alpha > 0} \max_{\beta > 0}\ \frac{\alpha\beta\kappa}{2} + \frac{\sigma^2 \beta}{4\alpha} - \frac{\beta^2}{4} - \frac{\beta^2}{\frac{2\beta}{\alpha} + 4\zeta} + \frac{1}{2}\left[ \frac{\beta}{2\alpha} - \frac{\beta^2}{4\alpha^2\left( \frac{\beta}{2\alpha} + \zeta \right)} \right] + \frac{1}{2}\left( \frac{\beta}{2\alpha} + \zeta \right) \mathbb{E}\left[ \mathcal{D}^2\!\left( \frac{\frac{\beta}{2\alpha}}{\frac{\beta}{2\alpha} + \zeta}\, S_0 - \frac{\beta}{\frac{\beta}{\alpha} + 2\zeta}\, G_c;\ \mathcal{V} \right) \right], \qquad (33)$$
where the expectation in the above expression is taken over $S_0 \sim p_{s_0}$ and $G_c \sim \mathcal{CN}(0, 2)$. The SEP of $\hat{\mathbf{w}}$ in (23) can be derived in a similar way to the proof of [5], to get
$$\mathrm{SEP} \overset{P}{\longrightarrow} \mathbb{P}\left( \Pi\!\left( \frac{\frac{\beta^*}{2\alpha^*}}{\frac{\beta^*}{2\alpha^*} + \zeta}\, S_0 - \frac{\beta^*}{\frac{\beta^*}{\alpha^*} + 2\zeta}\, G_c;\ \mathcal{V} \right) \notin \mathcal{V}_{S_0} \right), \qquad (34)$$
where α * and β * are optimizers of (33). Simplifying (33) and (34) concludes the proof of the SEP part of Theorem 1.
The MSE expression can be proven in a similar way by noting that
$$\frac{\|\tilde{\mathbf{w}}\|^2}{2n} + \frac{\sigma^2}{2} = \hat{\tau}_n^2, \qquad (35)$$
where $\hat{\tau}_n$ is the solution of (32). Hence, using $\hat{\alpha}_n = \frac{\hat{\tau}_n}{\sqrt{\kappa}}$, and $\hat{\alpha}_n \overset{P}{\longrightarrow} \alpha^*$, where $\alpha^*$ is the solution of (33), we conclude, by applying the CGMT, that
$$\frac{\|\hat{\mathbf{w}}\|^2}{n} \overset{P}{\longrightarrow} 2\kappa\alpha^{*2} - \sigma^2, \qquad (36)$$
which completes the proof of the MSE part of Theorem 1.

6. Discussion and Conclusions

In this article, we provided a sharp performance analysis of the regularized convex relaxation detector when used for complex-valued data detection. In particular, we studied its MSE and SEP performance in a massive MIMO application with arbitrary constellation schemes such as QAM and PSK. Numerical simulations show a close match to the obtained asymptotic results. In addition, the derived results can be used to optimally select the detector’s parameters, such as the regularization factor. Furthermore, note that the asymptotic results of Theorem 1 depend only on the parameters of the problem, such as the regularization factor $\zeta$, the SNR, the ratio of receive to transmit antennas $\kappa$, etc. Thus, given the derived asymptotic MSE or SEP expressions, one may predict the error performance of a wireless communication system as a function of these parameters in advance, which may lead to the design of efficient communication systems in an optimal manner. Furthermore, we showed that this convex relaxation outperforms the unconstrained regularized least-squares detector.

Author Contributions

Conceptualization, A.M.A.; Data curation, A.M.A.; Formal analysis, A.M.A. and H.S.; Investigation, A.M.A.; Methodology, A.M.A.; Validation, A.M.A. and H.S.; Visualization, A.M.A.; Writing—original draft, A.M.A.; Writing—review & editing, A.M.A. and H.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data sharing is not applicable to this article as no datasets were generated or analyzed during the current study.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Ngo, H.Q.; Larsson, E.G.; Marzetta, T.L. Energy and spectral efficiency of very large multiuser MIMO systems. IEEE Trans. Commun. 2013, 61, 1436–1449.
  2. Verdu, S. Multiuser Detection; Cambridge University Press: Cambridge, UK, 1998.
  3. Mohammed, S.K.; Zaki, A.; Chockalingam, A.; Rajan, B.S. High-rate space–time coded large-MIMO systems: Low-complexity detection and channel estimation. IEEE J. Sel. Top. Signal Process. 2009, 3, 958–974.
  4. Thrampoulidis, C.; Xu, W.; Hassibi, B. Symbol error rate performance of box-relaxation decoders in massive MIMO. IEEE Trans. Signal Process. 2018, 66, 3377–3392.
  5. Abbasi, E.; Salehi, F.; Hassibi, B. Performance analysis of convex data detection in MIMO. In Proceedings of the ICASSP 2019—2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Brighton, UK, 12–17 May 2019; pp. 4554–4558.
  6. Atitallah, I.B.; Thrampoulidis, C.; Kammoun, A.; Al-Naffouri, T.Y.; Hassibi, B.; Alouini, M.S. BER analysis of regularized least squares for BPSK recovery. In Proceedings of the 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), New Orleans, LA, USA, 5–9 March 2017; pp. 4262–4266.
  7. Alrashdi, A.M.; Kammoun, A.; Muqaibel, A.H.; Al-Naffouri, T.Y. Optimum M-PAM Transmission for Massive MIMO Systems with Channel Uncertainty. arXiv 2020, arXiv:2008.06993.
  8. Atitallah, I.B.; Thrampoulidis, C.; Kammoun, A.; Al-Naffouri, T.Y.; Alouini, M.S.; Hassibi, B. The BOX-LASSO with application to GSSK modulation in massive MIMO systems. In Proceedings of the 2017 IEEE International Symposium on Information Theory (ISIT), Aachen, Germany, 25–30 June 2017; pp. 1082–1086.
  9. Aissa-El-Bey, A.; Pastor, D.; Sbai, S.M.A.; Fadlallah, Y. Sparsity-based recovery of finite alphabet solutions to underdetermined linear systems. IEEE Trans. Inf. Theory 2015, 61, 2008–2018.
  10. Hajji, Z.; Aïssa-El-Bey, A.; Amis, K. Simplicity-based recovery of finite-alphabet signals for large-scale MIMO systems. Digit. Signal Process. 2018, 80, 70–82.
  11. Sasahara, H.; Hayashi, K.; Nagahara, M. Multiuser detection based on MAP estimation with sum-of-absolute-values relaxation. IEEE Trans. Signal Process. 2017, 65, 5621–5634.
  12. Hayakawa, R.; Hayashi, K. Asymptotic Performance of Discrete-Valued Vector Reconstruction via Box-Constrained Optimization With Sum of $\ell_1$ Regularizers. IEEE Trans. Signal Process. 2020, 68, 4320–4335.
  13. Hayakawa, R.; Hayashi, K. Convex optimization-based signal detection for massive overloaded MIMO systems. IEEE Trans. Wirel. Commun. 2017, 16, 7080–7091.
  14. Hayakawa, R.; Hayashi, K. Reconstruction of complex discrete-valued vector via convex optimization with sparse regularizers. IEEE Access 2018, 6, 66499–66512.
  15. Thrampoulidis, C.; Abbasi, E.; Hassibi, B. Precise error analysis of regularized M-estimators in high dimensions. IEEE Trans. Inf. Theory 2018, 64, 5592–5628.
  16. Stojnic, M. A framework to characterize performance of LASSO algorithms. arXiv 2013, arXiv:1303.7291.
  17. Thrampoulidis, C.; Oymak, S.; Hassibi, B. Regularized linear regression: A precise analysis of the estimation error. In Proceedings of the Conference on Learning Theory (COLT), PMLR, Paris, France, 3–6 July 2015; pp. 1683–1709.
  18. Oymak, S.; Tropp, J.A. Universality laws for randomized dimension reduction, with applications. Inf. Inference J. IMA 2018, 7, 337–446.
Figure 1. MSE and SEP versus the regularizer for 16-QAM with Box-Relaxation (BR), with κ = 1.5, n = 128, SNR = 15 dB. The analytical curve is based on Theorem 1. The data are averaged over 50 independent MC trials. (a) MSE performance vs. the regularizer. (b) SEP performance vs. the regularizer.
Figure 2. Performance of the Circular-Relaxation (CR) for a 16-PSK signal vector as a function of the SNR. The analytical curve is based on Theorem 1. For the empirical simulations, we used κ = 2, n = 128, and data are averaged over 50 independent MC iterations. (a) MSE performance. (b) SEP performance.
Figure 3. Box-Relaxation (BR) performance for 16-QAM. The analytical prediction is based on Theorem 1. We used κ = 2, n = 128, and data are averaged over 50 independent MC trials. (a) MSE performance vs. SNR. (b) SEP performance vs. SNR.
Table 1. MSE performance of RCR and RLS detectors. We used a 16-QAM signal vector with BR. We set κ = 0.8, n = 128, and the data are averaged over 50 independent MC iterations.

| SNR (dB) | MSE (RCR): Analytical | MSE (RCR): Empirical | MSE (RLS): Empirical |
|---|---|---|---|
| 0 | 0.4932 | 0.4894 | 0.6446 |
| 5 | 0.3323 | 0.3411 | 0.3801 |
| 10 | 0.2104 | 0.2085 | 0.2993 |
| 15 | 0.1212 | 0.1295 | 0.2395 |
| 20 | 0.0492 | 0.0513 | 0.2227 |