Article

Convexity of the Capacity of One-Bit Quantized Additive White Gaussian Noise Channels

1 School of Electronic and Electrical Engineering, Kyungpook National University, Daegu 41566, Republic of Korea
2 School of Electronics Engineering, Kyungpook National University, Daegu 41566, Republic of Korea
* Author to whom correspondence should be addressed.
Mathematics 2022, 10(22), 4343; https://doi.org/10.3390/math10224343
Submission received: 20 October 2022 / Revised: 15 November 2022 / Accepted: 17 November 2022 / Published: 18 November 2022

Abstract

In this study, the maximum error-free transmission rate of an additive white Gaussian noise channel with a symmetric one-bit analog-to-digital converter (ADC) was derived as a composite function of the binary entropy function, the Gaussian Q-function, and the square root function, under the assumption that this composite function is convex on the set of all non-negative real numbers. However, because mathematically proving this convexity near zero is difficult, studies in this field have only presented numerical results for small values in the domain. Because the low-signal-to-noise-ratio (SNR) regime is considered a major application area for one-bit ADCs in wireless communication, a concrete proof of the convexity of the composite function for small SNR values (non-negative values near zero) is important. Therefore, this study proposes a novel proof of convexity that holds for all non-negative values, based on the continuity of the involved functions.

1. Introduction

The capacity of a communication channel is defined as the maximum data rate that can be transmitted over the channel with an arbitrarily small error probability. In information theory, the channel capacity is given by the supremum of the mutual information between the input and output of the channel [1]. For example, consider the channel $Y = X + N$, where $X \in \mathbb{R}$ and $Y \in \mathbb{R}$ are the input and output random variables, respectively; $N \in \mathbb{R}$ is an additive noise independent of $X$ that follows a zero-mean Gaussian distribution; and $\mathbb{R}$ denotes the set of real numbers. This type of channel is referred to as an additive white Gaussian noise (AWGN) channel, and its capacity is given by $\frac{1}{2} \log_2 (1 + \gamma)$, where $\gamma$ is the signal-to-noise ratio (SNR) defined as $\gamma \triangleq E[|X|^2] / E[|N|^2]$. In recent decades, extensive research on wireless communications has been conducted based on the aforementioned channel capacity and its convexity [2,3].
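The AWGN capacity formula above is straightforward to evaluate numerically; the short Python sketch below (the function name is ours, not from the paper) computes $\frac{1}{2}\log_2(1+\gamma)$ for a linear SNR value:

```python
import math

def awgn_capacity(snr_linear):
    """Capacity of a real-valued AWGN channel in bits per channel use."""
    return 0.5 * math.log2(1.0 + snr_linear)

# At snr = 1 the capacity is exactly half a bit; at snr = 3 it is one bit.
print(awgn_capacity(1.0))  # 0.5
print(awgn_capacity(3.0))  # 1.0
```

Note that `snr_linear` is the linear-scale SNR; a decibel value would first be converted via `10 ** (snr_db / 10)`.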
To satisfy the consistently increasing demand for data rates, recent standards for wireless communication have focused on the large amount of unused bandwidth in the millimeter-wave (mmWave) and terahertz frequency bands [4,5,6,7], because the significantly wider bandwidth available in higher-frequency bands is a good solution to the fast-growing demand for higher data rates. However, such an increase in bandwidth can cause extensive power dissipation when conventional high-resolution analog-to-digital converters (ADCs) are used, because the power consumption of ADCs increases linearly with the sampling rate [8]. This power consumption issue is critical for mobile devices because it is directly related to the battery lifetime as well as the available power resources for uplink communications.
Receiver structures based on low-resolution ADCs have attracted considerable attention as a useful solution for exploiting the abundant resources available in extremely high-frequency bands while achieving low power consumption [9]. In particular, the extreme case that uses one-bit ADCs at the receivers has been widely studied because it is the most power-efficient solution. Extensive studies have been devoted to evaluating the performance of communication channels with one-bit ADCs [9,10,11,12,13,14,15,16,17,18,19,20,21,22]. The capacity of a single-input and single-output (SISO) real-valued AWGN channel with a one-bit ADC was derived in [10]. In [10], a symmetric one-bit ADC was assumed, such that the channel output was given by $Y_Q = f_Q(X + N)$, where $f_Q(\cdot)$ denotes the quantization function defined as
$f_Q(x) = \begin{cases} 1, & x \ge 0 \\ -1, & x < 0. \end{cases}$  (1)
The channel capacity of this one-bit quantized AWGN channel was derived as $1 - H_b(Q(\sqrt{\gamma}))$ [10], where $H_b$ and $Q$ denote the binary entropy function and the Gaussian Q-function, respectively. Further extending this result, the capacity of complex fading channels with multiple transmit and receive antennas was considered in [11]. For example, assuming perfect channel state information at the transmitter, the capacity of a multiple-input and single-output (MISO) complex fading channel was derived for given channel components. Moreover, certain upper and lower bounds were presented for multiple-input and multiple-output (MIMO) complex channels. Furthermore, numerous important studies have derived the capacities of various wireless channels with one-bit ADCs [12,13,14,15,16]. In [12], the tradeoffs between the achievable rates and energy rates were considered when one-bit ADC receivers were used. The authors of [13] used one-bit ADCs for an uplink massive MIMO system, and the corresponding throughput was analyzed. The performance of a one-bit quantized channel with limited channel state information at the transmitter was considered in [14]. In addition to using ADCs in receivers, the application of one-bit digital-to-analog converters to transmitters was considered in [15]. The secrecy capacity of a Gaussian wiretap channel with one-bit ADCs was analyzed in [16]. The detection performance of fading channels with one-bit ADCs was also studied [17,18,19,20,21,22].
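The one-bit capacity expression $1 - H_b(Q(\sqrt{\gamma}))$ can be evaluated with standard library functions via the identity $Q(x) = \frac{1}{2}\,\mathrm{erfc}(x/\sqrt{2})$; a minimal sketch (function names are ours, not from the paper):

```python
import math

def Q(x):
    # Gaussian Q-function: tail probability of a standard normal variable
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def Hb(p):
    # binary entropy function in bits; Hb(0) = Hb(1) = 0 by continuity
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1.0 - p) * math.log2(1.0 - p)

def one_bit_capacity(snr):
    # capacity of the one-bit quantized real AWGN channel, bits per use
    return 1.0 - Hb(Q(math.sqrt(snr)))

print(one_bit_capacity(0.0))    # 0.0: at zero SNR the output is a fair coin
print(one_bit_capacity(100.0))  # close to 1.0: quantizer is almost noiseless
```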
Several studies so far have been based on the capacity $1 - H_b(Q(\sqrt{\gamma}))$ of the one-bit quantized SISO AWGN channel, which was first derived in [10]. However, when it was derived in [10], the authors assumed that the composite function $H_b(Q(\sqrt{x}))$ was convex for $x \ge 0$, although the convexity was proved only by restricting the domain to $x > c$ for a constant $c$. Thus, the authors did not provide a concrete mathematical proof when $x$ is close to zero; only a numerical result was presented to support their claim of convexity when $0 \le x \le c$ [10,23]. Because the low-SNR regime is an important application of one-bit ADCs [11], the convexity on $0 \le x \le c$ must be proved for the exact derivation of the capacity in all SNR regions. Systems with one-bit ADCs have been extensively studied for the further evolution of 5G and 6G communication. Thus, providing a concrete proof of the convexity of $H_b(Q(\sqrt{x}))$ for all $x \ge 0$ is an important supplement to the theoretical completeness of previous results derived under this convexity assumption. Furthermore, the corresponding results can be used to derive unknown channel capacities with one-bit ADCs for various important communication applications, such as MIMO channels. These results can also be used to solve various optimization problems associated with the design of appropriate resource allocation strategies, as the convexity of the capacity function can guarantee the existence of a unique solution depending on the system parameters [24,25].
In this regard, this study proves that the composite function $H_b(Q(\sqrt{x}))$ is convex for all possible values of the input SNR, i.e., for all $x \ge 0$.

2. Notations and Preliminaries

The function $\log x$ denotes the logarithm with base $e$. In addition, we use the following notations throughout this paper for simplicity.
Notation 1. 
$D$ is defined as the set of nonnegative real numbers: $D = \{ x \in \mathbb{R} : x \ge 0 \}$. The function $Q : D \to \mathbb{R}$ denotes the Gaussian Q-function defined as follows, and $Q_n(x)$ is defined as the $n$-th order derivative of $Q(x)$:
$Q(x) \triangleq \int_x^{\infty} \frac{1}{\sqrt{2\pi}} e^{-\frac{r^2}{2}} \, dr, \qquad Q_n(x) \triangleq \frac{d^n}{dx^n} Q(x).$
The function $H_b : [0, 1] \to \mathbb{R}$ denotes the binary entropy function defined as
$H_b(x) = -x \log_2 x - (1 - x) \log_2 (1 - x).$
Notation 2. 
The function $g : D \to \mathbb{R}$ is defined as follows, and $g_n(x)$ is defined as the $n$-th order derivative of $g(x)$ on $D$:
$g(x) \triangleq \log \frac{1 - Q(x)}{Q(x)}, \qquad g_n(x) \triangleq \frac{d^n}{dx^n} \log \frac{1 - Q(x)}{Q(x)}.$
Notation 3. 
The functions $t : D \to \mathbb{R}$ and $s : D \to \mathbb{R}$ are defined as follows, and $t_n(x)$ and $s_n(x)$ are defined as the $n$-th order derivatives of $t(x)$ and $s(x)$ on $D$, respectively:
$t(x) \triangleq \frac{Q_1(x)}{Q(x)}, \quad s(x) \triangleq \frac{Q_1(x)}{1 - Q(x)}, \quad t_n(x) \triangleq \frac{d^n t(x)}{dx^n}, \quad s_n(x) \triangleq \frac{d^n s(x)}{dx^n}.$
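As a numerical companion to these notations (not part of the original paper; function names are ours), the quantities $Q$, $Q_1$, $g$, $t$, and $s$ can be implemented directly:

```python
import math

SQRT_2PI = math.sqrt(2.0 * math.pi)

def Q(x):
    # Gaussian Q-function via the complementary error function
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def Q1(x):
    # first derivative of Q: the negative standard normal density
    return -math.exp(-x * x / 2.0) / SQRT_2PI

def g(x):
    return math.log((1.0 - Q(x)) / Q(x))

def t(x):
    return Q1(x) / Q(x)

def s(x):
    return Q1(x) / (1.0 - Q(x))

# Sanity checks at x = 0: Q(0) = 1/2, so t(0) = s(0) = -2/sqrt(2*pi) and g(0) = 0
print(t(0.0), s(0.0), g(0.0))
```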
Based on direct differentiation using the chain rule, we can easily derive the following (see Appendix A for the derivations):
$Q_1(x) = Q_1(-x) = -\frac{1}{\sqrt{2\pi}} e^{-\frac{x^2}{2}}, \qquad Q_2(x) = -x Q_1(x),$  (2)
$\frac{d}{dx} H_b(x) = \log_2 (1 - x) - \log_2 x,$  (3)
$t_1(x) = -t(x)(x + t(x)), \qquad s_1(x) = -s(x)(x - s(x)),$  (4)
$t_2(x) = -t_1(x)(x + t(x)) - t(x)(1 + t_1(x)),$  (5)
$s_2(x) = -s_1(x)(x - s(x)) - s(x)(1 - s_1(x)),$  (6)
$g_1(x) = -t(x) - s(x), \qquad g_2(x) = (t(x) + s(x))(x + t(x) - s(x)).$  (7)
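The closed-form identities above can be spot-checked against central finite differences; a numerical sketch (not part of the paper, assuming the definitions of this section with our function names):

```python
import math

SQRT_2PI = math.sqrt(2.0 * math.pi)
Q  = lambda x: 0.5 * math.erfc(x / math.sqrt(2.0))
Q1 = lambda x: -math.exp(-x * x / 2.0) / SQRT_2PI
t  = lambda x: Q1(x) / Q(x)
s  = lambda x: Q1(x) / (1.0 - Q(x))
g  = lambda x: math.log((1.0 - Q(x)) / Q(x))

def num_deriv(f, x, h=1e-6):
    # central finite difference approximation of f'(x)
    return (f(x + h) - f(x - h)) / (2.0 * h)

for x in [0.5, 1.0, 2.0]:
    # (4): t1(x) = -t(x)(x + t(x)) and s1(x) = -s(x)(x - s(x))
    assert abs(num_deriv(t, x) - (-t(x) * (x + t(x)))) < 1e-5
    assert abs(num_deriv(s, x) - (-s(x) * (x - s(x)))) < 1e-5
    # (7): g1(x) = -t(x) - s(x) and g2(x) = (t + s)(x + t - s)
    assert abs(num_deriv(g, x) - (-t(x) - s(x))) < 1e-5
    g1 = lambda y: -t(y) - s(y)
    assert abs(num_deriv(g1, x) - (t(x) + s(x)) * (x + t(x) - s(x))) < 1e-5
print("identities (4) and (7) verified numerically")
```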
Moreover, the limiting values of $t_1(x)$ and $s_1(x)$ can be obtained as follows.
Lemma 1. 
The functions t 1 ( x ) and s 1 ( x ) have the following limits:
$\lim_{x \to \infty} t_1(x) = -1, \qquad \lim_{x \to 0} s_1(x) = \frac{4}{2\pi}.$  (8)
Moreover, for all x D , the following inequalities are satisfied:
$g(x) \ge 0, \quad g_1(x) \ge 0,$  (9)
$t(x) \le 0, \quad s(x) \le 0,$  (10)
$x + t(x) \le 0,$  (11)
$t_1(x) \le 0, \quad s_1(x) \ge 0.$  (12)
Proof. 
See Appendix A. □
Lemma 1 summarizes the inequalities that are essential for deriving the main results presented in the following section. The individual results in this lemma do not have a specific physical meaning; nevertheless, they play key roles in proving the convexity of $H_b(Q(\sqrt{x}))$, as described in the following section.
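Lemma 1's limits (8) and sign conditions (9)–(12) can be checked numerically on a grid; a sketch (not part of the paper; $t_1$ and $s_1$ are evaluated via their closed forms in (4)):

```python
import math

SQRT_2PI = math.sqrt(2.0 * math.pi)
Q  = lambda x: 0.5 * math.erfc(x / math.sqrt(2.0))
Q1 = lambda x: -math.exp(-x * x / 2.0) / SQRT_2PI
t  = lambda x: Q1(x) / Q(x)
s  = lambda x: Q1(x) / (1.0 - Q(x))
g  = lambda x: math.log((1.0 - Q(x)) / Q(x))
t1 = lambda x: -t(x) * (x + t(x))
s1 = lambda x: -s(x) * (x - s(x))

# limits in (8): t1 -> -1 as x -> infinity, s1 -> 4/(2*pi) as x -> 0
print(t1(8.0))                           # close to -1
print(s1(1e-9), 4.0 / (2.0 * math.pi))   # both about 0.6366

# inequalities (9)-(12) on a grid of positive points
for k in range(1, 200):
    x = 0.05 * k
    assert g(x) >= 0 and -t(x) - s(x) >= 0   # (9)
    assert t(x) <= 0 and s(x) <= 0           # (10)
    assert x + t(x) <= 0                     # (11)
    assert t1(x) <= 0 and s1(x) >= 0         # (12)
```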

3. Proof of Convexity

Based on Lemma 1, we can derive the following results.
Lemma 2. 
For all x D , we have
$-1 \le t_1(x) \le 0, \qquad 0 \le s_1(x) \le 1.$  (13)
Proof. 
By (8) and (12), it suffices to prove that $t_1(x) \ge -1$ and $s_1(x) \le 1$ for all $x > 0$.
First, a proof by contradiction is used to prove that $t_1(x) \ge -1$ for all $x > 0$. To this end, suppose that there exists a positive real value $a$ that satisfies $t_1(a) < -1$, or equivalently, $t_1(a) = -1 - \delta_0$ for some $\delta_0 > 0$. Because $t_1(x)$ is continuous for $x \ge 0$, there must be an open interval $(a, a + \epsilon)$ inside domain $D$ with some $\epsilon > 0$ on which $t_1(x) < -1$. Let $\epsilon^*$ be the supremum of all possible values of $\epsilon$. If $\epsilon^*$ is finite, there exists a $\delta_1 > 0$ such that $t_1(x) < -1$ on $(a, a + \epsilon^*)$ and $t_1(x) \ge -1$ on $[a + \epsilon^*, a + \epsilon^* + \delta_1)$, by the definition of $\epsilon^*$. However, by (5), together with the inequalities in Lemma 1, $t_1(x) < -1$ implies that $t_2(x) < 0$ on $(a, a + \epsilon^*)$, which further implies that $t_1(x)$ is strictly decreasing on $(a, a + \epsilon^*)$. As $t_1(x)$ is strictly decreasing on $(a, a + \epsilon^*)$, we must have $\lim_{x \to a + \epsilon^*} t_1(x) \le t_1(a) = -1 - \delta_0$. Because $t_1(x)$ is continuous, this contradicts the assumption that $t_1(x) \ge -1$ on $[a + \epsilon^*, a + \epsilon^* + \delta_1)$, which was induced by assuming a finite value of $\epsilon^*$. Similarly, if $\epsilon^*$ diverges to infinity, then $t_1(x)$ must be strictly decreasing on $(a, \infty)$. As $t_1(a) = -1 - \delta_0$, this implies that $\lim_{x \to \infty} t_1(x) \le -1 - \delta_0$, which contradicts (8). Therefore, we conclude that $t_1(x) \ge -1$ for all $x > 0$.
Similarly, a proof by contradiction is used to prove that $s_1(x) \le 1$ for all $x > 0$. For the sake of contradiction, suppose that there exists a positive real value $b$ that satisfies $s_1(b) > 1$, or equivalently, $s_1(b) = 1 + \rho_0$ for some $\rho_0 > 0$. Because $s_1(x)$ is continuous for $x \ge 0$, there must be an open interval $(b - \tau, b)$ inside domain $D$ with some $\tau > 0$ on which $s_1(x) > 1$. Let $\tau^*$ be the supremum of all possible values of $\tau$. If $\tau^* < b$, there exists a $\rho_1 > 0$ such that $s_1(x) > 1$ on $(b - \tau^*, b)$ and $s_1(x) \le 1$ on $(b - \tau^* - \rho_1, b - \tau^*]$, by the definition of $\tau^*$. However, by (6), together with the inequalities in Lemma 1, $s_1(x) > 1$ implies that $s_2(x) < 0$ on $(b - \tau^*, b)$, which further implies that $s_1(x)$ is strictly decreasing on $(b - \tau^*, b)$. As $s_1(x)$ is strictly decreasing on $(b - \tau^*, b)$, we must have $\lim_{x \to b - \tau^*} s_1(x) \ge s_1(b) = 1 + \rho_0$. Because $s_1(x)$ is continuous, this contradicts the assumption that $s_1(x) \le 1$ on $(b - \tau^* - \rho_1, b - \tau^*]$. Similarly, if $\tau^* = b$, $s_1(x)$ must be strictly decreasing on $[0, b)$. As $s_1(b) > 1$, this implies that $\lim_{x \to 0} s_1(x) > 1$, which contradicts (8). Therefore, we conclude that $s_1(x) \le 1$ for all $x > 0$. □
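As a numerical counterpart to the verification in Figure 1, the two-sided bounds of Lemma 2 can be checked on a dense grid (a sketch; the grid choice is arbitrary and not from the paper):

```python
import math

SQRT_2PI = math.sqrt(2.0 * math.pi)
Q  = lambda x: 0.5 * math.erfc(x / math.sqrt(2.0))
Q1 = lambda x: -math.exp(-x * x / 2.0) / SQRT_2PI
t  = lambda x: Q1(x) / Q(x)
s  = lambda x: Q1(x) / (1.0 - Q(x))
t1 = lambda x: -t(x) * (x + t(x))   # closed form (4)
s1 = lambda x: -s(x) * (x - s(x))   # closed form (4)

for k in range(0, 1000):
    x = 0.01 * k  # grid over [0, 10)
    assert -1.0 <= t1(x) <= 0.0, x
    assert 0.0 <= s1(x) <= 1.0, x
print("Lemma 2 bounds hold on the grid")
```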
Figure 1 numerically verifies Lemma 2, which can in turn be used to derive the following result.
Lemma 3. 
For all x D , the following inequality is true.
$g(x) + x g_1(x) - g_2(x) \ge 0.$  (14)
Proof. 
From (7), we have $g(x) + x g_1(x) - g_2(x) = g(x) + (2x + t(x) - s(x)) g_1(x)$. Because both $g(x)$ and $g_1(x)$ are nonnegative by (9), it suffices to prove that $2x + t(x) - s(x) \ge 0$. By definition, $\lim_{x \to 0} [t(x) - s(x)] = 0$, such that $\lim_{x \to 0} [2x + t(x) - s(x)] = 0$. Thus, it suffices to prove that $\frac{d}{dx} (2x + t(x) - s(x)) = 2 + t_1(x) - s_1(x) \ge 0$, or equivalently, $s_1(x) - t_1(x) \le 2$. Because $s_1(x) \ge 0$ and $t_1(x) \le 0$ (by (12)), we have $s_1(x) - t_1(x) = |t_1(x)| + |s_1(x)|$, and Lemma 2 further implies that $s_1(x) - t_1(x) = |t_1(x)| + |s_1(x)| \le 1 + 1 = 2$. □
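Similarly to the verification in Figure 2, the quantity in Lemma 3 can be evaluated on a grid (a numerical sketch under the definitions of Section 2; not part of the paper):

```python
import math

SQRT_2PI = math.sqrt(2.0 * math.pi)
Q  = lambda x: 0.5 * math.erfc(x / math.sqrt(2.0))
Q1 = lambda x: -math.exp(-x * x / 2.0) / SQRT_2PI
t  = lambda x: Q1(x) / Q(x)
s  = lambda x: Q1(x) / (1.0 - Q(x))

def lemma3_expr(x):
    # g(x) + x*g1(x) - g2(x), with g1 and g2 taken from (7)
    g  = math.log((1.0 - Q(x)) / Q(x))
    g1 = -t(x) - s(x)
    g2 = (t(x) + s(x)) * (x + t(x) - s(x))
    return g + x * g1 - g2

for k in range(0, 500):
    assert lemma3_expr(0.02 * k) >= 0.0
print("g(x) + x*g1(x) - g2(x) >= 0 on the grid")
```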
In Figure 2, the function $g(x) + x g_1(x) - g_2(x)$ is depicted to demonstrate the result in Lemma 3.
Lemma 4. 
The composite function $H_b(Q(\sqrt{x}))$ is convex on $D$ if and only if
$\left( x + \frac{1}{x} \right) g(x) \ge g_1(x)$  (15)
on $D$.
Proof. 
A real-valued, twice-differentiable function of one variable is convex if and only if its derivative is nondecreasing (i.e., its second-order derivative is nonnegative). Thus, we investigate the derivative of $H_b(Q(\sqrt{x}))$, which can be calculated using (2) and (3) as
$\frac{d}{dx} H_b(Q(\sqrt{x})) = \log_2 \frac{1 - Q(\sqrt{x})}{Q(\sqrt{x})} \cdot \left( -\frac{1}{\sqrt{2\pi}} e^{-\frac{x}{2}} \right) \cdot \frac{1}{2\sqrt{x}} = -\frac{e^{-\frac{x}{2}}}{2 (\log 2) \sqrt{2\pi x}} \log \frac{1 - Q(\sqrt{x})}{Q(\sqrt{x})}.$
Thus, $\frac{d}{dx} H_b(Q(\sqrt{x}))$ is nondecreasing if and only if $\frac{d}{dx} v(x) \le 0$, where $v(x) = x^{-\frac{1}{2}} e^{-\frac{x}{2}} \log \frac{1 - Q(\sqrt{x})}{Q(\sqrt{x})}$. Then, $\frac{d}{dx} v(x)$ can be calculated as
$\frac{d}{dx} v(x) = -\frac{1}{2} e^{-\frac{x}{2}} x^{-\frac{3}{2}} \left[ (x + 1) \log \frac{1 - Q(\sqrt{x})}{Q(\sqrt{x})} - 2x \frac{d}{dx} \log \frac{1 - Q(\sqrt{x})}{Q(\sqrt{x})} \right].$
Hence, $\frac{d}{dx} v(x) \le 0$ if and only if
$\frac{1}{2} \left( 1 + \frac{1}{x} \right) \log \frac{1 - Q(\sqrt{x})}{Q(\sqrt{x})} = \frac{1}{2} \left( 1 + \frac{1}{x} \right) g(\sqrt{x}) \ge \frac{d}{dx} \log \frac{1 - Q(\sqrt{x})}{Q(\sqrt{x})} = \frac{g_1(\sqrt{x})}{2\sqrt{x}},$
or, by multiplying both sides by $2\sqrt{x}$, it follows that $\frac{d}{dx} v(x) \le 0$ if and only if
$\left( \sqrt{x} + \frac{1}{\sqrt{x}} \right) g(\sqrt{x}) \ge g_1(\sqrt{x}).$
Therefore, we conclude that $H_b(Q(\sqrt{x}))$ is convex on $D$ if and only if $\left( \sqrt{x} + \frac{1}{\sqrt{x}} \right) g(\sqrt{x}) \ge g_1(\sqrt{x})$ on $D$. Because the function $y = \sqrt{x}$ is a one-to-one correspondence from $D$ to $D$, proving that $\left( y + \frac{1}{y} \right) g(y) \ge g_1(y)$ for all $y \in D$ is equivalent to proving that $\left( \sqrt{x} + \frac{1}{\sqrt{x}} \right) g(\sqrt{x}) \ge g_1(\sqrt{x})$ for all $x \in D$; thus, the proof is complete. □
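The equivalence in Lemma 4 can be sanity-checked by comparing a finite-difference second derivative of $H_b(Q(\sqrt{x}))$ with condition (15); a numerical sketch (the test points are our arbitrary choices):

```python
import math

SQRT_2PI = math.sqrt(2.0 * math.pi)
Q  = lambda x: 0.5 * math.erfc(x / math.sqrt(2.0))
Q1 = lambda x: -math.exp(-x * x / 2.0) / SQRT_2PI
g  = lambda x: math.log((1.0 - Q(x)) / Q(x))
g1 = lambda x: -Q1(x) / Q(x) - Q1(x) / (1.0 - Q(x))  # from (7)

def Hb(p):
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1.0 - p) * math.log2(1.0 - p)

def second_diff(x, h=1e-4):
    # finite-difference second derivative of Hb(Q(sqrt(.))) at x
    f = lambda y: Hb(Q(math.sqrt(y)))
    return (f(x + h) - 2.0 * f(x) + f(x - h)) / (h * h)

for x in [0.2, 0.5, 1.0, 2.0, 5.0]:
    assert (x + 1.0 / x) * g(x) >= g1(x)  # condition (15) at x
    assert second_diff(x * x) >= 0.0      # convexity at the matching point x^2
print("Lemma 4 condition and convexity agree at all test points")
```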
Lemma 5. 
If $g(x) \le x g_1(x)$ on a subset of $D$, then
$\frac{d}{dx} \left[ \left( x + \frac{1}{x} \right) g(x) \right] - g_2(x) \ge 0$  (16)
on the subset.
Proof. 
The left-hand side of (16) can be calculated as
$\frac{d}{dx} \left[ \left( x + \frac{1}{x} \right) g(x) \right] - g_2(x) = \left( 1 - \frac{1}{x^2} \right) g(x) + \left( x + \frac{1}{x} \right) g_1(x) - g_2(x) = \frac{1}{x} \left( g_1(x) - \frac{g(x)}{x} \right) + g(x) + x g_1(x) - g_2(x).$
As $g(x) + x g_1(x) - g_2(x) \ge 0$ on $D$ by Lemma 3, $g_1(x) - \frac{g(x)}{x} \ge 0$ implies that $\frac{d}{dx} \left[ \left( x + \frac{1}{x} \right) g(x) \right] - g_2(x) \ge 0$. □
Theorem 1. 
The composite function $H_b(Q(\sqrt{x}))$ is convex on $D$.
Proof. 
For simplicity, let $f(x) = \left( x + \frac{1}{x} \right) g(x)$. First, by applying L'Hôpital's rule, we have
$\lim_{x \to 0} f(x) = \lim_{x \to 0} \frac{g(x)}{x} = \lim_{x \to 0} g_1(x).$
Thus, by Lemma 4, the proof is complete if $f(x) \ge g_1(x)$ holds for all $x > 0$.
For the sake of contradiction, suppose that there exists a real value $a > 0$ at which $f(a) < g_1(a)$. Because both $f(x)$ and $g_1(x)$ are continuous, this implies that there exists an open interval $(a - \epsilon, a)$ inside $D$ on which $f(x) < g_1(x)$. Moreover, because $\lim_{x \to 0} f(x) = \lim_{x \to 0} g_1(x)$, there must exist an $\epsilon_0 > 0$ and a corresponding open interval $(a - \epsilon_0, a)$ that satisfy
$f(x) < g_1(x) \text{ on } (a - \epsilon_0, a)$  (17)
and
$\lim_{x \to a - \epsilon_0} f(x) = \lim_{x \to a - \epsilon_0} g_1(x).$  (18)
For example, $\epsilon_0 = a$ if $f(x)$ is strictly less than $g_1(x)$ for all $x$ in $(0, a)$. Combining (17) and (18), we obtain $f(x) \le g_1(x)$ on $[a - \epsilon_0, a)$. Because $f(x) = \left( x + \frac{1}{x} \right) g(x)$ and $g(x) \ge 0$, this implies that $g(x) \le x g_1(x)$ on $[a - \epsilon_0, a)$. By Lemma 5, $g(x) \le x g_1(x)$ further implies that
$\frac{d}{dx} \left[ f(x) - g_1(x) \right] \ge 0 \text{ on } [a - \epsilon_0, a).$  (19)
Subsequently, (18) and (19) imply that $f(x)$ must be greater than or equal to $g_1(x)$ on $(a - \epsilon_0, a)$. However, this contradicts (17), and thus the proof is complete. □
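Because the contribution of Theorem 1 is precisely the region near zero, convexity there can be checked via second differences, which are exactly nonnegative for a convex function regardless of step size; a numerical sketch (grid and step are our choices, not from the paper):

```python
import math

def Q(x):
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def Hb(p):
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1.0 - p) * math.log2(1.0 - p)

def f(x):
    # the composite function whose convexity Theorem 1 establishes
    return Hb(Q(math.sqrt(x)))

h = 1e-3
for k in range(1, 2000):
    x = k * h  # points in (0, 2], i.e., the previously unproven low-SNR region
    d2 = f(x + h) - 2.0 * f(x) + f(x - h)
    assert d2 >= 0.0, x
print("second differences of Hb(Q(sqrt(x))) are nonnegative near zero")
```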

4. Discussion and Conclusions

This letter provides a mathematical proof of the convexity of the capacity function of a one-bit quantized AWGN channel. Specifically, the continuity of the involved functions is used to construct a proof by contradiction. The corresponding result is an important supplement to the theoretical completeness of previous studies, as numerous works have derived the capacities of various wireless channels with one-bit ADCs assuming that this convexity holds without providing a mathematical proof for SNR values near zero.
For example, in the proof of Theorem 2 provided in [10], Jensen's inequality was applied to the conditional entropy $H(Y_Q \mid X)$, which is derived as
$H(Y_Q \mid X) = E \left[ H_b \left( Q \left( \sqrt{\frac{|X|^2}{\sigma_N^2}} \right) \right) \right],$
if $Y_Q$ is the one-bit quantized output of an AWGN channel such that $Y_Q = f_Q(X + N)$, where $X$ is the transmit signal, $N$ is the zero-mean Gaussian noise with variance $\sigma_N^2$, and $f_Q$ is the quantization function defined in (1). The convexity of $H_b(Q(\sqrt{x}))$ guarantees the following result based on Jensen's inequality:
$H(Y_Q \mid X) \ge H_b \left( Q \left( \sqrt{\frac{E[|X|^2]}{\sigma_N^2}} \right) \right).$  (20)
Because there exists a transmit signal distribution that satisfies the equality in (20) (e.g., binary phase shift keying (BPSK)), this result implies that the capacity of a one-bit quantized AWGN channel is given by $1 - H_b(Q(\sqrt{\gamma}))$, where $\gamma = E[|X|^2] / \sigma_N^2$ is the SNR. Although this derivation of the capacity assumes the convexity of $H_b(Q(\sqrt{x}))$, an analytical proof was presented only for $x \ge 2$, and only numerical results were provided for the case of $0 \le x \le 2$. Thus, our complete proof of convexity serves as a theoretical supplement to this result. In short, the proof guarantees that the capacity cannot be larger than $1 - H_b(Q(\sqrt{\gamma}))$ and that BPSK signaling achieves this upper bound. In Figure 3, the mutual information between $X$ and $Y_Q$, denoted by $I(X; Y_Q)$, is compared with the capacity $1 - H_b(Q(\sqrt{\gamma}))$; BPSK signaling with equal probabilities is used as the distribution of $X$, and the numerical results are consistent with the analysis. In [11], the authors extended this result to multi-antenna complex fading channels. For the MISO case, the capacity was derived based on the convexity of $H_b(Q(\sqrt{x}))$ on $x \ge 0$ by directly extending the capacity of the SISO channel with a one-bit ADC. Moreover, an upper bound for the capacity of MIMO channels was derived, and this derivation was also based on the convexity of $H_b(Q(\sqrt{x}))$. Thus, our results also supplement these results.
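The consistency reported in Figure 3 can be reproduced numerically: with equiprobable BPSK, the quantized channel reduces to a binary symmetric channel with crossover probability $Q(\sqrt{\gamma})$, so $I(X; Y_Q)$ should equal $1 - H_b(Q(\sqrt{\gamma}))$. A sketch (function names are ours):

```python
import math

def Q(x):
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def Hb(p):
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1.0 - p) * math.log2(1.0 - p)

def bpsk_mutual_information(snr):
    # I(X; Y_Q) = H(Y_Q) - H(Y_Q | X) for equiprobable BPSK inputs
    eps = Q(math.sqrt(snr))                # crossover probability of the induced BSC
    p_y1 = 0.5 * (1.0 - eps) + 0.5 * eps   # = 1/2 by symmetry
    return Hb(p_y1) - Hb(eps)

def one_bit_capacity(snr):
    return 1.0 - Hb(Q(math.sqrt(snr)))

for snr in [0.1, 1.0, 10.0]:
    assert abs(bpsk_mutual_information(snr) - one_bit_capacity(snr)) < 1e-12
print("BPSK achieves the one-bit AWGN capacity at every tested SNR")
```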
The convexity of the capacity function can also be used in optimization problems to design appropriate resource allocation. In such cases, the convexity of the objective function can be used to establish the existence of an optimal solution and to identify it. For example, in [25], an optimal power allocation problem was formulated and solved based on the convexity of $H_b(Q(\sqrt{x}))$; however, the convexity was assumed based on only partial numerical evidence.
With one-bit quantization, deriving the exact capacity is difficult because the quantization function is hard to analyze. Thus, although the conventional capacity with infinite-resolution ADCs is often tractable, obtaining the capacity of channels with one-bit ADCs may be extremely difficult; for example, the capacity of MIMO channels with one-bit ADCs is generally unknown, although the exact capacity of MIMO channels with infinite-resolution ADCs is well established. Our results therefore provide a valid mathematical basis for further derivations of unknown capacities and for various optimization problems that use the capacity function of a one-bit quantized AWGN channel.

Author Contributions

Conceptualization, S.L. and M.M.; methodology, S.L.; software, S.L. and M.M.; validation, S.L. and M.M.; formal analysis, S.L. and M.M.; investigation, S.L. and M.M.; resources, M.M.; data curation, S.L.; writing—original draft preparation, S.L.; writing—review and editing, S.L. and M.M.; visualization, S.L.; supervision, M.M.; project administration, S.L. and M.M.; and funding acquisition, S.L. and M.M. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the National Research Foundation of Korea (NRF), funded by the Korean government (MSIT) (Grant No. 2020R1F1A1071649), and in part by the BK21 FOUR Project, funded by the Ministry of Education, Korea (4199990113966).

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare that there is no conflict of interest.

Appendix A. Proof of Lemma 1

Appendix A.1. Derivations of (2) and (3)

The Q-function is defined as
$Q(x) = \lim_{B \to \infty} \frac{1}{\sqrt{2\pi}} \int_x^{x + B} e^{-\frac{r^2}{2}} \, dr.$
Thus, the derivative is
$Q_1(x) = \frac{d}{dx} Q(x) = \lim_{B \to \infty} \frac{1}{\sqrt{2\pi}} \frac{d}{dx} \int_x^{x + B} e^{-\frac{r^2}{2}} \, dr \stackrel{(a)}{=} \lim_{B \to \infty} \frac{1}{\sqrt{2\pi}} \left( e^{-\frac{(x + B)^2}{2}} - e^{-\frac{x^2}{2}} \right) = -\frac{1}{\sqrt{2\pi}} e^{-\frac{x^2}{2}},$
where (a) follows from Leibniz's rule. Hence, the second-order derivative is
$Q_2(x) = \frac{d}{dx} \left( -\frac{1}{\sqrt{2\pi}} e^{-\frac{x^2}{2}} \right) = \frac{x}{\sqrt{2\pi}} e^{-\frac{x^2}{2}} = -x Q_1(x).$
Because $Q_1(x)$ depends on $x$ only through $x^2$, it is an even function, i.e., $Q_1(-x) = Q_1(x)$. Similarly, (3) can be proved through direct differentiation using $\frac{d}{dx} \log x = \frac{1}{x}$.
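The derivatives in (2) can be verified against central finite differences; a short numerical sketch (not part of the paper):

```python
import math

SQRT_2PI = math.sqrt(2.0 * math.pi)
Q  = lambda x: 0.5 * math.erfc(x / math.sqrt(2.0))
Q1 = lambda x: -math.exp(-x * x / 2.0) / SQRT_2PI

def num_deriv(f, x, h=1e-6):
    return (f(x + h) - f(x - h)) / (2.0 * h)

for x in [-1.0, 0.0, 0.7, 2.5]:
    assert abs(num_deriv(Q, x) - Q1(x)) < 1e-9           # (2), first derivative
    assert abs(num_deriv(Q1, x) - (-x * Q1(x))) < 1e-9   # (2), second derivative
print("derivatives in (2) verified by finite differences")
```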

Appendix A.2. Derivations of (4)–(8)

By definition,
$t_1(x) = \frac{d}{dx} \frac{Q_1(x)}{Q(x)} = \frac{Q_2(x) Q(x) - (Q_1(x))^2}{(Q(x))^2} \stackrel{(a)}{=} -t(x)(x + t(x)), \qquad t_2(x) = \frac{d}{dx} t_1(x) = -t_1(x)(x + t(x)) - t(x)(1 + t_1(x)),$
$s_1(x) = \frac{d}{dx} \frac{Q_1(x)}{1 - Q(x)} = \frac{Q_2(x)(1 - Q(x)) + (Q_1(x))^2}{(1 - Q(x))^2} \stackrel{(b)}{=} -s(x)(x - s(x)), \qquad s_2(x) = \frac{d}{dx} s_1(x) = -s_1(x)(x - s(x)) - s(x)(1 - s_1(x)),$
where $Q_2(x) = -x Q_1(x)$ is used for both (a) and (b). As $Q(x) - \frac{1}{2}$ is symmetric with respect to the origin, it follows that $1 - Q(x) = Q(-x)$. Thus, $g_1(x)$ and $g_2(x)$ can be calculated as follows:
$g_1(x) = \frac{d}{dx} \log \frac{Q(-x)}{Q(x)} = \frac{Q(x)}{Q(-x)} \frac{d}{dx} \frac{Q(-x)}{Q(x)} = \frac{Q(x)}{Q(-x)} \cdot \frac{-Q_1(-x) Q(x) - Q(-x) Q_1(x)}{(Q(x))^2} = -\frac{Q_1(-x)}{Q(-x)} - \frac{Q_1(x)}{Q(x)} \stackrel{(a)}{=} -s(x) - t(x),$
where $Q_1(-x) = Q_1(x)$ is used for (a). By differentiating this,
$g_2(x) = -t_1(x) - s_1(x) \stackrel{(a)}{=} t(x)(x + t(x)) + s(x)(x - s(x)) = t(x) x + (t(x))^2 + s(x) x - (s(x))^2 = (t(x) + s(x))(x + t(x) - s(x)),$
where (4) is used for (a).
Finally, the limiting values in (8) are obtained as follows:
$\lim_{x \to \infty} t_1(x) = \lim_{x \to \infty} \left[ -t(x)(x + t(x)) \right] = -\lim_{x \to \infty} \frac{x Q(x) Q_1(x) + (Q_1(x))^2}{(Q(x))^2} \stackrel{(a)}{=} -\lim_{x \to \infty} \frac{Q(x) Q_1(x) + x (Q_1(x))^2 + x Q(x) Q_2(x) + 2 Q_1(x) Q_2(x)}{2 Q(x) Q_1(x)} \stackrel{(b)}{=} -\lim_{x \to \infty} \frac{(1 - x^2) Q(x) - x Q_1(x)}{2 Q(x)} \stackrel{(c)}{=} -\lim_{x \to \infty} \frac{-2x Q(x) + (1 - x^2) Q_1(x) - Q_1(x) - x Q_2(x)}{2 Q_1(x)} \stackrel{(d)}{=} \lim_{x \to \infty} \frac{x Q(x)}{Q_1(x)},$  (A1)
where L'Hôpital's rule is used for (a) and (c), and $Q_2(x) = -x Q_1(x)$ is used for (b) and (d). The authors in [26] showed that $-\frac{x}{1 + x^2} Q_1(x) < Q(x) < -\frac{Q_1(x)}{x}$, which is equivalent to $-\frac{x^2}{1 + x^2} > \frac{x Q(x)}{Q_1(x)} > -1$ as $Q_1(x) < 0$. Because $\lim_{x \to \infty} \left( -\frac{x^2}{1 + x^2} \right) = -1$, we have $\lim_{x \to \infty} \frac{x Q(x)}{Q_1(x)} = -1$; thus, from (A1), we finally have $\lim_{x \to \infty} t_1(x) = -1$.
From (2), $\lim_{x \to 0} Q_1(x) = -\frac{1}{\sqrt{2\pi}}$, and by definition, $\lim_{x \to 0} Q(x) = \frac{1}{2}$. Thus,
$\lim_{x \to 0} s_1(x) = \lim_{x \to 0} \left[ -s(x)(x - s(x)) \right] = \lim_{x \to 0} \left[ -\frac{x Q_1(x)}{1 - Q(x)} + \frac{(Q_1(x))^2}{(1 - Q(x))^2} \right] = \lim_{x \to 0} \frac{(Q_1(x))^2}{(1 - Q(x))^2} = \frac{4}{2\pi}.$
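The bounds from [26] used in this appendix can also be checked numerically (a sketch; the grid is our arbitrary choice):

```python
import math

SQRT_2PI = math.sqrt(2.0 * math.pi)

def Q(x):
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def phi(x):
    # standard normal density, equal to -Q1(x)
    return math.exp(-x * x / 2.0) / SQRT_2PI

# Bounds of [26]: x/(1+x^2)*phi(x) < Q(x) < phi(x)/x for x > 0,
# which force x*Q(x)/Q1(x) -> -1 as x -> infinity.
for k in range(1, 100):
    x = 0.1 * k
    assert x / (1.0 + x * x) * phi(x) < Q(x) < phi(x) / x

ratio = lambda x: x * Q(x) / (-phi(x))  # = x*Q(x)/Q1(x)
print(ratio(10.0))  # close to -1
```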

Appendix A.3. Proofs of (9)–(11)

By definition, $Q(x) > 0$ for $x \in \mathbb{R}$. Moreover, $Q_1(x) < 0$ and $\frac{1 - Q(x)}{Q(x)} \ge 1$ for $x \in D$. Thus, for all $x \in D$, it is clear that $t(x) = \frac{Q_1(x)}{Q(x)} \le 0$, $s(x) = \frac{Q_1(x)}{1 - Q(x)} \le 0$, $g(x) = \log \left( \frac{1}{Q(x)} - 1 \right) \ge 0$, and $g_1(x) = -t(x) - s(x) \ge 0$. For (11), we use $Q(x) < -\frac{Q_1(x)}{x}$, which was derived in [26]. That is,
$Q(x) < -\frac{Q_1(x)}{x} \;\Rightarrow\; \frac{Q(x)}{-Q_1(x)} < \frac{1}{x} \;\; (\because -Q_1(x) > 0) \;\Rightarrow\; \frac{-Q_1(x)}{Q(x)} > x \;\Rightarrow\; 0 > x + \frac{Q_1(x)}{Q(x)} = x + t(x).$
Then, because $t_1(x) = -t(x)(x + t(x))$ and $s_1(x) = -s(x)(x - s(x))$, the inequalities from (9) to (11) imply (12).

References

  1. Thomas, J.A.; Cover, T.M. Elements of Information Theory; Wiley: Hoboken, NJ, USA, 2001.
  2. Tse, D.; Viswanath, P. Fundamentals of Wireless Communication; Cambridge University Press: Cambridge, UK, 2005.
  3. Heath, R.W., Jr.; Lozano, A. Foundations of MIMO Communication; Cambridge University Press: Cambridge, UK, 2018.
  4. Swindlehurst, A.L.; Ayanoglu, E.; Heydari, P.; Capolino, F. Millimeter-wave massive MIMO: The next wireless revolution? IEEE Commun. Mag. 2014, 52, 56–62.
  5. Larsson, E.G.; Edfors, O.; Tufvesson, F.; Marzetta, T.L. Massive MIMO for next generation wireless systems. IEEE Commun. Mag. 2014, 52, 186–195.
  6. Busari, S.A.; Huq, K.M.S.; Mumtaz, S.; Dai, L.; Rodriguez, J. Millimeter-Wave Massive MIMO Communication for Future Wireless Systems: A Survey. IEEE Commun. Surv. Tut. 2017, 20, 836–869.
  7. Saleem, A.; Cui, H.; He, Y.; Boag, A. Channel propagation characteristics for massive multiple-input/multiple-output systems in a tunnel environment. IEEE Antennas Propag. Mag. 2022, 126–142.
  8. Walden, R. Analog-to-Digital Converter Survey and Analysis. IEEE J. Sel. Areas Commun. 1999, 17, 539–550.
  9. Zhang, J.; Dai, L.; Li, X.; Liu, Y.; Hanzo, L. On low-resolution ADCs in practical 5G millimeter-wave massive MIMO systems. IEEE Commun. Mag. 2018, 56, 205–211.
  10. Singh, J.; Dabeer, O.; Madhow, U. On the limits of communication with low-precision analog-to-digital conversion at the receiver. IEEE Trans. Commun. 2009, 57, 3629–3639.
  11. Mo, J.; Heath, R.W., Jr. Capacity Analysis of One-Bit Quantized MIMO Systems with Transmitter Channel State Information. IEEE Trans. Signal Process. 2015, 63, 5498–5512.
  12. Mo, J.; Alkhateeb, A.; Abu-Surra, S.; Heath, R.W., Jr. Hybrid architectures with few-bit ADC receivers: Achievable rates and energy-rate tradeoffs. IEEE Trans. Wirel. Commun. 2017, 16, 2274–2287.
  13. Jacobsson, S.; Durisi, G.; Coldrey, M.; Gustavsson, U.; Studer, C. Throughput analysis of massive MIMO uplink with low-resolution ADCs. IEEE Trans. Wirel. Commun. 2017, 16, 4038–4051.
  14. Mo, J.; Heath, R.W., Jr. Limited feedback in single and multi-user MIMO systems with finite-bit ADCs. IEEE Trans. Wirel. Commun. 2018, 17, 3284–3297.
  15. Nam, Y.; Do, H.; Jeon, Y.-S.; Lee, N. On the Capacity of MISO Channels with One-Bit ADCs and DACs. IEEE J. Sel. Areas Commun. 2019, 37, 2132–2145.
  16. Nam, S.H.; Lee, S.H. Secrecy Capacity of a Gaussian Wiretap Channel With ADCs is Always Positive. IEEE Trans. Inf. Theory 2022, 68, 1186–1196.
  17. Choi, J.; Mo, J.; Heath, R.W., Jr. Near Maximum-Likelihood Detector and Channel Estimator for Uplink Multiuser Massive MIMO Systems with One-Bit ADCs. IEEE Trans. Commun. 2016, 64, 2005–2018.
  18. Mollén, C.; Choi, J.; Larsson, E.G.; Heath, R.W., Jr. Uplink Performance of Wideband Massive MIMO with One-Bit ADCs. IEEE Trans. Wirel. Commun. 2017, 16, 87–100.
  19. Hong, S.-N.; Kim, S.; Lee, N. A Weighted Minimum Distance Decoding for Uplink Multiuser MIMO Systems with Low-Resolution ADCs. IEEE Trans. Commun. 2018, 66, 1912–1924.
  20. Jeon, Y.-S.; Lee, N.; Hong, S.-N.; Heath, R.W., Jr. One-Bit Sphere Decoding for Uplink Massive MIMO Systems with One-Bit ADCs. IEEE Trans. Wirel. Commun. 2018, 17, 4509–4521.
  21. Jeon, Y.-S.; Do, H.; Hong, S.-N.; Lee, N. Soft-Output Detection Methods for Sparse Millimeter Wave MIMO Systems with Low-Precision ADCs. IEEE Trans. Commun. 2019, 67, 2822–2836.
  22. Li, Y.; Tao, C.; Seco-Granados, G.; Mezghani, A.; Swindlehurst, A.L.; Liu, L. Channel Estimation and Performance Analysis of One-Bit Massive MIMO Systems. IEEE Trans. Signal Process. 2017, 65, 4075–4089.
  23. Dabeer, O.; Singh, J.; Madhow, U. On the limits of communication performance with one-bit analog-to-digital conversion. In Proceedings of the 2006 IEEE 7th Workshop on Signal Processing Advances in Wireless Communications, Cannes, France, 2–5 July 2006; pp. 1–5.
  24. Boyd, S.; Vandenberghe, L. Convex Optimization; Cambridge University Press: Cambridge, UK, 2004.
  25. Min, M. Optimal Power Allocation for Multiple Channels With One-Bit Analog-to-Digital Converters. IEEE Trans. Veh. Technol. 2022, 71, 4438–4443.
  26. Borjesson, P.; Sundberg, C.-E. Simple Approximations of the Error Function Q(x) for Communications Applications. IEEE Trans. Commun. 1979, 27, 639–643.
Figure 1. Verification for Lemma 2.
Figure 2. Verification for Lemma 3.
Figure 3. Verification for the capacity function.

Lee, S.; Min, M. Convexity of the Capacity of One-Bit Quantized Additive White Gaussian Noise Channels. Mathematics 2022, 10, 4343. https://doi.org/10.3390/math10224343
