
Lossy State Communication over Fading Multiple Access Channels

KTH Royal Institute of Technology, 114 28 Stockholm, Sweden
Entropy 2023, 25(4), 588; https://doi.org/10.3390/e25040588
Submission received: 26 February 2023 / Revised: 25 March 2023 / Accepted: 28 March 2023 / Published: 29 March 2023
(This article belongs to the Special Issue Advances in Multiuser Information Theory)

Abstract

Joint communications and sensing functionalities integrated into the same communication network have become increasingly relevant due to the large bandwidth requirements of next-generation wireless communication systems and the impending spectral shortage. While there exist system-level guidelines and waveform design specifications for such systems, an information-theoretic analysis of the absolute performance capabilities of joint sensing and communication systems that take into account practical limitations such as fading has not been addressed in the literature. Motivated by this, we undertake a network information-theoretic analysis of a typical joint communications and sensing system in this paper. Towards this end, we consider a state-dependent fading Gaussian multiple access channel (GMAC) setup with an additive state. The state process is assumed to be independent and identically distributed (i.i.d.) Gaussian, and non-causally available to all the transmitting nodes. The fading gains on the respective links are assumed to be stationary and ergodic and available only at the receiver. In this setting, with no knowledge of fading gains at the transmitters, we are interested in joint message communication and estimation of the state at the receiver to meet a target distortion in the mean-squared error sense. Our main contribution here is a complete characterization of the distortion-rate trade-off region between the communication rates and the state estimation distortion for a two-sender GMAC. Our results show that the optimal strategy is based on static power allocation and involves uncoded transmissions to amplify the state, along with the superposition of the digital message streams using appropriate Gaussian codebooks and dirty paper coding (DPC). This acts as a design directive for realistic systems using joint sensing and transmission in next-generation wireless standards and points to the relative benefits of uncoded communications and joint source-channel coding in such systems.

1. Introduction

The scarcity of spectrum, as well as the bandwidth requirements of key emerging applications envisioned for 6G, necessitates a rethinking of resource consumption. In such systems, it appears prudent to co-design sensing and communication functionalities. This co-design enables significant gains in spectral, energy, hardware, and cost efficiency. It is known as joint sensing and communication, and it represents a paradigm shift in which sensing and communication operations can be jointly optimized by utilizing a single hardware platform and a joint signal processing framework. These ideas have already been used in a number of novel applications, including vehicular networks, indoor positioning, and covert communications. Joint sensing and communication scenarios have recently received a lot of attention from the signal processing community (see for instance [1,2,3,4]), the communications community (see [5,6,7,8,9,10,11]), and the information theory community (see [12,13,14,15,16]). This work belongs to the final category: we take an information-theoretic view of joint sensing and communication in a multi-terminal setting.
Joint sensing and communication also arises in multi-user networks with several sensor nodes observing a common analog source phenomenon, and communicating to a base station (destination) over a wireless fading medium, see Figure 1.
In this setting, the sensor nodes must convey a description of the source process to the base station, which then estimates the source process subject to a fidelity criterion. Some of the sensor nodes might also have additional digital data to convey, which the base station must reliably recover as well. Since the source process, as well as the data from each node, is of interest to the base station, a tension naturally arises between the rates of data communication and the fidelity of source estimation. The trade-off between these objectives is of particular interest in such systems, which is among the primary motivations for this work. The analog phenomenon in this example can be thought of as a channel state that affects the digital communication of messages, with the receiver being required to reliably estimate this channel state while also recovering the transmitted messages. As far as the fading process is concerned, it is reasonable in practice to assume that the receiver can track the channel variations, for example, via the use of pilot transmission sequences.
In this work, we consider an information-theoretic abstraction of the communication setting in Figure 1. In particular, we focus on joint communication and state estimation over a state-dependent fading Gaussian multiple access channel with no fading knowledge at the transmitters. At each encoder, the state process is assumed to be known non-causally. The fading processes encountered on the respective links are assumed to be stationary and ergodic and to be known only at the receiver. The dual goals of message communication and state estimation at the receiver must be met within a distortion tolerance with respect to a squared-error metric. The trade-off between the average message communication rates and the average distortion in receiver state estimation is of interest. We completely characterize the optimal trade-off region between the communication rates of the different transmitters and the state estimation distortion at the receiver. The details of the setting, as well as the motivation for investigating it, will be elucidated in Section 3.
Having introduced the general problem framework, we now discuss the other relevant contributions in the literature in the following section, emphasizing the differences from the setting under consideration.

2. Literature Review

In this section, we discuss the related literature and place our contributions in the context of the state of the art. In particular, we survey prior works on joint communication and channel state estimation in both point-to-point and network information-theoretic settings and identify several knowledge gaps.
Systems such as the one in Figure 1 can be modeled as state-dependent channels, where the channel state typically refers to a variable used to model unknown parameters of the channel statistics. A canonical form of such state-dependent channel models consisting of an additive state over an additive white Gaussian noise (AWGN) channel was investigated in [17], popularly known as the dirty paper coding (DPC) setting. Surprisingly, ref. [17] demonstrated that regardless of the presence of the state, the channel capacity of this setting remained the same as that of an AWGN channel, independent of the variance of the channel state. This phenomenon later found widespread applications in settings such as digital watermarking [18] and multiple-input-multiple-output wireless broadcast channels [19].
In certain state-dependent channels, in addition to communicating messages, the transmitter may wish to assist the receiver in estimating the channel state (as in the sensor network scenario described in Figure 1). Splitting the average available power between the dual tasks of uncoded transmission of the state and DPC for the message was found to be optimal for the mean squared error distortion measure [20] in a point-to-point (single-user) AWGN channel. In [21], joint communication and state estimation were considered in a different scenario where the transmitters were unaware of the channel state. In an interesting variation of [20], Tian et al. [22] characterized the distortion-transmit power trade-off in a point-to-point Gaussian model with noisy state observations at the transmitter in the absence of messages. However, in the presence of messages [22], a complete characterization of the rate-distortion trade-off region remains unknown for the case of noisy state observations at the transmitter.
In the literature, channel state estimation has been studied in two different frameworks, each of them being motivated by different real-world problems. These frameworks correspond to (a) state estimation performed at the receiver and (b) state estimation performed at the transmitter side. We first consider the problem of channel state estimation at the receiver, which has been investigated before in certain information-theoretic settings. Relevant works include [23] (see also [24]), which investigated joint estimation and message communication over a Gaussian broadcast channel (BC) without state-dependence and derived a complete characterization of the trade-off between achievable rates and estimation errors. In [25], communication and state estimation were studied in a multiple access setting without fading, and the distortion-rate trade-off region was characterized. In [26], a state-dependent Gaussian BC with the dual goals of amplifying the channel state at one of the receivers while masking it [27] from the other receiver (with no message transmissions) was investigated, and achievable coding schemes, as well as outer bounds, were derived. More recently, ref. [28] addressed state estimation for a discrete memoryless BC with causal state information at the transmitter, also taking into account any possible feedback signals from the strong receiver to the sender, and gave a characterization of the capacity region.
As far as channel state estimation at the transmitter is concerned, this line of work originated in [12], where a point-to-point channel with generalized feedback signals to aid the state estimation was investigated. Such models are motivated by joint radar and communications systems where the radar and data communication functions share the same frequency band. Following this, ref. [13] considered a multiple access channel extension of the same setting, where both senders obtain generalized feedback, and derived an achievable trade-off region. An improved achievable scheme for the multiple access setting was derived recently in [15]. Most recently, a broadcast channel variant was investigated in [16] (see also [14]), where inner and outer bounds were given for general broadcast channels, while a complete characterization was obtained for the special case of physically degraded broadcast channels.
Fading multiple access channels without state (or state estimation requirements) and different degrees of channel state information have been explored in the literature. For instance, the ergodic capacity region for fast-fading Gaussian multiple access channels (GMAC) with perfect channel state information at the transmitters and the receiver was characterized in [29]. More general configurations for transmitter channel state availability were analyzed in [30], where the capacity region for time-varying models was determined via optimization over appropriate power control laws. Slow fading multiple access channels with distributed channel state information at the transmitters were studied in [31].
State-dependent point-to-point fading channels with no state estimation have received some attention in the literature. Vaze and Varanasi [32], for example, investigated a model with full state knowledge and partial knowledge of the fading process at the transmitter, and characterized the achievable rate at high SNR. Rini and Shamai [33] examined the impact of phase fading in the DPC setting [17] when the receiver was informed of the fading process. We also note that [34] has addressed point-to-point fading channels (without any state process or state estimation requirements) with channel gains known at both the sender and the receiver.
Having reviewed the related literature, we now identify the key knowledge gaps in prior work and the necessity of our work.
  • Analysis of the state of the art and research gaps: We identify the following crucial aspects.
  • We note that none of the works above consider joint state estimation along with message communication over state-dependent multi-terminal settings with noncausal transmitter state information, which is highly relevant in applications like the joint sensing and communication setting in Figure 1. This is addressed in this paper.
  • While there exist system-level guidelines and waveform design specifications for such systems, a network information-theoretic analysis of the absolute performance capabilities of joint sensing and communication systems that take into account practical limitations has not been addressed in the literature, which we undertake here.
  • Moreover, none of the works on joint communication and estimation mentioned above take fading links into account. Fading is an impairment that must be accounted for in practical wireless communication channel models, such as the joint sensing and communication application shown in Figure 1. This is another gap in the literature that this paper seeks to fill by investigating joint communication and estimation over state-dependent multi-user fading channels, the point-to-point counterpart of which was addressed by the author in [35].
Novelty and relevance: In this paper, we address the problem of joint communication and state estimation over a state-dependent fading GMAC with no fading knowledge at the transmitters. The key scientific question we address here is: what is the best possible trade-off between the competing goals of message communication from multiple senders and the fidelity in state estimation at the receiver?
  • The key novelty of our work is that it is the first instance where joint communication and estimation have been considered in a multiple-user setting that also accounts for fading links, as opposed to previous works, which focused only on non-fading links.
  • Moreover, it is the first work that considers non-causal state information (as opposed to causal or strictly causal) at the transmitter in a fading multi-user scenario which is practically relevant as described in the sensor network example from Figure 1.
  • Furthermore, we undertake a comprehensive network information-theoretic study of the fundamental performance limits of such joint communication and estimation settings, which is lacking in the literature. Please refer to Table 1, which highlights our contributions in this paper with respect to the existing works.
The key relevance of our study is that it serves as a design guideline for practical systems employing joint sensing and communication envisioned in future 6G wireless standards and broadly applies to systems that involve joint compression and communication/rate-distortion trade-offs. It also points to the relative benefits of uncoded transmission versus joint source-channel coding in such systems. The progress embodied herein builds up towards a better understanding of joint state estimation and communication problems in multi-terminal settings (such as multiple access channels), which is relatively less explored in the literature (with or without the fading aspect).
  • Summary of contributions: We list them below. See also the contribution summary Table 1, which emphasizes the novelty of our work with respect to the existing works.
  • One of our main contributions in the paper is a complete characterization (Theorem 1) of the rate-distortion trade-off region for joint state estimation and communication over a two-user fading GMAC with state. The key non-trivial part is the proof of the converse, which is given in Section 5.
  • We prove that the optimal strategy for the setting under consideration involves uncoded transmissions to amplify the state, along with the superposition of the digital message streams using appropriate Gaussian codebooks and DPC.
  • We prove the optimality of uncoded state amplification in the special case where there are no messages to communicate—please refer to the section on implications of our result given after the statement of Theorem 1 for the details.
  • Our framework naturally generalizes the results of [35] to multiple users, ref. [25] to fading links and the work of [20] to fading links with multiple users, thereby providing a unified framework that encompasses all these prior works on joint communication and estimation.
  • Our study gives a network information-theoretic analysis of the fundamental performance limits of joint sensing and communication systems that take into account practical limitations such as fading. This acts as a design directive for realistic systems using joint sensing and transmission anticipated in upcoming wireless standards and points to the relative merits of uncoded communications and joint source-channel coding in such systems.
Notations: Random variables are denoted by capital letters, while their realizations are denoted by the corresponding lower-case letters. We use $P(\cdot)$ to denote the probability of an event. The joint probability distribution of two random variables $(X, Y)$ is denoted by $p_{X,Y}(x, y)$. Let $\mathbb{E}[\cdot]$ denote the expected value of a random variable. At times, we will use an explicit subscript in the expectation operation, $\mathbb{E}_X[\cdot]$, to denote that the expectation is taken with respect to the probability distribution of the random variable $X$. All logarithms are in base 2, unless mentioned otherwise. We denote random sequences of length $n$ with a superscript notation, i.e., $U^n := (U_1, U_2, \ldots, U_n)$. An indexed set of random sequences, each of length $n$, is denoted with a subscript for the random variable and a superscript for the length, i.e., $U_j^n := (U_{j1}, U_{j2}, \ldots, U_{jn})$, where $U_{ji}$ stands for the $i$-th component of $U_j^n$. The covariance of a random vector $X^n$ is denoted by $\mathrm{Cov}(X^n)$. Calligraphic letters represent alphabets of random variables. $\|\cdot\|$ denotes the Euclidean norm of a vector, while $\mathrm{Conv}(\cdot)$ denotes the convex closure of a set. The absolute value of a number is denoted $|\cdot|$, while the transpose of a matrix $A$ is denoted as $A^T$. The notation $A \perp B$ is used to denote independent random variables $(A, B)$. The Gaussian (normal) distribution with mean $\mu$ and variance $\sigma^2$ is denoted by $\mathcal{N}(\mu, \sigma^2)$. The set of real numbers is denoted by $\mathbb{R}$, while the set of $n$-tuples of positive real numbers is denoted by $\mathbb{R}_+^n$. The Shannon entropy of a discrete-valued random variable $X$ is denoted by $H(X)$, while the differential entropy of a continuous-valued random variable $Y$ is denoted by $h(Y)$. The mutual information between any two random variables $V$ and $W$ is denoted by $I(V; W)$. The corresponding conditional quantities given a random variable $Z$ are the conditional entropy $H(X|Z)$, conditional differential entropy $h(Y|Z)$, and conditional mutual information $I(V; W|Z)$. If $n$ is an integer variable, $\phi(n)$ is a positive function and $f(n)$ is an arbitrary function, we say that $f = o(\phi)$ provided that $\lim_{n \to \infty} f(n)/\phi(n) = 0$. For any three random variables $(A, B, C)$, we say that $A - B - C$ is a Markov chain if $A$ and $C$ are conditionally independent given $B$.
The rest of the paper is organized as follows: in Section 3, we introduce the system model and state our main results. Section 4 and Section 5 contain the achievable coding scheme and the converse to the rate-distortion trade-off region, respectively. Section 6 contains concluding remarks. Appendix A.1, Appendix A.2, and Appendix A.3 contain the details of the proofs that are skipped in the main discussion, to maintain the readability of the paper.

3. System Model and Results

Consider the fading multiple access channel shown in Figure 2. The observed samples at the receiver at time instant $i \in \{1, 2, \ldots, n\}$ are given by
$$Y_i = \sum_{j=1}^{2} \Theta_{j,i} X_{j,i} + S_i + Z_i. \qquad (1)$$
Here, the samples of the additive noise process are independent and identically distributed (i.i.d.) according to $Z_i \sim \mathcal{N}(0, \sigma_Z^2)$, while the samples of the state process are i.i.d. according to $S_i \sim \mathcal{N}(0, \sigma_S^2)$. The state process is assumed to be known non-causally at each encoder. The fading processes encountered on the respective links are represented by $\Theta_j^n$, $j \in \{1, 2\}$, with these fading gains being known only at the receiver. The fading processes encountered on both links are assumed to be stationary and ergodic. The codeword lengths can be chosen arbitrarily long to average over the fading of the channel. The given model represents a fast-fading multiple access channel with no knowledge of the fading processes at the transmitters. The state, fading, and additive noise processes are assumed to be independent of each other. In our model, the power constraint on the inputs is assumed to be across blocks, averaged over the random state and the codebook. The dual goals of message communication ($(W_1, W_2)$ in Figure 2) and estimation of the state $S^n$ at the receiver, subject to a distortion tolerance with respect to a squared-error metric, must be met. The trade-off between the average message communication rates $(R_1, R_2)$ and the average distortion (D) incurred in state estimation at the receiver is sought.
We take $W_j$ to be uniformly drawn from the set $\mathcal{W}_j \triangleq \{1, 2, \ldots, 2^{nR_j}\}$ for $j \in \{1, 2\}$, and independent of each other. The messages $(W_1, W_2)$ are assumed to be independent of the state process $S^n$. The power constraint on the transmissions is:
$$\frac{1}{n} \mathbb{E}_{W_j, S^n}\left[ \sum_{i=1}^{n} X_{ji}^2(W_j, S^n) \right] \le P_j, \quad j \in \{1, 2\}. \qquad (2)$$
After $n$ observations, the decoder estimates $\hat{S}^n = \phi(Y^n, \Theta_1^n, \Theta_2^n)$ using a (state) reconstruction map $\phi(\cdot): \mathcal{Y}^n \times \prod_{j=1}^{2} \Theta_j^n \to \mathbb{R}^n$, and also decodes the messages $(W_1, W_2)$. The (message) decoding map is given by $\psi: \mathcal{Y}^n \times \prod_{j=1}^{2} \Theta_j^n \to \mathcal{W}_1 \times \mathcal{W}_2$. Our aim here is to maintain the average squared-error distortion incurred in state estimation below a given threshold, while also ensuring that the average error probability of decoding the messages is small enough.
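To make the setup concrete, the following minimal Python sketch simulates one block of the channel in (1). The Rayleigh-distributed fading magnitudes, the placeholder Gaussian inputs, and all parameter values are illustrative assumptions of ours, not specifications from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 10_000            # block length
P1, P2 = 1.0, 1.0     # transmit power budgets
sigma_S2 = 1.0        # state variance
sigma_Z2 = 0.5        # noise variance

# i.i.d. Gaussian state and noise processes
S = rng.normal(0.0, np.sqrt(sigma_S2), n)
Z = rng.normal(0.0, np.sqrt(sigma_Z2), n)

# stationary, ergodic fading gains (here: i.i.d. Rayleigh magnitudes with E[Theta^2] = 1),
# known at the receiver only
Theta1 = rng.rayleigh(scale=np.sqrt(0.5), size=n)
Theta2 = rng.rayleigh(scale=np.sqrt(0.5), size=n)

# placeholder channel inputs meeting the average power constraints (2);
# the encoders may use S (known non-causally) but not Theta1, Theta2
X1 = rng.normal(0.0, np.sqrt(P1), n)
X2 = rng.normal(0.0, np.sqrt(P2), n)

# received samples, Eq. (1)
Y = Theta1 * X1 + Theta2 * X2 + S + Z
print("empirical receive power:", np.mean(Y ** 2))
```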
Definition 1. 
A scheme using the encoder mappings $\mathcal{E}_j: \{1, \ldots, 2^{nR_j}\} \times \mathcal{S}^n \to \mathcal{X}_j^n$ for $j = 1, 2$, satisfying the power constraints in (2), along with the two mappings $\phi(\cdot)$ and $\psi(\cdot)$ at the receiver, is called an $(n, R_1, R_2, P_1, P_2)$ communication-estimation scheme.
A triple $(R_1, R_2, D)$ is said to be achievable if there exists a sequence of $(n, R_1, R_2, P_1, P_2)$ communication-estimation schemes such that
$$\limsup_{n \to \infty} \frac{1}{n} \mathbb{E}_{\Theta_1^n, \Theta_2^n} \mathbb{E}\left[ \left\| S^n - \phi(Y^n, \Theta_1^n, \Theta_2^n) \right\|^2 \right] \le D, \qquad (3)$$
and
$$\limsup_{n \to \infty} \mathbb{E}_{\Theta_1^n, \Theta_2^n}\left[ P\left( \psi(Y^n, \Theta_1^n, \Theta_2^n) \ne (W_1, W_2) \right) \right] = 0. \qquad (4)$$
Let $\mathcal{C}_{\text{est}}^{\text{fad-mac}}(P_1, P_2)$ be the closure of the set of all achievable $(R_1, R_2, D)$ triples, with $0 \le D \le \sigma_S^2$. The main result of the paper is stated below.
Theorem 1. 
For the fading Gaussian MAC with state, the trade-off region $\mathcal{C}_{\text{est}}^{\text{fad-mac}}(P_1, P_2)$ is characterized by the convex closure of all $(R_1, R_2, D) \in \mathbb{R}_+^3$ such that
$$R_1 \le \mathbb{E}_{\Theta_1}\left[ \frac{1}{2} \log\left( 1 + \frac{\gamma \Theta_1^2 P_1}{\sigma_Z^2} \right) \right], \qquad (5)$$
$$R_2 \le \mathbb{E}_{\Theta_2}\left[ \frac{1}{2} \log\left( 1 + \frac{\beta \Theta_2^2 P_2}{\sigma_Z^2} \right) \right], \qquad (6)$$
$$R_1 + R_2 \le \mathbb{E}_{\Theta_1, \Theta_2}\left[ \frac{1}{2} \log\left( 1 + \frac{\gamma \Theta_1^2 P_1 + \beta \Theta_2^2 P_2}{\sigma_Z^2} \right) \right], \qquad (7)$$
$$D \ge \mathbb{E}_{\Theta_1, \Theta_2}\left[ \frac{\sigma_S^2 \left( \sigma_Z^2 + \gamma \Theta_1^2 P_1 + \beta \Theta_2^2 P_2 \right)}{\Theta_1^2 P_1 + \Theta_2^2 P_2 + \sigma_S^2 + \sigma_Z^2 + 2\Theta_1 \sqrt{\bar{\gamma} P_1 \sigma_S^2} + 2\Theta_2 \sqrt{\bar{\beta} P_2 \sigma_S^2} + 2\Theta_1 \Theta_2 \sqrt{\bar{\gamma} \bar{\beta} P_1 P_2}} \right], \qquad (8)$$
for some fractions $\gamma \in [0, 1]$ and $\beta \in [0, 1]$, with $\bar{\gamma} \triangleq 1 - \gamma$ and $\bar{\beta} \triangleq 1 - \beta$.
Proof. 
The proof is given in the following two sections, wherein Section 4 contains the achievability proof while the converse is proved in Section 5. □
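As a numerical companion to Theorem 1, the sketch below estimates the right-hand sides of (5)-(8) by Monte Carlo averaging over the fading distribution for a given power split. The Rayleigh fading law, the parameter values, and the function name are illustrative assumptions, not part of the theorem.

```python
import numpy as np

rng = np.random.default_rng(1)

def tradeoff_point(gamma, beta, P1, P2, sigma_S2, sigma_Z2, num_samples=200_000):
    """Monte Carlo estimate of the bounds (5)-(8) for one power split (gamma, beta)."""
    gbar, bbar = 1.0 - gamma, 1.0 - beta
    # assumed fading law: i.i.d. Rayleigh magnitudes with E[Theta^2] = 1
    T1 = rng.rayleigh(scale=np.sqrt(0.5), size=num_samples)
    T2 = rng.rayleigh(scale=np.sqrt(0.5), size=num_samples)

    R1 = np.mean(0.5 * np.log2(1 + gamma * T1**2 * P1 / sigma_Z2))
    R2 = np.mean(0.5 * np.log2(1 + beta * T2**2 * P2 / sigma_Z2))
    Rsum = np.mean(0.5 * np.log2(1 + (gamma * T1**2 * P1 + beta * T2**2 * P2) / sigma_Z2))

    num = sigma_S2 * (sigma_Z2 + gamma * T1**2 * P1 + beta * T2**2 * P2)
    den = (T1**2 * P1 + T2**2 * P2 + sigma_S2 + sigma_Z2
           + 2 * T1 * np.sqrt(gbar * P1 * sigma_S2)
           + 2 * T2 * np.sqrt(bbar * P2 * sigma_S2)
           + 2 * T1 * T2 * np.sqrt(gbar * bbar * P1 * P2))
    D = np.mean(num / den)
    return R1, R2, Rsum, D

# sweep the power-splitting fractions to trace out points of the trade-off region
for gamma in (0.0, 0.5, 1.0):
    for beta in (0.0, 0.5, 1.0):
        print(gamma, beta, tradeoff_point(gamma, beta, P1=1.0, P2=1.0,
                                          sigma_S2=1.0, sigma_Z2=0.5))
```

Setting $\gamma = \beta = 0$ and $\gamma = \beta = 1$ in this sweep reproduces the two extreme operating points discussed in Remarks 3 and 4 below.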
Implications of our result: We now discuss the main consequences of our Theorem 1 for the sensor network scenario outlined earlier in Figure 1. If a given transmitter (sensor node) has a message to communicate to the receiver (base station), then the optimal strategy involves splitting its available power budget into two parts: one part is used to send a scaled version of the state (uncoded state amplification), while the other part is used to communicate the message using dirty paper coding. The parameters γ and β in Theorem 1 precisely perform this role of power-sharing between the dual goals of communication and estimation. This will be elaborated upon in the proof of achievability in Section 4.
On the other hand, if a given transmitter (sensor node) has no messages to communicate to the receiver (base station), then the optimal strategy simply involves utilizing its entire power budget for uncoded state amplification, i.e., sending the scaled version
$$X_j = \sqrt{\frac{P_j}{\sigma_S^2}}\, S. \qquad (9)$$
In other words, uncoded transmission is optimal for such nodes. This phenomenon resembles that of [38], albeit the latter is in the context of non-fading links with no message communication and no state-dependence. We close this section with a series of remarks that shed further light on the implications of our Theorem 1 and its connection with existing results in the literature.
Remark 1. 
When the second sender is absent, i.e., P 2 = 0 , and with constant fading gains Θ 1 ( = Θ 2 ) = 1 almost surely, our Theorem 1 recovers the characterization of [20] for the point-to-point non-fading scenario as a special case.
Remark 2. 
When the fading gains are constant, i.e., Θ 1 = Θ 2 = 1 almost surely, our Theorem 1 recovers the characterization of [25] for the multiple-access non-fading scenario as a special case.
Remark 3. 
When $\gamma = \beta = 0$, we obtain an extreme point of the region with zero rates, i.e., $R_1 = R_2 = 0$, and the best state estimate, i.e., the minimum possible distortion
$$D_{\min} = \mathbb{E}_{\Theta_1, \Theta_2}\left[ \frac{\sigma_S^2 \sigma_Z^2}{\Theta_1^2 P_1 + \Theta_2^2 P_2 + \sigma_S^2 + \sigma_Z^2 + 2\Theta_1 \sqrt{P_1 \sigma_S^2} + 2\Theta_2 \sqrt{P_2 \sigma_S^2} + 2\Theta_1 \Theta_2 \sqrt{P_1 P_2}} \right]. \qquad (10)$$
This corresponds to each encoder utilizing its entire power budget for uncoded state amplification, and therefore no message communication is possible.
Remark 4. 
On the other hand, when γ = β = 1 , we obtain the other extreme point of the region with the maximum possible rates for a fading Gaussian MAC, and the worst state estimate, i.e., maximum possible distortion
$$R_1 \le R_{1,\max} = \mathbb{E}_{\Theta_1}\left[ \frac{1}{2} \log\left( 1 + \frac{\Theta_1^2 P_1}{\sigma_Z^2} \right) \right], \qquad (11)$$
$$R_2 \le R_{2,\max} = \mathbb{E}_{\Theta_2}\left[ \frac{1}{2} \log\left( 1 + \frac{\Theta_2^2 P_2}{\sigma_Z^2} \right) \right], \qquad (12)$$
$$R_1 + R_2 \le \mathbb{E}_{\Theta_1, \Theta_2}\left[ \frac{1}{2} \log\left( 1 + \frac{\Theta_1^2 P_1 + \Theta_2^2 P_2}{\sigma_Z^2} \right) \right], \qquad (13)$$
$$D \ge D_{\max} = \mathbb{E}_{\Theta_1, \Theta_2}\left[ \frac{\sigma_S^2 \left( \sigma_Z^2 + \Theta_1^2 P_1 + \Theta_2^2 P_2 \right)}{\Theta_1^2 P_1 + \Theta_2^2 P_2 + \sigma_S^2 + \sigma_Z^2} \right]. \qquad (14)$$
This corresponds to each encoder utilizing its entire power budget for message communication, and therefore no state amplification is possible, and maximum distortion is incurred in state estimation.
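For a quick numerical check of Remarks 3 and 4, the self-contained snippet below evaluates $D_{\min}$ in (10) and $D_{\max}$ in (14) under an assumed Rayleigh fading law; the parameter values are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(2)
P1, P2, sigma_S2, sigma_Z2 = 1.0, 1.0, 1.0, 0.5
T1 = rng.rayleigh(scale=np.sqrt(0.5), size=500_000)
T2 = rng.rayleigh(scale=np.sqrt(0.5), size=500_000)

# Remark 3: full-power uncoded state amplification (gamma = beta = 0), Eq. (10)
den_min = (T1**2 * P1 + T2**2 * P2 + sigma_S2 + sigma_Z2
           + 2 * T1 * np.sqrt(P1 * sigma_S2) + 2 * T2 * np.sqrt(P2 * sigma_S2)
           + 2 * T1 * T2 * np.sqrt(P1 * P2))
D_min = np.mean(sigma_S2 * sigma_Z2 / den_min)

# Remark 4: full-power message transmission (gamma = beta = 1), Eq. (14)
D_max = np.mean(sigma_S2 * (sigma_Z2 + T1**2 * P1 + T2**2 * P2)
                / (T1**2 * P1 + T2**2 * P2 + sigma_S2 + sigma_Z2))

print(f"D_min ~ {D_min:.4f}, D_max ~ {D_max:.4f} (both below sigma_S^2 = {sigma_S2})")
```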

4. Achievability

The achievability builds upon well-known techniques like dirty paper coding and successive cancellation, along with appropriate power splitting. The power $P_1$ available at encoder 1 is split into two parts: $\gamma P_1$ for message transmission and $\bar{\gamma} P_1$ for state amplification, for some $\gamma \in [0, 1]$. Similarly, the power $P_2$ available at the second encoder is split into $\beta P_2$ (message transmission) and $\bar{\beta} P_2$ (state amplification) for some $\beta \in [0, 1]$. Then, the following state amplification signals are generated
$$X_{1s,j} = \sqrt{\frac{\bar{\gamma} P_1}{\sigma_S^2}}\, S_j \quad \text{and} \quad X_{2s,j} = \sqrt{\frac{\bar{\beta} P_2}{\sigma_S^2}}\, S_j, \qquad 1 \le j \le n, \qquad (15)$$
at the respective encoders. In other words, the power fractions $\bar{\gamma} P_1$ and $\bar{\beta} P_2$ at encoders 1 and 2, respectively, are used for uncoded state amplification. Hence, (1) can be rewritten as
$$Y_j = \Theta_{1j} X_{1m,j} + \Theta_{1j} X_{1s,j} + \Theta_{2j} X_{2m,j} + \Theta_{2j} X_{2s,j} + S_j + Z_j = \Theta_{1j} X_{1m,j} + \Theta_{2j} X_{2m,j} + \left( 1 + \Theta_{1j} \sqrt{\frac{\bar{\gamma} P_1}{\sigma_S^2}} + \Theta_{2j} \sqrt{\frac{\bar{\beta} P_2}{\sigma_S^2}} \right) S_j + Z_j, \qquad (16)$$
where $\mathbb{E}[\|X_{1m}^n\|^2] \le n \gamma P_1$ and $\mathbb{E}[\|X_{2m}^n\|^2] \le n \beta P_2$, with both $X_{1m}^n$ and $X_{2m}^n$ being independent of $S^n$. The subscript $m$ in (16) indicates that the corresponding signals are intended for message transmission, whereas the subscript $s$ indicates state amplification signals. To communicate the messages across to the receiver, we invoke the writing on dirty paper result for a Gaussian MAC [37].
From the DPC result [17], we recall that a known state process over an AWGN channel can be completely canceled. In particular, a rate $R$ that satisfies
$$R \le I(U; Y) - I(U; S),$$
when evaluated for some feasible joint probability distribution $p_{U,S,X}(u, s, x)\, p_{Y|X,S}(y | x, s)$, can be achieved by Gelfand-Pinsker coding [39] for a point-to-point channel with a non-causally known state. In order to prove the achievability of the rates (5)–(7), we first consider a dirty paper channel with input $\Theta_{1j} X_{1m,j}$, known state
$$S'_j = \left( 1 + \Theta_{1j} \sqrt{\bar{\gamma} P_1 / \sigma_S^2} + \Theta_{2j} \sqrt{\bar{\beta} P_2 / \sigma_S^2} \right) S_j,$$
and unknown noise $\Theta_{2j} X_{2m,j} + Z_j$. We choose $U_{1j} = \Theta_{1j} X_{1m,j} + \alpha_{1j} S'_j$, with $X_{1m,j} \perp S'_j$ and $X_{1m,j} \sim \mathcal{N}(0, \gamma P_1)$, and
$$\alpha_{1j} = \frac{\gamma \Theta_{1j}^2 P_1}{\gamma \Theta_{1j}^2 P_1 + \beta \Theta_{2j}^2 P_2 + \sigma_Z^2}.$$
This yields the following rate for user-1 at time instant $j$ with the error probability approaching zero:
$$\frac{1}{2} \log\left( 1 + \frac{\gamma \Theta_{1j}^2 P_1}{\beta \Theta_{2j}^2 P_2 + \sigma_Z^2} \right).$$
The achievable rate for user-1 averaged over a time interval $\{1, 2, \ldots, n\}$ is
$$\frac{1}{n} \sum_{j=1}^{n} \frac{1}{2} \log\left( 1 + \frac{\gamma \Theta_{1j}^2 P_1}{\beta \Theta_{2j}^2 P_2 + \sigma_Z^2} \right),$$
which converges as $n \to \infty$ to
$$\mathbb{E}\left[ \frac{1}{2} \log\left( 1 + \frac{\gamma \Theta_1^2 P_1}{\beta \Theta_2^2 P_2 + \sigma_Z^2} \right) \right]$$
due to the stationarity and ergodicity of the fading processes. The decoded codeword $U_{1j}$ is then subtracted from the channel output to obtain another dirty paper channel
$$\tilde{Y}_j = Y_j - U_{1j} = \Theta_{2j} X_{2m,j} + (1 - \alpha_{1j}) S'_j + Z_j,$$
with input $\Theta_{2j} X_{2m,j}$, known state $S''_j = (1 - \alpha_{1j}) S'_j$ and unknown noise $Z_j$. We pick $U_{2j} = \Theta_{2j} X_{2m,j} + \alpha_{2j} S''_j$, with $X_{2m,j} \perp S''_j$ and $X_{2m,j} \sim \mathcal{N}(0, \beta P_2)$, and
$$\alpha_{2j} = \frac{\beta \Theta_{2j}^2 P_2}{\beta \Theta_{2j}^2 P_2 + \sigma_Z^2}.$$
This yields the following rate for user-2 at time instant $j$ with the error probability approaching zero:
$$\frac{1}{2} \log\left( 1 + \frac{\beta \Theta_{2j}^2 P_2}{\sigma_Z^2} \right).$$
The achievable rate for user-2 averaged over a time interval $\{1, 2, \ldots, n\}$ is
$$\frac{1}{n} \sum_{j=1}^{n} \frac{1}{2} \log\left( 1 + \frac{\beta \Theta_{2j}^2 P_2}{\sigma_Z^2} \right),$$
which converges as $n \to \infty$ to
$$\mathbb{E}\left[ \frac{1}{2} \log\left( 1 + \frac{\beta \Theta_2^2 P_2}{\sigma_Z^2} \right) \right]$$
due to the stationarity and ergodicity of the fading processes. By reversing the decoding order and using time-sharing, the region in expressions (5) through (7) can be achieved. Note that the right-hand sides of expressions (20) and (23) add up to the right-hand side of the sum rate expression in (7). For the state estimate, the receiver forms the linear minimum mean-squared error (MMSE) estimate $\hat{S}_j(Y_j)$ of $S_j$ based on $Y_j$:
$$\hat{S}_j(Y_j) = \frac{\left( \sigma_S^2 + \Theta_{1j} \sqrt{\bar{\gamma} P_1 \sigma_S^2} + \Theta_{2j} \sqrt{\bar{\beta} P_2 \sigma_S^2} \right) Y_j}{\Theta_{1j}^2 P_1 + \Theta_{2j}^2 P_2 + \sigma_S^2 + \sigma_Z^2 + 2\Theta_{1j} \sqrt{\bar{\gamma} P_1 \sigma_S^2} + 2\Theta_{2j} \sqrt{\bar{\beta} P_2 \sigma_S^2} + 2\Theta_{1j} \Theta_{2j} \sqrt{\bar{\gamma} \bar{\beta} P_1 P_2}} \triangleq \frac{c_1}{c_2}\, Y_j,$$
where
$$c_1 = \sigma_S^2 + \Theta_{1j} \sqrt{\bar{\gamma} P_1 \sigma_S^2} + \Theta_{2j} \sqrt{\bar{\beta} P_2 \sigma_S^2}, \qquad c_2 = \Theta_{1j}^2 P_1 + \Theta_{2j}^2 P_2 + \sigma_S^2 + \sigma_Z^2 + 2\Theta_{1j} \sqrt{\bar{\gamma} P_1 \sigma_S^2} + 2\Theta_{2j} \sqrt{\bar{\beta} P_2 \sigma_S^2} + 2\Theta_{1j} \Theta_{2j} \sqrt{\bar{\gamma} \bar{\beta} P_1 P_2}.$$
For the achievable distortion at time instant $j$, we evaluate the expected squared error between $S_j$ and $\hat{S}_j(Y_j)$ above. The resulting MMSE can be easily verified to be
$$\mathbb{E}\left[ |S_j - \hat{S}_j|^2 \right] = \sigma_S^2 - \frac{c_1^2}{c_2} = \frac{\sigma_S^2 \left( \sigma_Z^2 + \gamma \Theta_{1j}^2 P_1 + \beta \Theta_{2j}^2 P_2 \right)}{\Theta_{1j}^2 P_1 + \Theta_{2j}^2 P_2 + \sigma_S^2 + \sigma_Z^2 + 2\Theta_{1j} \sqrt{\bar{\gamma} P_1 \sigma_S^2} + 2\Theta_{2j} \sqrt{\bar{\beta} P_2 \sigma_S^2} + 2\Theta_{1j} \Theta_{2j} \sqrt{\bar{\gamma} \bar{\beta} P_1 P_2}}.$$
The achievable distortion averaged over a time interval $\{1, 2, \ldots, n\}$ is
$$\frac{1}{n} \sum_{j=1}^{n} \frac{\sigma_S^2 \left( \sigma_Z^2 + \gamma \Theta_{1j}^2 P_1 + \beta \Theta_{2j}^2 P_2 \right)}{\Theta_{1j}^2 P_1 + \Theta_{2j}^2 P_2 + \sigma_S^2 + \sigma_Z^2 + 2\Theta_{1j} \sqrt{\bar{\gamma} P_1 \sigma_S^2} + 2\Theta_{2j} \sqrt{\bar{\beta} P_2 \sigma_S^2} + 2\Theta_{1j} \Theta_{2j} \sqrt{\bar{\gamma} \bar{\beta} P_1 P_2}},$$
which converges as $n \to \infty$ to
$$\mathbb{E}\left[ \frac{\sigma_S^2 \left( \sigma_Z^2 + \gamma \Theta_1^2 P_1 + \beta \Theta_2^2 P_2 \right)}{\Theta_1^2 P_1 + \Theta_2^2 P_2 + \sigma_S^2 + \sigma_Z^2 + 2\Theta_1 \sqrt{\bar{\gamma} P_1 \sigma_S^2} + 2\Theta_2 \sqrt{\bar{\beta} P_2 \sigma_S^2} + 2\Theta_1 \Theta_2 \sqrt{\bar{\gamma} \bar{\beta} P_1 P_2}} \right]$$
due to the stationarity and ergodicity of the fading processes. This concludes the proof of achievability.
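The distortion part of this argument is easy to validate numerically. The sketch below superimposes the uncoded state components of (15) on independent Gaussian message-bearing signals, forms the per-symbol LMMSE estimate $(c_1/c_2) Y_j$, and compares the empirical squared error with the right-hand side of (8). The DPC decoding itself is not simulated, only the state-estimation side is checked, and the Rayleigh fading law and parameter values are our assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500_000
P1, P2, sigma_S2, sigma_Z2 = 1.0, 1.0, 1.0, 0.5
gamma, beta = 0.6, 0.3
gbar, bbar = 1 - gamma, 1 - beta

S = rng.normal(0, np.sqrt(sigma_S2), n)
Z = rng.normal(0, np.sqrt(sigma_Z2), n)
T1 = rng.rayleigh(scale=np.sqrt(0.5), size=n)
T2 = rng.rayleigh(scale=np.sqrt(0.5), size=n)

# message-bearing parts (stand-ins for the DPC codewords) and uncoded parts, Eq. (15)
X1m = rng.normal(0, np.sqrt(gamma * P1), n)
X2m = rng.normal(0, np.sqrt(beta * P2), n)
X1s = np.sqrt(gbar * P1 / sigma_S2) * S
X2s = np.sqrt(bbar * P2 / sigma_S2) * S

Y = T1 * (X1m + X1s) + T2 * (X2m + X2s) + S + Z   # Eq. (16)

# per-symbol LMMSE coefficient c1/c2 (the receiver knows T1, T2)
c1 = sigma_S2 + T1 * np.sqrt(gbar * P1 * sigma_S2) + T2 * np.sqrt(bbar * P2 * sigma_S2)
c2 = (T1**2 * P1 + T2**2 * P2 + sigma_S2 + sigma_Z2
      + 2 * T1 * np.sqrt(gbar * P1 * sigma_S2) + 2 * T2 * np.sqrt(bbar * P2 * sigma_S2)
      + 2 * T1 * T2 * np.sqrt(gbar * bbar * P1 * P2))
S_hat = (c1 / c2) * Y

empirical = np.mean((S - S_hat) ** 2)
predicted = np.mean(sigma_S2 * (sigma_Z2 + gamma * T1**2 * P1 + beta * T2**2 * P2) / c2)
print(f"empirical distortion {empirical:.4f} vs. right-hand side of (8): {predicted:.4f}")
```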

5. Converse

In this section, we prove that any successful scheme (one that has a vanishing probability of error and meets the distortion tolerance) must satisfy the rate-distortion constraints of Theorem 1. This is done in two subsections: in Section 5.1, we construct an outer bound on the rate-distortion trade-off region; subsequently, in Section 5.2, we prove that this outer bound is achievable, thereby proving Theorem 1.

5.1. Outer Bound

The proof of our outer bound is aided by the following lemma, adapted from (Equation (2), [20]).
Lemma 1. 
Any communication-estimation scheme achieving a distortion $D_n \triangleq \frac{1}{n} \mathbb{E}\left[ \| S^n - \hat{S}^n \|^2 \right]$ over blocklength $n$ satisfies
$$\frac{n}{2} \log\left( \frac{\sigma_S^2}{D_n} \right) \le I(S^n; Y^n, \Theta_1^n, \Theta_2^n).$$
Proof. 
The proof is given in Appendix A.1. □
Another useful property is the differential entropy maximizing property of the Gaussian distribution [40], i.e., for $X_g^n \sim \mathcal{N}(0, K)$,
$$h(X^n) \le h(X_g^n) \quad \text{whenever } \mathrm{Cov}(X^n) \preceq K.$$
The above facts will be extensively used in our proofs.
For $(\eta_1, \eta_2, \lambda) \in \mathbb{R}_+^3$, we define
$$L(\eta_1, \eta_2, \lambda) = \max\left\{ \eta_1 R_1 + \eta_2 R_2 + \frac{\lambda}{2} \log\frac{\sigma_S^2}{D} \right\},$$
where the maximum is over all $(R_1, R_2, D)$ obeying (5)–(8). We note that it suffices to restrict attention to $\eta_i \ge 0$, since $\eta_i < 0$ will trivially correspond to $R_i = 0$ in the maximization, a case already accounted for by $\eta_i = 0$. Likewise, since $D \le \sigma_S^2$, only $\lambda \ge 0$ need be considered. Therefore, we only consider non-negative weighting coefficients in the sequel. The converse is established by showing that if $(R_1, R_2, D_n)$ is achievable using block length $n$, then, for all $\eta_1, \eta_2, \lambda \ge 0$,
$$\eta_1 R_1 + \eta_2 R_2 + \frac{\lambda}{2} \log\frac{\sigma_S^2}{D_n} \le L(\eta_1, \eta_2, \lambda) + o(1), \qquad (27)$$
where $o(1)$ has the usual meaning in standard Landau notation. We note that since the components of the tuple $(W_1, W_2, S^n, \Theta_1^n, \Theta_2^n)$ are mutually independent, the Markov chain $X_1^n - S^n - X_2^n$ holds. Denoting
$$\sigma_{X|Y^n}^2 \triangleq \min_{\alpha \in \mathbb{R}^{n \times 1}} \mathbb{E}\left[ \left( X - \alpha^T Y^n \right)^2 \right],$$
we have, for the $i$-th entry in a block of transmissions,
$$\sigma_{X_{1i} + X_{2i} | S^n}^2 = \sigma_{X_{1i} | S^n}^2 + \sigma_{X_{2i} | S^n}^2.$$
We define the empirical covariance matrix $K_i$ of the vector $(X_{1i}, X_{2i}, S_i)$, with $K_i(p, l)$ denoting its entries. Let us denote
$$K_i(j, j) = \mathbb{E}[X_{ji}^2] = P_{ji}, \quad j = 1, 2,$$
where the $P_{ji}$, $j = 1, 2$, satisfy the power constraints
$$P_1 \ge \frac{1}{n} \sum_{i=1}^{n} P_{1i}, \qquad P_2 \ge \frac{1}{n} \sum_{i=1}^{n} P_{2i}.$$
Next, we introduce two parameters $\gamma_i \in [0, 1]$ and $\beta_i \in [0, 1]$ such that
$$\sigma_{X_{1i} | S^n}^2 = \gamma_i P_{1i}, \qquad (28)$$
$$\sigma_{X_{2i} | S^n}^2 = \beta_i P_{2i}. \qquad (29)$$
We now define two parameters $\gamma \in [0, 1]$ and $\beta \in [0, 1]$ such that
$$\gamma = \frac{1}{n P_1} \sum_{i=1}^{n} \gamma_i P_{1i}, \qquad \beta = \frac{1}{n P_2} \sum_{i=1}^{n} \beta_i P_{2i}. \qquad (30)$$
With this, we are ready to prove (27). Firstly, considering the case where $\eta_1 \ge \eta_2$ is sufficient, as a simple renaming of the indices will give us the other case. For $\eta_2 > 0$, since $\lambda$ is an arbitrary positive number, maximizing the left-hand side of (27) is equivalent to maximizing $\eta_1 R_1 + \eta_2 R_2 + \eta_2 \frac{\lambda_1}{2} \log\frac{\sigma_S^2}{D_n}$, where $\lambda = \eta_2 \lambda_1$. Dividing by $\eta_2$, and then renaming $\eta_1 / \eta_2$ as $\eta$ (and $\lambda_1$ as $\lambda$), the maximization becomes, for $\eta \ge 1$ and $\lambda \ge 0$,
$$\max\left\{ \eta R_1 + R_2 + \frac{\lambda}{2} \log\frac{\sigma_S^2}{D_n} \right\}.$$
For a given $\eta > 1$, three regimes of $\lambda$ arise, as shown in Figure 3. Let $R_1(\gamma)$, $R_2(\beta)$, $R_{\mathrm{sum}}(\gamma, \beta)$ and $D(\gamma, \beta)$, respectively, denote the right-hand sides of Equations (5)–(8). The following two lemmas are crucial to our proofs.
Lemma 2. 
For $\lambda \le 1$, and $\gamma, \beta$ defined in (30), we have
$$\eta R_1 + R_2 + \frac{\lambda}{2} \log\frac{\sigma_S^2}{D_n} \le (\eta - 1) R_1(\gamma) + R_{\mathrm{sum}}(\gamma, \beta) + \frac{\lambda}{2} \log\frac{\sigma_S^2}{D(\gamma, \beta)} + o(1).$$
Proof. 
The proof is given in Appendix A.2. □
Lemma 3. 
The function $g(\gamma, \beta) := R_{\mathrm{sum}}(\gamma, \beta) + \frac{1}{2} \log\frac{\sigma_S^2}{D(\gamma, \beta)}$ is a non-increasing function in each of the arguments when the other argument is held fixed, for $\gamma \in [0, 1]$ and $\beta \in [0, 1]$.
Proof. 
We first note that $D(\gamma, \beta)$ increases with $\gamma$ (or $\beta$); see (8). Furthermore, a straightforward inspection reveals that $g(\gamma, \beta)$ is non-increasing in each of the arguments. □
We now consider the different regimes for λ (see Figure 3).
Case 1 
($\lambda \le 1$ and $\eta \ge 1$): In this regime, Lemma 2 directly gives a bound on the weighted sum rate.
Case 2 
($\lambda \ge \eta$ and $\eta \ge 1$): Since $\eta \ge 1$, we have
$$\begin{aligned} \eta R_1 + R_2 + \frac{\lambda}{2} \log\frac{\sigma_S^2}{D_n} &\le \eta R_1 + \eta R_2 + \frac{\lambda}{2} \log\frac{\sigma_S^2}{D_n} = \eta (R_1 + R_2) + \frac{\lambda}{2} \log\frac{\sigma_S^2}{D_n} \\ &= \eta \left( R_1 + R_2 + \frac{1}{2} \log\frac{\sigma_S^2}{D_n} \right) + \frac{\lambda - \eta}{2} \log\frac{\sigma_S^2}{D_n} \\ &\overset{(a)}{\le} \eta \left( R_{\mathrm{sum}}(0, 0) + \frac{1}{2} \log\frac{\sigma_S^2}{D(0, 0)} \right) + \frac{\lambda - \eta}{2} \log\frac{\sigma_S^2}{D_n} \\ &\overset{(b)}{\le} 0 + \frac{\eta}{2} \log\frac{\sigma_S^2}{D(0, 0)} + \frac{\lambda - \eta}{2} \log\frac{\sigma_S^2}{D(0, 0)}, \end{aligned}$$
where $(a)$ follows from an application of Lemma 2 followed by Lemma 3, and $(b)$ follows from the fact that uncoded transmission of the state by the two users acting as a super-user with power $(\sqrt{P_1} + \sqrt{P_2})^2$ results in the minimal distortion possible [20].
Case 3 
($1 \le \lambda \le \eta$ and $\eta \ge 1$): Since $\lambda \ge 1$, we have
$$\begin{aligned} \eta R_1 + R_2 + \frac{\lambda}{2} \log\frac{\sigma_S^2}{D_n} &\le \eta R_1 + \lambda R_2 + \frac{\lambda}{2} \log\frac{\sigma_S^2}{D_n} = (\eta - \lambda) R_1 + \lambda \left( R_1 + R_2 + \frac{1}{2} \log\frac{\sigma_S^2}{D_n} \right) \\ &\le (\eta - \lambda) R_1 + \lambda \left( R_{\mathrm{sum}}(\gamma, 0) + \frac{1}{2} \log\frac{\sigma_S^2}{D(\gamma, 0)} \right), \end{aligned}$$
where the last step follows from Lemmas 2 and 3. From (A11) in Appendix A.2, it follows that the inequality $R_1 \le \mathbb{E}_{\Theta_1}\left[ \frac{1}{2} \log\left( 1 + \gamma \Theta_1^2 P_1 / \sigma_Z^2 \right) \right]$ holds. Thus,
$$\eta R_1 + R_2 + \frac{\lambda}{2} \log\frac{\sigma_S^2}{D_n} \le \eta R_{\mathrm{sum}}(\gamma, 0) + \frac{\lambda}{2} \log\frac{\sigma_S^2}{D(\gamma, 0)}.$$
We next prove that (32)–(34) define the region in Theorem 1.

5.2. Equivalence of Inner and Outer Bounds

We now show that the regions defined by the inner and outer bounds in Section 4 and Section 5.1 coincide, thereby establishing the capacity region. We will consider three regimes of $\lambda \ge 0$ for each $\eta \ge 1$, and prove that the maximal value of $\eta R_1 + R_2 + \frac{\lambda}{2} \log\frac{\sigma_S^2}{D}$ in the outer bound specified by (32)–(34) can be achieved.
While maximizing $\eta R_1 + R_2 + \frac{\lambda}{2} \log\frac{\sigma_S^2}{D_n}$, we already proved that $\lambda \ge \eta$ corresponds to an extreme point with zero sum-rate (Case 2 in Section 5.1). Clearly, the corresponding distortion lower bound $D(0, 0)$ for this case specified by (33) can be achieved by uncoded state transmission by both transmitters using all the available power. As a result, the condition $\lambda = \eta$ encompasses all $\lambda \ge \eta$. Moreover, the regime $1 \le \lambda \le \eta$ (Case 3 of Section 5.1) corresponds to the case where $R_2 = 0$. This implies that we only need to consider $\lambda = 1$ rather than $\lambda \in [1, \eta)$. Clearly, the region with $R_2 = 0$ follows from the single-user results of [35], but for a state process with variance $(\sqrt{P_2} + \sqrt{\sigma_S^2})^2$. This proves the achievability of the bound specified by (34). This leaves us with proving the achievability for those cases in which $0 < \lambda < 1$ holds, corresponding to (32). In this regime, the following lemma is crucial.
Lemma 4. 
For $0 < \lambda < 1$ and $\eta \ge 1$, the function $k(\gamma, \beta) := (\eta - 1) R_1(\gamma) + R_{\mathrm{sum}}(\gamma, \beta) + \frac{\lambda}{2} \log\frac{\sigma_S^2}{D(\gamma, \beta)}$ is jointly strictly concave in $(\gamma, \beta)$ for $0 \le \gamma \le 1$ and $0 \le \beta \le 1$.
Proof. 
The proof is given in Appendix A.3. □
Since we know that $\eta R_1 + R_2 + \frac{\lambda}{2} \log\frac{\sigma_S^2}{D_n} \le k(\gamma, \beta)$ for some value of $(\gamma, \beta) \in [0, 1] \times [0, 1]$, the strict concavity of $k(\cdot)$ implies the existence of a unique $(\gamma^*, \beta^*)$ for which $\eta R_1 + R_2 + \frac{\lambda}{2} \log\frac{\sigma_S^2}{D_n}$ is maximized for the given $\eta \ge 1$ and $0 < \lambda < 1$. Evidently, choosing these maximizing parameters $(\gamma^*, \beta^*)$ in our achievability result will give us the same operating point. Reversing the roles of $R_1$ and $R_2$, we have covered the whole region. Thus, we have established the achievability of the outer bound specified by (32)–(34). This completes the proof of Theorem 1.
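In practice, the operating point $(\gamma^*, \beta^*)$ for given weights $(\eta, \lambda)$ can be located numerically; since $k(\gamma, \beta)$ is strictly concave on $[0,1]^2$, even a simple grid search over a Monte Carlo estimate of $k$ finds its unique maximizer. The sketch below is such an illustration, again under an assumed Rayleigh fading law with parameter values of our choosing.

```python
import numpy as np

rng = np.random.default_rng(4)
P1, P2, sigma_S2, sigma_Z2 = 1.0, 1.0, 1.0, 0.5
eta, lam = 2.0, 0.5                      # weights with eta >= 1 and 0 < lam < 1
T1 = rng.rayleigh(scale=np.sqrt(0.5), size=20_000)
T2 = rng.rayleigh(scale=np.sqrt(0.5), size=20_000)

def k(gamma, beta):
    """Monte Carlo estimate of (eta-1) R1(gamma) + Rsum(gamma, beta) + (lam/2) log(sigma_S^2 / D(gamma, beta))."""
    gbar, bbar = 1 - gamma, 1 - beta
    R1 = np.mean(0.5 * np.log2(1 + gamma * T1**2 * P1 / sigma_Z2))
    Rsum = np.mean(0.5 * np.log2(1 + (gamma * T1**2 * P1 + beta * T2**2 * P2) / sigma_Z2))
    den = (T1**2 * P1 + T2**2 * P2 + sigma_S2 + sigma_Z2
           + 2 * T1 * np.sqrt(gbar * P1 * sigma_S2) + 2 * T2 * np.sqrt(bbar * P2 * sigma_S2)
           + 2 * T1 * T2 * np.sqrt(gbar * bbar * P1 * P2))
    D = np.mean(sigma_S2 * (sigma_Z2 + gamma * T1**2 * P1 + beta * T2**2 * P2) / den)
    return (eta - 1) * R1 + Rsum + 0.5 * lam * np.log2(sigma_S2 / D)

grid = np.linspace(0.0, 1.0, 41)
values = np.array([[k(g, b) for b in grid] for g in grid])
g_idx, b_idx = np.unravel_index(np.argmax(values), values.shape)
print(f"approximate maximizer: gamma* = {grid[g_idx]:.2f}, beta* = {grid[b_idx]:.2f}")
```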

6. Conclusions

In this paper, we investigated joint message transmission and state estimation over a state-dependent fading Gaussian multiple access channel and characterized the trade-off region between the message rates and the state estimation distortion. It was shown that the optimal strategy involves static power allocation and uncoded state amplification combined with Gaussian signaling and dirty paper coding. While the role of uncoded communications has been examined before for non-fading settings without state dependence, ours is the first result that brings out its significance in the context of state-dependent fading systems.
Our framework naturally generalizes previous results concerning state estimation on point-to-point fading channels to multiple users, as well as point-to-point non-fading settings to fading links with multiple users. Our results contribute to a better understanding of joint state estimation and communication problems in multi-terminal settings. They can be used as design guidelines for practical systems employing joint sensing and communication envisioned in future 6G wireless standards, and they broadly apply to systems that involve joint compression and communication/rate-distortion trade-offs.
However, we assumed perfect state observation at the transmitters in this work. A long-standing open problem is that of communicating state and message streams in a fading GMAC with noisy state observations at the transmitters, which is left for future work. Moreover, there could be settings where the receiver cannot track the channel fading gains either, unlike in this work. Thus, another interesting avenue for further research is an investigation of the current setup when the encoders, as well as the decoder, are totally uninformed about the fading coefficients on the links.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The author gratefully acknowledges the insightful comments and suggestions of the three anonymous reviewers, which have greatly helped to improve the presentation of the results in this manuscript.

Conflicts of Interest

The author declares no conflict of interest.

Appendix A

Appendix A.1. Proof of Lemma 1

The proof is along the lines of (Equation (2), [20]), with appropriate modifications to incorporate the fading coefficients $(\Theta_1^n, \Theta_2^n)$. We provide a full proof for completeness. Consider the following chain of inequalities, starting with the right-hand side in Lemma 1:
$$\begin{aligned} \frac{1}{n} I(S^n; Y^n, \Theta_1^n, \Theta_2^n) &= \frac{1}{n} \left( h(S^n) - h(S^n | Y^n, \Theta_1^n, \Theta_2^n) \right) \\ &\overset{(a)}{=} \frac{1}{n} \left( h(S^n) - h\left( S^n - \hat{S}^n(Y^n, \Theta_1^n, \Theta_2^n) \,\middle|\, Y^n, \Theta_1^n, \Theta_2^n \right) \right) \\ &\ge \frac{1}{n} \left( h(S^n) - h(S^n - \hat{S}^n) \right) \\ &\overset{(b)}{=} \frac{1}{n} \left( \sum_{i=1}^{n} h(S_i) - h(S^n - \hat{S}^n) \right) \\ &\ge \frac{1}{n} \sum_{i=1}^{n} \left( h(S_i) - h(S_i - \hat{S}_i) \right) = \frac{1}{n} \sum_{i=1}^{n} \left( \frac{1}{2} \log(2 \pi e \sigma_S^2) - h(S_i - \hat{S}_i) \right) \\ &\overset{(c)}{\ge} \frac{1}{n} \sum_{i=1}^{n} \left( \frac{1}{2} \log(2 \pi e \sigma_S^2) - \frac{1}{2} \log\left( 2 \pi e \, \mathbb{E}\left[ (S_i - \hat{S}_i)^2 \right] \right) \right) \\ &\overset{(d)}{\ge} \frac{1}{2} \log(2 \pi e \sigma_S^2) - \frac{1}{2} \log\left( 2 \pi e \, \frac{1}{n} \sum_{i=1}^{n} \mathbb{E}\left[ (S_i - \hat{S}_i)^2 \right] \right) \\ &= \frac{1}{2} \log(2 \pi e \sigma_S^2) - \frac{1}{2} \log\left( 2 \pi e \, \frac{1}{n} \mathbb{E}\left[ \| S^n - \hat{S}^n \|^2 \right] \right) \\ &= \frac{1}{2} \log\left( \frac{\sigma_S^2}{\frac{1}{n} \mathbb{E}\left[ \| S^n - \hat{S}^n \|^2 \right]} \right) = \frac{1}{2} \log\left( \frac{\sigma_S^2}{D_n} \right), \end{aligned}$$
where (a) follows since $\hat{S}^n(Y^n, \Theta_1^n, \Theta_2^n)$ is a function of $(Y^n, \Theta_1^n, \Theta_2^n)$, (b) follows since $S^n$ is i.i.d., (c) follows since the Gaussian distribution maximizes differential entropy for a given variance, while (d) follows from Jensen's inequality.

Appendix A.2. Proof of Lemma 2

By Fano's inequality [40], we can write, for any $\epsilon > 0$ and large enough $n$, $H(W_1, W_2 | Y^n) \le n \epsilon$. Then, we have
$$\begin{aligned} n \eta R_1 + n R_2 + \frac{n \lambda}{2} \log\frac{\sigma_S^2}{D_n} &= n (\eta - 1) R_1 + n \sum_{j=1}^{2} R_j + \frac{n \lambda}{2} \log\frac{\sigma_S^2}{D_n} \\ &\overset{(a)}{\le} (\eta - 1) H(W_1) + H(W_1, W_2) + \lambda I(S^n; Y^n, \Theta_1^n, \Theta_2^n) \\ &\overset{(b)}{=} (\eta - 1) H(W_1 | X_2^n, S^n, \Theta_1^n) + H(W_1, W_2 | S^n, \Theta_1^n, \Theta_2^n) + \lambda I(S^n; Y^n, \Theta_1^n, \Theta_2^n) \\ &\overset{(c)}{=} (\eta - 1) H(W_1 | X_2^n, S^n, \Theta_1^n) + H(W_1, W_2 | S^n, \Theta_1^n, \Theta_2^n) + \lambda I(S^n; Y^n | \Theta_1^n, \Theta_2^n) \\ &\overset{(d)}{\le} (\eta - 1) I(W_1; Y^n | X_2^n, S^n, \Theta_1^n) + n \epsilon + I(W_1, W_2; Y^n | S^n, \Theta_1^n, \Theta_2^n) + n \epsilon + \lambda I(S^n; Y^n | \Theta_1^n, \Theta_2^n) \\ &= (\eta - 1) I(W_1; Y^n | X_2^n, S^n, \Theta_1^n) + \lambda I(W_1, W_2, S^n; Y^n | \Theta_1^n, \Theta_2^n) + (1 - \lambda) I(W_1, W_2; Y^n | S^n, \Theta_1^n, \Theta_2^n) + 2 n \epsilon \\ &= (\eta - 1) \left( h(Y^n | X_2^n, S^n, \Theta_1^n) - h(Y^n | W_1, X_2^n, S^n, \Theta_1^n) \right) + \lambda \left( h(Y^n | \Theta_1^n, \Theta_2^n) - h(Y^n | W_1, W_2, S^n, \Theta_1^n, \Theta_2^n) \right) \\ &\quad + (1 - \lambda) \left( h(Y^n | S^n, \Theta_1^n, \Theta_2^n) - h(Y^n | W_1, W_2, S^n, \Theta_1^n, \Theta_2^n) \right) + 2 n \epsilon \\ &\le \sum_{i=1}^{n} (\eta - 1) \left( h(Y_i | X_{2i}, S_i, \Theta_{1i}) - h(Z_i) \right) + \sum_{i=1}^{n} \left\{ \lambda h(Y_i | \Theta_{1i}, \Theta_{2i}) + (1 - \lambda) h(Y_i | S_i, \Theta_{1i}, \Theta_{2i}) - h(Z_i) \right\} + 2 n \epsilon \\ &= \mathbb{E}\left[ \sum_{i=1}^{n} (\eta - 1) \left( h(Y_i | X_{2i}, S_i, \Theta_{1i} = \Theta_1) - h(Z_i) \right) \right] \\ &\quad + \mathbb{E}\left[ \sum_{i=1}^{n} \left( \lambda h(Y_i | \Theta_{1i} = \Theta_1, \Theta_{2i} = \Theta_2) + (1 - \lambda) h(Y_i | S_i, \Theta_{1i} = \Theta_1, \Theta_{2i} = \Theta_2) - h(Z_i) \right) \right] + 2 n \epsilon, \end{aligned}$$
where $(a)$ uses Lemma 1 and the fact that the messages are uniformly distributed on their respective alphabets, $(b)$ follows since $(W_1, W_2) \perp (S^n, \Theta_1^n, \Theta_2^n)$ and $W_1 \perp (X_2^n, S^n, \Theta_1^n)$, $(c)$ follows since $(\Theta_1^n, \Theta_2^n) \perp S^n$, and $(d)$ follows from Fano's inequality. We now upper bound the term $\lambda h(Y_i | \Theta_{1i} = \Theta_1, \Theta_{2i} = \Theta_2) + (1 - \lambda) h(Y_i | S_i, \Theta_{1i} = \Theta_1, \Theta_{2i} = \Theta_2)$ in (A2). Notice that for a given covariance matrix $K_i$, $\lambda h(Y_i | \Theta_{1i} = \Theta_1, \Theta_{2i} = \Theta_2) + (1 - \lambda) h(Y_i | S_i, \Theta_{1i} = \Theta_1, \Theta_{2i} = \Theta_2)$, $\lambda \in [0, 1]$, is simultaneously maximized when $(\Theta_1 X_{1i} + \Theta_2 X_{2i})$ is jointly Gaussian with $S_i$.
Without loss of generality, for the purposes of finding an upper bound on $\eta R_1 + R_2 + \frac{\lambda}{2} \log\frac{\sigma_S^2}{D_n}$, we can express $X_{ji}$ in terms of its linear least squares estimate given $S^n$ and an error term. Therefore,
$$X_{ji} = V_{ji} + W_{ji},$$
where $V_{ji}$ is uncorrelated with $S^n$, and $W_{ji} = \sum_{k=1}^{n} \beta_{jik} S_k$ for appropriate coefficients $\{\beta_{jik}, k \in \{1, 2, \ldots, n\}\}$. But it is readily verified from expressions (28), (29) and (A3) that $\mathbb{E}[W_{1i}^2] = (1 - \gamma_i) P_{1i}$ and $\mathbb{E}[W_{2i}^2] = (1 - \beta_i) P_{2i}$. Now, for a fixed $\mathbb{E}[W_{1i}^2] = (1 - \gamma_i) P_{1i}$, it follows that the variance of $\Theta_1 X_{1i} + S_i = \Theta_1 V_{1i} + (\Theta_1 W_{1i} + S_i)$ would be maximized when $\Theta_1 W_{1i}$ is a scaled version of $S_i$, i.e.,
$$\Theta_1 X_{1i} = \Theta_1 V_{1i} + \mu_{1i} \Theta_1 \sqrt{\frac{(1 - \gamma_i) P_{1i}}{\sigma_S^2}}\, S_i,$$
where $\mu_{1i} \in \{-1, +1\}$ is the sign of the correlation between $X_{1i}$ and $S_i$. Likewise, we can express
$$\Theta_2 X_{2i} = \Theta_2 V_{2i} + \mu_{2i} \Theta_2 \sqrt{\frac{(1 - \beta_i) P_{2i}}{\sigma_S^2}}\, S_i,$$
where $\mu_{2i} \in \{-1, +1\}$ is the sign of the correlation between $X_{2i}$ and $S_i$. Adding expressions (A4) and (A5), we obtain
$$\sum_{j=1}^{2} \Theta_j X_{ji} = V_i + \left( \mu_{1i} \Theta_1 \sqrt{\frac{(1 - \gamma_i) P_{1i}}{\sigma_S^2}} + \mu_{2i} \Theta_2 \sqrt{\frac{(1 - \beta_i) P_{2i}}{\sigma_S^2}} \right) S_i,$$
where $V_i = \Theta_1 V_{1i} + \Theta_2 V_{2i}$ is zero-mean Gaussian and independent of $S^n$. The second term on the right-hand side of (A6) can be understood as the linear estimate of $(\Theta_1 X_{1i} + \Theta_2 X_{2i})$ given $S_i$. Since $V_i = \Theta_1 V_{1i} + \Theta_2 V_{2i}$ and $S^n$ are independent, it follows from (28) and (29) that
$$\mathrm{Var}[V_i] = \sigma^2_{\Theta_1 X_{1i} + \Theta_2 X_{2i} | S^n} = \gamma_i \Theta_1^2 P_{1i} + \beta_i \Theta_2^2 P_{2i}.$$
Using this, it follows that
$$\mathbb{E}\left[ (\Theta_1 X_{1i} + \Theta_2 X_{2i})^2 \right] \le \Theta_1^2 P_{1i} + \Theta_2^2 P_{2i} + 2 \Theta_1 \Theta_2 \sqrt{(1 - \gamma_i)(1 - \beta_i) P_{1i} P_{2i}},$$
where we have taken $\mu_{1i} = \mu_{2i} = 1$ as the sign of correlation in (A6), since negative correlation can only be detrimental for the right-hand side. Denoting $\kappa(x) = (1/2) \log(2 \pi e x)$ and using the differential entropy maximizing property of Gaussian random variables for a given variance, we have
$$h(Y_i | X_{2i}, S_i, \Theta_{1i} = \Theta_1) \le \kappa\left( \gamma_i \Theta_1^2 P_{1i} + \sigma_Z^2 \right),$$
$$h(Y_i | S_i, \Theta_{1i} = \Theta_1, \Theta_{2i} = \Theta_2) \le \kappa\left( \gamma_i \Theta_1^2 P_{1i} + \beta_i \Theta_2^2 P_{2i} + \sigma_Z^2 \right),$$
$$h(Y_i | \Theta_{1i} = \Theta_1, \Theta_{2i} = \Theta_2) \le \kappa\left( \Theta_1^2 P_{1i} + \Theta_2^2 P_{2i} + \sigma_S^2 + \sigma_Z^2 + 2 \Theta_1 \sqrt{\bar{\gamma}_i P_{1i} \sigma_S^2} + 2 \Theta_2 \sqrt{\bar{\beta}_i P_{2i} \sigma_S^2} + 2 \Theta_1 \Theta_2 \sqrt{\bar{\gamma}_i \bar{\beta}_i P_{1i} P_{2i}} \right),$$
where (A8) is under the choice of $(\Theta_1 X_{1i} + \Theta_2 X_{2i})$ which maximizes $\lambda h(Y_i | \Theta_{1i} = \Theta_1, \Theta_{2i} = \Theta_2) + (1 - \lambda) h(Y_i | S_i, \Theta_{1i} = \Theta_1, \Theta_{2i} = \Theta_2)$, for all $\lambda \in [0, 1]$. Continuing the sequence of inequalities from (A2):
$$\begin{aligned} n \eta R_1 + n R_2 + \frac{n \lambda}{2} \log\frac{\sigma_S^2}{D_n} - 2 n \epsilon &\overset{(a)}{\le} \mathbb{E}\left[ \sum_{i=1}^{n} (\eta - 1) \frac{1}{2} \log\frac{\gamma_i \Theta_1^2 P_{1i} + \sigma_Z^2}{\sigma_Z^2} \right] - \frac{n}{2} \log(\sigma_Z^2) \\ &\quad + \mathbb{E}\left[ \sum_{i=1}^{n} \frac{\lambda}{2} \log\left( \Theta_1^2 P_{1i} + \Theta_2^2 P_{2i} + \sigma_S^2 + \sigma_Z^2 + 2 \Theta_1 \sqrt{\bar{\gamma}_i P_{1i} \sigma_S^2} + 2 \Theta_2 \sqrt{\bar{\beta}_i P_{2i} \sigma_S^2} + 2 \Theta_1 \Theta_2 \sqrt{\bar{\gamma}_i \bar{\beta}_i P_{1i} P_{2i}} \right) \right] \\ &\quad + \mathbb{E}\left[ \sum_{i=1}^{n} \frac{1 - \lambda}{2} \log\left( \gamma_i \Theta_1^2 P_{1i} + \beta_i \Theta_2^2 P_{2i} + \sigma_Z^2 \right) \right] \\ &\overset{(b)}{\le} \mathbb{E}\left[ (\eta - 1) \frac{n}{2} \log\frac{\gamma \Theta_1^2 P_1 + \sigma_Z^2}{\sigma_Z^2} \right] - \frac{n}{2} \log(\sigma_Z^2) \\ &\quad + \mathbb{E}\left[ \frac{\lambda n}{2} \log\left( \Theta_1^2 P_1 + \Theta_2^2 P_2 + \sigma_S^2 + \sigma_Z^2 + 2 \Theta_1 \sqrt{\bar{\gamma} P_1 \sigma_S^2} + 2 \Theta_2 \sqrt{\bar{\beta} P_2 \sigma_S^2} + 2 \Theta_1 \Theta_2 \sqrt{\bar{\gamma} \bar{\beta} P_1 P_2} \right) \right] \\ &\quad + \mathbb{E}\left[ \frac{n (1 - \lambda)}{2} \log\left( \gamma \Theta_1^2 P_1 + \beta \Theta_2^2 P_2 + \sigma_Z^2 \right) \right] \\ &\overset{(c)}{=} \mathbb{E}\left[ (\eta - 1) \frac{n}{2} \log\frac{\gamma \Theta_1^2 P_1 + \sigma_Z^2}{\sigma_Z^2} \right] + n R_{\mathrm{sum}}(\gamma, \beta) + \frac{\lambda n}{2} \log\frac{\sigma_S^2}{D(\gamma, \beta)}, \end{aligned}$$
where (a) follows from the fact that both $\lambda$ and $(1 - \lambda)$ are non-negative for $\lambda \in [0, 1]$ and from expressions (A7)–(A9), (b) follows from Jensen's inequality, and (c) follows from the definitions of $R_{\mathrm{sum}}(\gamma, \beta)$ and $D(\gamma, \beta)$ in (7) and (8). Thus, the lemma is proved for all $\lambda \in [0, 1]$. Following similar lines, the individual rate $R_1$ can be bounded using Fano's inequality and (A7), to obtain
$$R_1 \le \mathbb{E}\left[ \frac{1}{2} \log\frac{\gamma \Theta_1^2 P_1 + \sigma_Z^2}{\sigma_Z^2} \right].$$

Appendix A.3. Proof of Lemma 4

Consider a concave function $T(\underline{\omega})$, where $\underline{\omega} = [\omega_1\ \omega_2\ \cdots\ \omega_N] \in [0, 1]^N$, and define
$$f(\underline{\omega}) := \sum_{i=1}^{N} \delta_i \log\left( 1 + \sum_{j=1}^{i} \omega_j P_j \right) + \frac{\lambda}{2} \log T(\underline{\omega}),$$
where the $\delta_i$, $1 \le i \le N$, and $\lambda$ are non-negative constants.
Lemma A1. 
For $0 < \lambda \le 1$, $f(\cdot)$ is strictly concave in $\underline{\omega} \in [0, 1]^N$, whenever the $\delta_i$, $1 \le i \le N$, are not identically zero.
Proof. 
The first term, which is a linear combination of logarithms, is strictly concave. We next consider the second term. Let $\underline{y_1}$ and $\underline{y_2}$ be two $N$-dimensional vectors in $\mathbb{R}^N$. Notice that for $\iota \in [0, 1]$,
$$\iota \log T(\underline{y_1}) + (1 - \iota) \log T(\underline{y_2}) \le \log\left( \iota T(\underline{y_1}) + (1 - \iota) T(\underline{y_2}) \right) \le \log T\left( \iota \underline{y_1} + (1 - \iota) \underline{y_2} \right),$$
since $T(\cdot)$ itself is concave by assumption. □
Let us now proceed to prove Lemma 4. We denote
$$T(\gamma, \beta) = \Theta_1^2 P_1 + \Theta_2^2 P_2 + \sigma_S^2 + \sigma_Z^2 + 2 \Theta_1 \sqrt{\bar{\gamma} P_1 \sigma_S^2} + 2 \Theta_2 \sqrt{\bar{\beta} P_2 \sigma_S^2} + 2 \Theta_1 \Theta_2 \sqrt{\bar{\gamma} \bar{\beta} P_1 P_2},$$
for convenience. Note that the function
$$\tilde{k}(\gamma, \beta) \triangleq \frac{\eta - 1}{2} \log\left( 1 + \frac{\gamma \Theta_1^2 P_1}{\sigma_Z^2} \right) + \frac{\lambda}{2} \log\frac{T(\gamma, \beta)}{\sigma_Z^2} + \frac{1 - \lambda}{2} \log\left( 1 + \frac{\gamma \Theta_1^2 P_1 + \beta \Theta_2^2 P_2}{\sigma_Z^2} \right)$$
is a sum similar to (A12). We first show that $\tilde{k}(\gamma, \beta)$ is strictly concave by proving $T(\cdot)$ in (A14) to be a concave function. For $d_0 > 0$ and non-negative constants $d_1, \ldots, d_N$, the function
$$T(\underline{\omega}) = d_0 + \sum_{i=1}^{N} \sum_{j=1}^{N} d_i d_j \sqrt{(1 - \omega_i)(1 - \omega_j)}$$
is concave. To see this, we note that $\sqrt{x}$ is strictly concave in $x \ge 0$. Moreover, $\sqrt{x z}$ is jointly concave in $(x, z) \in [0, 1]^2$, implying that $T(\underline{\omega})$ is a concave function. Notice that concavity in the range of interest is unaffected if every variable $x \in [0, 1]$ is replaced by $1 - x$. Finally, it follows that the function in Lemma 4, $k(\gamma, \beta) = \mathbb{E}[\tilde{k}(\gamma, \beta)]$, is strictly concave as well, as a result of Jensen's inequality. This concludes the proof of the lemma.
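As a simple numerical sanity check on the concavity argument above, the sketch below tests midpoint concavity of the generic function $T(\underline{\omega}) = d_0 + \sum_{i,j} d_i d_j \sqrt{(1-\omega_i)(1-\omega_j)}$ at randomly drawn points of $[0,1]^N$; the dimensions and coefficient values are arbitrary assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)

def T(omega, d0, d):
    """T(omega) = d0 + sum_{i,j} d_i d_j sqrt((1-omega_i)(1-omega_j)) = d0 + (sum_i d_i sqrt(1-omega_i))^2."""
    return d0 + np.sum(d * np.sqrt(1.0 - omega)) ** 2

N, d0 = 3, 0.7
d = rng.uniform(0.1, 2.0, N)

# midpoint-concavity check: T((x+y)/2) >= (T(x)+T(y))/2 should hold for all x, y in [0,1]^N
worst = np.inf
for _ in range(10_000):
    x, y = rng.uniform(0, 1, N), rng.uniform(0, 1, N)
    gap = T(0.5 * (x + y), d0, d) - 0.5 * (T(x, d0, d) + T(y, d0, d))
    worst = min(worst, gap)
print("smallest midpoint-concavity gap (should be >= 0 up to floating-point error):", worst)
```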

References

  1. Chiriyath, A.R.; Paul, B.; Jacyna, G.M.; Bliss, D.W. Inner bounds on performance of radar and communications co-existence. IEEE Trans. Signal Process. 2015, 64, 464–474.
  2. Liu, F.; Liu, Y.F.; Li, A.; Masouros, C.; Eldar, Y.C. Cramér-Rao Bound Optimization for Joint Radar-Communication Beamforming. IEEE Trans. Signal Process. 2021, 70, 240–253.
  3. Shi, C.; Wang, F.; Salous, S.; Zhou, J. Low probability of intercept-based radar waveform design for spectral coexistence of distributed multiple-radar and wireless communication systems in clutter. Entropy 2018, 20, 197.
  4. Shi, C.; Wang, F.; Salous, S.; Zhou, J.; Hu, Z. Nash bargaining game-theoretic framework for power control in distributed multiple-radar architecture underlying wireless communication system. Entropy 2018, 20, 267.
  5. Kumari, P.; Choi, J.; González-Prelcic, N.; Heath, R.W. IEEE 802.11 ad-based radar: An approach to joint vehicular communication-radar system. IEEE Trans. Veh. Technol. 2017, 67, 3012–3027.
  6. Liu, F.; Masouros, C.; Petropulu, A.P.; Griffiths, H.; Hanzo, L. Joint radar and communication design: Applications, state-of-the-art, and the road ahead. IEEE Trans. Commun. 2020, 68, 3834–3862.
  7. Gaudio, L.; Kobayashi, M.; Caire, G.; Colavolpe, G. On the effectiveness of OTFS for joint radar parameter estimation and communication. IEEE Trans. Wirel. Commun. 2020, 19, 5951–5965.
  8. Xu, W.; Alshamary, H.A.; Al-Naffouri, T.; Zaib, A. Optimal joint channel estimation and data detection for massive SIMO wireless systems: A polynomial complexity solution. IEEE Trans. Inf. Theory 2020, 66, 1822–1844.
  9. Pucci, L.; Paolini, E.; Giorgetti, A. System-level analysis of joint sensing and communication based on 5G new radio. IEEE J. Sel. Areas Commun. 2022, 40, 2043–2055.
  10. Qi, Q.; Chen, X.; Khalili, A.; Zhong, C.; Zhang, Z.; Ng, D.W.K. Integrating Sensing, Computing, and Communication in 6G Wireless Networks: Design and Optimization. IEEE Trans. Commun. 2022, 70, 6212–6227.
  11. Wang, Z.; Mu, X.; Liu, Y.; Xu, X.; Zhang, P. NOMA-aided joint communication, sensing, and multi-tier computing systems. IEEE J. Sel. Areas Commun. 2023, 41, 574–588.
  12. Kobayashi, M.; Caire, G.; Kramer, G. Joint state sensing and communication: Optimal tradeoff for a memoryless case. In Proceedings of the 2018 IEEE International Symposium on Information Theory (ISIT), Vail, CO, USA, 17–22 June 2018; pp. 111–115.
  13. Kobayashi, M.; Hamad, H.; Kramer, G.; Caire, G. Joint state sensing and communication over memoryless multiple access channels. In Proceedings of the 2019 IEEE International Symposium on Information Theory (ISIT), Paris, France, 7–12 July 2019; pp. 270–274.
  14. Ahmadipour, M.; Wigger, M.; Kobayashi, M. Joint sensing and communication over memoryless broadcast channels. In Proceedings of the 2020 IEEE Information Theory Workshop (ITW), Riva del Garda, Italy, 11–15 April 2021; pp. 1–5.
  15. Ahmadipour, M.; Wigger, M.; Kobayashi, M. Coding for Sensing: An Improved Scheme for Integrated Sensing and Communication over MACs. arXiv 2022, arXiv:2202.00989.
  16. Ahmadipour, M.; Kobayashi, M.; Wigger, M.; Caire, G. An information-theoretic approach to joint sensing and communication. IEEE Trans. Inf. Theory 2023, 2023, 3176139.
  17. Costa, M.H. Writing on dirty paper (corresp.). IEEE Trans. Inf. Theory 1983, 29, 439–441.
  18. Steinberg, Y.; Merhav, N. Identification in the presence of side information with application to watermarking. IEEE Trans. Inf. Theory 2001, 47, 1410–1422.
  19. Weingarten, H.; Steinberg, Y.; Shamai, S. The capacity region of the Gaussian multiple-input multiple-output broadcast channel. IEEE Trans. Inf. Theory 2006, 52, 3936–3964.
  20. Sutivong, A.; Chiang, M.; Cover, T.M.; Kim, Y.H. Channel capacity and state estimation for state-dependent Gaussian channels. IEEE Trans. Inf. Theory 2005, 51, 1486–1495.
  21. Zhang, W.; Vedantam, S.; Mitra, U. Joint transmission and state estimation: A constrained channel coding approach. IEEE Trans. Inf. Theory 2011, 57, 7084–7095.
  22. Tian, C.; Bandemer, B.; Shamai, S. Gaussian State Amplification with Noisy Observations. IEEE Trans. Inf. Theory 2015, 61, 4587–4597.
  23. Bross, S.I.; Lapidoth, A. The Gaussian Source-and-Data-Streams Problem. IEEE Trans. Commun. 2019, 67, 5618–5628.
  24. Zhao, Y.; Chen, B. Capacity theorems for multi-functioning radios. In Proceedings of the 2014 IEEE International Symposium on Information Theory (ISIT), Honolulu, HI, USA, 29 June–4 July 2014; pp. 2406–2410.
  25. Ramachandran, V.; Pillai, S.R.B.; Prabhakaran, V.M. Joint state estimation and communication over a state-dependent Gaussian multiple access channel. IEEE Trans. Commun. 2019, 67, 6743–6752.
  26. Koyluoglu, O.O.; Soundararajan, R.; Vishwanath, S. State Amplification Subject to Masking Constraints. IEEE Trans. Inf. Theory 2016, 62, 6233–6250.
  27. Merhav, N.; Shamai, S. Information rates subject to state masking. IEEE Trans. Inf. Theory 2007, 53, 2254–2261.
  28. Bross, S.I. Message and Causal Asymmetric State Transmission Over the State-Dependent Degraded Broadcast Channel. IEEE Trans. Inf. Theory 2020, 66, 3342–3365.
  29. Tse, D.N.C.; Hanly, S.V. Multiaccess fading channels. I. Polymatroid structure, optimal resource allocation and throughput capacities. IEEE Trans. Inf. Theory 1998, 44, 2796–2815.
  30. Das, A.; Narayan, P. Capacities of time-varying multiple-access channels with side information. IEEE Trans. Inf. Theory 2002, 48, 4–25.
  31. Sreekumar, S.; Dey, B.K.; Pillai, S.R.B. Distributed rate adaptation and power control in fading multiple access channels. IEEE Trans. Inf. Theory 2015, 61, 5504–5524.
  32. Vaze, C.S.; Varanasi, M.K. Dirty paper coding for fading channels with partial transmitter side information. In Proceedings of the 2008 42nd Asilomar Conference on Signals, Systems and Computers, Pacific Grove, CA, USA, 26–29 October 2008; pp. 341–345.
  33. Rini, S.; Shamai, S. The impact of phase fading on the dirty paper coding channel. In Proceedings of the 2014 IEEE International Symposium on Information Theory, Honolulu, HI, USA, 29 June–4 July 2014; pp. 2287–2291.
  34. Goldsmith, A.J.; Varaiya, P.P. Capacity of fading channels with channel side information. IEEE Trans. Inf. Theory 1997, 43, 1986–1992.
  35. Ramachandran, V. Joint Communication and State Estimation over a State-Dependent Fading Gaussian Channel. IEEE Wirel. Commun. Lett. 2022, 11, 367–370.
  36. Steinberg, Y. Coding for the degraded broadcast channel with random parameters, with causal and noncausal side information. IEEE Trans. Inf. Theory 2005, 51, 2867–2877.
  37. Kim, Y.H.; Sutivong, A.; Sigurjonsson, S. Multiple user writing on dirty paper. In Proceedings of the 2004 IEEE International Symposium on Information Theory, Chicago, IL, USA, 27 June–2 July 2004; p. 534.
  38. Gastpar, M. Uncoded transmission is exactly optimal for a simple Gaussian sensor network. IEEE Trans. Inf. Theory 2008, 54, 5247–5251.
  39. Gelfand, S.; Pinsker, M. Coding for channels with random parameters. Probl. Control. Inf. Theory 1980, 9, 19–31.
  40. Cover, T.M.; Thomas, J.A. Elements of Information Theory; John Wiley & Sons: Hoboken, NJ, USA, 2012.
Figure 1. Sensor network example.
Figure 2. State estimation over a fading Gaussian MAC with state, without fading knowledge at the transmitters.
Figure 3. Range of λ for a given η.
Table 1. Summary of paper contributions. Note that single-user (noncausal) refers to a point-to-point state-dependent channel with noncausal transmitter state information, BC (causal) refers to a state-dependent broadcast channel with causal transmitter state information, while MAC (noncausal) refers to a state-dependent multiple access channel with noncausal state information at all the transmitters.

|                     | Single-User (Noncausal) |           | BC (Causal) | MAC (Noncausal) |           |
|---------------------|-------------------------|-----------|-------------|-----------------|-----------|
|                     | No Fading               | Fading    | No Fading   | No Fading       | Fading    |
| No State Estimation | [17]                    | [32]      | [36]        | [37]            | This work |
| State Estimation    | [20]                    | This work | [28]        | [25]            | This work |