Article

Upper Bound on the Joint Entropy of Correlated Sources Encoded by Good Lattices

School of Electrical, Computer and Energy Engineering, Arizona State University, Tempe, AZ 85281, USA
*
Author to whom correspondence should be addressed.
Entropy 2019, 21(10), 957; https://doi.org/10.3390/e21100957
Submission received: 6 August 2019 / Revised: 9 September 2019 / Accepted: 27 September 2019 / Published: 29 September 2019

Abstract

Lattices provide useful structure for distributed coding of correlated sources. A common lattice encoder construction is to first round an observed sequence to a 'fine' lattice with dither, then produce the result's modulo to a 'coarse' lattice as the encoding. However, such encodings may be jointly dependent. A class of upper bounds is established on the conditional entropy-rates of such encodings when the sources are correlated and Gaussian and the lattices involved are drawn from an asymptotically well-behaved sequence. These upper bounds guarantee the existence of a joint-compression stage which can increase encoder efficiency. The bounds exploit the property that the set of possible values for one encoding collapses when conditioned on other, sufficiently informative encodings. The bounds are applied to the scenario of communicating through a many-help-one network in the presence of strong, correlated Gaussian interferers, and such a joint-compression stage is seen to compensate for some of the inefficiency in certain simple encoder designs.


1. Introduction

Lattice codes are a useful tool for information-theoretic analysis of communications networks. Sequences of lattices can be designed to possess certain properties which make them useful for noisy channel coding or source coding in the limit of large dimension. These properties have been termed 'good for channel coding' and 'good for source coding' [1]. Sequences possessing both properties exist, and an arbitrary number of such sequences can be nested [2]. One application of 'good' sequences of nested lattices is in the construction of distributed source codes for Gaussian signals. Well-designed codes built from such lattices enable encoders to produce a more efficient representation of their observations than would be possible without joint code design [3]. Such codes can provide optimal or near-optimal solutions to coding problems [4,5,6]. Despite their demonstrated ability to compress signals well in these cases, the literature has identified redundancies across lattice encodings in other contexts [7,8,9,10]. In these cases, further compression of the encodings is possible. This paper studies the correlation between lattice encodings of a certain design.
A class of upper bounds on the conditional Shannon entropies between lattice encodings of correlated Gaussian sources is produced by exploiting linear relations between lattice encodings and their underlying signals’ covariances. The key idea behind the analysis is that when the lattice-modulo of one random signal is conditioned on the lattice-modulo of a related signal, the region of feasible points for the first modulo collapses. A sketch of this support reduction is shown in Figure 1. This process is repeated until all information from the conditionals is integrated into the estimate of the support set. The upper bound establishes stronger performance limits for such coding structures since it demonstrates that encoders are able to convey the same encodings at lower messaging rates.
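As a toy numerical illustration of this support collapse (a sketch only: scalar integer lattices stand in for the high-dimensional 'good' lattices used in the analysis, and all parameter values are hypothetical), the following script builds dithered modulo encodings of two correlated Gaussian sources and compares the empirical entropy of one encoding against its conditional entropy given the other:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Two highly correlated Gaussian sources.
rho = 0.99
x1 = rng.standard_normal(n)
x2 = rho * x1 + np.sqrt(1 - rho**2) * rng.standard_normal(n)

# Scalar stand-ins for the nested pair: fine lattice 0.1*Z inside coarse lattice 4*Z.
fine, coarse = 0.1, 4.0

def encode(x, w):
    """Dithered round to the fine lattice, then modulo the coarse lattice."""
    y = fine * np.round((x + w) / fine)   # round_{B_k}(x + w)
    return np.mod(y, coarse)              # mod_{B_c}(.), base region [0, 4)

w1 = rng.uniform(-fine / 2, fine / 2, n)
w2 = rng.uniform(-fine / 2, fine / 2, n)
q1 = np.round(encode(x1, w1) / fine).astype(int)  # integer labels of U_1
q2 = np.round(encode(x2, w2) / fine).astype(int)  # integer labels of U_2

def entropy_bits(a):
    """Empirical entropy (bits) of the rows of an integer array."""
    _, counts = np.unique(a.reshape(len(a), -1), axis=0, return_counts=True)
    p = counts / counts.sum()
    return -(p * np.log2(p)).sum()

h_u2 = entropy_bits(q2)
h_u2_given_u1 = entropy_bits(np.column_stack([q1, q2])) - entropy_bits(q1)
print(f"H(U2) ~ {h_u2:.2f} bits, H(U2|U1) ~ {h_u2_given_u1:.2f} bits")
```

With these toy numbers, the conditional entropy comes out well below the unconditional entropy: exactly the redundancy that a joint-compression stage can remove.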

1.1. Contributions

The following novel contributions are provided:
  • A class of upper bounds on conditional entropy-rates of appropriately designed lattice-encoded Gaussian signals.
  • An application of the bounds to the problem of point-to-point communication through a many-help-one network in the presence of interference. This strategy takes advantage of a specially designed transmitter codebook’s lattice structure.
  • A numerical experiment demonstrating the behavior of these bounds. It is seen that a joint–compression stage can partially alleviate inefficiencies in lattice encoder design.

1.2. Background

The redundancy of lattice-modulo-encoded messages has been noticed before, usually in the context of the following many-help-one problem: many 'helpers' observe correlated Gaussian signals and forward messages to a decoder which is interested in recovering a linear combination of those signals. Towards this end, Wagner [7] provides upper and lower bounds on conditional entropies like those considered here for a case with two lattice encodings. Yang [8] realized a similar compression scheme for such encodings using further lattice processing and presented an insightful 'coset planes' abstraction. It was further noticed by Yang [9] that improvement on the many-help-one problem is obtained by splitting helper messages into two parts: one part a coarse quantization of the signal, compressed across helpers via Slepian–Wolf joint compression (these message parts corresponding to the 'high bit planes'), and the other a lattice-modulo encoding representing signal details (corresponding to 'low bit planes'). This paper extends these ideas to a general number of helpers and treats a case where a single component of the observations is known to have lattice structure.
Most recently, a joint-compression scheme for lattice encodings called 'Generalized Compute Compress and Forward' was introduced in [10], towards coding for a multi-user additive white Gaussian noise channel where a decoder seeks to recover all users' messages and is informed by helpers. The scheme in [10] makes use of concepts from [9]: each lattice message is split into a combination of multiple components, each component from a different coset plane. The choice of which coset planes are used yields different performance. Section 3 in the present work follows along the same lines, although for a network with one user where many interferers without codebook structure are also present.
Throughout the paper, terminology and basic lattice theory results are taken from [1]. The lattice encoders studied are built from an ensemble of nested lattices, all both 'good for quantization' (Rogers-good) and 'good for coding' (Poltyrev-good). Such a construction is provided in [2]. An algorithm from [3] is also used which takes as an argument the structure of some lattice-modulo encodings and returns linear combinations of the underlying signals recoverable by a certain type of processing on such encodings. This algorithm is listed here as Stages*(·) and is shown in Appendix A.

1.3. Outline

The main theorem providing upper bounds on conditional entropies of lattice messages, along with an overview of its proof, is stated in Section 2. The theorem is slightly strengthened for an application to the problem of communicating over a many-help-one network in Section 3. A numerical analysis of the bounds is given in Section 3.2. A conclusion and a discussion of the bounds' remaining inefficiencies are given in Section 4. A table of notation is provided in Table 1. A key for the interpretation of significant named variables is given in Table 2.

2. Main Results

The main results are as follows:
Theorem 1.
For covariance $\Sigma \in \mathbb{R}^{K \times K}$, take $X^n = (X_1^n, \ldots, X_K^n)$ to be $n$ independent draws from the joint distribution $\mathcal{N}(0, \Sigma)$. Take rates $r_1, \ldots, r_K > 0$ and any $\varepsilon > 0$. If $n$ is large enough, an ensemble of nested lattices $L_c \subseteq L_1, \ldots, L_K$ (with base regions $B_c, B_1, \ldots, B_K$) from [2] (Theorem 1) can be designed so that the following holds. First fix independent dithers $W_k \sim \mathrm{unif}(B_k)$. These dithers have $\mathrm{var}\,W_k = 2^{-2 r_k}$. Also fix $Y_k := \mathrm{round}_{B_k}(X_k^n + W_k) - W_k$ and lattice modulo encodings $U_k := \mathrm{mod}_{B_c}(\mathrm{round}_{B_k}(X_k^n + W_k))$.
Now for any $\alpha_0 \in \mathbb{Z}^{K-1}$, number $n_0 \in \mathbb{N}$, and basis $\{\alpha_1, \ldots, \alpha_K\} \subset \mathbb{Z}^K$, fix variables:
$$Y_0 := Y_K + \frac{1}{n_0}\,\alpha_0^\top Y_{[K-1]}, \qquad Y_c := (Y_0 - Y_K,\, Y_1, \ldots, Y_{K-1}), \qquad \delta_0^2 := n_0^2,$$
$$\sigma_k^2 := \mathrm{var}\!\left(Y_0 \,\Big|\, \mathrm{STAGES}^*\!\left(\mathrm{var}\!\left(Y_c \mid (\alpha_j^\top Y_c)_{0 < j \le k}\right)\right) Y_c\right), \quad k \in \{0\} \cup [K],$$
$$\delta_k^2 := \mathrm{var}\!\left(\alpha_k^\top Y_c \,\Big|\, \mathrm{STAGES}^*\!\left(\mathrm{var}\!\left(Y_c \mid (\alpha_j^\top Y_c)_{0 < j < k}\right)\right) Y_c\right), \quad k \in [K].$$
Then the conditional entropy-rate is bounded:
$$\frac{1}{n} H\!\left(U_K \mid U_{[K-1]}, W\right) \le \min_{k \in \{0\} \cup [K]} \left[ r_K + \frac{1}{2}\log \sigma_k^2 + \sum_{j=0}^{k} \max\!\left\{\frac{1}{2}\log \delta_j^2,\, 0\right\} \right] + \frac{K}{2}\,\varepsilon.$$
Bounds of this form hold simultaneously for any subset and reordering of the message indices $1, \ldots, K$.
Proof for Theorem 1 is given in Appendix B. The proof is built from [3] (Theorem 1), its associated algorithm $\mathrm{STAGES}^*(\cdot)$ (listed here in Appendix A) and two lemmas which provide useful decompositions of the involved random variables.
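For concreteness, a small helper for evaluating the right-hand side of the bound may be useful. This is a sketch under stated assumptions: it presumes the variances $\sigma_k^2$ and $\delta_j^2$ have already been computed (e.g., via STAGES*), takes logarithms base 2, and the example numbers are hypothetical.

```python
import numpy as np

def theorem1_bound(r_K, sigma2, delta2, eps=0.0):
    """Evaluate Theorem 1's right-hand side (log base 2 assumed):
    min_k [ r_K + 0.5*log2(sigma2[k]) + sum_{j<=k} max(0.5*log2(delta2[j]), 0) ]
    + (K/2)*eps, where sigma2 and delta2 list sigma_k^2, delta_j^2 for k = 0..K.
    """
    K = len(sigma2) - 1
    cum = np.cumsum([max(0.5 * np.log2(d), 0.0) for d in delta2])
    candidates = [r_K + 0.5 * np.log2(s) + cum[k] for k, s in enumerate(sigma2)]
    return min(candidates) + 0.5 * K * eps

# Toy numbers (not from the paper); n0 = 1 so delta2[0] = 1.
print(theorem1_bound(r_K=2.0, sigma2=[0.5, 0.02, 0.3], delta2=[1.0, 1.5, 4.0]))
```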
Lemma 1.
Take variables as in the statement of Theorem 1. Then the ensemble of lattices described can include an 'auxiliary lattice' $\hat{L} \subseteq L_K$ with base region $\hat{B}$ and nesting ratio $\frac{1}{n}\log|\hat{B} \cap L_K| \le \frac{1}{2}\log\sigma^2 + \varepsilon$ so that
$$U_K = \mathrm{mod}_{B_c}\!\left(C + \frac{1}{n_0}\tilde{Y} + \tilde{Y}^\perp\right),$$
where $C, D$ are functions of $(U_{[K]}, W)$, and with high probability
$$\tilde{Y} = -\alpha_0^\top Y_{[K-1]} \in (D + L_c),$$
$$\tilde{Y}^\perp = E^\perp[Y_0 \mid A] \in \hat{B},$$
$$A = \mathrm{STAGES}^*(\mathrm{var}\,Y_c)\; Y_c.$$
In addition, $\sigma^2 = \max\{2^{-2 r_K}, \mathrm{var}\,\tilde{Y}^\perp\}$.
Lemma 2.
Take variables as in the statement of Theorem 1. Then the ensemble of lattices described can include 'auxiliary lattices' $\hat{L}' \subseteq L_c$, $\hat{L} \subseteq L_K$ with base regions $\hat{B}'$, $\hat{B}$ and nesting ratios $\frac{1}{n}\log|\hat{B}' \cap L_c| \le \frac{1}{2}\log\delta^2 + \varepsilon$ and $\frac{1}{n}\log|\hat{B} \cap L_K| \le \frac{1}{2}\log\sigma^2 + \varepsilon$ so that, for any linear combination $Y$ of $Y_{[K]}$, vector $\alpha \in \mathbb{Z}^K$, matrix $A' \in \mathbb{R}^{* \times K}$ and $A = A' Y_c$,
$$Y = C + \beta\tilde{Y} + \tilde{Y}^\perp,$$
where $C, D$ are functions of $(A, \mathrm{mod}_{n_0 B_c}(Y_0), U_{[K]}, W)$, $\beta$ is some scalar estimation coefficient, and with high probability
$$\tilde{Y} = E[\alpha^\top Y_c \mid A] \in (D + L_c) \cap \hat{B}',$$
$$\tilde{Y}^\perp = E^\perp[Y \mid A, \tilde{Y}] \in \hat{B}.$$
In addition, $\delta^2 = \mathrm{var}\,\tilde{Y}$ and $\sigma^2 = \max\{2^{-2 r_K}, \mathrm{var}\,\tilde{Y}^\perp\}$.
Proofs for Lemmas 1, 2 are given in Appendix B. These lemmas do not strictly require that the sources be multivariate normal. This technical generalization is relevant in the application to the communication strategy in Section 3. Broadly, the proof of Theorem 1 goes as follows.
  • Choose some $\alpha_0 \in \mathbb{Z}^{K-1}$, $n_0 \in \mathbb{N}$. Apply Lemma 1 to $U_K$. Call $\tilde{Y}^\perp$ a 'residual.'
  • Choose some $\alpha \in \mathbb{Z}^K$. Apply Lemma 2 to the residual to break it up into the sum of a lattice part due to $\alpha^\top Y_{[K-1]}$ and a new residual, whatever is left over.
  • Repeat the previous step until the residual vanishes (up to $K - 1$ times). Notice that this process has given several different ways of writing $U_K$; by stopping after any number of steps, $U_K$ is the modulo sum of several lattice components and a residual.
  • Design the lattice ensemble for the encoders such that the log-volume contributed to the support of $U_K$ by each component can be estimated. The discrete parts will each contribute log-volume $\frac{1}{2}\log\delta^2$ and residuals log-volume $r_K + \frac{1}{2}\log\sigma^2$.
  • Recognize that the entropy of $U_K$ is no greater than the log-volume of its support. Choose the lowest support log-volume estimate of those just found.
Notice that each lemma application involves the choice of some integer parameters. The choices which yield the strongest bound are unknown. Possible schemes for these decisions are the subroutines Alpha0(·) and Alpha(·), listed in Appendix A. As implemented, Alpha0(·) chooses $n_0 = 1$ and the integer linear combination $\alpha_0$ which leaves the least residual. As implemented, Alpha(·) chooses the integer linear combination $\alpha$ for which $\alpha^\top Y_{[K-1]}$ is closest to being recoverable from current knowledge at each lemma application; that is, it produces the combination for which the entropy $\frac{1}{2}\log\delta^2$ of the unknown part of $\alpha^\top Y_{[K-1]}$ is minimized. This may be a suboptimal choice since, while such combinations are close to recoverable, they may not be very pertinent to a description of $U_K$. Nonetheless, it is still a good enough rule to produce nontrivial entropy bounds, as seen in Section 3.2.
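As a rough sketch of the kind of search Alpha0(·) performs, the snippet below scans a small integer box exhaustively (standing in for the ICQM solver of [18]; the search radius and example covariance are hypothetical) for the combination leaving the least residual variance:

```python
import itertools
import numpy as np

def cvar(M1, M2, Sigma):
    """Conditional covariance of M1 @ Z given M2 @ Z, Z ~ N(0, Sigma); the CVAR of Appendix A."""
    S12 = M1 @ Sigma @ M2.T
    return M1 @ Sigma @ M1.T - S12 @ np.linalg.pinv(M2 @ Sigma @ M2.T) @ S12.T

def alpha0_bruteforce(Sigma, A, radius=3):
    """Exhaustive stand-in for ALPHA0: over integer alpha0 with entries in
    [-radius, radius], minimize var(Y_K - alpha0^T Y_[K-1] | A @ Y)."""
    K = Sigma.shape[0]
    best, best_var = None, np.inf
    for a in itertools.product(range(-radius, radius + 1), repeat=K - 1):
        m = np.concatenate([-np.asarray(a, float), [1.0]])[None, :]  # Y_K - a^T Y_[K-1]
        v = cvar(m, A, Sigma)[0, 0]
        if v < best_var:
            best, best_var = np.asarray(a, int), v
    return best, best_var

# Toy usage: three correlated quantized sources, nothing known yet (A trivial).
Sigma = np.array([[2.0, 1.0, 1.5], [1.0, 2.0, 1.4], [1.5, 1.4, 2.0]])
print(alpha0_bruteforce(Sigma, A=np.zeros((1, 3))))
```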

3. Lattice-Based Strategy for Communication via Decentralized Processing

Consider a scenario where a decoder seeks to decode a message from a single-antenna broadcaster over an additive white Gaussian noise (AWGN) channel. The decoder does not observe a signal directly but instead is provided information by a collection of distributed observers ('helpers') which forward it digital information, each observer-to-decoder link supporting a different communications rate. This network is depicted in Figure 2, and a block diagram is shown in Figure 3. In short, this is the problem of a single-antenna transmitter communicating to a decoder informed out-of-band by a network of helpers in the presence of additive white Gaussian noise and interference.
Note that this problem is different from the problem of distributed source coding of a linear function [3,7,8,9,11]. In contrast to the source coding problem, the signal being preserved by the many-help-one network in the present case has a codebook structure. This structure can be exploited to improve the source-to-decoder communications rate. This problem has been studied [12,13], but the best achievable rate is still unknown. In this section, we present a strategy that takes advantage of this codebook structure.
The core of the strategy is to apply a slight modification of Theorem 1 to the network. The transmitter modulates its communications message using a nested lattice codebook such as one in [4]. The helpers employ lattice encoders such as those from Theorem 1, and then perform Slepian–Wolf distributed lossless compression [14] (Theorem 10.3) on their encodings to further reduce their rate. Because the codeword appears as a component of all the helpers' observations, the bound on the encodings' joint entropy obtained from Theorem 1 can be strengthened, allowing one to use a more aggressive compression stage.

3.1. Description of the Communication Scheme

It is well known that a nested lattice codebook with dither achieves Shannon information capacity in a point-to-point AWGN channel with a power-constrained transmitter [4]. One interesting aspect of the point-to-point communications scheme described in [4] is that decoding of the noisy signal is done in modulo space. We will see in this section how lattice encodings like those in Theorem 1 can be used to provide such a decoder enough information to recover a communications message.
Without loss of generality, assume that the transmitter is limited to average transmission power 1. The scheme's codebook is designed from nested lattices $L_{c,\mathrm{msg}} \subseteq L_{f,\mathrm{msg}}$ with base regions $B_{f,\mathrm{msg}}, B_{c,\mathrm{msg}}$. $L_{f,\mathrm{msg}}$ is chosen to be good for coding and $L_{c,\mathrm{msg}}$ good for quantization. The messaging rate of this codebook is determined by the nesting ratio of $L_{c,\mathrm{msg}}$ in $L_{f,\mathrm{msg}}$:
$$R_{\mathrm{msg}} := \frac{1}{n}\log\left|L_{f,\mathrm{msg}} \cap B_{c,\mathrm{msg}}\right|.$$
Lattices can be designed with nesting ratios such that any rate above zero can be formed. Taking a message $M \in L_{f,\mathrm{msg}} \cap B_{c,\mathrm{msg}}$ and choosing a dither $W_{\mathrm{msg}} \sim \mathrm{unif}(-B_{c,\mathrm{msg}})$ of which the decoder is informed, the codeword associated with $M$ takes the form:
$$X_{\mathrm{msg}}^n(M) := \frac{\mathrm{mod}_{L_{c,\mathrm{msg}}}(M + W_{\mathrm{msg}})}{\sqrt{\mathrm{var}\,W_{\mathrm{msg}}}} \in \frac{B_{L_{c,\mathrm{msg}}}}{\sqrt{\mathrm{var}\,W_{\mathrm{msg}}}} \subset \mathbb{R}^n.$$
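A minimal scalar sketch of this modulation (assuming 1-D stand-in lattices $2^{-R}\mathbb{Z} \subset \mathbb{Z}$ rather than a capacity-achieving high-dimensional pair):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Scalar stand-ins: fine codebook lattice 2^{-R} Z nested in coarse lattice Z,
# giving nesting ratio (= messaging rate) R bits per sample.
R = 2.0
step = 2.0 ** (-R)

M = step * rng.integers(0, int(2 ** R), n)      # message point in L_f, one coset per symbol
W = rng.uniform(-0.5, 0.5, n)                   # dither over the coarse base region
mod_c = np.mod(M + W + 0.5, 1.0) - 0.5          # mod_{L_c,msg}(M + W_msg)
x_msg = mod_c / np.sqrt(np.var(W))              # normalize to unit transmit power
print(f"codeword power ~ {np.var(x_msg):.3f}")  # ~1: mod(M+W) is uniform (crypto lemma)
```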
We now describe observations of such a signal by helpers in the presence of AWGN interferers. For covariance $\Sigma_{\mathrm{noise}} \in \mathbb{R}^{K \times K}$, take
$$X_{\mathrm{noise}}^n = (X_{\mathrm{noise},1}^n, \ldots, X_{\mathrm{noise},K}^n) \in (\mathbb{R}^n)^K$$
to be $n$ independent draws from the joint distribution $\mathcal{N}(0, \Sigma_{\mathrm{noise}})$. In addition, take a random vector $X_{\mathrm{msg}}^n$ as described at the beginning of Section 3.1 and a vector $c_{\mathrm{msg}} \in \mathbb{R}^K$, and define $\Sigma_{\mathrm{msg}} := c_{\mathrm{msg}} c_{\mathrm{msg}}^\top$. Now, the $k$-th helper observes the vector:
$$X_k^n = [c_{\mathrm{msg}}]_k X_{\mathrm{msg}}^n + X_{\mathrm{noise},k}^n \in \mathbb{R}^n.$$
Form an observations vector:
$$X^n := c_{\mathrm{msg}} (X_{\mathrm{msg}}^n)^\top + X_{\mathrm{noise}}^n \in (\mathbb{R}^n)^K,$$
and finally form a cumulative time-averaged covariance matrix as
$$\Sigma := \mathrm{var}\,X^n = c_{\mathrm{msg}} c_{\mathrm{msg}}^\top + \Sigma_{\mathrm{noise}} \in \mathbb{R}^{K \times K}.$$
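This observation model is easy to simulate; the sketch below (with hypothetical gains $c_{\mathrm{msg}}$ and an arbitrary noise covariance) checks that the empirical covariance matches $\Sigma = c_{\mathrm{msg}} c_{\mathrm{msg}}^\top + \Sigma_{\mathrm{noise}}$:

```python
import numpy as np

rng = np.random.default_rng(2)
K, n = 4, 5_000

c_msg = np.array([1.0, 0.8, 0.6, 0.4])        # hypothetical per-helper codeword gains
G = rng.standard_normal((K, K))
Sigma_noise = G @ G.T + np.eye(K)             # arbitrary interference + thermal noise

x_msg = rng.standard_normal(n)                # unit-power stand-in for the codeword
X_noise = np.linalg.cholesky(Sigma_noise) @ rng.standard_normal((K, n))
X = np.outer(c_msg, x_msg) + X_noise          # row k: [c_msg]_k X_msg + X_noise,k

Sigma = np.outer(c_msg, c_msg) + Sigma_noise  # Σ = c_msg c_msg^T + Σ_noise
print(np.abs(np.cov(X) - Sigma).max())        # empirical covariance ≈ Σ
```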
If the helpers are informed of the message dither $W_{\mathrm{msg}}$, then they are informed of the codebook for $X_{\mathrm{msg}}$ and its lattice structure. Using lattice encoders such as those described in Theorem 1, this codebook information can be used to strengthen the upper bound on conditional entropies between the messages.
Theorem 2.
In the context of the channel description given in Section 3.1, entropy bounds identical to those from Theorem 1 hold for its described observer encodings. The bounds also hold re-defining
$$Y_0 := X_{\mathrm{msg}},$$
with the rest of the variables in the theorem defined as stated. The bounds also hold instead re-defining
$$Y_c := (Y_0 - Y_K,\, Y_1, \ldots, Y_{K-1},\, X_{\mathrm{src}}),$$
with vectors $\{\alpha_1, \ldots, \alpha_{K+1}\} \subset \mathbb{Z}^{K+1}$ a basis where all vectors but one, $\alpha_s$ for some $s \in [K+1]$, have 0 as their $(K+1)$-th component, and $\alpha_s = [0, 0, \ldots, 0, 1]^\top$, taking
$$a_R^{(\mathrm{msg})} \in \mathrm{image}\!\left(\mathrm{STAGES}^*\!\left(\mathrm{var}\!\left([Y_c]_{[K]} \mid (\alpha_j^\top Y_c)_{0 < j < s}\right)\right)\right), \qquad a_Z^{(\mathrm{msg})} \in \mathbb{Z}^K,$$
$$\lambda^{(\mathrm{msg})} := \mathrm{cov}\!\left(X_{\mathrm{msg}}^n,\, (a_R^{(\mathrm{msg})} + a_Z^{(\mathrm{msg})})^\top [Y_c]_{[K]}\right),$$
$$Y^{(\mathrm{msg})} := E^\perp\!\left[(a_R^{(\mathrm{msg})} + a_Z^{(\mathrm{msg})})^\top [Y_c]_{[K]} \,\Big|\, X_{\mathrm{msg}}^n\right],$$
$$\delta_{(\mathrm{msg})}^2 := \left(\lambda^{(\mathrm{msg})}\gamma_n^{-1} - 1\right)^2 + \mathrm{var}\,Y^{(\mathrm{msg})}, \qquad \delta_s^2 := \max\left\{1,\, \delta_{(\mathrm{msg})}^2\, 2^{-2 r_{\mathrm{msg}}} + \varepsilon\right\},$$
and taking the rest of the variables in the theorem as stated, over the range $k \in [K+1]$.
A sketch for Theorem 2 is provided in Appendix C. The theorem's statement can be broadly understood in terms of the proof of Theorem 1. After a number of steps $s$ in the support analysis for Theorem 1, the codebook component $X_{\mathrm{msg}}^n$ can be partially decoded, yielding a tighter estimate of that component's contribution to the support of $U_K$. The variables $\lambda^{(\mathrm{msg})}, a_R^{(\mathrm{msg})}, a_Z^{(\mathrm{msg})}$ are parameters for this partial decoding. Lattice modulo messages such as those described in Theorem 2 can be recombined in a useful way:
Lemma 3.
For $\varepsilon > 0$ and vectors $a_Z \in \mathbb{Z}^K$, $a_R \in \mathrm{image}(\mathrm{STAGES}^*(\Sigma)) \subseteq \mathbb{R}^K$, the lattice modulo encodings $U_{[K]}$ from Theorem 2 can be processed into:
$$U_{\mathrm{proc}} := \mathrm{mod}_{L_{c,\mathrm{msg}}}\!\left(\lambda X_{\mathrm{msg}} + Y_{\mathrm{noise}}\right), \qquad (1)$$
where $\lambda \in \mathbb{R}$ is the constant
$$\lambda := \mathrm{cov}\!\left(X_{\mathrm{msg}}^n,\, (a_Z + a_R)^\top Y_{[K]}\right)$$
and the noise term $Y_{\mathrm{noise}}$ has the following properties:
  • $\sigma_{\mathrm{noise}}^2 := \mathrm{var}\,Y_{\mathrm{noise}} = \mathrm{var}\!\left((a_Z + a_R)^\top Y_{[K]} \mid X_{\mathrm{msg}}^n\right)$,
  • $Y_{\mathrm{noise}} \perp (X_{\mathrm{msg}}, M, W_{\mathrm{msg}})$,
  • $Y_{\mathrm{noise}}$ is, with high probability, in the base cell of any lattice good for coding semi norm-ergodic noise up to power $\sigma_{\mathrm{noise}}^2 + \varepsilon$.
Lemma 3 is demonstrated in Appendix D. Notice that Equation (1) is precisely the form of signal processed by the communications decoder described in [4]. The following result summarizes the performance of this communications strategy.
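The recombination in Lemma 3 leans on the modulo subgroup property. A scalar check of that one step (a sketch using 1-D lattices $4\mathbb{Z} \subset \mathbb{Z}$ as stand-ins for $L_c \subseteq L_{c,\mathrm{msg}}$):

```python
import numpy as np

def mod_centered(x, s):
    """Centered modulo to the 1-D lattice s*Z (base region [-s/2, s/2))."""
    return np.mod(x + s / 2, s) - s / 2

x = np.random.default_rng(3).standard_normal(1000) * 10

# Subgroup step behind Lemma 3 / Appendix D: with L_c = 4Z a sublattice of
# L_{c,msg} = 1Z, reducing mod L_c first leaves the mod-L_{c,msg} value unchanged,
# since mod_{L_c} only shifts its argument by an element of L_c ⊆ L_{c,msg}.
lhs = mod_centered(mod_centered(x, 4.0), 1.0)
rhs = mod_centered(x, 1.0)
print(np.allclose(lhs, rhs))  # True
```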
Corollary 1.
Fix a codebook rate $r_{\mathrm{msg}} > 0$. As long as the helper-to-decoder messaging rates $R_1, \ldots, R_K > 0$ satisfy all of the criteria:
$$\forall S \subseteq [K], \quad \sum_{k \in S} R_k > \tilde{H}(S \mid [K] \setminus S) + \varepsilon, \qquad (2)$$
with each $\tilde{H}(S \mid [K] \setminus S)$ being any entropy-rate bound obtained from Theorem 2, then the following communications rate from source to decoder is achievable, taking $a_Z, a_R, \lambda, \sigma_{\mathrm{noise}}^2$ from their definitions in Lemma 3:
$$R_{\mathrm{msg}} < \min\left\{ r_{\mathrm{msg}},\ \sup_{a_Z, a_R}\ \max_{\gamma^2 \in (0,1]} \frac{1}{2}\log\frac{\gamma^2}{(\lambda - \gamma)^2 + \sigma_{\mathrm{noise}}^2} \right\}. \qquad (3)$$
Proof for Corollary 1 is given in Appendix E, and evaluation of the achieved communications rates for certain lattice code designs is shown in Section 3.2.
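The inner optimization in Equation (3) is one-dimensional and easy to evaluate numerically. A sketch (grid search over $\gamma^2$ for one fixed recoverable combination; the supremum over $a_Z, a_R$ is omitted and the example numbers are hypothetical):

```python
import numpy as np

def corollary1_rate(lam, sigma2_noise, r_msg, grid=100_000):
    """Grid-evaluate Corollary 1's rate for a fixed combination (lambda,
    sigma^2_noise from Lemma 3): min{ r_msg,
    max_{gamma in (0,1]} 0.5*log2( gamma^2 / ((lam - gamma)^2 + sigma2_noise) ) }."""
    gamma = np.linspace(1e-4, 1.0, grid)
    rates = 0.5 * np.log2(gamma**2 / ((lam - gamma) ** 2 + sigma2_noise))
    return min(r_msg, rates.max())

# Hypothetical numbers: strong combination (lam near 1), light residual noise.
print(f"{corollary1_rate(lam=0.9, sigma2_noise=0.05, r_msg=3.0):.3f} bits/sample")
```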

3.2. Numerical Results

The achievable rate given in Corollary 1 depends on the design of the lattice encoding scheme at the helpers. Identification of the best such lattice encoders for such a system is closely tied to the receivers' covariance structure [3]. For this reason, and for the purpose of evaluating the effect of the joint-compression stage, we restrict our attention to a particular channel structure and lattice encoder design.
The line-of-sight configuration shown in Figure 4 is considered. Labeling the interferer signals in Figure 4 from top to bottom as $(W_{I1}, W_{I2}, W_{I3})$, indexing the helpers from top to bottom with angles $(\phi_1, \ldots, \phi_4) = (1/2,\, 5/6,\, 7/6,\, 3/2)$ and the interferers with angles $(\psi_1, \psi_2, \psi_3) = (2/3,\, 1,\, 4/3)$, it yields helper observations with the following covariance structure:
$$X_{\mathrm{raw},k} = \frac{\sqrt{P_S}}{\left\|1 + \frac{2}{3}e^{i\pi\phi_k}\right\|}\, X_{\mathrm{msg}} + W_k + \sum_{j=1}^{3} \frac{\sqrt{P_I}}{\left\|\frac{2}{3}\left(e^{i\pi\phi_k} - e^{i\pi\psi_j}\right)\right\|}\, W_{Ij}, \qquad W_k \sim \mathcal{N}(0, 1)\ \mathrm{i.i.d.}, \qquad (4)$$
where $P_S, P_I > 0$ are the signal and interferer powers, respectively. The choice of this channel is arbitrary but provides an instance where the decoder would not be able to recover the signal of interest from a direct observation, without the provided helper messages.
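A sketch of the geometry behind Equation (4) (assuming, as the line-of-sight model suggests, amplitude attenuation inversely proportional to distance; the power values are hypothetical):

```python
import numpy as np

# Helper and interferer angles (×π) on the radius-2/3 semicircle about (1, 0),
# matching Figure 4; the transmitter sits at the origin.
phi = np.array([1/2, 5/6, 7/6, 3/2])   # helpers
psi = np.array([2/3, 1.0, 4/3])        # interferers

def gains(P_S, P_I):
    """Line-of-sight amplitude gains: attenuation = 1 / distance."""
    h = np.exp(1j * np.pi * phi)       # unit offsets of helpers from (1, 0)
    i = np.exp(1j * np.pi * psi)       # unit offsets of interferers from (1, 0)
    g_sig = np.sqrt(P_S) / np.abs(1 + (2/3) * h)              # transmitter→helper
    g_int = np.sqrt(P_I) / np.abs((2/3) * (h[:, None] - i))   # interferer→helper
    return g_sig, g_int                # shapes (4,), (4, 3)

g_sig, g_int = gains(P_S=10.0, P_I=100.0)
Sigma_noise = g_int @ g_int.T + np.eye(4)   # interference + unit thermal noise
Sigma = np.outer(g_sig, g_sig) + Sigma_noise
print(np.round(Sigma, 2))
```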

3.2.1. Communications Schemes

First, we describe a class of lattice encoders the four helpers could employ:
  • Fix some $c \in (0, 3)$. If helper $k \in [4]$ in the channel from Figure 4 observes $X_{\mathrm{raw},k}^n$, then it encodes a normalized version of the signal:
$$X_k^n := \frac{c}{\sqrt{\mathrm{var}\,X_{\mathrm{raw},k}^n}}\, X_{\mathrm{raw},k}^n.$$
  • Fix equal lattice encoding rates per helper, $r = r_1 = r_2 = r_3 = r_4$, and take lattice encoders as described in Theorem 1. Note that these rates may be distinct from the helper-to-base rates $R_1, \ldots, R_4$ if post-processing of the encodings is involved.
Communications schemes involving lattice encoders of this form are compared in Figure 5 over an ensemble of choices for the lattice encoder rate $r$ and scale $c \in (0, 3)$. Achieved transmitter-to-decoder communication rate versus sum-rate from helpers to decoder is plotted. The following quantities are plotted:
  • Upper Bound: An upper bound on the achievable transmitter-to-decoder communications rate, corresponding to helpers which forward with infinite rate. This bound is given by the formula $I(X_{\mathrm{msg}}; (X_{\mathrm{raw},k})_{k \in [4]})$.
  • Corollary 1: The achievable communications rate from Corollary 1, where each helper computes the lattice encoding described above, then employs a joint-compression stage to reduce its messaging rate. The sum-helpers-to-decoder rate for this scheme is given by Equation (2), taking $S = [4]$. The achieved messaging rate is given by the right-hand side of Equation (3).
  • Uncompressed Lattice: The achievable communications rate from Corollary 1, with each helper forwarding to the decoder its entire lattice encoding without joint compression. The sum-helpers-to-decoder rate for this scheme is $4r$, since each helper forwards to the base at rate $R_k = r$. The achieved messaging rate is given by the right-hand side of Equation (3).
  • Quantize & Forward: An achievable communications rate where the helper-to-decoder rates $R_k$, $k \in [4]$, are chosen so that $R_1 + R_2 + R_3 + R_4 = R_{\mathrm{sum}}$ and each helper forwards a rate-distortion-optimal quantization of its observation to the decoder. The decoder processes these quantizations into an estimate of $X_{\mathrm{msg}}$ and decodes. This is discussed in more detail in [13]. The sum-helpers-to-decoder rate for this scheme is $R_{\mathrm{sum}}$. The achieved messaging rate is $I(X_{\mathrm{msg}}; (X_{\mathrm{raw},k} + Z_k)_{k \in [4]})$, where $Z_k \sim \mathcal{N}(0, \mathrm{var}(X_{\mathrm{raw},k}) \cdot 2^{-2 R_k})$; a sketch of this benchmark computation follows this list.
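A sketch of the Quantize & Forward benchmark computation (a Gaussian mutual-information evaluation; the gains and rates below are hypothetical stand-ins for those produced by the channel model above):

```python
import numpy as np

def qf_rate(g_sig, Sigma_noise, R):
    """Quantize & Forward benchmark: I(X_msg; (X_raw,k + Z_k)_k) in bits/sample,
    with Z_k ~ N(0, var(X_raw,k) 2^{-2 R_k}) modeling rate-R_k quantization noise."""
    Sigma_obs = np.outer(g_sig, g_sig) + Sigma_noise   # covariance of raw observations
    D = np.diag(np.diag(Sigma_obs) * 2.0 ** (-2 * np.asarray(R, float)))
    _, num = np.linalg.slogdet(Sigma_obs + D)          # h(obs + Z), up to constants
    _, den = np.linalg.slogdet(Sigma_noise + D)        # h(obs + Z | X_msg), same constants
    return 0.5 * (num - den) / np.log(2)

# Hypothetical gains for four helpers; equal helper rates R_k = 2.
print(qf_rate(np.array([1.0, 0.8, 0.6, 0.4]), np.eye(4) * 2.0, [2, 2, 2, 2]))
```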
Performance of these strategies for different broadcaster powers is shown in Figure 5. It is seen that, although the lattice encoder designs are poor, the joint–compression stage partially compensates for this, and with joint compression the scheme outperforms the plain ‘Quantize & Forward’ scheme. Notice that none of the strategies produce convex rate regions, indicating that time-sharing can be used to achieve better rates in some regimes.
In all figures shown, the gap between achieved rates using the joint-compression bounds from Theorem 1 and from Theorem 2 (the latter being an improvement) was often nonzero but too small to noticeably change the graphs in Figure 5. For this reason, only achievable rates for the strategy from Corollary 1 are plotted. The gain from involving codebook knowledge in lattice encoding compression is either insignificant for the tested scenario, or the choices made in computing the upper bounds are too poor to reveal its performance gains. The sub-optimality of the algorithm implementations used here is summarized and discussed in Section 4.

4. Conclusions

A class of upper bounds on the joint entropy of lattice-modulo encodings of correlated Gaussian signals was presented in Theorem 1. The proof of these bounds involves reducing the problem to the entropy of one lattice message, say $U_K$, conditioned on the rest, $U_{[K-1]}$. The upper bound for this reduced case involves an iterative construction where in each step a suitable integer vector is chosen. The choice of vectors in these steps determines the order in which the observed lattice-modulo components are integrated into an estimate of $U_K$'s support. Different choices of vectors at each step yield different bounds, and the strongest sequence of choices is unknown. For the numerical results in Section 3.2, a certain suboptimal rule was used.
The upper bounds were applied to the problem of communicating through a many-help-one network, and these bounds were evaluated for a rendition of the problem using lattice codes of simple structure. The bounds in Theorem 1 can be strengthened in this scenario by integrating codebook knowledge. This strengthening is described in Theorem 2.
In spite of the suboptimal lattice encoder designs analyzed, it was seen in Section 3.2 that jointly-compressed lattice encoders are able to significantly outperform more basic schemes in the presence of heavy interference, even when the joint compression stage uses the weaker entropy bounds from Theorem 1. In the numerical experiments tried, the strengthening in Theorem 2 was not seen to significantly improve compression. Whether this is typically true or just an artifact of poor design of the joint-compression stage is unknown. In either case, the simpler joint-compression strategy without codebook knowledge was seen to improve performance.
The most immediate next step for the presented results is characterization of the search problems posed by Theorems 1 and 2. Although not discussed, corner-points of the joint compression described here are implementable using further lattice processing on the encodings $U_1, \ldots, U_K$ and their dithers $W$. Such a process might mimic the compression procedure described in [10]. Tightness arguments from that work may also apply to the present, less structured channel.
Finally, according to the transmission method in [10], the achievable rate in Corollary 1 may be improvable by breaking the transmitter’s message M up into a sum of multiple components, each from a finer lattice. Joint–compression for such a transmission could integrate codebook information from each message component separately, allowing for more degrees of freedom in the compression stage’s design, possibly improving the achievable rate. This is an extension of the argument in Appendix D. These improvements are out of this paper’s scope but provide meaningful paths forward.

Author Contributions

C.C. performed formal analysis. The scheme was conceptualized by C.C. and D.W.B. Work was supervised by D.W.B.

Funding

This research was funded by the Raytheon project “Simultaneous Signaling, Sensing, and Radar Operations under Constrained Hardware”.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Subroutines

Here, we provide a list of subroutines involved in a statement of the results:
  • Stages*(·) is a slight modification of an algorithm from [3], reproduced here in Algorithm 1. The original algorithm characterizes the integral combinations $A Y$ which are recoverable with high probability from lattice messages $U$ and dithers $W$, excluding those with zero power. The exclusion is due to the algorithm's use of SLVC(·), defined next. Such zero-power combinations never arose in the context of [3] (there the algorithm's argument is always full-rank), although that paper provides justification for them being recoverable. Full rank is not guaranteed in the present context, so the version here includes these zero-power subspaces via a call to LatticeKernel(·) before returning.
  • SLVC(B), 'Shortest Lattice Vector Coordinates', returns the nonzero integer vector $a$ which minimizes the norm of $Ba$ subject to $Ba \neq 0$, or the zero vector if no such vector exists. SLVC(·) can be implemented using a lattice enumeration algorithm like the one in [15] together with the LLL algorithm to convert a set of spanning lattice vectors into a basis [16].
  • LatticeKernel(B, A), for $B \in \mathbb{R}^{K \times d}$, $A \in \mathbb{Z}^{d \times a}$, returns the integer matrix $A' \in \mathbb{Z}^{d \times b}$ whose columns span the collection of all $a \in \mathbb{Z}^d$ where $Ba = 0$ while $A^\top a = 0_a$. In other words, it returns a basis for the integer lattice in $\ker B$ whose components are orthogonal to the lattice $A$. This can be implemented using an algorithm for finding 'simultaneous integer relations' as described in [17].
  • ICQM(M, v, c) is an 'Integer Convex Quadratic Minimizer'. It provides a solution for the NP-hard problem: 'Minimize $x^\top M x + 2 v^\top x + c$ over $x$ with integer components.' Although finding the optimal solution is exponentially difficult in the input size, algorithms are tractable for low dimension [18] (Algorithm 5, Figure 2).
  • CVarComponents($\Sigma_Q$, $A_0$) returns the variables $\{M, v, c\}$ involved in computing
$$\mathrm{var}\!\left(Y_K - \alpha^\top Y_{[K-1]} \mid A_0 Y_{[K-1]}\right)$$
    when $Y = (Y_1, \ldots, Y_K)$ has covariance $\Sigma_Q$. Write some matrices in block form:
$$\Sigma_Q = \begin{bmatrix} M_1 & v_1 \\ v_1^\top & \varsigma_1^2 \end{bmatrix}, \qquad \Sigma_Q A_0^\top \left(A_0 \Sigma_Q A_0^\top\right)^{-1} A_0 \Sigma_Q = \begin{bmatrix} M_2 & v_2 \\ v_2^\top & \varsigma_2^2 \end{bmatrix}.$$
    Then, taking $M = M_1 - M_2$, $v = -(v_1 - v_2)$, $c = \varsigma_1^2 - \varsigma_2^2$, one can check that:
$$\mathrm{var}\!\left(Y_K - \alpha^\top Y_{[K-1]} \mid A_0 Y_{[K-1]}\right) = \alpha^\top M \alpha + 2 v^\top \alpha + c.$$
  • CVar($M_1 \mid M_2$; $\Sigma$) computes the conditional covariance matrix of $M_1 Z$ conditioned on $M_2 Z$ for $Z \sim \mathcal{N}(0, \Sigma)$. This is given by the formula:
$$\mathrm{CVAR}(M_1 \mid M_2; \Sigma) := M_1 \Sigma M_1^\top - M_1 \Sigma M_2^\top\, \mathrm{pinv}\!\left(M_2 \Sigma M_2^\top\right) M_2 \Sigma M_1^\top.$$
  • Alpha0($\Sigma_Q$, $A$) in Algorithm 2 implements a strategy for choosing $\alpha_0$ in Theorems 1, 2.
  • Alpha($\Sigma$, $A$) in Algorithm 3 implements a strategy for choosing $\alpha_s$ in Theorems 1, 2.
Algorithm 1 Compute recoverable linear combinations $A \in \mathbb{R}^{K \times m}$ from modulos of lattice encodings with covariance $\Sigma_Q \in \mathbb{R}^{K \times K}$.
  • function Stages*($\Sigma_Q$)
  •     $A \leftarrow [\,]$, $a \leftarrow \mathrm{SLVC}(\Sigma_Q^{1/2})$, $R \leftarrow I_K$
  •     while $0 < (R a)^\top \Sigma_Q (R a) < 1$ do
  •         $A \leftarrow [A, a]$
  •         $R \leftarrow I_K - A\,\mathrm{pinv}(A^\top \Sigma_Q A)\, A^\top \Sigma_Q$
  •         $a \leftarrow \mathrm{SLVC}(\Sigma_Q^{1/2} R)$
  •     end while
  •     $A \leftarrow [A, \mathrm{LATTICEKERNEL}(\Sigma_Q^{1/2} R, A)]$
  •     return $A$
  • end function
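A toy Python rendering of Algorithm 1 may help fix ideas. It is a sketch only: SLVC(·) is replaced by exhaustive search over a small integer box (rather than the enumeration/LLL machinery of [15,16]), and the final LatticeKernel(·) step is omitted:

```python
import itertools
import numpy as np

def sym_sqrt(S):
    """Symmetric matrix square root via eigendecomposition."""
    w, V = np.linalg.eigh(S)
    return V @ np.diag(np.sqrt(np.clip(w, 0, None))) @ V.T

def slvc(B, radius=3):
    """Brute-force SLVC: nonzero-image integer a (entries in [-radius, radius])
    minimizing ||B a||; returns the zero vector if every B a = 0."""
    K = B.shape[1]
    best, best_norm = np.zeros(K, int), np.inf
    for a in itertools.product(range(-radius, radius + 1), repeat=K):
        nv = np.linalg.norm(B @ np.asarray(a, float))
        if 1e-9 < nv < best_norm:
            best, best_norm = np.asarray(a, int), nv
    return best

def stages_sketch(Sigma_Q, radius=3):
    """Sketch of STAGES*: greedily collect recoverable integer combinations."""
    K = Sigma_Q.shape[0]
    S = sym_sqrt(Sigma_Q)
    A, R = np.zeros((K, 0)), np.eye(K)
    a = slvc(S @ R, radius)
    while 0 < (R @ a) @ Sigma_Q @ (R @ a) < 1:
        A = np.column_stack([A, a])
        R = np.eye(K) - A @ np.linalg.pinv(A.T @ Sigma_Q @ A) @ A.T @ Sigma_Q
        a = slvc(S @ R, radius)
    return A

# Toy usage: two strongly correlated low-power quantizations.
print(stages_sketch(np.array([[1.0, 0.9], [0.9, 1.0]]) * 0.4))
```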
Algorithm 2 Strategy for choosing $\alpha_0$ for Theorems 1, 2.
  • function Alpha0($\Sigma$)      ▹ Find $\alpha_0$ which minimizes $\mathrm{var}(Y_K - \alpha_0^\top Y_{[K-1]} \mid A Y_{[K-1]})$ for $\Sigma = \mathrm{var}\,Y_{[K-1]}$, $A = \mathrm{STAGES}^*(\Sigma)$.
  •     $A \leftarrow \mathrm{STAGES}^*(\Sigma)$
  •     $\{M, v, c\} \leftarrow \mathrm{CVARCOMPONENTS}(\Sigma, A)$
  •     $\{\alpha, \sigma^2\} \leftarrow \mathrm{ICQM}(M, v, c)$
  •     $n_0 \leftarrow 1$
  •     return $\{n_0, \alpha, \sigma^2\}$
  • end function
Algorithm 3 Strategy for picking $\alpha_s$ for Theorems 1, 2.
  • function Alpha($\Sigma$, $A$)        ▹ Entropy-greedy implementation: choose $\alpha$ where the unknown part of $\alpha^\top Y_{[K-1]}$ has the least entropy among combinations with an unknown part. Expects $\Sigma = \mathrm{var}\,Y_c$, $A = \mathrm{STAGES}^*(\mathrm{var}(Y_c \mid [\alpha_0, \ldots, \alpha_{s-1}]^\top Y_c))$.
  •     if $\mathrm{rank}\,A = K$ then
  •         $\alpha \leftarrow 0$
  •     else
  •         $\Sigma_{\mathrm{reduced}} \leftarrow \mathrm{CVAR}(I_K \mid A; \Sigma)$, $\alpha \leftarrow \mathrm{SLVC}(\Sigma_{\mathrm{reduced}})$
  •     end if
  •     return $\alpha$
  • end function

Appendix B. Proof of Lemmas 1, 2, Theorem 1

Proof. 
(Lemma 1)
Take $D := \mathrm{mod}_{B_c}(\alpha_0^\top(U_{[K-1]} - W_{[K-1]}))$. Then, by modulo's distributive property, $D = \mathrm{mod}_{B_c}(\alpha_0^\top Y_{[K-1]})$, so that $\tilde{Y} = -\alpha_0^\top Y_{[K-1]} \in (D + L_c)$. Compute:
$$\frac{1}{n_0}\,\mathrm{mod}_{n_0 B_c}\!\left(\tilde{Y}\right) = \frac{1}{n_0}\,\mathrm{mod}_{n_0 B_c}\!\left(-\alpha_0^\top Y_{[K-1]}\right) = \mathrm{mod}_{B_c}\!\left(-\frac{1}{n_0}\,\alpha_0^\top Y_{[K-1]}\right).$$
Now:
$$U_K = \mathrm{mod}_{B_c}\!\left(W_K + Y_K + \frac{1}{n_0}\alpha_0^\top Y_{[K-1]} - \frac{1}{n_0}\alpha_0^\top Y_{[K-1]}\right) = \mathrm{mod}_{B_c}\!\left(W_K + Y_0 - \frac{1}{n_0}\alpha_0^\top Y_{[K-1]}\right)$$
$$= \mathrm{mod}_{B_c}\!\left(W_K + E[Y_0 \mid A] + E^\perp[Y_0 \mid A] - \frac{1}{n_0}\alpha_0^\top Y_{[K-1]}\right) = \mathrm{mod}_{B_c}\!\left(W_K + E[Y_0 \mid A] + \tilde{Y}^\perp + \frac{1}{n_0}\tilde{Y}\right).$$
By [3] (Theorem 1), $A$ can be recovered by processing $(U_{[K-1]}, W, \tilde{Y})$, hence $E[Y_0 \mid A]$ can also be recovered. Choose $C := -W_K + E[Y_0 \mid A]$ so that the claim holds, applying modulo's distributive property. □
Proof. 
(Lemma 2)
Take $U_0 = n_0 U_K + \alpha_0^\top U_{[K-1]}$, $W_0 = n_0 W_K + \alpha_0^\top W_{[K-1]}$, $U_c = (U_0, U_{[K]})$, and $W_c = (W_0, W)$. Take $C := E[\alpha^\top Y_c \mid A]$ and $D := \mathrm{mod}_{B_c}(\alpha^\top(U_c - W_c) - E[\alpha^\top Y_c \mid A]) = \mathrm{mod}_{B_c}(E^\perp[\alpha^\top Y_c \mid A])$. Choose $\beta := \mathrm{cov}(Y, \tilde{Y} \mid A)/\delta^2$.
Include good-for-coding auxiliary lattices with the prescribed scales in the lattice ensemble from Theorem 1. Since $\hat{L}'$ is good for coding semi norm-ergodic noise of power $\delta^2 + \varepsilon$ [2], applying [19] (Appendix V) to $\tilde{Y}$ and $\tilde{Y}^\perp$ yields the result with high probability. □
Proof. 
(Theorem 1)

Appendix B.1. Upper Bound for Singleton S

Take a nested lattice construction from [20] (Theorem 1), involving the following sets:
  • Coarse and fine encoding lattices $L_c, L_1, \ldots, L_K$ (base regions $B_c, B_1, \ldots, B_K$), with each $L_c \subseteq L_k$ designed with nesting ratio $\frac{1}{n}\log|B_c \cap L_k| \approx r_k$.
  • Discrete-part auxiliary lattices $\hat{L}'_1, \ldots, \hat{L}'_K$ (base regions $\hat{B}'_1, \ldots, \hat{B}'_K$), with each $\hat{L}'_k \subseteq L_c$ having nesting ratio $\frac{1}{n}\log|\hat{B}'_k \cap L_c| \approx \frac{1}{2}\log\delta_k^2$.
  • Initial residual-part auxiliary lattice $\hat{L}_0$ (base region $\hat{B}_0$) with $\hat{L}_0 \subseteq L_K$ and nesting ratio $\frac{1}{n}\log|\hat{B}_0 \cap L_K| \approx \frac{1}{2}\log\sigma_0^2$.
  • Residual-part auxiliary lattices $\hat{L}_1, \ldots, \hat{L}_K$ (base regions $\hat{B}_1, \ldots, \hat{B}_K$), with each $\hat{L}_k \subseteq L_K$ having nesting ratio $\frac{1}{n}\log|\hat{B}_k \cap L_K| \approx \frac{1}{2}\log\sigma_k^2$.
The parameters $\sigma_0^2, \sigma_1^2, \ldots, \sigma_K^2, \delta_1^2, \ldots, \delta_K^2$ appearing in these nesting ratios will be specified later.
Initialization 
Apply Lemma 1 to $U_K$, and label the resulting variables $\tilde{Y}_0 := \tilde{Y}$, $\tilde{Y}_0^\perp := \tilde{Y}^\perp$, $(\hat{L}_0, \hat{B}_0) := (\hat{L}, \hat{B})$, $D_0 := n_0 D$, $C_0 := C$, and $\sigma_0^2 := \sigma^2$. In addition, define $\delta_0^2 := n_0^2$, $\beta_0 = \frac{1}{n_0}$, and $\hat{B}'_0 := n_0 B_c$. Now,
$$U_K = \mathrm{mod}_{B_c}\!\left(C_0 + \beta_0\tilde{Y}_0 + \tilde{Y}_0^\perp\right)$$
so the support of $U_K$ is contained within:
$$S_0 := \left[C_0 + \hat{B}_0 + (L_c/n_0 + D_0)\right] \cap (B_c \cap L_K) = \left[C_0 + \hat{B}_0 + \beta_0\left[(D_0 + L_c) \cap \hat{B}'_0\right]\right] \cap (B_c \cap L_K).$$
Support Reduction
Iterate over steps $s = 1, \ldots, K$. For step $s$, condition on any event of the form $\tilde{Y}_{s-1} = y'$ for some $y' \in (D_{s-1} + L_c) \cap \hat{B}'_{s-1}$, of which there are no more than $2^{n(\frac{1}{2}\log(\delta_{s-1}^2) + \varepsilon)}$ choices due to the nesting ratio for $\hat{L}'_{s-1}$ in $L_c$. Take $\mathbf{A}_s := \mathrm{STAGES}^*(\mathrm{var}(Y_c \mid [\alpha_0, \ldots, \alpha_{s-1}]^\top Y_c))$. By [3] (Theorem 1), $A_s := \mathbf{A}_s Y_c$ is recoverable by processing $(A_{s-1}, \mathrm{mod}_{n_0 B_c}(Y_0), U_{[K]}, W)$.
Now, apply Lemma 2 to $(Y, \alpha, A) = (\tilde{Y}_{s-1}^\perp, \alpha_s, A_s)$, and label the resulting variables $\tilde{Y}_s := \tilde{Y}$, $\tilde{Y}_s^\perp := \tilde{Y}^\perp$, $(\hat{L}'_s, \hat{B}'_s) := (\hat{L}', \hat{B}')$, $(\hat{L}_s, \hat{B}_s) := (\hat{L}, \hat{B})$, $D_s := D$, $C_s := C$, $\beta_s := \beta$, $\sigma_s^2 := \sigma^2$, and $\delta_s^2 := \delta^2$. Now,
$$U_K = \mathrm{mod}_{B_c}\!\left(\sum_{t=0}^{s}\left(C_t + \beta_t\tilde{Y}_t\right) + \tilde{Y}_s^\perp\right)$$
so the support of $U_K$ is contained within:
$$S_s := \left[\sum_{t=0}^{s}\left(C_t + \beta_t\left[(D_t + L_c) \cap \hat{B}'_t\right]\right) + \hat{B}_s\right] \cap (B_c \cap L_K).$$
Count Points in Estimated Supports
By design, there are no more than $\prod_{t=0}^{s} 2^{n(\frac{1}{2}\log(\delta_t^2) + \varepsilon)}$ possible choices for $S_s$. Each $S_s$ has no more than $|\hat{B}_s \cap (B_c \cap L_K)| \le 2^{n(r_K + \frac{1}{2}\log(\sigma_s^2) + \varepsilon)}$ points. Then,
$$H(U_K \mid U_{[K-1]}, W) \le \min_{s \in \{0\} \cup [K]} n\left[r_K + \frac{1}{2}\log(\sigma_s^2) + \sum_{t=0}^{s}\frac{1}{2}\log(\delta_t^2)\right] + K\varepsilon.$$
Bound Simultaneity
An argument was given above in Appendix B.1 for an upper bound in the singleton case. The argument uses a Zamir-good nested lattice construction with a finite number of nesting criteria and conditions on a finite number of high-probability events. The argument therefore holds for all cases of this form simultaneously, by using a Zamir-good nested lattice construction satisfying all of each case's nesting criteria and conditioning on all of each case's high-probability events.
The entropy for the general case $S = \{s_1, \ldots, s_{|S|}\}$, $T = \{t_1, \ldots, t_{|T|}\}$ can be rewritten using the chain rule:
$$H\!\left(U_S \mid U_T, W\right) = \sum_{p=1}^{|S|} H\!\left(U_{s_p} \,\Big|\, U_{\{s_m : m < p\} \cup T}, W\right).$$
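The chain rule above reduces any joint bound to an ordered sum of singleton bounds; a sketch of that bookkeeping (where `singleton_bound` is a hypothetical callback returning any valid Theorem 1 bound on the indicated conditional entropy-rate):

```python
def joint_bound(singleton_bound, S, T=()):
    """Chain-rule expansion for general message subsets: H(U_S | U_T, W)/n is
    at most the sum over p of singleton_bound(s_p, cond), where cond holds the
    indices of all messages conditioned on so far."""
    total, known = 0.0, list(T)
    for s in S:
        total += singleton_bound(s, tuple(known))
        known.append(s)
    return total
```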

Appendix C. Sketch of Theorem 2 for Upper Bound on Entropy-Rates of Decentralized Processing Messages

Proof. 
(Sketch) Proceed identically as in the proof of Theorem 1 in Appendix B, up until either the Initialization step of Appendix B.1 if the definition of $Y_0$ was changed, or repetition $s$ where $\alpha_s = [0, 0, \ldots, 0, 1]^\top$ in the Support Reduction step if the definition of $(\alpha_k)_k$ was changed. In this portion, perform the following analysis instead. Compute:
$$D^{(\mathrm{msg})} := \mathrm{mod}_{B_{c,\mathrm{msg}}}\!\left((a_R^{(\mathrm{msg})} + a_Z^{(\mathrm{msg})})^\top Y_{[K-1]} - W_{\mathrm{msg}}\right) = \mathrm{mod}_{B_{c,\mathrm{msg}}}\!\left(\lambda^{(\mathrm{msg})} X_{\mathrm{msg}}^n + Y^{(\mathrm{msg})} - W_{\mathrm{msg}}\right)$$
$$= \mathrm{mod}_{B_{c,\mathrm{msg}}}\!\left(\frac{\lambda^{(\mathrm{msg})}}{\gamma_n}\,\mathrm{mod}_{B_{c,\mathrm{msg}}}(M + W_{\mathrm{msg}}) + Y^{(\mathrm{msg})} - W_{\mathrm{msg}}\right)$$
$$= \mathrm{mod}_{B_{c,\mathrm{msg}}}\!\left(\left(1 + \frac{\lambda^{(\mathrm{msg})}}{\gamma_n} - 1\right)\mathrm{mod}_{B_{c,\mathrm{msg}}}(M + W_{\mathrm{msg}}) + Y^{(\mathrm{msg})} - W_{\mathrm{msg}}\right)$$
$$= \mathrm{mod}_{B_{c,\mathrm{msg}}}\!\left(M + \left(\frac{\lambda^{(\mathrm{msg})}}{\gamma_n} - 1\right)\mathrm{mod}_{B_{c,\mathrm{msg}}}(M + W_{\mathrm{msg}}) + Y^{(\mathrm{msg})}\right). \qquad (\mathrm{A1})$$
The additive terms in Equation (A1) are independent of one another, and the terms besides $M$ have observed power $\delta_{(\mathrm{msg})}^2$. Choose the nesting ratio for $L_{f,\mathrm{msg}}$ in $\hat{B}'_s$ as $\hat{r}_s := \frac{1}{2}\log\delta_s^2$.
Then, with high probability, since $\hat{L}'_s$ is good for coding semi norm-ergodic noise below power $\delta_s^2$ [2], applying [19] (Appendix V) to the derivation in Equation (A1),
$$M \in L^{(\mathrm{msg})} := (L_{f,\mathrm{msg}} \cap B_{c,\mathrm{msg}}) \cap \mathrm{mod}_{B_{c,\mathrm{msg}}}\!\left(D^{(\mathrm{msg})} + \hat{B}'_s\right), \qquad (\mathrm{A2})$$
where $D^{(\mathrm{msg})}$ is computable by processing $(U_{[K-1]}, W, (\tilde{Y}_t)_{[s-1]})$ and $\frac{1}{n}\log|L^{(\mathrm{msg})}| \le \hat{r}_s + \varepsilon$.
Rearranging Equation (A2),
$$X_{\mathrm{msg}}^n = \mathrm{mod}_{B_{c,\mathrm{msg}}}(M + W_{\mathrm{msg}}) \in L_s := \mathrm{mod}_{B_{c,\mathrm{msg}}}\!\left(L^{(\mathrm{msg})} + W_{\mathrm{msg}}\right).$$
Now, define:
$$\tilde{Y}_s := X_{\mathrm{msg}}^n, \qquad \tilde{Y}_s^\perp := E^\perp\!\left[\tilde{Y}_{s-1}^\perp \mid X_{\mathrm{msg}}^n\right], \qquad C_s := E\!\left[\tilde{Y}_{s-1}^\perp \mid A_s\right],$$
$$\beta_s := \frac{\mathrm{cov}(\tilde{Y}_{s-1}^\perp, Y_s \mid A_s)}{\delta_s^2}, \qquad \sigma_s^2 := \mathrm{var}(\tilde{Y}_s^\perp).$$
By construction, $\tilde{Y}_s^\perp$ is the component of $\tilde{Y}_{s-1}^\perp$ uncorrelated with $X_{\mathrm{msg}}^n$:
$$\tilde{Y}_{s-1}^\perp = \beta_s\tilde{Y}_s + \tilde{Y}_s^\perp + C_s, \qquad \tilde{Y}_s \in L_s.$$
Proceed as in the proof of Theorem 1. □

Appendix D. Proof of Lemma 3 for Recombination of Decentralized Processing Lattice Modulos

Proof. 
By [3] (Theorem 1), a processing of $U_{[K]}$ with high probability outputs
$$a_R^\top Y_{[K]}, \qquad a_R \in \mathrm{image}\!\left(\mathrm{STAGES}^*(\Sigma)\right) \subseteq \mathbb{R}^K.$$
One can assume the nested lattices for the message transmitter, $L_{c,\mathrm{msg}} \subseteq L_{f,\mathrm{msg}}$, are part of the lattice ensemble from Theorem 1, in particular finer than the main coarse lattice $L_c$, so that $L_c \subseteq L_{c,\mathrm{msg}}$ and:
$$\frac{1}{n}\log\left|L_{c,\mathrm{msg}} \cap B_c\right| = \hat{r}_{c,\mathrm{msg}} \ge 0.$$
With this structure, for any $a_Z \in \mathbb{Z}^K$, the encodings can be processed to produce (using lattice modulo's distributive and subgroup properties)
$$\mathrm{mod}_{L_{c,\mathrm{msg}}}\!\left(\mathrm{mod}_{L_c}\!\left(a_R^\top Y_{[K]}\right) + a_Z^\top(U_{[K]} - W_{[K]})\right) = \mathrm{mod}_{L_{c,\mathrm{msg}}}\!\left(\mathrm{mod}_{L_c}\!\left(a_R^\top Y_{[K]}\right) + a_Z^\top Y_{[K]}\right)$$
$$= \mathrm{mod}_{L_{c,\mathrm{msg}}}\!\left(a_R^\top Y_{[K]} + a_Z^\top Y_{[K]}\right) = \mathrm{mod}_{L_{c,\mathrm{msg}}}\!\left(\lambda X_{\mathrm{msg}} + Y_{\mathrm{noise}}\right),$$
where, in Equation (1), $\lambda \in \mathbb{R}$ and $Y_{\mathrm{noise}}$ is the conglomerate of noise terms independent of $X_{\mathrm{msg}}$ that are left over.
For channels with additive Gaussian noise, $Y_{\mathrm{noise}}$ is a mixture of Gaussians and independent components uniform over good-for-quantization lattice base regions, so for long enough blocklength $Y_{\mathrm{noise}}$ will, with high probability, land inside the base region of any coarse enough good-for-coding lattice [19] (Appendix V). □

Appendix E. Proof of Corollary 1 for Achievability of the Decentralized Processing Rate

Proof. 
Fix any $r_{\mathrm{msg}}$, $a_Z$, $a_R$, $\lambda$, $\sigma_{\mathrm{noise}}^2$ from their definitions in Lemma 3 and any $\gamma^2 \in (0, 1]$. Choose a communications rate $R_{\mathrm{msg}}$ satisfying the criterion in the statement. Form an ensemble of lattices such as those described in Theorem 2, with the nesting ratio for $L_c$ in $L_{\mathrm{msg}}$ equal to $\frac{1}{2}\log(1/\gamma^2)$ for $\gamma \in (0, 1)$, and $L_{\mathrm{msg}} = L_c$ if $\gamma^2 = 1$. This design means $\gamma_n^2 := \mathrm{var}\!\left(\mathrm{mod}_{B_{c,\mathrm{msg}}}(X_{\mathrm{msg}} + W_{\mathrm{msg}})\right) \to_n \gamma^2$.
Have the transmitter encode its message $M$ into a modulation $X_{\mathrm{msg}}^n$ as described at the beginning of Section 3.1, using a dither $W_{\mathrm{msg}}$ of which all helpers and the decoder are informed. Have each $k$-th helper, $k = 1, \ldots, K$, process its observation vector into a lattice modulo encoding $U_k$ as described in Theorem 2, using a dither $W_k$ of which the decoder is informed.
By Theorem 2, there exists a Slepian–Wolf binning scheme such that each $k$-th helper can process its message $U_k$ into a compression $U_k^*$ with $\frac{1}{n}H(U_k^*) < R_k$, and where a decoder can, with high probability, process the ensemble of compressions $(U_1^*, \ldots, U_K^*)$ along with dither side information $(W, W_{\mathrm{msg}})$ into $(U_1, \ldots, U_K)$. Employ this binning scheme at each of the receivers, and have them each forward their compression $U_k^*$ to the decoder.
Have the decoder decompress $(U_1^*, \ldots, U_K^*)$ into $(\hat{U}_1, \ldots, \hat{U}_K)$. By the previous statement, with high probability, $(\hat{U}_1, \ldots, \hat{U}_K) = U$. Use the processing obtained from Lemma 3 on $(\hat{U}_1, \ldots, \hat{U}_K)$, with high probability producing a signal:
$$U_{\mathrm{proc}} := \mathrm{mod}_{L_{c,\mathrm{msg}}}\!\left(\lambda X_{\mathrm{msg}} + Y_{\mathrm{noise}}\right).$$
Decoding 
Decoding proceeds similarly to [4]. At the decoder, compute:
$$U'_{\mathrm{proc}} := \mathrm{mod}_{L_{c,\mathrm{msg}}}\!\left(U_{\mathrm{proc}} - W_{\mathrm{msg}}\right) = \mathrm{mod}_{L_{c,\mathrm{msg}}}\!\left(\frac{\lambda}{\gamma_n}\,\mathrm{mod}_{L_{c,\mathrm{msg}}}(M + W_{\mathrm{msg}}) + Y_{\mathrm{noise}} - W_{\mathrm{msg}}\right)$$
$$= \mathrm{mod}_{L_{c,\mathrm{msg}}}\!\left(\left(1 + \frac{\lambda}{\gamma_n} - 1\right)\mathrm{mod}_{L_{c,\mathrm{msg}}}(M + W_{\mathrm{msg}}) + Y_{\mathrm{noise}} - W_{\mathrm{msg}}\right)$$
$$= \mathrm{mod}_{L_{c,\mathrm{msg}}}\!\left(M + \left(\frac{\lambda}{\gamma_n} - 1\right)\mathrm{mod}_{L_{c,\mathrm{msg}}}(M + W_{\mathrm{msg}}) + Y_{\mathrm{noise}}\right). \qquad (\mathrm{A3})$$
Recall that the fine codebook lattice $L_{f,\mathrm{msg}}$ has been designed to be good for coding and so that the coarse codebook lattice $L_{c,\mathrm{msg}}$ has nesting ratio $R_{\mathrm{msg}}$ within it. This means that $L_{f,\mathrm{msg}}$ is good for coding semi norm-ergodic noise with power less than $\gamma_n^2\, 2^{-2 R_{\mathrm{msg}}}$.
Notice $M \perp \mathrm{mod}_{L_{c,\mathrm{msg}}}(M + W_{\mathrm{msg}}) \perp Y_{\mathrm{noise}}$, where the first independence is by the crypto lemma [1]. This is to say that the additive terms other than $M$ in Equation (A3) are noise with power
$$\mathrm{var}\!\left(\left(\frac{\lambda}{\gamma_n} - 1\right)\mathrm{mod}_{L_{c,\mathrm{msg}}}(M + W_{\mathrm{msg}}) + Y_{\mathrm{noise}}\right) = \gamma_n^2\left(1 - \lambda/\gamma_n\right)^2 + \sigma_{\mathrm{noise}}^2. \qquad (\mathrm{A4})$$
Furthermore, applying [19] (Appendix V) to the noise, it lands with high probability in the base region of any lattice good for coding semi norm-ergodic noise with power less than Equation (A4). Then, $\mathrm{round}_{L_{f,\mathrm{msg}}}(U'_{\mathrm{proc}}) = M$ with high probability if
$$\gamma_n^2\left(1 - \lambda/\gamma_n\right)^2 + \sigma_{\mathrm{noise}}^2 < \gamma_n^2\, 2^{-2 R_{\mathrm{msg}}},$$
or, rearranging,
$$R_{\mathrm{msg}} < \frac{1}{2}\log\frac{\gamma_n^2}{(\lambda - \gamma_n)^2 + \sigma_{\mathrm{noise}}^2}. \qquad (\mathrm{A5})$$
The limit of the right side of Equation (A5) equals Equation (3). □

References

  1. Zamir, R. Lattice Coding for Signals and Networks: A Structured Coding Approach to Quantization, Modulation, and Multiuser Information Theory; Cambridge University Press: Cambridge, UK, 2014. [Google Scholar]
  2. Ordentlich, O.; Erez, U. A simple proof for the existence of “good” pairs of nested lattices. IEEE Trans. Inf. Theory 2016, 62, 4439–4453. [Google Scholar] [CrossRef]
  3. Chapman, C.; Kinsinger, M.; Agaskar, A.; Bliss, D.W. Distributed Recovery of a Gaussian Source in Interference with Successive Lattice Processing. Entropy 2019, 21, 845. [Google Scholar] [CrossRef]
  4. Erez, U.; Zamir, R. Achieving (1/2)log(1+SNR) on the AWGN Channel With Lattice Encoding and Decoding. IEEE Trans. Inf. Theory 2004, 50, 1. [Google Scholar] [CrossRef]
  5. Ordentlich, O.; Erez, U.; Nazer, B. Successive integer-forcing and its sum-rate optimality. In Proceedings of the 2013 51st Annual Allerton Conference on Communication, Control, and Computing (Allerton), Monticello, IL, USA, 2–4 October 2013; pp. 282–292. [Google Scholar]
  6. Ordentlich, O.; Erez, U. Precoded integer-forcing universally achieves the MIMO capacity to within a constant gap. IEEE Trans. Inf. Theory 2014, 61, 323–340. [Google Scholar] [CrossRef]
  7. Wagner, A.B. On Distributed Compression of Linear Functions. IEEE Trans. Inf. Theory 2011, 57, 79–94. [Google Scholar] [CrossRef]
  8. Yang, Y.; Xiong, Z. An improved lattice-based scheme for lossy distributed compression of linear functions. In Proceedings of the 2011 Information Theory and Applications Workshop, La Jolla, CA, USA, 6–11 February 2011. [Google Scholar]
  9. Yang, Y.; Xiong, Z. Distributed compression of linear functions: Partial sum-rate tightness and gap to optimal sum-rate. IEEE Trans. Inf. Theory 2014, 60, 2835–2855. [Google Scholar] [CrossRef]
  10. Cheng, H.; Yuan, X.; Tan, Y. Generalized compute-compress-and-forward. IEEE Trans. Inf. Theory 2018, 65, 462–481. [Google Scholar] [CrossRef]
  11. Tavildar, S.; Viswanath, P.; Wagner, A.B. The Gaussian Many-help-one Distributed Source Coding Problem. IEEE Trans. Inf. Theory 2009, 56, 564–581. [Google Scholar]
  12. Sanderovich, A.; Shamai, S.; Steinberg, Y.; Kramer, G. Communication via Decentralized Processing. IEEE Trans. Inf. Theory 2008, 54, 3008–3023. [Google Scholar] [CrossRef]
  13. Chapman, C.D.; Mittelmann, H.; Margetts, A.R.; Bliss, D.W. A Decentralized Receiver in Gaussian Interference. Entropy 2018, 20, 269. [Google Scholar] [CrossRef]
  14. El Gamal, A.; Kim, Y.H. Network Information Theory; Cambridge University Press: Cambridge, UK, 2011. [Google Scholar]
  15. Schnorr, C.P.; Euchner, M. Lattice Basis Reduction: Improved Practical Algorithms and Solving Subset Sum Problems. Math. Program. 1994, 66, 181–199. [Google Scholar] [CrossRef]
  16. Buchmann, J.; Pohst, M. Computing a Lattice Basis from a System of Generating Vectors. In Proceedings of the European Conference on Computer Algebra, Leipzig, Germany, 2–5 June 1987; pp. 54–63. [Google Scholar]
  17. Hastad, J.; Just, B.; Lagarias, J.C.; Schnorr, C.P. Polynomial time algorithms for finding integer relations among real numbers. SIAM J. Comput. 1989, 18, 859–881. [Google Scholar] [CrossRef]
  18. Ghasemmehdi, A.; Agrell, E. Faster recursions in sphere decoding. IEEE Trans. Inf. Theory 2011, 57, 3530–3536. [Google Scholar] [CrossRef]
  19. Krithivasan, D.; Pradhan, S.S. Lattices for Distributed Source Coding: Jointly Gaussian Sources and Reconstruction of a Linear Function. In International Symposium on Applied Algebra, Algebraic Algorithms, and Error-Correcting Codes; Springer: Berlin/Heidelberg, Germany, 2007; pp. 178–187. [Google Scholar]
  20. Erez, U.; Litsyn, S.; Zamir, R. Lattices Which are Good for (Almost) Everything. IEEE Trans. Inf. Theory 2005, 51, 3401–3416. [Google Scholar] [CrossRef]
Figure 1. Collapse of the support of a random signal's modulo after conditioning on the modulo of a related signal. Modulo is shown with respect to some lattice $L$ with base region $B$. Consider a signal composed of two independent random components, $a$ and $b$, equaling $\beta a + b$. A possible outcome is drawn on the far left. Unconditioned, the support for $\mathrm{mod}(\beta a + b)$ is the entire base region $B$, shown fully shaded in gray. Once $\mathrm{mod}(a)$ is observed, the component $\beta a$ is known up to an additive factor in $\beta L$. If, further, the powers of $a$ and $b$ are bounded above, this leaves the feasible points for $\mathrm{mod}(\beta a + b)$ as a subset of those of the unconditioned variable. This subset is shaded yellow on the far right.
Figure 2. High level overview of the communications scenario in Section 3. A transmitter seeks to communicate digital information to a decoder through a Gaussian channel in the presence of Gaussian interference (one interferer drawn). The decoder is informed of the transmitter’s signal through helpers which pass it digital information through an out-of-band local area network (LAN).
Figure 3. Block diagram of the communications scenario. A signal $X_{\mathrm{msg}}$ from a codebook is broadcast through an additive white Gaussian noise (AWGN) channel in the presence of independent Gaussian interference, creating correlated additive noise $(W_1, \ldots, W_K)$. The signal is observed at $K$ receivers labeled $Q_1, \ldots, Q_K$. The $k$-th receiver observes $X_{k,\mathrm{raw}}$ and processes its observation into a rate-$R_k$ message $U_k$. The messages are forwarded losslessly over a local area side channel to a decoder which attempts to recover the message.
Figure 4. The line-of-sight channel considered. A black transmit node at $(0, 0)$ seeks to communicate with a black decoder node at $(1, 0)$. Three red 'interferer' nodes each broadcast their own independent Gaussian signal. The decoder does not observe any signal directly but is forwarded messages from four blue 'helper' nodes which observe signals through a line-of-sight additive white Gaussian noise channel. The interferers and helpers are oriented alternately and equispaced about a radius-$2/3$ semicircle, centered at $(1, 0)$ and facing the encoder.
Figure 5. Communications rate versus helper-sum-rate for 1000 randomly chosen encoding schemes as described in Section 3.2.1 in the line-of-sight channel from Figure 4, Equation (4). In each subplot, the transmitter broadcasts with power such that the average SNR seen across helpers is the given ‘transmitter’ dB figure. Each interferer broadcasts its own signal with its power the given ‘interferer’ dB stronger than the transmitter’s power. Notice that, although the uncompressed lattice scheme is often outperformed by plain Quantize & Forward for the same helper message rates, adding a properly configured compression stage can more than make up for the sum-rate difference. In certain regimes, even the compressed lattice scheme performs worse or practically the same as Quantize & Forward, indicating the given lattice encoder design is weak; uncompressed lattice encoders can be configured to implement the Quantize & Forward scheme.
Table 1. Symbols and notation.
  • $a := b$: Define $a$ to equal $b$.
  • $[n]$: Integers from 1 to $n$.
  • $A$, $a$, $\mathbf{a}$, $\mathbf{A}$: Matrix, column vector, vector, random vector.
  • $A^\top$, $a^\top$: Transpose (all matrices involved are real).
  • $[A]_{S,T}$: Submatrix corresponding to rows $S$, columns $T$ of $A$.
  • $Y_S$: An $|S|$-vector, the sub-vector of $Y$ including components with indices in $S$. If $S$ has order, then this vector respects $S$'s order.
  • $I_K$: $K \times K$ identity matrix.
  • $0_K$: $K \times 1$ zero vector.
  • $\mathrm{diag}(a)$: Square diagonal matrix with diagonals $a$.
  • $\mathrm{pinv}(\cdot)$: Moore–Penrose pseudoinverse.
  • $\mathcal{N}(0, \Sigma)$: Normal distribution with zero mean, covariance $\Sigma$.
  • $X \sim f$: $X$ is a random variable distributed like $f$.
  • $X^n$, $f(x^n)$: Vector of $n$ independent trials of a random variable distributed like $X$; a function whose input is intended to be such a variable.
  • $\mathrm{var}(a)$: Variance (or covariance matrix) of (components of) $a$, averaged over time index.
  • $\mathrm{var}(a \mid b)$: Conditional variance (or covariance matrix) of (components of) $a$ given observation $b$, averaged over time index.
  • $\mathrm{cov}(a, b)$, $\mathrm{cov}(a, b \mid c)$: Covariance between $a$ and $b$; covariance between $a$ and $b$ conditioned on $c$, averaged over time index.
  • $E[a \mid b]$: Linear MMSE estimate of $a$ given observations $b$.
  • $E^\perp[a \mid b]$: Complement of $E[a \mid b]$, i.e., $E^\perp[a \mid b] := a - E[a \mid b]$. An important property is that $E[a \mid b]$ and $E^\perp[a \mid b]$ are uncorrelated.
  • $\mathrm{round}_L(\cdot)$, $\mathrm{mod}_L(\cdot)$: Lattice round, modulo to a lattice $L$ (when it is clear what base region is associated with $L$).
Table 2. Description of variables.
  • $K$: Number of lattice encodings in the current context.
  • $n$: Scheme blocklength.
  • $X_k^n$: Observation at receiver $k$.
  • $W_k$: Lattice dither $k$.
  • $U_k$: Lattice encoding $k$.
  • $Y_k$: Quantization of $X_k^n$.
  • $Y_c$: Ensemble of lattice quantizations, sans modulo.
  • $\Sigma$: $K \times K$ time-averaged covariance between observations $X_1^n, \ldots, X_K^n$.
  • $\Sigma_Q$: $K \times K$ time-averaged covariance between quantizations $Y_1, \ldots, Y_K$.
  • $r_1, \ldots, r_K$: Nesting ratios for coarse lattice $L_c$ in the fine lattices $L_1, \ldots, L_K$; equivalent to the encoding rates of the lattice codes when joint compression is not used.
  • $R_1, \ldots, R_K$: Messaging rates for helpers in the Section 3 communications scenario.
  • $r_{\mathrm{msg}}$: Nesting ratio for codebook coarse lattice $L_{c,\mathrm{msg}}$ in codebook fine lattice $L_{f,\mathrm{msg}}$ in Section 3; equivalent to the codebook rate.
  • $h_{\mathrm{msg}}$: Covariance between codeword and quantizations in Section 3.
  • $\alpha_s$: Integer combination of $Y_c$ to analyze in step $s$ of Appendix B.
  • $\delta_s^2$: Variance of $\alpha_s^\top Y_c$ after removing prior knowledge in Appendix B.
  • $\sigma_s^2$: Variance of $Y_K$ uncorrelated with prior knowledge and $\alpha_s^\top Y_c$ in Appendix B.
  • $\beta_s$: Regression coefficient for $\alpha_s^\top Y_c$ in $Y_K$ after including prior knowledge at step $s$ in Appendix B.
