Article

Gaussian Multiuser Wiretap Channels in the Presence of a Jammer-Aided Eavesdropper †

1 Department of Electrical Engineering and Computer Science, Wichita State University, Wichita, KS 67260, USA
2 Department of Electrical and Computer Engineering, The Ohio State University, Columbus, OH 43210, USA
* Authors to whom correspondence should be addressed.
This paper is an extended version of our paper published in Chou, R.; Yener, A. The Gaussian multiple access wiretap channel when the eavesdropper can arbitrarily jam. In Proceedings of the IEEE International Symposium on Information Theory (ISIT), Aachen, Germany, 25–30 June 2017.
Entropy 2022, 24(11), 1595; https://doi.org/10.3390/e24111595
Submission received: 29 September 2022 / Revised: 27 October 2022 / Accepted: 28 October 2022 / Published: 2 November 2022
(This article belongs to the Special Issue Information Theoretic Methods for Future Communication Systems)

Abstract: This paper considers secure communication in the presence of an eavesdropper and a malicious jammer. The jammer is assumed to be oblivious of the communication signals emitted by the legitimate transmitter(s) but can employ any jamming strategy subject to a given power constraint and shares her jamming signal with the eavesdropper. Four such models are considered: (i) the Gaussian point-to-point wiretap channel; (ii) the Gaussian multiple-access wiretap channel; (iii) the Gaussian broadcast wiretap channel; and (iv) the Gaussian symmetric interference wiretap channel. The use of pre-shared randomness between the legitimate users is not allowed in our models. Inner and outer bounds are derived for these four models. For (i), the secrecy capacity is obtained. For (ii) and (iv) under a degraded setup, the optimal secrecy sum-rate is characterized. Finally, for (iii), ranges of model parameter values for which the inner and outer bounds coincide are identified.

1. Introduction

Consider secure communication over wireless channels between legitimate parties in the presence of an eavesdropper and a malicious jammer. The jammer is assumed to be oblivious of the legitimate users’ communication but can employ any jamming strategy subject to a given power constraint. Consequently, the main channel between the legitimate users is arbitrarily varying [1]. Unlike most works that consider arbitrarily varying channels, however, pre-shared randomness is not available to the legitimate users in our scenario. Additionally, the jammer shares her jamming signal with the eavesdropper who can thus perfectly cancel the effect of the jamming signal on her channel. In this paper, we study the fundamental limits of secure communication rates in the presence of such a jammer-aided eavesdropper over four Gaussian wiretap channel models: the Gaussian wiretap channel [2], the Gaussian multiple-access wiretap channel [3], the Gaussian broadcast wiretap channel [4], and the Gaussian symmetric interference wiretap channel.

1.1. Contributions

Our contributions are summarized as follows.
  • For secure communication over Gaussian point-to-point, multiple-access, broadcast, and symmetric interference wiretap channels in the presence of a jammer-aided eavesdropper as described above, we determine inner and outer bounds on the secrecy capacity region.
  • We show that our bounds are tight for the point-to-point setting, tight for sum-rates for the multiple-access and interference settings under degraded setups, and tight for some ranges of model parameter values for the broadcast setting.
Our main strategy for handling our multiuser settings is to reduce the problem to single-user coding. Previously known techniques for such a reduction, such as rate-splitting [5] and successive cancellation decoding [5] [Appendix C], which have been developed for multiple-access settings without security constraints, do not easily apply to wiretap channel models. These techniques consist of achieving the corner points of achievability regions that can be described by polymatroids whose corner points have positive components. However, regions described by polymatroids whose corner points have negative components, as in our wiretap channel models, prevent the application of these techniques. We overcome this roadblock by proposing novel time-sharing strategies coupled with appropriate secret-key exchanges between the legitimate users. As seen in the proofs of our results, eavesdropping and arbitrary jamming are not easy to decouple in the secrecy analysis. In particular, the secrecy analysis of our proposed model does not follow from a standard secrecy analysis in the absence of jamming, as we need to consider (i) codewords uniformly distributed over spheres, which we use to handle an arbitrarily varying main channel; and (ii) block-Markov coding and specific time-sharing strategies (to allow the reduction of multiuser coding to single-user coding), which create inter-dependencies between coding blocks. Note that our achievability schemes also rely on point-to-point codes developed in [1]. One of the benefits of reducing multiuser coding to point-to-point coding techniques is that, despite the fact that our setting involves multiple transmitters and an arbitrarily varying channel between the legitimate users, pre-shared randomness among the legitimate users is not needed in our achievability schemes. Our strategy for the converse consists of reducing the problem of determining a converse for our model to the problem of determining a converse for a related model in the absence of a jammer.

1.2. Related Works

Related works that consider simultaneous eavesdropping and oblivious jamming threats for the point-to-point discrete memoryless wiretap channel include [6,7,8,9,10,11]. The proof techniques used in these references to obtain security, such as random binning [12,13], resolvability/soft covering [10,14,15], or typicality arguments, are challenging to apply to a Gaussian setting in the absence of shared randomness at the legitimate users. Specifically, for the Gaussian point-to-point channel in the presence of an adversary that arbitrarily jams [1], the only known coding mechanism to obtain reliability in the absence of pre-shared randomness relies on codewords uniformly drawn on a unit sphere [1], which are challenging to integrate with the above techniques to obtain security because their components are not independent and identically distributed.
Another line of work [16] considers Gaussian channel models where the eavesdropper channel can vary arbitrarily but the main channel cannot. The setting considered in the present paper, where the main channel between the legitimate users is arbitrarily varying, prevents the use of analyses similar to those in [16] for the same reasons described above.
Several other works have considered continuous channel models, including the Gaussian MIMO wiretap channel [17] and the Gaussian multiple-access wiretap channel [18], where deviating users can be viewed as an active adversary, and continuous point-to-point wiretap channels [19,20], where the adversary can choose between eavesdropping and jamming. These references differ from the above-mentioned references on arbitrarily varying channels as they assume a specific signaling strategy for the jammer.
Finally, note that for point-to-point channels, stronger jamming strategies that depend on the signals of the legitimate transmitters have been studied in [21,22,23].

1.3. Organization of the Paper

The remainder of the paper is organized as follows. We describe the models in Section 2. We present our results for the Gaussian point-to-point wiretap channel, the Gaussian multiple-access wiretap channel, the Gaussian broadcast wiretap channel, and the Gaussian symmetric interference wiretap channel in Section 3, Section 4, Section 5 and Section 6, respectively. We discuss in Section 4.2 a way to avoid, at least for some channel parameters, time-sharing for the multiple-access setting. We also discuss in Section 4.3 an extension of the multiple-access setting to more than two transmitters. We detail the proofs for the multiple-access setting in Section 7 and Section 8. We end the paper with concluding remarks in Section 9.

2. Problem Statement

2.1. Notation

For $a, b \in \mathbb{R}$, define $\llbracket a, b\rrbracket \triangleq [a,b] \cap \mathbb{N}$, $]a,b[\, \triangleq [a,b] \setminus \{a,b\}$, $]a,b] \triangleq [a,b] \setminus \{a\}$, and $[a,b[\, \triangleq [a,b] \setminus \{b\}$. The components of a vector $X^n$ of size $n \in \mathbb{N}$ are denoted by subscripts, i.e., $X^n \triangleq (X_1, X_2, \ldots, X_n)$. For $x \in \mathbb{R}$, define $[x]^+ \triangleq \max(0, x)$. The notation $x \mapsto y$ describes a function that associates $y$ to $x$ when the domain and the image of the function are clear from the context. The power set of a finite set $\mathcal{S}$ is denoted by $2^{\mathcal{S}}$. The convex hull of a set $\mathcal{S}$ is denoted by $\mathrm{Conv}(\mathcal{S})$. Unless specified otherwise, capital letters designate random variables, whereas lowercase letters designate realizations of associated random variables, e.g., $x$ is a realization of the random variable $X$. For $R \in \mathbb{R}_+$, $\mathcal{B}_0^n(R)$ denotes the ball of radius $R$ centered at $0$ in $\mathbb{R}^n$ under the Euclidean norm.

2.2. Gaussian Multiuser Wiretap Channel in the Presence of a Jammer-Aided Eavesdropper

Consider the Gaussian memoryless wiretap channel model with two transmitters and two legitimate receivers
$$Y_1^n \triangleq g_{11} X_1^n + g_{12} X_2^n + g_{13} S^n + N_1^n, \tag{1a}$$
$$Y_2^n \triangleq g_{21} X_1^n + g_{22} X_2^n + g_{23} S^n + N_2^n, \tag{1b}$$
$$Z^n \triangleq \sqrt{h_1}\, X_1^n + \sqrt{h_2}\, X_2^n + N_Z^n, \tag{1c}$$
where $Y_1^n$, $Y_2^n$ are the channel outputs observed by the legitimate receivers and $Z^n$ is the channel output observed by the eavesdropper. For $l \in \{1,2\}$, $X_l^n$ is the signal emitted by Transmitter $l$ satisfying the power constraint $\|X_l^n\|^2 \triangleq \sum_{i=1}^{n}(X_l)_i^2 \le n\Gamma_l$, $S^n$ is an arbitrary jamming sequence transmitted by the jammer, who is oblivious of the communication of the legitimate users and satisfies the power constraint $\|S^n\|^2 \triangleq \sum_{i=1}^{n} S_i^2 \le n\Lambda$, and $N_1^n$, $N_2^n$, $N_Z^n$ are sequences of independent and identically distributed Gaussian noise with variances $\sigma_1^2$, $\sigma_2^2$, $\sigma_Z^2$, respectively. The channel coefficients $g_{11}, g_{12}, g_{13}, g_{21}, g_{22}, g_{23}, h_1, h_2$ are fixed and known to all parties. Note that we assume that the jammer helps the eavesdropper by sharing her jamming sequence, which allows the eavesdropper to perfectly cancel $S^n$ from $Z^n$. Coding schemes and achievable rates are defined as follows.
Definition 1.
Let $n, k \in \mathbb{N}$. A $(2^{nR_1}, 2^{nR_2}, n, k)$ code $\mathcal{C}_n$ consists, for each block $j \in \llbracket 1, k\rrbracket$, of
  • Two message sets $\mathcal{M}_l^{(j)} \triangleq \llbracket 1, 2^{nR_l^{(j)}}\rrbracket$, $l \in \{1,2\}$;
  • Two stochastic encoders $e_l^{(j)} : \mathcal{M}_l^{(j)} \to \mathcal{B}_0^n(\sqrt{n\Gamma_l})$, $l \in \{1,2\}$;
  • Two decoders $g_l^{(j)} : \mathbb{R}^n \to \mathcal{M}_l^{(j)}$, $l \in \{1,2\}$;
where, for any $l \in \{1,2\}$, $R_l \triangleq \frac{1}{k}\sum_{j=1}^{k} R_l^{(j)}$, and operates as follows. For each block $j \in \llbracket 1, k\rrbracket$, Transmitter $l \in \{1,2\}$ encodes with $e_l^{(j)}$ a uniformly distributed message $M_l^{(j)} \in \mathcal{M}_l^{(j)}$ into a codeword of length $n$, which is sent to the legitimate receiver over the channel described by Equations (1a)–(1c) with the power constraint $n\Lambda$ for the jamming signal $S_i^n$. Note that all the power constraints at the transmitters and the jammer hold for a given transmission block of length $n$, which is relevant when the power constraints hold within any time window corresponding to $n$ channel uses. Then, the legitimate receiver $l \in \{1,2\}$ forms an estimate $\hat{M}_l^{(j)} \triangleq g_l^{(j)}(Y_l^n)$ of the message $M_l^{(j)}$. We define $\hat{\mathbf{M}} \triangleq (\hat{M}_1^{(j)}, \hat{M}_2^{(j)})_{j\in\llbracket 1,k\rrbracket}$, $\mathbf{M} \triangleq (M_1^{(j)}, M_2^{(j)})_{j\in\llbracket 1,k\rrbracket}$, $\mathbf{S} \triangleq (S_i^n)_{i\in\llbracket 1,k\rrbracket}$, and $\mathcal{S} \triangleq \{(S_i^n)_{i\in\llbracket 1,k\rrbracket} : \|S_i^n\|^2 \le n\Lambda,\ \forall i \in \llbracket 1,k\rrbracket\}$.
Definition 2.
A rate pair $(R_1, R_2)$ is achievable if there exists a sequence of $(2^{nR_1}, 2^{nR_2}, n, k)$ codes such that
$$\lim_{n\to\infty} \sup_{\mathbf{S} \in \mathcal{S}} \mathbb{P}[\hat{\mathbf{M}} \neq \mathbf{M}] = 0 \quad (\text{reliability}), \tag{2a}$$
$$\lim_{n\to\infty} \frac{1}{nk} H(\mathbf{M} \,|\, Z^{kn}) \ge R_1 + R_2 \quad (\text{equivocation}). \tag{2b}$$
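To make the channel model and power constraints concrete, the following minimal Python sketch simulates one block of Equations (1a)–(1c) under the constraints of Definition 1. All numerical values (blocklength, powers, gains) are illustrative assumptions and are not taken from the paper; the Gaussian jamming sequence is just one admissible choice of $S^n$.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000                                  # blocklength (assumed)
Gamma1, Gamma2, Lam = 4.0, 3.0, 1.0       # power constraints (assumed)
g = np.ones((2, 3))                       # main-channel gains g_{jk}, all set to 1 here
h1, h2 = 0.2, 0.3                         # eavesdropper power gains (assumed)
sigma1, sigma2, sigmaZ = 1.0, 1.0, 1.0

def enforce_power(x, P):
    """Scale x so that ||x||^2 <= n*P, as required by the power constraints."""
    norm2 = np.sum(x ** 2)
    return x if norm2 <= len(x) * P else x * np.sqrt(len(x) * P / norm2)

X1 = enforce_power(rng.normal(size=n), Gamma1)    # Transmitter 1 signal
X2 = enforce_power(rng.normal(size=n), Gamma2)    # Transmitter 2 signal
S  = enforce_power(rng.normal(size=n), Lam)       # oblivious jamming sequence

Y1 = g[0, 0]*X1 + g[0, 1]*X2 + g[0, 2]*S + sigma1*rng.normal(size=n)
Y2 = g[1, 0]*X1 + g[1, 1]*X2 + g[1, 2]*S + sigma2*rng.normal(size=n)
# The jammer shares S with the eavesdropper, so S is already cancelled in Z.
Z  = np.sqrt(h1)*X1 + np.sqrt(h2)*X2 + sigmaZ*rng.normal(size=n)
```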

2.3. Special Case 1: The Gaussian Wiretap Channel in the Presence of a Jammer-Aided Eavesdropper

Assume that the two transmitters are colocated and the two receivers are colocated in Section 2.2. More specifically, as depicted in Figure 1, the channel model of Section 2.2 becomes
$$Y^n \triangleq X^n + S^n + N_1^n, \tag{3a}$$
$$Z^n \triangleq \sqrt{h}\, X^n + N_Z^n, \tag{3b}$$
where $\sigma_1^2 = \sigma_Z^2 = 1$. We term this model the Gaussian Wiretap channel with Jammer-Aided eavesdropper (Gaussian WT-JA in short form). Note that this model recovers as a special case the Gaussian wiretap channel [2].

2.4. Special Case 2: The Gaussian Multiple-Access Wiretap Channel in the Presence of a Jammer-Aided Eavesdropper

Assume that the two receivers are colocated in Section 2.2. More specifically, as depicted in Figure 2, the channel model of Section 2.2 becomes
$$Y^n \triangleq X_1^n + X_2^n + S^n + N_1^n, \tag{4a}$$
$$Z^n \triangleq \sqrt{h_1}\, X_1^n + \sqrt{h_2}\, X_2^n + N_Z^n, \tag{4b}$$
where $\sigma_1^2 = \sigma_Z^2 = 1$. We term this model the Gaussian Multiple-Access Wiretap channel with Jammer-Aided eavesdropper (Gaussian MAC-WT-JA in short form) with parameters $(\Gamma_1, \Gamma_2, h_1, h_2, \Lambda, \sigma_1^2, \sigma_Z^2)$. This model recovers as special cases the model in [24] in the absence of the security constraint (2b), and the Gaussian multiple-access wiretap channel [3]. Note that the model in [24] was introduced to study the presence of selfish transmitters via cooperative game theory and that, similarly, the Gaussian MAC-WT-JA can be used to study the presence of selfish transmitters via coalitional game theory [25].

2.5. Special Case 3: The Gaussian Broadcast Wiretap Channel in the Presence of a Jammer-Aided Eavesdropper

Assume that the two transmitters are colocated in Section 2.2. More specifically, as depicted in Figure 3, the channel model of Section 2.2 becomes
$$Y_1^n \triangleq X^n + \sqrt{g_1}\, S^n + N_1^n, \tag{5a}$$
$$Y_2^n \triangleq X^n + \sqrt{g_2}\, S^n + N_2^n, \tag{5b}$$
$$Z^n \triangleq \sqrt{h}\, X^n + N_Z^n, \tag{5c}$$
where $\sigma_Z^2 = 1$. We term this model the Gaussian Broadcast Wiretap channel with Jammer-Aided eavesdropper (Gaussian BC-WT-JA in short form). Note that this model recovers as special cases the multi-receiver wiretap channel [26] and the model in [27] in the absence of the security constraint (2b).

2.6. Special Case 4: The Gaussian Symmetric Interference Wiretap Channel in the Presence of a Jammer-Aided Eavesdropper

Consider the following special case of the channel model of Section 2.2.
$$Y_1^n \triangleq X_1^n + X_2^n + S^n + N_1^n, \tag{6a}$$
$$Y_2^n \triangleq X_1^n + X_2^n + S^n + N_2^n, \tag{6b}$$
$$Z^n \triangleq \sqrt{h_1}\, X_1^n + \sqrt{h_2}\, X_2^n + N_Z^n, \tag{6c}$$
where $\sigma_1^2 = \sigma_2^2 = \sigma_Z^2 = 1$. We term this model the Gaussian Symmetric Interference Wiretap channel with Jammer-Aided eavesdropper (Gaussian SI-WT-JA in short form). In the absence of the security constraint (2b) and the jamming sequence, this model recovers a special case of the Gaussian interference channel under strong interference [28].

3. The Gaussian Wiretap Channel in the Presence of a Jammer-Aided Eavesdropper

We present a capacity result for the Gaussian WT-JA model described in Section 2.3.
Theorem 1.
The secrecy capacity of the Gaussian WT-JA is
$$C(\Lambda) \triangleq \begin{cases} \left[\dfrac{1}{2}\log\!\left(\dfrac{1 + (1+\Lambda)^{-1}\Gamma}{1 + h\Gamma}\right)\right]^+ & \text{if } \Gamma > \Lambda, \\[4pt] 0 & \text{if } \Gamma \le \Lambda. \end{cases} \tag{7}$$
Observe that $C(\Lambda)$ is non-zero if and only if $\Gamma > \Lambda$ and $(1+\Lambda)^{-1} > h$. When $\Gamma > \Lambda$, Theorem 1 means that arbitrary oblivious jamming is no more harmful than Gaussian jamming, i.e., jamming whose sequence consists of independent and identically distributed realizations of a zero-mean Gaussian random variable with variance equal to the power constraint $\Lambda$.
The proof of Theorem 1 follows as a special case of the achievability and converse bounds derived in the next section in Theorems 2 and 3, respectively, for the Gaussian MAC-WT-JA.
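For concreteness, the following short Python sketch evaluates the capacity expression of Theorem 1. The parameter values are illustrative assumptions, and rates are computed in bits (base-2 logarithms), which is a presentation choice.

```python
import numpy as np

def secrecy_capacity_wt_ja(Gamma, h, Lam):
    """Secrecy capacity of the Gaussian WT-JA (Theorem 1), in bits per channel use."""
    if Gamma <= Lam:
        return 0.0
    ratio = (1.0 + Gamma / (1.0 + Lam)) / (1.0 + h * Gamma)
    return max(0.0, 0.5 * np.log2(ratio))

# Illustrative (assumed) values: positive since Gamma > Lam and 1/(1+Lam) > h.
print(secrecy_capacity_wt_ja(Gamma=4.0, h=0.2, Lam=1.0))
# Zero since Gamma <= Lam: the jammer can fully block reliable communication.
print(secrecy_capacity_wt_ja(Gamma=0.5, h=0.2, Lam=1.0))
```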

4. The Gaussian Multiple-Access Wiretap Channel in the Presence of a Jammer-Aided Eavesdropper

4.1. Inner and Outer Bounds for the Gaussian MAC-WT-JA

We derive inner and outer bounds for the Gaussian MAC-WT-JA in Theorems 2 and 3. Their proofs are provided in Section 7 and Section 8, respectively.
Theorem 2
(Achievability). We consider three cases.
  • When $\Gamma_1 > \Lambda$ and $\Gamma_2 \le \Lambda$,
$$\mathcal{R}_1^{MAC} \triangleq \left\{(R_1, 0) : R_1 \le \max_{0 \le P_2 \le \Gamma_2} \left[\frac{1}{2}\log\!\left(\frac{1 + \Gamma_1(1+\Lambda+P_2)^{-1}}{1 + \Gamma_1 h_1(1 + h_2 P_2)^{-1}}\right)\right]^+\right\} \tag{8}$$
    is achievable.
  • When $\Gamma_2 > \Lambda$ and $\Gamma_1 \le \Lambda$,
$$\mathcal{R}_2^{MAC} \triangleq \left\{(0, R_2) : R_2 \le \max_{0 \le P_1 \le \Gamma_1} \left[\frac{1}{2}\log\!\left(\frac{1 + \Gamma_2(1+\Lambda+P_1)^{-1}}{1 + \Gamma_2 h_2(1 + h_1 P_1)^{-1}}\right)\right]^+\right\} \tag{9}$$
    is achievable.
  • When $\min(\Gamma_1, \Gamma_2) > \Lambda$,
$$\mathcal{R}^{MAC} \triangleq \mathrm{Conv}\!\left(\mathcal{R}_1^{MAC} \cup \mathcal{R}_2^{MAC} \cup \bigcup_{\substack{\Lambda < P_1 \le \Gamma_1 \\ \Lambda < P_2 \le \Gamma_2}} \mathcal{R}_{1,2}^{MAC}(P_1, P_2)\right) \tag{10}$$
    is achievable, where
$$\mathcal{R}_{1,2}^{MAC}(P_1, P_2) \triangleq \left\{(R_1, R_2) : R_1 \le \left[\frac{1}{2}\log\!\left(\frac{1 + P_1(1+\Lambda)^{-1}}{1 + P_1 h_1(1 + h_2 P_2)^{-1}}\right)\right]^+,\ R_2 \le \left[\frac{1}{2}\log\!\left(\frac{1 + P_2(1+\Lambda)^{-1}}{1 + P_2 h_2(1 + h_1 P_1)^{-1}}\right)\right]^+,\ R_1 + R_2 \le \left[\frac{1}{2}\log\!\left(\frac{1 + (P_1+P_2)(1+\Lambda)^{-1}}{1 + P_1 h_1 + P_2 h_2}\right)\right]^+\right\}. \tag{11}$$
Theorem 3
(Partial Converse).
  • If $\max(\Gamma_1, \Gamma_2) \le \Lambda$, then no positive rate is achievable.
  • When $\min(\Gamma_1, \Gamma_2) > \Lambda$ and $h_1 = h_2$, the sum-rate bound of $\mathcal{R}_{1,2}^{MAC}(\Gamma_1, \Gamma_2)$ described in Equation (11) is tight by choosing $(P_1, P_2) = (\Gamma_1, \Gamma_2)$.
Observe that, in the achievability scheme for $\mathcal{R}_1^{MAC}$, choosing a transmission power smaller than $\Gamma_1$ for Transmitter 1 would result in a smaller region since, for a fixed $P_2$, $x \mapsto \log\!\left(\frac{1 + x(1+\Lambda+P_2)^{-1}}{1 + x h_1(1 + h_2 P_2)^{-1}}\right)$ is either negative when $(1+\Lambda+P_2)^{-1} \le h_1(1 + h_2 P_2)^{-1}$, or non-decreasing when $(1+\Lambda+P_2)^{-1} > h_1(1 + h_2 P_2)^{-1}$. By exchanging the role of the transmitters, we have the same observation for $\mathcal{R}_2^{MAC}$.
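The rate constraints of $\mathcal{R}_{1,2}^{MAC}(P_1,P_2)$ in Equation (11) are easy to evaluate numerically; the sketch below does so for illustrative (assumed) parameter values, in bits per channel use.

```python
import numpy as np

def mac_wt_ja_region(P1, P2, h1, h2, Lam):
    """Individual and sum-rate bounds of R_{1,2}^MAC(P1, P2) in Theorem 2 (bits/use)."""
    def c(num, den):  # [0.5*log2((1+num)/(1+den))]^+
        return max(0.0, 0.5 * np.log2((1.0 + num) / (1.0 + den)))
    R1_max   = c(P1 / (1.0 + Lam), P1 * h1 / (1.0 + h2 * P2))
    R2_max   = c(P2 / (1.0 + Lam), P2 * h2 / (1.0 + h1 * P1))
    Rsum_max = c((P1 + P2) / (1.0 + Lam), P1 * h1 + P2 * h2)
    return R1_max, R2_max, Rsum_max

# Illustrative (assumed) parameters; the sum-rate bound can be smaller than
# R1_max + R2_max, which is what makes corner points and time-sharing non-trivial.
print(mac_wt_ja_region(P1=4.0, P2=3.0, h1=0.2, h2=0.25, Lam=1.0))
```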

4.2. Discussion of Rate-Splitting

Rate-splitting [5] can be adapted to the Gaussian MAC-WT-JA to avoid time-sharing; however, the entire region in Equation (11) cannot be achieved, as splitting the power of one user precludes reliable communication. Assuming that
$$I(X_1X_2;Y) - I(X_1X_2;Z) \ge \max\big[I(X_1;Y|X_2) - I(X_1;Z),\ I(X_2;Y|X_1) - I(X_2;Z)\big],$$
then one can split the power of Transmitter 1 into $(P_1 - \delta)$ and $\delta$, where $\delta \in [0, P_1]$, and define the following functions from $[0, P_1]$ to $\mathbb{R}$:
$$R_U : \delta \mapsto \frac{1}{2}\log\!\left(\frac{1 + (P_1-\delta)(1+\Lambda+\delta+P_2)^{-1}}{1 + h_1(P_1-\delta)}\right),$$
$$R_V : \delta \mapsto \frac{1}{2}\log\!\left(\frac{1 + \delta(1+\Lambda)^{-1}}{1 + h_1\delta\big(1 + h_1(P_1-\delta) + h_2P_2\big)^{-1}}\right),$$
$$R_2 : \delta \mapsto \frac{1}{2}\log\!\left(\frac{1 + P_2(1+\Lambda+\delta)^{-1}}{1 + h_2P_2\big(1 + h_1(P_1-\delta)\big)^{-1}}\right).$$
Lemma 1.
For any $\delta \in [0, P_1]$, we have $(R_U + R_V + R_2)(\delta) = I(X_1X_2;Y) - I(X_1X_2;Z)$. Moreover, for any point $(x_0, y_0)$ in
$$\mathcal{D}(P_1, P_2) \triangleq \left\{(R_1, R_2) \in \mathcal{R}_{1,2}^{MAC}(P_1, P_2) : R_1 + R_2 = \left[\frac{1}{2}\log\!\left(\frac{1 + (P_1+P_2)(1+\Lambda)^{-1}}{1 + P_1h_1 + P_2h_2}\right)\right]^+\right\},$$
there exists $\delta_0 \in [0, P_1]$ such that $x_0 = (R_U + R_V)(\delta_0)$ and $y_0 = R_2(\delta_0)$.
Proof. 
Define
$$Y \triangleq U + V + X_2 + N_Y,$$
$$Z \triangleq \sqrt{h_1}\,(U + V) + \sqrt{h_2}\,X_2 + N_Z,$$
where $V$, $U$, $X_2$, $N_Y$, $N_Z$ are independent zero-mean Gaussian random variables with variances $\delta \in [0, P_1]$, $P_1 - \delta$, $P_2$, $(1+\Lambda)$, and $1$, respectively. Additionally, define
$$R_U(\delta) \triangleq I(U;Y) - I(U;Z|VX_2) = \frac{1}{2}\log\!\left(\frac{1 + (P_1-\delta)(1+\Lambda+\delta+P_2)^{-1}}{1 + h_1(P_1-\delta)}\right),$$
$$R_V(\delta) \triangleq I(V;Y|UX_2) - I(V;Z) = \frac{1}{2}\log\!\left(\frac{1 + \delta(1+\Lambda)^{-1}}{1 + h_1\delta\big(1 + h_1(P_1-\delta) + h_2P_2\big)^{-1}}\right),$$
$$R_2(\delta) \triangleq I(X_2;Y|U) - I(X_2;Z|V) = \frac{1}{2}\log\!\left(\frac{1 + P_2(1+\Lambda+\delta)^{-1}}{1 + h_2P_2\big(1 + h_1(P_1-\delta)\big)^{-1}}\right).$$
By the chain rule, we have, for any $\delta \in [0, P_1]$, $(R_U + R_V + R_2)(\delta) = I(X_1X_2;Y) - I(X_1X_2;Z)$. Finally, since $(R_U + R_V)(0) = I(X_1;Y) - I(X_1;Z|X_2)$ and $(R_U + R_V)(P_1) = I(X_1;Y|X_2) - I(X_1;Z)$, by continuity of $\delta \mapsto (R_U + R_V)(\delta)$, there exists $\delta_0 \in [0, P_1]$ such that $x_0 = (R_U + R_V)(\delta_0)$ and $y_0 = R_2(\delta_0)$ for any point $(x_0, y_0)$ in $\mathcal{D}(P_1, P_2)$. □
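The identity in Lemma 1 can be checked numerically; the following sketch evaluates $R_U$, $R_V$, and $R_2$ over a grid of $\delta$ for illustrative (assumed) parameters and verifies that their sum equals $I(X_1X_2;Y) - I(X_1X_2;Z)$ up to floating-point error.

```python
import numpy as np

# Illustrative (assumed) parameters.
P1, P2, h1, h2, Lam = 4.0, 3.0, 0.2, 0.25, 1.0

def R_U(d):  # rate of the virtual user carrying power P1 - d
    return 0.5*np.log2((1 + (P1-d)/(1+Lam+d+P2)) / (1 + h1*(P1-d)))

def R_V(d):  # rate of the virtual user carrying power d
    return 0.5*np.log2((1 + d/(1+Lam)) / (1 + h1*d/(1 + h1*(P1-d) + h2*P2)))

def R_2(d):  # rate of Transmitter 2
    return 0.5*np.log2((1 + P2/(1+Lam+d)) / (1 + h2*P2/(1 + h1*(P1-d))))

sum_rate = 0.5*np.log2((1 + (P1+P2)/(1+Lam)) / (1 + h1*P1 + h2*P2))
for d in np.linspace(0.0, P1, 5):
    assert abs(R_U(d) + R_V(d) + R_2(d) - sum_rate) < 1e-12
print("Lemma 1 identity holds on the tested grid.")
```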
As remarked in [29], a potential issue is that $R_U(\delta_0)$ or $R_V(\delta_0)$ can be negative in Lemma 1. We have the following achievability result.
Proposition 1.
Let $(x_0, y_0) \in \mathcal{D}(P_1, P_2)$ and $\delta_0$ be as in Lemma 1. Then, $(x_0, y_0)$ can be achieved without time-sharing if $R_U(\delta_0) \ge 0$, $R_V(\delta_0) \ge 0$, and $\min(\delta_0, P_1 - \delta_0) > \Lambda$. $(x_0, y_0) \in \mathcal{D}(P_1, P_2)$ can also be achieved without time-sharing if similar conditions (obtained by exchanging the role of the two transmitters) are satisfied when splitting the power of Transmitter 2.
Proof idea.
Transmitter 1 is split into two virtual users that transmit at rate $R_U(\delta)$ with power $P_1 - \delta$ and at rate $R_V(\delta)$ with power $\delta$. Encoding for User 2 and the two virtual users is similar to Case 1 in the proof of Theorem 2. The receiver adopts a minimum distance decoding rule as in Theorem 2 to first decode the message associated with the virtual user that transmits at rate $R_V$, then the message associated with User 2, and finally the message associated with the virtual user that transmits at rate $R_U$. A similar procedure can be performed if one decides to split the power of Transmitter 2. □
An illustration of Proposition 1 is depicted in Figure 4. Note that for some model parameters, the set of points achievable with Proposition 1 can be empty and, unfortunately, it does not seem easy to obtain a simple analytical characterization of the rate pairs achievable with Proposition 1.

4.3. Extension to More Than Two Transmitters

We extend our result for the MAC-WT-JA to the case of an arbitrary number of transmitters. The problem is more involved than the case of two transmitters and requires new time-sharing strategies that leverage extended polymatroid properties.
Consider the model of Section 2.4 with $L$ transmitters instead of two transmitters. We let $\mathcal{L} \triangleq \llbracket 1, L\rrbracket$ denote the set of transmitters. More specifically, the channel model of Section 2.4 becomes
$$Y^n \triangleq \sum_{l\in\mathcal{L}} X_l^n + S^n + N_1^n,$$
$$Z^n \triangleq \sum_{l\in\mathcal{L}} \sqrt{h_l}\, X_l^n + N_Z^n,$$
where $\sigma_1^2 = \sigma_Z^2 = 1$. We term this model the Gaussian MAC-WT-JA with parameters $((\Gamma_l)_{l\in\mathcal{L}}, (h_l)_{l\in\mathcal{L}}, \Lambda, \sigma_1^2, \sigma_Z^2)$. When the channel gains $(h_l)_{l\in\mathcal{L}}$ are all equal to $h \in [0,1[$, we refer to this model as the degraded MAC-WT-JA with parameters $((\Gamma_l)_{l\in\mathcal{L}}, h, \Lambda, \sigma_1^2, \sigma_Z^2)$. Given $\Lambda \in \mathbb{R}_+$ and $(\Gamma_l)_{l\in\mathcal{L}}$, we define $h_\Lambda \triangleq (1+\Lambda)^{-1}$, $\mathcal{L}(\Lambda) \triangleq \{l \in \mathcal{L} : \Gamma_l > \Lambda\}$, and $\mathcal{L}^c(\Lambda) \triangleq \mathcal{L} \setminus \mathcal{L}(\Lambda)$. The following achievability result is proven in Appendix B.
Theorem 4.
Assume that, for all $l \in \mathcal{L}(\Lambda)$, $h_\Lambda > h_l$. The following region is achievable for the Gaussian MAC-WT-JA with parameters $((\Gamma_l)_{l\in\mathcal{L}}, (h_l)_{l\in\mathcal{L}}, \Lambda, 1, 1)$:
$$\mathcal{R} = \bigcup_{(P_l)_{l\in\mathcal{L}}:\ \forall l\in\mathcal{L}(\Lambda),\ \Lambda < P_l \le \Gamma_l} \left\{(R_l)_{l\in\mathcal{L}} : \forall l\in\mathcal{L}^c(\Lambda),\ R_l = 0 \text{ and } \forall\mathcal{T}\subseteq\mathcal{L}(\Lambda),\ R_{\mathcal{T}} \le \left[\frac{1}{2}\log\!\left(\frac{1 + h_\Lambda P_{\mathcal{T}}}{1 + \big(\sum_{l\in\mathcal{T}} h_lP_l\big)\big(1 + \sum_{l\in\mathcal{T}^c} h_lP_l\big)^{-1}}\right)\right]^+\right\},$$
where, for any $(P_l)_{l\in\mathcal{L}}$ and $\mathcal{T}\subseteq\mathcal{L}$, we use the notation $P_{\mathcal{T}} \triangleq \sum_{l\in\mathcal{T}} P_l$.
We immediately obtain the following corollary.
Corollary 1.
The following region is achievable for the degraded Gaussian MAC-WT-JA with parameters $((\Gamma_l)_{l\in\mathcal{L}}, h, \Lambda, 1, 1)$:
$$\mathcal{R} = \bigcup_{(P_l)_{l\in\mathcal{L}}:\ \forall l\in\mathcal{L}(\Lambda),\ \Lambda < P_l \le \Gamma_l} \left\{(R_l)_{l\in\mathcal{L}} : \forall l\in\mathcal{L}^c(\Lambda),\ R_l = 0 \text{ and } \forall\mathcal{T}\subseteq\mathcal{L}(\Lambda),\ R_{\mathcal{T}} \le \left[\frac{1}{2}\log\!\left(\frac{1 + h_\Lambda P_{\mathcal{T}}}{1 + h P_{\mathcal{T}}\big(1 + h P_{\mathcal{T}^c}\big)^{-1}}\right)\right]^+\right\}.$$
Note that the achievability strategy used in the proof of Theorem 4 is different from the achievability strategy used in the proof of Theorem 2. While Theorem 4 gains in generality by considering an arbitrary number of users, it requires the assumption $\forall l \in \mathcal{L}(\Lambda),\ h_\Lambda > h_l$, which is not needed in Theorem 2. We also have the following optimality result, which is proven in Appendix C.
Theorem 5.
The maximal secrecy sum-rate $R_{\mathcal{L}} \triangleq \sum_{l\in\mathcal{L}} R_l$ achievable for the degraded Gaussian MAC-WT-JA with parameters $((\Gamma_l)_{l\in\mathcal{L}}, h, \Lambda, 1, 1)$ is
$$\left[\frac{1}{2}\log\!\left(\frac{1 + h_\Lambda \Gamma_{\mathcal{L}(\Lambda)}}{1 + h\, \Gamma_{\mathcal{L}(\Lambda)}}\right)\right]^+.$$
Note that the optimal secrecy sum-rate is positive if and only if $h_\Lambda > h$ and $\mathcal{L}(\Lambda) \neq \emptyset$.
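To illustrate the structure of the region in Corollary 1 and the sum-rate of Theorem 5, the sketch below enumerates the subset constraints for illustrative (assumed) parameters. The complement $\mathcal{T}^c$ is taken within $\mathcal{L}(\Lambda)$, which is an interpretation choice made only for this illustration.

```python
import numpy as np
from itertools import combinations

Gammas = [4.0, 3.0, 0.5]                      # per-transmitter power constraints (assumed)
h, Lam = 0.2, 1.0                             # degraded eavesdropper gain and jammer power (assumed)
h_Lam = 1.0 / (1.0 + Lam)
L_active = [l for l, G in enumerate(Gammas) if G > Lam]   # the set L(Lambda)

def rate_bound(T_powers, Tc_powers):
    """Bound on R_T in Corollary 1 for a subset T of active transmitters (bits/use)."""
    PT, PTc = sum(T_powers), sum(Tc_powers)
    return max(0.0, 0.5*np.log2((1 + h_Lam*PT) / (1 + h*PT/(1 + h*PTc))))

P = {l: Gammas[l] for l in L_active}          # transmit at full power
for size in range(1, len(L_active) + 1):
    for T in combinations(L_active, size):
        Tc = [l for l in L_active if l not in T]
        print(T, round(rate_bound([P[l] for l in T], [P[l] for l in Tc]), 4))

# Optimal secrecy sum-rate of Theorem 5 (take T = L(Lambda)).
PL = sum(P.values())
print("sum-rate:", round(max(0.0, 0.5*np.log2((1 + h_Lam*PL) / (1 + h*PL))), 4))
```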

5. The Gaussian Broadcast Wiretap Channel in the Presence of a Jammer-Aided Eavesdropper

Theorems 6 and 7 provide inner and outer bounds, respectively, for the Gaussian BC-WT-JA.
Theorem 6
(Achievability). We have the following inner bounds.
  • When $g_2\Lambda \ge \Gamma$ and $g_1\Lambda < \Gamma$,
$$\mathcal{R}_1^{BC} \triangleq \left\{(R_1, 0) : R_1 \le \left[\frac{1}{2}\log\!\left(\frac{1 + \Gamma(\sigma_1^2 + g_1\Lambda)^{-1}}{1 + h\Gamma}\right)\right]^+\right\}$$
    is achievable.
  • When $g_1\Lambda \ge \Gamma$ and $g_2\Lambda < \Gamma$,
$$\mathcal{R}_2^{BC} \triangleq \left\{(0, R_2) : R_2 \le \left[\frac{1}{2}\log\!\left(\frac{1 + \Gamma(\sigma_2^2 + g_2\Lambda)^{-1}}{1 + h\Gamma}\right)\right]^+\right\}$$
    is achievable.
  • When $\max(g_1\Lambda, g_2\Lambda) < \Gamma$, and, without loss of generality, $\sigma_1^2 + g_1\Lambda \le \sigma_2^2 + g_2\Lambda$ (exchange the role of the receivers if $\sigma_1^2 + g_1\Lambda > \sigma_2^2 + g_2\Lambda$),
$$\mathrm{Conv}\!\left(\mathcal{R}_1^{BC} \cup \mathcal{R}_2^{BC} \cup \bigcup_{\alpha\,\in\,]\max(g_1,g_2)\Lambda\Gamma^{-1},\,1]} \mathcal{R}^{BC}(\alpha)\right)$$
    is achievable, where we have defined, for $\alpha \in [0,1]$,
$$\mathcal{R}^{BC}(\alpha) \triangleq \left\{(R_1, R_2) : R_1 \le \left[\frac{1}{2}\log\!\left(\frac{1 + (1-\alpha)\Gamma(\sigma_1^2 + g_1\Lambda)^{-1}}{1 + h(1-\alpha)\Gamma}\right)\right]^+,\ R_2 \le \left[\frac{1}{2}\log\!\left(\frac{1 + \alpha\Gamma\big((1-\alpha)\Gamma + \sigma_2^2 + g_2\Lambda\big)^{-1}}{1 + h\alpha\Gamma\big(h(1-\alpha)\Gamma + 1\big)^{-1}}\right)\right]^+\right\}.$$
Note that $\mathcal{R}^{BC}(\alpha = 0) = \mathcal{R}_1^{BC}$ and $\mathcal{R}^{BC}(\alpha = 1) = \mathcal{R}_2^{BC}$. The achievability scheme of Theorem 6 is similar to the proof of Theorem 2 and [27] [Theorem 3].
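The boundary of the inner bound of Theorem 6 can be traced by sweeping $\alpha$; the sketch below does so for illustrative (assumed) parameters satisfying $\sigma_1^2 + g_1\Lambda \le \sigma_2^2 + g_2\Lambda$, restricting $\alpha$ to the interval over which the scheme is defined.

```python
import numpy as np

# Illustrative (assumed) parameters for the Gaussian BC-WT-JA.
Gamma, h, Lam = 6.0, 0.1, 1.0
g1, g2, s1, s2 = 0.5, 1.0, 1.0, 1.0          # here s1 + g1*Lam <= s2 + g2*Lam

def bc_inner_bound(alpha):
    """Rate pair bound R^BC(alpha) of Theorem 6, in bits per channel use."""
    R1 = max(0.0, 0.5*np.log2((1 + (1-alpha)*Gamma/(s1 + g1*Lam)) / (1 + h*(1-alpha)*Gamma)))
    R2 = max(0.0, 0.5*np.log2((1 + alpha*Gamma/((1-alpha)*Gamma + s2 + g2*Lam))
                              / (1 + h*alpha*Gamma/(1 + h*(1-alpha)*Gamma))))
    return R1, R2

# Achievability is limited to alpha in ]max(g1, g2)*Lam/Gamma, 1] (plus alpha = 0).
alpha_min = max(g1, g2) * Lam / Gamma
for alpha in np.linspace(alpha_min + 1e-3, 1.0, 5):
    print(round(alpha, 3), tuple(round(r, 4) for r in bc_inner_bound(alpha)))
```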
Theorem 7
(Partial converse).
  • If $\Gamma \le \min(g_1\Lambda, g_2\Lambda)$, then no positive rate is achievable;
  • When $g_2\Lambda \ge \Gamma$ and $g_1\Lambda < \Gamma$, the achievability region $\mathcal{R}_1^{BC}$ in Theorem 6 is tight;
  • When $g_1\Lambda \ge \Gamma$ and $g_2\Lambda < \Gamma$, the achievability region $\mathcal{R}_2^{BC}$ in Theorem 6 is tight;
  • When $\Gamma > \max(g_1\Lambda, g_2\Lambda)$, the following region is an outer bound:
$$\bigcup_{\alpha \in [0,1]} \mathcal{R}^{BC}(\alpha),$$
    where $\mathcal{R}^{BC}(\alpha)$ has been defined in Theorem 6.
The proof of Theorem 7 is similar to the proof of Theorem 3, using [26] in place of [30]. Observe that the gap between the inner and outer bounds of Theorems 6 and 7 when $\Gamma > \max(g_1\Lambda, g_2\Lambda)$ comes from the fact that our achievability scheme is limited to $\alpha \in\, ]\max(g_1,g_2)\Lambda\Gamma^{-1}, 1] \cup \{0\}$.

6. The Symmetric Interference Wiretap Channel in the Presence of a Jammer-Aided Eavesdropper

By the symmetry in Equations (6a) and (6b), a code for the Gaussian MAC-WT-JA allows Receiver $i \in \{1,2\}$ to securely recover the message of Transmitter $i$. Hence, from the achievability result for the Gaussian MAC-WT-JA, we have the following achievability result for the Gaussian SI-WT-JA.
Theorem 8
(Achievability). We consider three cases.
  • When $\Gamma_1 > \Lambda$ and $\Gamma_2 \le \Lambda$, $\mathcal{R}_1^{SI} \triangleq \mathcal{R}_1^{MAC}$ is achievable;
  • When $\Gamma_2 > \Lambda$ and $\Gamma_1 \le \Lambda$, $\mathcal{R}_2^{SI} \triangleq \mathcal{R}_2^{MAC}$ is achievable;
  • When $\min(\Gamma_1, \Gamma_2) > \Lambda$, $\mathcal{R}^{SI} \triangleq \mathcal{R}^{MAC}$ is achievable;
where $\mathcal{R}_1^{MAC}$, $\mathcal{R}_2^{MAC}$, and $\mathcal{R}^{MAC}$ are defined in Theorem 2.
Next, by the symmetry in Equations (6a) and (6b), any code for the Gaussian SI-WT-JA allows Receiver $i \in \{1,2\}$ to securely recover the messages from both transmitters, meaning that an outer bound for the Gaussian SI-WT-JA can be obtained by considering an outer bound for a Gaussian MAC-WT-JA. Hence, from the partial converse for the Gaussian MAC-WT-JA, we obtain the following partial converse for the Gaussian SI-WT-JA.
Theorem 9
(Partial converse).
  • If $\max(\Gamma_1, \Gamma_2) \le \Lambda$, then no positive rate is achievable.
  • When $\min(\Gamma_1, \Gamma_2) > \Lambda$ and $h_1 = h_2$, the sum-rate achieved in $\mathcal{R}^{SI}$ is tight by choosing $(P_1, P_2) = (\Gamma_1, \Gamma_2)$.

7. Proof of Theorem 2

To prove Theorem 2, it is sufficient to prove the achievability of the dominant face
$$\mathcal{D}(P_1, P_2) \triangleq \left\{(R_1, R_2) \in \mathcal{R}_{1,2}^{MAC}(P_1, P_2) : R_1 + R_2 = \left[\frac{1}{2}\log\!\left(\frac{1 + (P_1+P_2)(1+\Lambda)^{-1}}{1 + P_1h_1 + P_2h_2}\right)\right]^+\right\}$$
of $\mathcal{R}_{1,2}^{MAC}(P_1, P_2)$ in order to prove the achievability of $\mathcal{R}_{1,2}^{MAC}(P_1, P_2)$ when $\min(\Gamma_1, \Gamma_2) > \Lambda$ and where $(P_1, P_2) \in\, ]\Lambda, \Gamma_1] \times ]\Lambda, \Gamma_2]$. The achievability of $\mathcal{R}_i^{MAC}$, $i \in \{1,2\}$, when $\Gamma_i > \Lambda$ and $\Gamma_{3-i} \le \Lambda$, is obtained similarly by having Transmitter $\bar{i} \triangleq 3-i$ send Gaussian noise. Observe that the rate constraints in $\mathcal{R}_{1,2}^{MAC}(P_1, P_2)$ can be expressed as
$$R_1 \le \big[I(X_1;Y|X_2) - I(X_1;Z)\big]^+,$$
$$R_2 \le \big[I(X_2;Y|X_1) - I(X_2;Z)\big]^+,$$
$$R_1 + R_2 \le \big[I(X_1X_2;Y) - I(X_1X_2;Z)\big]^+,$$
where
$$Y \triangleq X_1 + X_2 + N_Y,$$
$$Z \triangleq \sqrt{h_1}\,X_1 + \sqrt{h_2}\,X_2 + N_Z,$$
and $X_1$, $X_2$, $N_Y$, $N_Z$ are independent zero-mean Gaussian random variables with variances $P_1$, $P_2$, $(1+\Lambda)$, and $1$, respectively. As remarked in [29], the set function $\mathcal{T} \mapsto I(X_{\mathcal{T}};Y|X_{\mathcal{T}^c}) - I(X_{\mathcal{T}};Z)$, where $\mathcal{T} \subseteq \{1,2\}$ and $X_{\mathcal{T}} \triangleq (X_t)_{t\in\mathcal{T}}$, is submodular but not necessarily non-decreasing. This is the main reason why achieving the corner points of $\mathcal{R}_{1,2}^{MAC}(P_1, P_2)$ by means of point-to-point codes via the successive decoding method [5] [Appendix C] does not easily translate to our setting. Before we provide our solution, we summarize our proof strategy in the three cases below. Figure 5 illustrates these cases.
Case 1: Assume
$$I(X_1X_2;Y) - I(X_1X_2;Z) \ge \max\big[I(X_1;Y|X_2) - I(X_1;Z),\ I(X_2;Y|X_1) - I(X_2;Z)\big].$$
The corner points of $\mathcal{R}_{1,2}^{MAC}$ are given by
$$\underline{C}_1 \triangleq \big(I(X_1;Y|X_2) - I(X_1;Z),\ I(X_2;Y) - I(X_2;Z|X_1)\big),$$
$$\underline{C}_2 \triangleq \big(I(X_1;Y) - I(X_1;Z|X_2),\ I(X_2;Y|X_1) - I(X_2;Z)\big).$$
We will achieve each corner point with point-to-point coding techniques and perform time-sharing to achieve $\mathcal{D}(P_1, P_2)$. Specifically, to achieve $\underline{C}_i$, $i \in \{1,2\}$, the encoders will be designed such that the decoder can first estimate the codeword sent by Transmitter $\bar{i} \triangleq 3-i$ (by considering the codewords of Transmitter $i$ as noise), which is in turn used to estimate the codeword sent by Transmitter $i$. This approach is similar to the successive decoding method [5] [Appendix C] for a multiple-access channel in the absence of a security constraint.
Case 2.a: Assume
$$I(X_1X_2;Y) - I(X_1X_2;Z) \ge I(X_1;Y|X_2) - I(X_1;Z),$$
$$I(X_1X_2;Y) - I(X_1X_2;Z) < I(X_2;Y|X_1) - I(X_2;Z).$$
Hence,
$$\underline{\tilde{C}}_2 \triangleq \big(I(X_1;Y) - I(X_1;Z|X_2),\ I(X_2;Y|X_1) - I(X_2;Z)\big)$$
has a negative x-coordinate, and the method of Case 1 cannot be directly applied here. Now, the corner points of $\mathcal{R}_{1,2}^{MAC}$ are
$$\underline{C}_1 \triangleq \big(I(X_1;Y|X_2) - I(X_1;Z),\ I(X_2;Y) - I(X_2;Z|X_1)\big),$$
$$\underline{C}_2 \triangleq \big(0,\ I(X_1X_2;Y) - I(X_1X_2;Z)\big).$$
The idea to achieve $\underline{C}_1$ is, as in Case 1, a successive decoding approach, by decomposing the sum rate $I(X_1X_2;Y) - I(X_1X_2;Z)$ as the sum of $I(X_2;Y) - I(X_2;Z|X_1)$, which represents the secret message rate for Transmitter 2, and $I(X_1;Y|X_2) - I(X_1;Z)$, which represents the secret message rate for Transmitter 1. However, $\underline{C}_2$ cannot be decomposed in a similar manner and thus cannot be achieved with the same method. Instead, to achieve any point in $\mathcal{D}(P_1, P_2)$, we rely on a strategy over several transmission blocks. First, in an appropriate number of transmission blocks, the transmitters can send secret messages with rates $\underline{C}_1$ as in Case 1. Part of the secret messages of Transmitter 1, with a rate equal to the absolute value of the x-coordinate of the point $\underline{\tilde{C}}_2$, is dedicated to the exchange of a secret key between Transmitter 1 and the legitimate receiver. Then, for the remaining transmission blocks, Transmitter 2 transmits a secret message with rate $I(X_1X_2;Y) - I(X_1X_2;Z)$, while Transmitter 1 uses the previously generated secret key to produce a jamming signal, which can be canceled out by the legitimate receiver but not by the eavesdropper, who does not know the secret key.
Case 2.b: Assume
$$I(X_1X_2;Y) - I(X_1X_2;Z) \ge I(X_2;Y|X_1) - I(X_2;Z),$$
$$I(X_1X_2;Y) - I(X_1X_2;Z) < I(X_1;Y|X_2) - I(X_1;Z).$$
This case is handled as Case 2.a by exchanging the role of the two transmitters.
Case 3: Assume
$$I(X_1X_2;Y) - I(X_1X_2;Z) < \min\big[I(X_1;Y|X_2) - I(X_1;Z),\ I(X_2;Y|X_1) - I(X_2;Z)\big].$$
Hence,
$$\underline{\tilde{C}}_1 \triangleq \big(I(X_1;Y|X_2) - I(X_1;Z),\ I(X_2;Y) - I(X_2;Z|X_1)\big),$$
$$\underline{\tilde{C}}_2 \triangleq \big(I(X_1;Y) - I(X_1;Z|X_2),\ I(X_2;Y|X_1) - I(X_2;Z)\big)$$
have a negative y-component and a negative x-component, respectively, and the strategy of Case 1 or Case 2 cannot be directly applied here. The corner points of the region are
$$\underline{C}_1 \triangleq \big(I(X_1X_2;Y) - I(X_1X_2;Z),\ 0\big),$$
$$\underline{C}_2 \triangleq \big(0,\ I(X_1X_2;Y) - I(X_1X_2;Z)\big).$$
These corner points do not seem to be easily achievable using the method for Case 1. We will first show that it is possible to achieve a point $\underline{R} \in \mathcal{D}(P_1, P_2)$, where $\underline{R}$ has strictly positive components. All the other points in $\mathcal{D}(P_1, P_2)$ will then be achieved as in Case 2 by making the substitutions $\underline{C}_1 \leftarrow \underline{R}$ and $\underline{C}_2 \leftarrow \underline{R}$ in Case 2.a and Case 2.b, respectively.
Note that it is sufficient to consider the case
$$\min\big[I(X_1;Y|X_2) - I(X_1;Z),\ I(X_2;Y|X_1) - I(X_2;Z)\big] \ge 0.$$
Indeed, for $i \in \{1,2\}$ and $\bar{i} \triangleq 3-i$, when $I(X_i;Y|X_{\bar{i}}) - I(X_i;Z) > 0$ and $I(X_{\bar{i}};Y|X_i) - I(X_{\bar{i}};Z) \le 0$, we have $R_{\bar{i}} = 0$ and $R_i \le I(X_1X_2;Y) - I(X_1X_2;Z) \le I(X_i;Y|X_{\bar{i}}) - I(X_i;Z|X_{\bar{i}}) = \frac{1}{2}\log\!\left(\frac{1 + P_i(1+\Lambda)^{-1}}{1 + P_ih_i}\right)$. These cases correspond to Theorem 1 and can be treated as in Case 1.
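The case distinction above depends only on the three mutual information differences. The following sketch classifies a given parameter tuple into Case 1, 2.a, 2.b, or 3 for illustrative (assumed) values.

```python
import numpy as np

def classify_case(P1, P2, h1, h2, Lam):
    """Decide which case of the proof of Theorem 2 applies (illustrative sketch)."""
    C = lambda x: 0.5 * np.log2(1.0 + x)
    d1 = C(P1/(1+Lam)) - C(h1*P1/(1+h2*P2))      # I(X1;Y|X2) - I(X1;Z)
    d2 = C(P2/(1+Lam)) - C(h2*P2/(1+h1*P1))      # I(X2;Y|X1) - I(X2;Z)
    ds = C((P1+P2)/(1+Lam)) - C(h1*P1 + h2*P2)   # I(X1X2;Y) - I(X1X2;Z)
    if ds >= max(d1, d2):
        return "Case 1"        # both corner points have non-negative components
    if d1 <= ds < d2:
        return "Case 2.a"      # C~_2 has a negative x-coordinate
    if d2 <= ds < d1:
        return "Case 2.b"      # C~_1 has a negative y-coordinate
    return "Case 3"            # both C~_1 and C~_2 have a negative component

# Assumed example values; secret-key-aided time-sharing is needed outside Case 1.
print(classify_case(P1=4.0, P2=3.0, h1=0.15, h2=0.25, Lam=1.0))
```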

7.1. Case 1

We show the achievability of $\underline{C}_2$. The achievability of $\underline{C}_1$ is obtained by exchanging the role of the transmitters.
Codebook construction: For Transmitter $i \in \{1,2\}$, construct a codebook $\mathcal{C}_n^{(i)}$ with $2^{nR_i} \times 2^{n\tilde{R}_i}$ codewords drawn independently and uniformly on the sphere of radius $\sqrt{nP_i}$ in $\mathbb{R}^n$. The codewords are labeled $x_i^n(m_i, \tilde{m}_i)$, where $m_i \in \llbracket 1, 2^{nR_i}\rrbracket$, $\tilde{m}_i \in \llbracket 1, 2^{n\tilde{R}_i}\rrbracket$. We define $\mathcal{C}_n \triangleq (\mathcal{C}_n^{(1)}, \mathcal{C}_n^{(2)})$ and choose, for $\delta > 0$,
$$R_1 \triangleq I(X_1;Y) - I(X_1;Z|X_2) - \delta,$$
$$\tilde{R}_1 \triangleq I(X_1;Z|X_2) - \delta,$$
$$R_2 \triangleq I(X_2;Y|X_1) - I(X_2;Z) - \delta,$$
$$\tilde{R}_2 \triangleq I(X_2;Z) - \delta.$$
Encoding at Transmitter $i \in \{1,2\}$: Given $(m_i, \tilde{m}_i)$, transmit $x_i^n(m_i, \tilde{m}_i)$. In the remainder of the paper, we use the term randomization sequence for $\tilde{m}_i$.
Decoding: The receiver performs minimum distance decoding to first estimate $(m_1, \tilde{m}_1)$ and then estimate $(m_2, \tilde{m}_2)$; i.e., given $y^n$, it determines $(\hat{m}_1, \hat{\tilde{m}}_1) \triangleq \phi_1(y^n, 0)$ and $(\hat{m}_2, \hat{\tilde{m}}_2) \triangleq \phi_2\big(y^n, x_1^n(\hat{m}_1, \hat{\tilde{m}}_1)\big)$, where for $i \in \{1,2\}$
$$\phi_i(y^n, x) \triangleq \begin{cases} (m_i, \tilde{m}_i) & \text{if } \|y^n - x - x_i^n(m_i, \tilde{m}_i)\|^2 < \|y^n - x - x_i^n(m_i', \tilde{m}_i')\|^2 \text{ for all } (m_i', \tilde{m}_i') \neq (m_i, \tilde{m}_i), \\ 0 & \text{if no such } (m_i, \tilde{m}_i) \in \llbracket 1, 2^{nR_i}\rrbracket \times \llbracket 1, 2^{n\tilde{R}_i}\rrbracket \text{ exists}. \end{cases}$$
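The codebook construction and the minimum distance decoding rule $\phi_i$ can be illustrated at toy sizes as follows. The blocklength, codebook size, and powers are assumptions chosen only to keep the example small, and the jamming sequence is one arbitrary admissible choice.

```python
import numpy as np

rng = np.random.default_rng(1)
n, num_cw, P1, Lam = 64, 32, 4.0, 1.0     # toy sizes (assumed)

def sphere_codebook(num_codewords, n, power):
    """Codewords drawn independently and uniformly on the sphere of radius sqrt(n*power)."""
    G = rng.normal(size=(num_codewords, n))
    return np.sqrt(n * power) * G / np.linalg.norm(G, axis=1, keepdims=True)

def min_distance_decode(y, codebook, known_interference=0.0):
    """Return the codeword index minimizing ||y - known_interference - codeword||."""
    residual = y - known_interference
    return int(np.argmin(np.linalg.norm(residual - codebook, axis=1)))

C1 = sphere_codebook(num_cw, n, P1)
sent = 7
s = sphere_codebook(1, n, Lam)[0]          # an arbitrary power-Lam jamming sequence
y = C1[sent] + s + rng.normal(size=n)      # received signal (other transmitter silent here)
print(min_distance_decode(y, C1) == sent)  # decoding succeeds with high probability
```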
Define $e(\mathcal{C}_n, s^n) \triangleq \mathbb{P}[(\hat{M}_1, \hat{M}_2) \neq (M_1, M_2) \,|\, \mathcal{C}_n]$. We now prove that $\mathbb{E}_{\mathcal{C}_n}[\sup_{s^n} e(\mathcal{C}_n, s^n)] + \frac{1}{n} I(M_1M_2; Z^n \,|\, \mathcal{C}_n) \xrightarrow[n\to\infty]{} 0$. We will thus conclude by Markov's inequality that there exists a sequence of codebook realizations $(\mathcal{C}_n)_{n\ge 1}$ such that both $\sup_{s^n} e(\mathcal{C}_n, s^n)$ and $\frac{1}{n} I(M_1M_2; Z^n \,|\, \mathcal{C}_n)$ can be made arbitrarily close to zero as $n \to \infty$.
Average probability of error: We have
$$e(\mathcal{C}_n, s^n) \le \mathbb{P}\big[(\hat{M}_1, \hat{M}_2) \neq (M_1, M_2) \text{ or } (\hat{\tilde{M}}_1, \hat{\tilde{M}}_2) \neq (\tilde{M}_1, \tilde{M}_2) \,\big|\, \mathcal{C}_n\big] \tag{41a}$$
$$\le e_1\big(\mathcal{C}_n, s^n, x_2^n(M_2, \tilde{M}_2)\big) + e_2(\mathcal{C}_n, s^n, 0), \tag{41b}$$
where for $i \in \{1,2\}$
$$e_i(\mathcal{C}_n, s^n, x) \triangleq \frac{1}{2^{nR_i}2^{n\tilde{R}_i}} \sum_{m_i}\sum_{\tilde{m}_i} \mathbb{P}\big[\|x_i^n(m_i, \tilde{m}_i) + s^n + x + N_Y^n - x_i^n(m_i', \tilde{m}_i')\|^2 \le \|s^n + x + N_Y^n\|^2 \text{ for some } (m_i', \tilde{m}_i') \neq (m_i, \tilde{m}_i)\big]. \tag{42}$$
Next, we have
$$\mathbb{E}_{\mathcal{C}_n}\big[e_1\big(\mathcal{C}_n, s^n, x_2^n(M_2, \tilde{M}_2)\big)\big] \le \mathbb{E}_{\mathcal{C}_n}\big[e_1\big(\mathcal{C}_n, s^n, x_2^n(M_2, \tilde{M}_2)\big) \,\big|\, \mathcal{C}_n^{(1)} \in \mathcal{C}_1^*\big] + \mathbb{P}\big[\mathcal{C}_n^{(1)} \notin \mathcal{C}_1^*\big] \tag{43a}$$
$$\xrightarrow[n\to\infty]{} 0, \tag{43b}$$
where, in Equation (43a), $\mathcal{C}_1^*$ represents all the sets of unit-norm vectors scaled by $\sqrt{nP_1}$ that satisfy the two conditions of Lemma A1 (in Appendix A); Equation (43b) holds because $\mathbb{P}[\mathcal{C}_n^{(1)} \in \mathcal{C}_1^*] \xrightarrow[n\to\infty]{} 1$ by Lemma A1 and $\mathbb{E}_{\mathcal{C}_n}[e_1(\mathcal{C}_n, s^n, x_2^n(M_2, \tilde{M}_2)) \,|\, \mathcal{C}_n^{(1)} \in \mathcal{C}_1^*] \xrightarrow[n\to\infty]{} 0$ by Theorem A1 (in Appendix A), using that $R_1 + \tilde{R}_1 < I(X_1;Y) = \frac{1}{2}\log\!\big(1 + \frac{P_1}{1+\Lambda+P_2}\big)$ and interpreting the signal of Transmitter 2 as noise. Then,
$$\mathbb{E}_{\mathcal{C}_n}[e_2(\mathcal{C}_n, s^n, 0)] \le \mathbb{E}_{\mathcal{C}_n}\big[e_2(\mathcal{C}_n, s^n, 0) \,\big|\, \mathcal{C}_n^{(2)} \in \mathcal{C}_2^*\big] + \mathbb{P}\big[\mathcal{C}_n^{(2)} \notin \mathcal{C}_2^*\big] \tag{44a}$$
$$\xrightarrow[n\to\infty]{} 0, \tag{44b}$$
where, in Equation (44a), $\mathcal{C}_2^*$ represents all the sets of unit-norm vectors scaled by $\sqrt{nP_2}$ that satisfy the two conditions of Lemma A1; Equation (44b) holds because $\mathbb{P}[\mathcal{C}_n^{(2)} \in \mathcal{C}_2^*] \xrightarrow[n\to\infty]{} 1$ by Lemma A1 and $\mathbb{E}_{\mathcal{C}_n}[e_2(\mathcal{C}_n, s^n, 0) \,|\, \mathcal{C}_n^{(2)} \in \mathcal{C}_2^*] \xrightarrow[n\to\infty]{} 0$ by Theorem A1, using that $R_2 + \tilde{R}_2 < I(X_2;Y|X_1) = \frac{1}{2}\log\!\big(1 + \frac{P_2}{1+\Lambda}\big)$. Hence, by Equations (41b), (43b) and (44b), we have
$$\mathbb{E}_{\mathcal{C}_n}[e(\mathcal{C}_n, s^n)] \xrightarrow[n\to\infty]{} 0. \tag{45}$$
Equivocation: We first study the average error probability of decoding $(\tilde{m}_1, \tilde{m}_2)$ given $(z^n, m_1, m_2)$ with the following procedure. Given $(z^n, m_1, m_2)$, determine $\breve{m}_2 \triangleq \psi_2(z^n, 0)$ and $\breve{m}_1 \triangleq \psi_1\big(z^n, \sqrt{h_2}\,x_2^n(m_2, \breve{m}_2)\big)$, where
$$\psi_i(z^n, x) \triangleq \begin{cases} \tilde{m}_i & \text{if } \|z^n - x - \sqrt{h_i}\,x_i^n(m_i, \tilde{m}_i)\|^2 < \|z^n - x - \sqrt{h_i}\,x_i^n(m_i, \tilde{m}_i')\|^2 \text{ for all } \tilde{m}_i' \neq \tilde{m}_i, \\ 0 & \text{if no such } \tilde{m}_i \in \llbracket 1, 2^{n\tilde{R}_i}\rrbracket \text{ exists}. \end{cases} \tag{46}$$
We define $\tilde{e}(\mathcal{C}_n) \triangleq \mathbb{P}[(\breve{M}_1, \breve{M}_2) \neq (\tilde{M}_1, \tilde{M}_2) \,|\, \mathcal{C}_n]$ and, for $i \in \{1,2\}$,
$$\tilde{e}_i(\mathcal{C}_n, x) \triangleq \frac{1}{2^{n\tilde{R}_i}} \sum_{\tilde{m}_i} \mathbb{P}\big[\|\sqrt{h_i}\,x_i^n(m_i, \tilde{m}_i) + x + N_Z^n - \sqrt{h_i}\,x_i^n(m_i, \tilde{m}_i')\|^2 \le \|x + N_Z^n\|^2 \text{ for some } \tilde{m}_i' \neq \tilde{m}_i\big]. \tag{47}$$
Then, with the same notation as in Equations (43) and (44), we have
$$\mathbb{E}_{\mathcal{C}_n}[\tilde{e}(\mathcal{C}_n)] \le \mathbb{E}_{\mathcal{C}_n}[\tilde{e}_1(\mathcal{C}_n, 0)] + \mathbb{E}_{\mathcal{C}_n}\big[\tilde{e}_2\big(\mathcal{C}_n, \sqrt{h_1}\,x_1^n(M_1, \tilde{M}_1)\big)\big] \le \mathbb{E}_{\mathcal{C}_n}\big[\tilde{e}_1(\mathcal{C}_n, 0) \,\big|\, \mathcal{C}_n^{(1)} \in \mathcal{C}_1^*\big] + \mathbb{P}\big[\mathcal{C}_n^{(1)} \notin \mathcal{C}_1^*\big] \tag{48a}$$
$$\qquad + \mathbb{E}_{\mathcal{C}_n}\big[\tilde{e}_2\big(\mathcal{C}_n, \sqrt{h_1}\,x_1^n(M_1, \tilde{M}_1)\big) \,\big|\, \mathcal{C}_n^{(2)} \in \mathcal{C}_2^*\big] + \mathbb{P}\big[\mathcal{C}_n^{(2)} \notin \mathcal{C}_2^*\big] \tag{48b}$$
$$\xrightarrow[n\to\infty]{} 0, \tag{48c}$$
where Equation (48c) holds because $\mathbb{P}[\mathcal{C}_n^{(1)} \in \mathcal{C}_1^*] \xrightarrow[n\to\infty]{} 1$ and $\mathbb{P}[\mathcal{C}_n^{(2)} \in \mathcal{C}_2^*] \xrightarrow[n\to\infty]{} 1$ by Lemma A1, $\mathbb{E}_{\mathcal{C}_n}[\tilde{e}_1(\mathcal{C}_n, 0) \,|\, \mathcal{C}_n^{(1)} \in \mathcal{C}_1^*] \xrightarrow[n\to\infty]{} 0$ by Theorem A1 using that $\tilde{R}_1 < I(X_1;Z|X_2) = \frac{1}{2}\log(1 + h_1P_1)$, and $\mathbb{E}_{\mathcal{C}_n}[\tilde{e}_2(\mathcal{C}_n, \sqrt{h_1}\,x_1^n(M_1, \tilde{M}_1)) \,|\, \mathcal{C}_n^{(2)} \in \mathcal{C}_2^*] \xrightarrow[n\to\infty]{} 0$ by Theorem A1 using that $\tilde{R}_2 < I(X_2;Z) = \frac{1}{2}\log\!\big(1 + \frac{h_2P_2}{1 + h_1P_1}\big)$ and interpreting the signal of Transmitter 1 as noise.
Define $M \triangleq (M_1, M_2)$ and $\tilde{M} \triangleq (\tilde{M}_1, \tilde{M}_2)$. Let the superscript $T$ denote the transpose operation and define $\mathbf{X} \triangleq [\sqrt{h_1}(X_1^n)^T\ \sqrt{h_2}(X_2^n)^T]^T \in \mathbb{R}^{2n\times 1}$, such that
$$Z^n = G\mathbf{X} + N_Z^n, \tag{49}$$
with $G \triangleq [I_n, I_n] \in \mathbb{R}^{n\times 2n}$ and $I_n$ the identity matrix of dimension $n$. Let $K_{\mathbf{X}}$ denote the covariance matrix of $\mathbf{X}$. Note that, by independence between $X_1^n$ and $X_2^n$, we have $K_{\mathbf{X}} = \begin{bmatrix} K_{\sqrt{h_1}X_1^n} & 0_n \\ 0_n & K_{\sqrt{h_2}X_2^n} \end{bmatrix}$, where $0_n \triangleq 0 \times I_n$ and $K_{\sqrt{h_i}X_i^n}$ is the covariance matrix of $\sqrt{h_i}X_i^n$, $i \in \{1,2\}$. Then, for $i \in \{1,2\}$, since $X_i^n$ is chosen uniformly at random over a sphere of radius $\sqrt{nP_i}$, the off-diagonal elements of $K_{\sqrt{h_i}X_i^n}$ are all equal to $0$ by symmetry, and the diagonal elements are all equal (also by symmetry) and sum to $nh_iP_i$. Hence, $K_{\sqrt{h_i}X_i^n} = h_iP_iI_n$, $i \in \{1,2\}$, and
$$K_{\mathbf{X}} = \begin{bmatrix} h_1P_1I_n & 0_n \\ 0_n & h_2P_2I_n \end{bmatrix}. \tag{50}$$
Then, we have
$$\begin{aligned}
I(M; Z^n \,|\, \mathcal{C}_n) &= I(M\tilde{M}; Z^n \,|\, \mathcal{C}_n) - I(\tilde{M}; Z^n \,|\, M\mathcal{C}_n) && \text{(51a)}\\
&= I(M\tilde{M}; Z^n \,|\, \mathcal{C}_n) - H(\tilde{M} \,|\, \mathcal{C}_n) + H(\tilde{M} \,|\, Z^n M\mathcal{C}_n) && \text{(51b)}\\
&\le I(\mathbf{X}; Z^n \,|\, \mathcal{C}_n) - H(\tilde{M} \,|\, \mathcal{C}_n) + H(\tilde{M} \,|\, Z^n M\mathcal{C}_n) && \text{(51c)}\\
&\le I(\mathbf{X}; Z^n) - H(\tilde{M} \,|\, \mathcal{C}_n) + H(\tilde{M} \,|\, Z^n M\mathcal{C}_n) && \text{(51d)}\\
&= h(Z^n) - h(N_Z^n) - H(\tilde{M} \,|\, \mathcal{C}_n) + H(\tilde{M} \,|\, Z^n M\mathcal{C}_n) && \text{(51e)}\\
&\le \tfrac{1}{2}\log\big|GK_{\mathbf{X}}G^T + I_n\big| - H(\tilde{M} \,|\, \mathcal{C}_n) + H(\tilde{M} \,|\, Z^n M\mathcal{C}_n) && \text{(51f)}\\
&= \tfrac{n}{2}\log(1 + h_1P_1 + h_2P_2) - H(\tilde{M} \,|\, \mathcal{C}_n) + H(\tilde{M} \,|\, Z^n M\mathcal{C}_n) && \text{(51g)}\\
&= nI(X_1X_2;Z) - H(\tilde{M} \,|\, \mathcal{C}_n) + H(\tilde{M} \,|\, Z^n M\mathcal{C}_n) && \text{(51h)}\\
&\le nI(X_1X_2;Z) - n\big(I(X_1X_2;Z) - 2\delta\big) + O\big(n\,\mathbb{E}_{\mathcal{C}_n}[\tilde{e}(\mathcal{C}_n)]\big) && \text{(51i)}\\
&= 2\delta n + o(n), && \text{(51j)}
\end{aligned}$$
where Equation (51b) holds by independence between $M$ and $\tilde{M}$; Equation (51c) holds because $(M, \tilde{M}) - (\mathbf{X}, \mathcal{C}_n) - Z^n$ forms a Markov chain; Equation (51d) holds because $\mathcal{C}_n - \mathbf{X} - Z^n$ forms a Markov chain; Equation (51f) holds because $h(N_Z^n) = \frac{1}{2}\log((2\pi e)^n)$ and because $h(Z^n) \le \frac{1}{2}\log\big((2\pi e)^n |GK_{\mathbf{X}}G^T + I_n|\big)$ by Equation (49) and the maximal differential entropy lemma (e.g., [31] Eq. (2.6)); Equation (51g) holds by Equation (50); in Equation (51i), we used the definition of $\tilde{R}_1 + \tilde{R}_2$ and the uniformity of $\tilde{M}$ to obtain the second term, and Fano's inequality to obtain the third term; and Equation (51j) holds by Equation (48c).
Note that the idea of considering a fictitious decoder at the eavesdropper to use Fano’s inequality in Equation (51i) is a standard technique that already appeared in [32].

7.2. Case 2

We only consider Case 2.a; Case 2.b is handled by exchanging the role of the transmitters. Let $\underline{R} \triangleq (R_1, R_2) \in \mathcal{D}(P_1, P_2)$. There exists $\alpha \in [0,1[$ such that $\underline{R} = (1-\alpha)\underline{C}_1 + \alpha\underline{\tilde{C}}_2$. The corner point $\underline{C}_1$ is achievable by Case 1; however, recall that, since the first component of $\underline{\tilde{C}}_2$ is negative, it cannot be achieved as in Case 1, and one cannot perform time-sharing between $\underline{C}_1$ and $\underline{\tilde{C}}_2$ to achieve $\underline{R}$. Instead, we achieve $\underline{R}$ as follows. We define $k, k' \in \mathbb{N}$ such that $k'/k = (1-\alpha)^{-1} - 1 + \epsilon$, with $\epsilon > 0$; this is possible by the density of $\mathbb{Q}$ in $\mathbb{R}$. We realize a first transmission $T_1$, as in Case 1, of a pair of confidential messages of length $nk\,\underline{C}_1$. Part of these confidential messages is dedicated to the exchange of a secret key of length $nk'\big(I(X_1;Z|X_2) - I(X_1;Y)\big) > 0$ between Transmitter 1 and the receiver, which is possible because $(1-\alpha)\underline{C}_1 + \alpha\underline{\tilde{C}}_2 = \underline{R}$ has positive components. We then realize a second transmission $T_2$ of a pair of confidential messages of length $nk'\big(0,\ I(X_2;Y|X_1) - I(X_2;Z)\big)$, assisted with the secret key that is shared between Transmitter 1 and the receiver. Hence, the overall transmission rate of confidential messages is $\frac{k}{k+k'}\underline{C}_1 + \frac{k'}{k+k'}\underline{\tilde{C}}_2$, which is arbitrarily close to $\underline{R}$ by choosing a sufficiently small $\epsilon$. We now explain how transmission $T_2$ is performed. We repeat $k'$ times the following coding scheme.
Codebook construction: Perform the same codebook construction as in Case 1 for Transmitter 2. For Transmitter 1, construct a codebook with $2^{n\breve{R}_1} \times 2^{n\mathring{R}_1}$ codewords drawn independently and uniformly on the sphere of radius $\sqrt{nP_1}$ in $\mathbb{R}^n$. The codewords are labeled $x_1^n(\breve{m}_1, \mathring{m}_1)$, where $\breve{m}_1 \in \llbracket 1, 2^{n\breve{R}_1}\rrbracket$, $\mathring{m}_1 \in \llbracket 1, 2^{n\mathring{R}_1}\rrbracket$. We define the rates $\breve{R}_1 \triangleq I(X_1;Y) - \delta$, $\mathring{R}_1 \triangleq I(X_1;Z|X_2) - I(X_1;Y) - \delta$, and $\tilde{R}_1 \triangleq \breve{R}_1 + \mathring{R}_1 = I(X_1;Z|X_2) - 2\delta$.
Encoding at the transmitters: Encoding for Transmitter 2 is as in Case 1. Given $(\breve{m}_1, \mathring{m}_1)$, Transmitter 1 forms $x_1^n(\breve{m}_1, \mathring{m}_1)$, where $\mathring{m}_1$ is seen as a secret key known at the receiver that has been shared through transmission $T_1$ described above. In the following, we define $\tilde{m}_1 \triangleq (\breve{m}_1, \mathring{m}_1)$.
Decoding and average probability of error: As in Case 1, using minimum distance decoding, one can show that, on average over the codebooks, the receiver can reconstruct $x_1^n(\breve{m}_1, \mathring{m}_1)$ with a vanishing average probability of error because $\mathring{m}_1$ is known at the receiver and because $\breve{R}_1 < I(X_1;Y)$. The receiver can then reconstruct $x_2^n$ as in Case 1.
Equivocation: The equivocation computation for transmission $T_2$ is as in Case 1 by remarking that it is possible, on average over the codebooks, to reconstruct with vanishing average probability of error first $x_2^n$ given $(z^n, m_2)$ and then $x_1^n$ given $(z^n, x_2^n)$, using that $\tilde{R}_1 < I(X_1;Z|X_2)$.
Finally, to conclude that $\underline{R}$ is achievable, we need to show that the secrecy constraint is satisfied for the joint transmissions $T_1$ and $T_2$. We use the superscript $(T_i)$ to denote random variables associated with transmission $T_i$, $i \in \{1,2\}$. Define $M^{(T_1)} \triangleq \big(M_1^{(T_1)} \setminus \mathring{M}_1^{(T_1)}, M_2^{(T_1)}\big)$, the confidential messages sent during transmission $T_1$ excluding $\mathring{M}_1^{(T_1)}$, defined as all the confidential messages sent during transmission $T_1$ and used during transmission $T_2$. We define $M^{(T_2)} \triangleq M_2^{(T_2)}$ as the confidential messages sent during transmission $T_2$. We define $\tilde{M}^{(T_i)} \triangleq \big(\tilde{M}_1^{(T_i)}, \tilde{M}_2^{(T_i)}\big)$ as the randomization sequences used by both transmitters in transmission $T_i$, $i \in \{1,2\}$. We also define $X^{(T_i)}$ as all the channel inputs from both transmitters in transmission $T_i$, $i \in \{1,2\}$, and $Z^{(T_i)}$ as all the channel outputs observed by the eavesdropper in transmission $T_i$, $i \in \{1,2\}$. Finally, we define $M^{(T_1,T_2)} \triangleq \big(M^{(T_1)}, M^{(T_2)}\big)$, $\tilde{M}^{(T_1,T_2)} \triangleq \big(\tilde{M}^{(T_1)}, \tilde{M}^{(T_2)}\big)$, $Z^{(T_1,T_2)} \triangleq \big(Z^{(T_1)}, Z^{(T_2)}\big)$, $X^{(T_1,T_2)} \triangleq \big(X^{(T_1)}, X^{(T_2)}\big)$, and $\mathcal{C}_n^{(T_1,T_2)} \triangleq \big(\mathcal{C}_n^{(T_1)}, \mathcal{C}_n^{(T_2)}\big)$. We have
$$\begin{aligned}
&I\big(M^{(T_1,T_2)}; Z^{(T_1,T_2)} \,\big|\, \mathcal{C}_n^{(T_1,T_2)}\big)\\
&\quad= I\big(M^{(T_1,T_2)}\tilde{M}^{(T_1,T_2)}; Z^{(T_1,T_2)} \,\big|\, \mathcal{C}_n^{(T_1,T_2)}\big) - I\big(\tilde{M}^{(T_1,T_2)}; Z^{(T_1,T_2)} \,\big|\, M^{(T_1,T_2)}\mathcal{C}_n^{(T_1,T_2)}\big) && \text{(52a)}\\
&\quad= I\big(M^{(T_1,T_2)}\tilde{M}^{(T_1,T_2)}; Z^{(T_1,T_2)} \,\big|\, \mathcal{C}_n^{(T_1,T_2)}\big) - H\big(\tilde{M}^{(T_1,T_2)} \,\big|\, \mathcal{C}_n^{(T_1,T_2)}\big) + H\big(\tilde{M}^{(T_1,T_2)} \,\big|\, Z^{(T_1,T_2)}M^{(T_1,T_2)}\mathcal{C}_n^{(T_1,T_2)}\big) && \text{(52b)}\\
&\quad\le I\big(X^{(T_1,T_2)}; Z^{(T_1,T_2)} \,\big|\, \mathcal{C}_n^{(T_1,T_2)}\big) - H\big(\tilde{M}^{(T_1,T_2)} \,\big|\, \mathcal{C}_n^{(T_1,T_2)}\big) + H\big(\tilde{M}^{(T_1,T_2)} \,\big|\, Z^{(T_1,T_2)}M^{(T_1,T_2)}\mathcal{C}_n^{(T_1,T_2)}\big) && \text{(52c)}\\
&\quad\le I\big(X^{(T_1,T_2)}; Z^{(T_1,T_2)}\big) - H\big(\tilde{M}^{(T_1,T_2)} \,\big|\, \mathcal{C}_n^{(T_1,T_2)}\big) + H\big(\tilde{M}^{(T_1,T_2)} \,\big|\, Z^{(T_1,T_2)}M^{(T_1,T_2)}\mathcal{C}_n^{(T_1,T_2)}\big) && \text{(52d)}\\
&\quad\le n(k+k')\,I(X_1X_2;Z) - H\big(\tilde{M}^{(T_1,T_2)} \,\big|\, \mathcal{C}_n^{(T_1,T_2)}\big) + H\big(\tilde{M}^{(T_1,T_2)} \,\big|\, Z^{(T_1,T_2)}M^{(T_1,T_2)}\mathcal{C}_n^{(T_1,T_2)}\big) && \text{(52e)}\\
&\quad\le 3n\delta(k+k') + H\big(\tilde{M}^{(T_1,T_2)} \,\big|\, Z^{(T_1,T_2)}M^{(T_1,T_2)}\mathcal{C}_n^{(T_1,T_2)}\big) && \text{(52f)}\\
&\quad\le 3n\delta(k+k') + O\Big(n\,\mathbb{E}_{\mathcal{C}_n^{(T_1,T_2)}}\big[\tilde{e}\big(\mathcal{C}_n^{(T_1,T_2)}\big)\big]\Big), && \text{(52g)}
\end{aligned}$$
where Equation (52b) holds because we defined $M^{(T_1,T_2)}$ such that $M^{(T_1,T_2)}$ is independent of $\tilde{M}^{(T_1,T_2)}$; Equation (52c) holds because $\big(M^{(T_1,T_2)}, \tilde{M}^{(T_1,T_2)}\big) - \big(\mathcal{C}_n^{(T_1,T_2)}, X^{(T_1,T_2)}\big) - Z^{(T_1,T_2)}$ forms a Markov chain; Equation (52d) holds because $\mathcal{C}_n^{(T_1,T_2)} - X^{(T_1,T_2)} - Z^{(T_1,T_2)}$ forms a Markov chain; Equation (52e) holds similarly to Equation (51h); Equation (52f) holds because, by definition, $\tilde{R}_1 + \tilde{R}_2 \ge I(X_1X_2;Z) - 3\delta$; and Equation (52g) holds by Fano's inequality, with $\tilde{e}\big(\mathcal{C}_n^{(T_1,T_2)}\big)$ defined as the probability of error in reconstructing $\tilde{M}^{(T_1,T_2)}$ given $\big(Z^{(T_1,T_2)}, M^{(T_1,T_2)}\big)$ using minimum distance decoding as in Case 1. Then, define $\tilde{e}^{(1)}\big(\mathcal{C}_n^{(T_1,T_2)}\big)$ as the error probability in reconstructing $\tilde{M}^{(T_2)}$ from $\big(Z^{(T_2)}, M^{(T_2)}\big)$ using minimum distance decoding, and $\tilde{e}^{(2)}\big(\mathcal{C}_n^{(T_1,T_2)}\big)$ as the error probability in reconstructing $\tilde{M}^{(T_1)}$ from $\big(Z^{(T_1)}, M^{(T_1)}, \tilde{M}^{(T_2)}\big)$ using minimum distance decoding. As in the analysis of Case 1, and by observing that $\mathring{M}_1^{(T_1)}$ is included in $\tilde{M}^{(T_2)}$, we have
$$\mathbb{E}_{\mathcal{C}_n^{(T_1,T_2)}}\big[\tilde{e}\big(\mathcal{C}_n^{(T_1,T_2)}\big)\big] \le \mathbb{E}_{\mathcal{C}_n^{(T_1,T_2)}}\big[\tilde{e}^{(1)}\big(\mathcal{C}_n^{(T_1,T_2)}\big)\big] + \mathbb{E}_{\mathcal{C}_n^{(T_1,T_2)}}\big[\tilde{e}^{(2)}\big(\mathcal{C}_n^{(T_1,T_2)}\big)\big] \tag{53a}$$
$$\xrightarrow[n\to\infty]{} 0. \tag{53b}$$
We conclude from Equations (52g) and (53b) that
$$I\big(M^{(T_1,T_2)}; Z^{(T_1,T_2)} \,\big|\, \mathcal{C}_n^{(T_1,T_2)}\big) \le 3n\delta(k+k') + o(n). \tag{54}$$

7.3. Case 3

We have $I(X_1;Z|X_2) - I(X_1;Y) > 0$ and $I(X_2;Z|X_1) - I(X_2;Y) > 0$, as depicted in Figure 5. Assume $I(X_1X_2;Y) - I(X_1X_2;Z) > 0$; otherwise, $\mathcal{R}_{1,2}^{MAC}(P_1, P_2) = \{(0,0)\}$. We will use the following lemma.
Lemma 2.
Define $h_\Lambda \triangleq (1+\Lambda)^{-1}$. We have
  • $I(X_1;Z|X_2) - I(X_1;Y) \le I(X_1;Y|X_2) - I(X_1;Z)$ or $I(X_2;Z|X_1) - I(X_2;Y) \le I(X_2;Y|X_1) - I(X_2;Z)$.
  • $h_1 < h_\Lambda$ or $h_2 < h_\Lambda$.
  • Assume $I(X_1;Z|X_2) - I(X_1;Y) \le I(X_1;Y|X_2) - I(X_1;Z)$. There exist $m, m' \in \mathbb{N}^*$ such that
$$m\big(I(X_1;Y|X_2) - I(X_1;Z)\big) \ge m'\big(I(X_1;Z|X_2) - I(X_1;Y)\big),$$
$$m'\big(I(X_2;Y|X_1) - I(X_2;Z)\big) > m\big(I(X_2;Z|X_1) - I(X_2;Y)\big).$$
Proof. 
(i) Assume that
$$I(X_1;Z|X_2) - I(X_1;Y) > I(X_1;Y|X_2) - I(X_1;Z),$$
$$I(X_2;Z|X_1) - I(X_2;Y) > I(X_2;Y|X_1) - I(X_2;Z).$$
Then,
$$I(X_1;Z|X_2) - I(X_1;Y) + I(X_2;Z|X_1) - I(X_2;Y) > I(X_1;Y|X_2) - I(X_1;Z) + I(X_2;Y|X_1) - I(X_2;Z),$$
which contradicts the fact that $I(X_1;Z|X_2) - I(X_1;Y) < I(X_2;Y|X_1) - I(X_2;Z)$ and $I(X_2;Z|X_1) - I(X_2;Y) < I(X_1;Y|X_2) - I(X_1;Z)$.
(ii) By contradiction, if $h_1 \ge h_\Lambda$ and $h_2 \ge h_\Lambda$, then $I(X_1X_2;Y) - I(X_1X_2;Z) \le 0$.
(iii) Choose $m \in \mathbb{N}^*$ such that
$$I(X_1;Z|X_2) - I(X_1;Y) \le m\big(I(X_1X_2;Y) - I(X_1X_2;Z)\big).$$
Then, there exist $m' \in \mathbb{N}^*$ and $r \in [0, I(X_1;Z|X_2) - I(X_1;Y)[$ such that
$$m\big(I(X_1;Y|X_2) - I(X_1;Z)\big) = m'\big(I(X_1;Z|X_2) - I(X_1;Y)\big) + r. \tag{59}$$
Then, we have
$$\begin{aligned}
m'\big(I(X_2;Y|X_1) - I(X_2;Z)\big) &= m'\big(I(X_1;Z|X_2) - I(X_1;Y)\big) + m'\big(I(X_1X_2;Y) - I(X_1X_2;Z)\big) && \text{(60a)}\\
&= m\big(I(X_1;Y|X_2) - I(X_1;Z)\big) + m'\big(I(X_1X_2;Y) - I(X_1X_2;Z)\big) - r && \text{(60b)}\\
&= m\big(I(X_2;Z|X_1) - I(X_2;Y)\big) + (m+m')\big(I(X_1X_2;Y) - I(X_1X_2;Z)\big) - r && \text{(60c)}\\
&> m\big(I(X_2;Z|X_1) - I(X_2;Y)\big) + m'\big(I(X_1X_2;Y) - I(X_1X_2;Z)\big) && \text{(60d)}\\
&> m\big(I(X_2;Z|X_1) - I(X_2;Y)\big), && \text{(60e)}
\end{aligned}$$
where Equation (60b) holds by Equation (59), and Equation (60d) holds because $r < I(X_1;Z|X_2) - I(X_1;Y) \le m\big(I(X_1X_2;Y) - I(X_1X_2;Z)\big)$. □
By (i) in Lemma 2, assume without loss of generality that $I(X_1;Z|X_2) - I(X_1;Y) \le I(X_1;Y|X_2) - I(X_1;Z)$, by exchanging the role of the transmitters if necessary. We let $m$, $m'$ be as in (iii) of Lemma 2. $\mathcal{D}(P_1, P_2)$ is achieved in four steps.
Step 1. During a first transmission $T_0$, Transmitter 2 transmits a confidential message of length $nm\big(I(X_2;Z|X_1) - I(X_2;Y)\big)$ to the receiver. This is possible with a point-to-point wiretap code, as in Case 1, when Transmitter 1 remains silent and when $h_\Lambda > h_2$. If, on the other hand, $h_\Lambda \le h_2$, then by (ii) in Lemma 2, $h_\Lambda > h_1$, and Transmitter 2 can transmit a confidential message of length $nm\big(I(X_2;Z|X_1) - I(X_2;Y)\big)$ as follows. Transmitter 1 transmits a confidential message of length $nk\big(I(X_1;Z|X_2) - I(X_1;Y)\big)$, where $k \in \mathbb{N}^*$ is such that $nk\big(I(X_2;Y|X_1) - I(X_2;Z)\big) \ge nm\big(I(X_2;Z|X_1) - I(X_2;Y)\big)$. Using this secret key shared by Transmitter 1 and the receiver, Transmitter 2 can transmit a confidential message of length $nk\big(I(X_2;Y|X_1) - I(X_2;Z)\big)$ as in Case 2. Note that Step 1 is operated in a fixed number of blocks of length $n$.
Step 2. As in Case 2, the transmitters achieve a transmission $T_1$ of confidential messages of length $\big(nm(I(X_1;Y|X_2) - I(X_1;Z)),\ 0\big)$ by using the secret key exchanged during $T_0$ between Transmitter 2 and the receiver. Then, as in Case 2, and because $m\big(I(X_1;Y|X_2) - I(X_1;Z)\big) - m'\big(I(X_1;Z|X_2) - I(X_1;Y)\big) \ge 0$ by (iii) in Lemma 2, the transmitters achieve a transmission $T_2$ of confidential messages of length $\big(0,\ nm'(I(X_2;Y|X_1) - I(X_2;Z))\big)$ using a secret key of length $nm'\big(I(X_1;Z|X_2) - I(X_1;Y)\big)$ exchanged between Transmitter 1 and the receiver during $T_1$. Hence, after $T_1$ and $T_2$, the transmitters have achieved the transmission of confidential messages of length $\big(nm(I(X_1;Y|X_2) - I(X_1;Z)) - nm'(I(X_1;Z|X_2) - I(X_1;Y)),\ nm'(I(X_2;Y|X_1) - I(X_2;Z))\big)$.
Step 3. The transmitters repeat $T_1$ and $T_2$ $t$ times, where $t$ is arbitrary, since $m'\big(I(X_2;Y|X_1) - I(X_2;Z)\big) - m\big(I(X_2;Z|X_1) - I(X_2;Y)\big) > 0$ by (iii) in Lemma 2. After these $t$ repetitions, the rate pair achieved is arbitrarily close to
$$\underline{R} = \frac{1}{m+m'}\Big(m\big(I(X_1;Y|X_2) - I(X_1;Z)\big) - m'\big(I(X_1;Z|X_2) - I(X_1;Y)\big),\ m'\big(I(X_2;Y|X_1) - I(X_2;Z)\big) - m\big(I(X_2;Z|X_1) - I(X_2;Y)\big)\Big),$$
provided that $t$ is large enough, since Step 1 only requires a fixed number of transmission blocks. Observe that $\underline{R} \in \mathcal{D}(P_1, P_2)$.
Step 4. Any point of $\mathcal{D}(P_1, P_2)$ can then be achieved as in Case 2 by making the substitutions $\underline{C}_1 \leftarrow \underline{R}$ and $\underline{C}_2 \leftarrow \underline{R}$ in Case 2.a and Case 2.b, respectively.
The proof that secrecy holds over the joint transmissions is similar to Case 2 and thus omitted.
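The constructive choice of $(m, m')$ in the proof of Lemma 2 (iii), and the resulting point $\underline{R}$ of Step 3, can be computed numerically. The sketch below does so for illustrative (assumed) Case 3 parameters satisfying the without-loss-of-generality assumption above, and checks that $\underline{R}$ lies on the dominant face.

```python
import numpy as np

# Illustrative (assumed) Case 3 parameters.
P1, P2, h1, h2, Lam = 4.0, 4.0, 0.45, 0.45, 1.0
C = lambda x: 0.5 * np.log2(1.0 + x)
A  = C(P1/(1+Lam)) - C(h1*P1/(1+h2*P2))       # I(X1;Y|X2) - I(X1;Z)
B  = C(h1*P1) - C(P1/(1+Lam+P2))              # I(X1;Z|X2) - I(X1;Y)
A2 = C(P2/(1+Lam)) - C(h2*P2/(1+h1*P1))       # I(X2;Y|X1) - I(X2;Z)
B2 = C(h2*P2) - C(P2/(1+Lam+P1))              # I(X2;Z|X1) - I(X2;Y)
dsum = C((P1+P2)/(1+Lam)) - C(h1*P1 + h2*P2)  # I(X1X2;Y) - I(X1X2;Z)

assert B <= A and dsum > 0                    # WLOG assumption and non-trivial region
m = int(np.ceil(B / dsum))                    # so that B <= m * dsum
m_prime = int(np.floor(m * A / B))            # then m*A = m'*B + r with 0 <= r < B
R_bar = ((m*A - m_prime*B) / (m + m_prime), (m_prime*A2 - m*B2) / (m + m_prime))
print(m, m_prime, R_bar)
print(R_bar[0] + R_bar[1], dsum)              # the two values agree: R_bar is on D(P1, P2)
```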

8. Proof of Theorem 3

We first show that determining a converse for our model reduces to determining a converse for a similar model when the jammer is inactive, i.e., when Λ = 0 .
Lemma 3.
Let $\mathcal{O} \triangleq \{(R_1, R_2) : R_1 \le B_1,\ R_2 \le B_2,\ R_1 + R_2 \le B_{1,2}\}$ be an outer bound, i.e., a set that contains all possibly achievable rate pairs, for the Gaussian MAC-WT-JA with parameters $(\Gamma_1, \Gamma_2, h_1, h_2, 0, \sigma_Y^2 + \Lambda, \sigma_Z^2)$. Then,
$$\left\{(R_1, R_2) : R_1 \le \begin{cases} B_1 & \text{if } \Gamma_1 > \Lambda \\ 0 & \text{if } \Gamma_1 \le \Lambda \end{cases},\quad R_2 \le \begin{cases} B_2 & \text{if } \Gamma_2 > \Lambda \\ 0 & \text{if } \Gamma_2 \le \Lambda \end{cases},\quad R_1 + R_2 \le B_{1,2}\right\}$$
is an outer bound for the Gaussian MAC-WT-JA with parameters $(\Gamma_1, \Gamma_2, h_1, h_2, \Lambda, \sigma_Y^2, \sigma_Z^2)$.
Proof. 
Consider any encoders and decoder for the Gaussian MAC-WT-JA with parameters $(\Gamma_1, \Gamma_2, h_1, h_2, \Lambda, \sigma_Y^2, \sigma_Z^2)$ that achieve the rate pair $(R_1, R_2)$. Note that, by [24] [Theorem 2.3], for any $l \in \{1,2\}$ such that $\Gamma_l \le \Lambda$, we must have $R_l = 0$, since an outer bound for the model in [24] is also an outer bound for the Gaussian MAC-WT-JA, which has the additional security constraint (2b). Then, to derive an outer bound, it is sufficient to consider a specific jamming strategy and study the best achievable rates for this jamming strategy, since the boundaries of the capacity region correspond to the best (from the jammer's point of view) jamming strategies, and any other jamming strategy can only enlarge the set of achievable rates. We assume that, in each transmission block, the jamming sequence is $S^n$ with components independent and identically distributed according to a zero-mean Gaussian random variable with variance $\Lambda' < \Lambda$. The average probability of error at the legitimate receiver is thus upper bounded by $\sup_{\mathbf{S}\in\mathcal{S}} \mathbb{P}[\hat{\mathbf{M}} \neq \mathbf{M}] + k\,\mathbb{P}[\|S^n\|^2 > n\Lambda] \xrightarrow[n\to\infty]{} 0$, where we used the notation of Definition 1 and the fact that $k\,\mathbb{P}[\|S^n\|^2 > n\Lambda] \xrightarrow[n\to\infty]{} 0$ since $\Lambda' < \Lambda$. Hence, since the secrecy constraint is independent of $\Lambda$, we obtain the reliability and secrecy constraints for a Gaussian MAC-WT-JA with parameters $(\Gamma_1, \Gamma_2, h_1, h_2, 0, \sigma_Y^2 + \Lambda', \sigma_Z^2)$, meaning that $(R_1, R_2) \in \mathcal{O}'$, where $\mathcal{O}'$ is an outer bound for the Gaussian MAC-WT-JA with parameters $(\Gamma_1, \Gamma_2, h_1, h_2, 0, \sigma_Y^2 + \Lambda', \sigma_Z^2)$. Finally, we conclude the proof by choosing $\Lambda'$ arbitrarily close to $\Lambda$. □
We now obtain Theorem 3 as follows. (i) holds from Lemma 3. (ii) holds from Lemma 3 and [33] [Theorem 6] by remarking that $x \mapsto \log\!\left(\frac{1 + x(1+\Lambda)^{-1}}{1 + xh}\right)$ is non-decreasing when $(1+\Lambda)^{-1} > h$ and negative when $(1+\Lambda)^{-1} \le h$.

9. Concluding Remarks

In this paper, we defined Gaussian wiretap channels in the presence of an eavesdropper aided by a jammer. The jamming signal is power-constrained and assumed to be oblivious of the legitimate users’ communication but is not restricted to be Gaussian. We studied several models in this framework, namely point-to-point, multiple-access, broadcast, and symmetric interference settings. We derived inner and outer bounds for these settings, and identified conditions for these bounds to coincide. We stress that no shared randomness among the legitimate users is required in our coding schemes.
Our achievability scheme for the Gaussian MAC-WT-JA relies on novel time-sharing strategies and an extension of successive decoding for multiple-access channels to multiple-access wiretap channels via secret-key exchanges. An open problem remains to provide a scheme that avoids time-sharing. Section 4.2 provides such a scheme for some rate pairs and channel parameters; however, it might not be possible to achieve the entire region of Theorem 2 by solely relying on point-to-point codes, in which case the design of multi-transmitter codes for arbitrarily varying multiple-access channels would be necessary.
Finally, beyond proving the existence of achievability schemes for our models, finding explicit coding schemes largely remains an open problem. We note that [34] investigates this problem for short communication blocklengths over point-to-point channels via a practical approach that relies on deep learning. Another open problem is to achieve the same regions as those derived in this paper under strong and semantic security guarantees.

Author Contributions

The ideas in this work were formed by the discussions between R.A.C. and A.Y. Both authors collaborated on the writing of the manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by NSF grants CIF-1319338, CNS-1314719, CCF-2105872, and CCF-2047913.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Supporting Results

Lemma A1
([1]). Let $\epsilon > 0$, $\eta \in\, ]8\epsilon, 1[$, $K > 2\epsilon$, $R \in [2\epsilon, K]$, and $N \triangleq e^{nR}$. Let $X_1^n, \ldots, X_N^n$ be independent random variables uniformly distributed on the unit sphere. With probability arbitrarily close to one as $n \to \infty$, we have
  • $\big|\{ j : \langle X_j^n, u^n \rangle \geq \alpha \}\big| \leq e^{n\left(\left[R + \frac{1}{2}\log(1-\alpha^2)\right]^+ + \epsilon\right)}$ for any unit vector $u^n \in \mathbb{R}^n$ and any $\alpha > 0$;
  • $\frac{1}{N} \big|\{ i : |\langle X_j^n, X_i^n \rangle| \geq \alpha,\ |\langle X_j^n, u^n \rangle| \geq \beta, \text{ for some } j \neq i \}\big| \leq e^{-n\epsilon}$ for any unit vector $u^n \in \mathbb{R}^n$ and any $\alpha, \beta \in [0,1]$ such that $\alpha \geq \eta$ and $\alpha^2 + \beta^2 > 1 + \eta - e^{-2R}$.
Theorem A1
([1,24]). Consider a channel whose output is defined as $Y^n = X^n + V^n + s^n$, where $X^n$ is the input such that $\|X^n\|^2 \leq n$, $V^n$ represents noise (to be defined next), and $s^n$ is a state unknown to the encoder and decoder such that $\|s^n\|^2 \leq n\Lambda$, $\Lambda < 1$. Let $\sigma, \delta > 0$. Consider a codebook $\mathcal{C}_n$ made of $N \triangleq e^{n\left(\frac{1}{2}\log\left(1 + (\Lambda + \sigma^2)^{-1}\right) - \delta\right)}$ codewords $(x_1^n, \ldots, x_N^n)$ that satisfy the two conditions of Lemma A1, and define the average probability of error $e(\mathcal{C}_n)$ of a minimum-distance decoder as $e(\mathcal{C}_n) \triangleq \frac{1}{N} \sum_{i=1}^{N} \mathbb{P}\big[\|x_i^n + s^n + V^n - x_j^n\|^2 \leq \|s^n + V^n\|^2, \text{ for some } j \neq i\big]$.
  • (From [1]). If $V^n$ is a vector with i.i.d. zero-mean Gaussian coordinates with variance $\sigma^2$, then $\lim_{n \to \infty} e(\mathcal{C}_n) = 0$.
  • (From [24]). If $V^n \triangleq W^n + U$, where $W^n$ is a vector with i.i.d. zero-mean Gaussian coordinates with variance $a^2$ and $U$ is independently distributed uniformly at random on a sphere with radius $\sqrt{nb^2}$ such that $a^2 + b^2 = \sigma^2$, then $\lim_{n \to \infty} e(\mathcal{C}_n) = 0$.
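The following toy Monte Carlo sketch (hypothetical small parameters; the codebook size is capped purely to keep the run fast, so it does not operate at the rate of Theorem A1) illustrates the setting of the theorem: codewords drawn uniformly on a sphere, an admissible state $s^n$ with $\|s^n\|^2 = n\Lambda$, Gaussian noise, and a minimum-distance decoder.

```python
# Toy Monte Carlo sketch (hypothetical, small-scale) of the setting of Theorem A1:
# random codewords uniform on the sphere of radius sqrt(n), a fixed admissible state
# with ||s^n||^2 = n*Lambda, Gaussian noise of variance sigma^2, minimum-distance decoding.
import numpy as np

rng = np.random.default_rng(1)
n, Lambda, sigma2, delta = 200, 0.3, 0.5, 0.05
R = 0.5 * np.log(1.0 + 1.0 / (Lambda + sigma2)) - delta   # nats per channel use
N = int(np.exp(n * R))                                     # nominal codebook size
N = min(N, 5000)                                           # cap, only to keep this illustration fast

def sphere_points(k, n, radius, rng):
    g = rng.normal(size=(k, n))
    return radius * g / np.linalg.norm(g, axis=1, keepdims=True)

C = sphere_points(N, n, np.sqrt(n), rng)                   # codebook
s = sphere_points(1, n, np.sqrt(n * Lambda), rng)[0]       # one admissible state

errors, trials = 0, 200
for _ in range(trials):
    i = int(rng.integers(N))
    y = C[i] + s + rng.normal(0.0, np.sqrt(sigma2), size=n)
    i_hat = int(np.argmin(np.linalg.norm(y - C, axis=1)))  # minimum-distance decoder
    errors += (i_hat != i)
print(f"codebook size {N}, empirical error rate {errors / trials:.3f}")
```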

Appendix B. Proof of Theorem 4

We first recall some definitions and results on polymatroids.
Definition A1
([35]). Let $f : 2^{\mathcal{M}} \to \mathbb{R}$. The polyhedron $\mathcal{P}(f) \triangleq \big\{ (R_i)_{i \in \mathcal{M}} \in \mathbb{R}^{\mathcal{M}} : R_{\mathcal{S}} \leq f(\mathcal{S}),\ \forall \mathcal{S} \subseteq \mathcal{M} \big\}$ associated with the function $f$ is an extended polymatroid if $f$ is submodular, i.e., $\forall \mathcal{S}, \mathcal{T} \subseteq \mathcal{M}$, $f(\mathcal{S} \cup \mathcal{T}) + f(\mathcal{S} \cap \mathcal{T}) \leq f(\mathcal{S}) + f(\mathcal{T})$.
Property A1
([29] [Property 1]). Define $g : 2^{\mathcal{L}(\Lambda)} \to \mathbb{R}$, $\mathcal{T} \mapsto I(X_{\mathcal{T}}; Y | X_{\mathcal{T}^c}) - I(X_{\mathcal{T}}; Z)$, where $Y \triangleq \sum_{l \in \mathcal{L}(\Lambda)} X_l + N_Y$, $Z \triangleq \sum_{l \in \mathcal{L}(\Lambda)} \sqrt{h_l}\, X_l + N_Z$, with $(X_l)_{l \in \mathcal{L}(\Lambda)}$, $N_Y$, $N_Z$ independent zero-mean Gaussian random variables with variances $(P_l)_{l \in \mathcal{L}(\Lambda)}$, $(1 + \Lambda)$, 1, respectively. Then,
$$\mathcal{C}(\Lambda) \triangleq \big\{ (R_l)_{l \in \mathcal{L}(\Lambda)} \in \mathbb{R}^{|\mathcal{L}(\Lambda)|} : \forall \mathcal{T} \subseteq \mathcal{L}(\Lambda),\ R_{\mathcal{T}} \leq g(\mathcal{T}) \big\} \quad \text{(A1)}$$
associated with $g$ is an extended polymatroid.
Property A2
([35]). Define the dominant face $\mathcal{D}(\Lambda)$ of $\mathcal{C}(\Lambda)$ as
$$\mathcal{D}(\Lambda) \triangleq \big\{ (R_l)_{l \in \mathcal{L}(\Lambda)} \in \mathcal{C}(\Lambda) : R_{\mathcal{L}(\Lambda)} = g(\mathcal{L}(\Lambda)) \big\}. \quad \text{(A2)}$$
For $\pi \in \mathrm{Sym}(|\mathcal{L}(\Lambda)|)$, where $\mathrm{Sym}(|\mathcal{L}(\Lambda)|)$ is the symmetric group on $\mathcal{L}(\Lambda)$, and for $i, j \in \mathcal{L}(\Lambda)$, define $\pi_{i:j} \triangleq \{ \pi(k) \}_{k \in \llbracket i, j \rrbracket}$. $\mathcal{D}(\Lambda)$ is the convex hull of the vertices $\mathcal{V} \triangleq \big\{ (C_{\pi}(i))_{i \in \llbracket 1, |\mathcal{L}(\Lambda)| \rrbracket} : \pi \in \mathrm{Sym}(|\mathcal{L}(\Lambda)|) \big\}$, where, for $\pi \in \mathrm{Sym}(|\mathcal{L}(\Lambda)|)$ and $i \in \llbracket 1, |\mathcal{L}(\Lambda)| \rrbracket$, $C_{\pi}(i) = g\big(\{\pi_{i:|\mathcal{L}(\Lambda)|}\}\big) - g\big(\{\pi_{i+1:|\mathcal{L}(\Lambda)|}\}\big)$.
Define $\mathcal{D}^+(\Lambda) \triangleq \mathcal{D}(\Lambda) \cap \mathbb{R}_+^{|\mathcal{L}(\Lambda)|}$. By Property A2, for any $\underline{R} \in \mathcal{D}^+(\Lambda)$ and any $\underline{V} = (V_l)_{l \in \mathcal{L}(\Lambda)} \in \mathcal{V}$, there exists $\alpha_{\underline{V}} \in [0,1]$ such that $\sum_{\underline{V} \in \mathcal{V}} \alpha_{\underline{V}} = 1$ and $\underline{R} = \sum_{\underline{V} \in \mathcal{V}} \alpha_{\underline{V}} \underline{V}$. As remarked in [29], $g$ is, in general, not non-decreasing; hence, some $\underline{V} \in \mathcal{V}$ might have negative components, and the successive decoding method of [5] [Appendix C] cannot be applied to the multiple-access wiretap channel. We show in the following how to overcome this issue. For $l \in \mathcal{L}(\Lambda)$, define $R_l^* \triangleq -\sum_{\underline{V} \in \mathcal{V}} \alpha_{\underline{V}} \mathbb{1}\{V_l < 0\} V_l$ and $\underline{R}^* \triangleq (R_l^*)_{l \in \mathcal{L}(\Lambda)}$. Our coding scheme operates in three steps, the idea of which is described below.
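To make this decomposition concrete, the following sketch (with hypothetical two-user powers and eavesdropper gains, and an equal-weight choice of the coefficients $\alpha_{\underline{V}}$ used only for illustration) computes $g$, lists the vertices $(C_{\pi}(i))$ of the dominant face, and forms $\underline{R}$ and $\underline{R}^*$; one vertex has a negative component, which is precisely the situation the three-step scheme below is designed to handle.

```python
# Illustrative sketch (hypothetical two-user parameters) of Property A2 and of R*:
# compute g(T) = I(X_T; Y | X_{T^c}) - I(X_T; Z) for Gaussian inputs, list the
# vertices C_pi(i) of the dominant face, and form R_l^* for an equal-weight
# convex combination of the vertices.
import itertools
import numpy as np

users = (1, 2)
P = {1: 2.0, 2: 1.5}          # hypothetical transmit powers
h = {1: 0.2, 2: 0.3}          # hypothetical eavesdropper power gains
Lam = 1.0                      # jamming power folded into the main-channel noise

def g(T):
    T, Tc = set(T), set(users) - set(T)
    iY = 0.5 * np.log(1 + sum(P[l] for l in T) / (1 + Lam))          # I(X_T; Y | X_{T^c})
    iZ = 0.5 * np.log((1 + sum(h[l] * P[l] for l in users))
                      / (1 + sum(h[l] * P[l] for l in Tc)))           # I(X_T; Z)
    return iY - iZ

vertices = []
for pi in itertools.permutations(users):
    v = {}
    for i in range(len(pi)):
        v[pi[i]] = g(pi[i:]) - g(pi[i + 1:])   # C_pi(i)
    vertices.append(v)
    print("pi =", pi, "vertex =", {l: round(v[l], 4) for l in users})

alpha = [1.0 / len(vertices)] * len(vertices)   # equal weights, for illustration only
R = {l: sum(a * v[l] for a, v in zip(alpha, vertices)) for l in users}
R_star = {l: -sum(a * min(v[l], 0.0) for a, v in zip(alpha, vertices)) for l in users}
print("R      =", {l: round(R[l], 4) for l in users})
print("R_star =", {l: round(R_star[l], 4) for l in users})
```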
Step 1. For $l \in \mathcal{L}(\Lambda)$, a secret message of length $nR_l^*$ is exchanged between Transmitter $l$ and the receiver.
Step 2. For all $\underline{V} \in \mathcal{V}$, secret messages of lengths $n\big(\alpha_{\underline{V}} \mathbb{1}\{V_l > 0\} V_l\big)_{l \in \mathcal{L}(\Lambda)}$ are exchanged between the transmitters and the receiver, provided that secret sequences of length $n\underline{R}^*$ are shared between the transmitters and the receiver, which is ensured by Step 1. The overall length of the secret communication is $n\big(\sum_{\underline{V} \in \mathcal{V}} \alpha_{\underline{V}} \mathbb{1}\{V_l > 0\} V_l\big)_{l \in \mathcal{L}(\Lambda)}$, i.e., $n(\underline{R} + \underline{R}^*)$.
Step 3. Repeat Step 2 $t$ times. It is possible to do so because secret sequences of length at least $n\underline{R}^*$ were exchanged between the transmitters and the receiver in Step 2. The overall rate of secret sequences exchanged between the transmitters and the receiver is thus $\underline{R}$, provided that $t$ is large enough, since Step 1 only requires the transmission of a finite number of blocks.
The coding schemes and their analyses to realize Steps 1 and 2 are described in Appendix B.1 and Appendix B.2, respectively. In the remainder of this section, $Y$ and $Z$ are defined as in Property A1, with $(X_l)_{l \in \mathcal{L}(\Lambda)}$ zero-mean Gaussian random variables with variances $(P_l)_{l \in \mathcal{L}(\Lambda)}$.

Appendix B.1. Proof of Step 1

The proof of Step 1 directly follows from the point-to-point setting, i.e., Theorem 1, applied to each $l \in \mathcal{L}(\Lambda)$, since we assumed $h_l < h_\Lambda$.

Appendix B.2. Proof of Step 2

We fix $\underline{V} \in \mathcal{V}$. The following procedure must be reiterated for each $\underline{V} \in \mathcal{V}$ by applying a permutation $\pi \in \mathrm{Sym}(|\mathcal{L}(\Lambda)|)$ to the labeling of the transmitters. For convenience, we relabel the transmitters from 1 to $|\mathcal{L}(\Lambda)|$ and redefine $\mathcal{L}(\Lambda)$ as $\llbracket 1, |\mathcal{L}(\Lambda)| \rrbracket$. We show how to exchange secret messages with rates $\big(\mathbb{1}\{V_l > 0\} V_l\big)_{l \in \mathcal{L}(\Lambda)}$ between the transmitters and the receiver when they have access to pre-shared secrets (obtained from Step 1) with rates $\big(-\mathbb{1}\{V_l < 0\} V_l\big)_{l \in \mathcal{L}(\Lambda)}$. Define $\mathcal{I} \triangleq \{ l \in \mathcal{L}(\Lambda) : V_l \leq 0 \}$ and $\mathcal{I}^c \triangleq \mathcal{L}(\Lambda) \setminus \mathcal{I}$. We also use the notation $X_{\mathcal{L}(\Lambda)} \triangleq (X_l)_{l \in \mathcal{L}(\Lambda)}$, $X_{\mathcal{L}(\Lambda)}^n \triangleq (X_l^n)_{l \in \mathcal{L}(\Lambda)}$, and, for $i, j \in \mathcal{L}(\Lambda)$, $X_{i:j} \triangleq (X_l)_{l \in \llbracket i, j \rrbracket}$.
Codebook construction: For Transmitter $i \in \mathcal{I}^c$, construct a codebook $\mathcal{C}_n^{(i)}$ with $2^{nR_i} \cdot 2^{n\tilde{R}_i}$ codewords drawn independently and uniformly on the sphere of radius $\sqrt{nP_i}$ in $\mathbb{R}^n$. The codewords are labeled $x_i^n(m_i, \tilde{m}_i)$, where $m_i \in \llbracket 1, 2^{nR_i} \rrbracket$, $\tilde{m}_i \in \llbracket 1, 2^{n\tilde{R}_i} \rrbracket$. We choose the rates as $R_i \triangleq I(X_i; Y | X_{1:i-1}) - I(X_i; Z | X_{i+1:|\mathcal{L}(\Lambda)|}) - \delta$ and $\tilde{R}_i \triangleq I(X_i; Z | X_{i+1:|\mathcal{L}(\Lambda)|}) - \delta$. For Transmitter $i \in \mathcal{I}$, construct a codebook $\mathcal{C}_n^{(i)}$ with $2^{n\breve{R}_i} \cdot 2^{n\mathring{R}_i}$ codewords drawn independently and uniformly on the sphere of radius $\sqrt{nP_i}$ in $\mathbb{R}^n$. The codewords are labeled $x_i^n(\breve{m}_i, \mathring{m}_i)$, where $\breve{m}_i \in \llbracket 1, 2^{n\breve{R}_i} \rrbracket$, $\mathring{m}_i \in \llbracket 1, 2^{n\mathring{R}_i} \rrbracket$. We define the rates $\breve{R}_i \triangleq I(X_i; Y | X_{1:i-1}) - \delta$, $\mathring{R}_i \triangleq I(X_i; Z | X_{i+1:|\mathcal{L}(\Lambda)|}) - I(X_i; Y | X_{1:i-1}) - \delta$, and $\tilde{R}_i \triangleq \breve{R}_i + \mathring{R}_i = I(X_i; Z | X_{i+1:|\mathcal{L}(\Lambda)|}) - 2\delta$. Define $\mathcal{C}_n \triangleq (\mathcal{C}_n^{(i)})_{i \in \mathcal{L}(\Lambda)}$.
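A minimal sketch of this codebook construction (hypothetical blocklength, power, and rates) is given below: Gaussian vectors are normalized to obtain codewords uniformly distributed on the sphere of radius $\sqrt{nP_i}$ and indexed by the pair $(m_i, \tilde{m}_i)$.

```python
# Minimal sketch (hypothetical sizes) of the codebook construction: codewords drawn
# independently and uniformly on the sphere of radius sqrt(n * P_i) in R^n,
# indexed by a message pair (m_i, mtilde_i).
import numpy as np

rng = np.random.default_rng(2)
n, P_i = 40, 2.0
R_i, Rt_i = 0.08, 0.10                       # hypothetical message / randomization rates (bits)
M, Mt = int(2 ** (n * R_i)), int(2 ** (n * Rt_i))

gauss = rng.normal(size=(M * Mt, n))         # Gaussian vectors ...
codebook = np.sqrt(n * P_i) * gauss / np.linalg.norm(gauss, axis=1, keepdims=True)  # ... projected to the sphere
codebook = codebook.reshape(M, Mt, n)        # indexed by (m_i, mtilde_i)

x = codebook[3, 7]                           # codeword x_i^n(m_i = 3, mtilde_i = 7)
print(M, Mt, round(float(np.linalg.norm(x)) ** 2 / n, 3))   # per-symbol power ~= P_i
```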
Encoding at the transmitters: For Transmitter $i \in \mathcal{I}^c$, given $(m_i, \tilde{m}_i)$, transmit $x_i^n(m_i, \tilde{m}_i)$. For Transmitter $i \in \mathcal{I}$, given $(\breve{m}_i, \mathring{m}_i)$, transmit $x_i^n(\breve{m}_i, \mathring{m}_i)$, where $\mathring{m}_i$ is assumed to be known at the receiver by the transmissions in Step 1. In the following, we define, for $i \in \mathcal{I}$, $\tilde{m}_i \triangleq (\breve{m}_i, \mathring{m}_i)$ and, by convention, $m_i \triangleq \emptyset$. We also define $m \triangleq (m_i)_{i \in \mathcal{L}(\Lambda)}$ and $\tilde{m} \triangleq (\tilde{m}_i)_{i \in \mathcal{L}(\Lambda)}$. In the following, we refer to $\tilde{m}$ as the randomization sequence.
Decoding: The receiver performs minimum-distance decoding, i.e., given $y^n$, determine, from $i = 1$ to $i = |\mathcal{L}(\Lambda)|$, $(\hat{m}_i, \hat{\tilde{m}}_i) \triangleq \phi_i\big(y^n, \sum_{j=1}^{i-1} x_j^n(\hat{m}_j, \hat{\tilde{m}}_j)\big)$, where
$$\phi_i : (y^n, x) \mapsto \begin{cases} (m_i, \tilde{m}_i) & \text{if } \|y^n - x - x_i^n(m_i, \tilde{m}_i)\|^2 < \|y^n - x - x_i^n(m_i', \tilde{m}_i')\|^2 \text{ for all } (m_i', \tilde{m}_i') \neq (m_i, \tilde{m}_i), \\ 0 & \text{if no such } (m_i, \tilde{m}_i) \in \llbracket 1, 2^{nR_i} \rrbracket \times \llbracket 1, 2^{n\tilde{R}_i} \rrbracket \text{ exists}. \end{cases}$$
Define $\hat{m} \triangleq (\hat{m}_i)_{i \in \mathcal{L}(\Lambda)}$ and $\hat{\tilde{m}} \triangleq (\hat{\tilde{m}}_i)_{i \in \mathcal{L}(\Lambda)}$. Let $e(\mathcal{C}_n, s^n) \triangleq \mathbb{P}[\hat{M} \neq M | \mathcal{C}_n]$. We now prove that, on average over $\mathcal{C}_n$, we have $\mathbb{E}_{\mathcal{C}_n}\big[\sup_{s^n} e(\mathcal{C}_n, s^n)\big] + \frac{1}{n} I(M; Z^n | \mathcal{C}_n) \xrightarrow[n \to \infty]{} 0$. We will thus conclude that there exists a sequence of realizations $(C_n)$ of $(\mathcal{C}_n)$ such that both $\sup_{s^n} e(C_n, s^n)$ and $\frac{1}{n} I(M; Z^n | C_n)$ can be made arbitrarily close to zero as $n \to \infty$.
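The following schematic sketch (a hypothetical two-transmitter example with small codebooks, making no claim about operating at the rates above) illustrates the successive minimum-distance decoding rule: decode Transmitter 1 treating Transmitter 2 as noise, cancel the decoded codeword, then decode Transmitter 2.

```python
# Schematic sketch (hypothetical two-transmitter example) of successive
# minimum-distance decoding: decode transmitter 1 treating transmitter 2 as noise,
# subtract its codeword, then decode transmitter 2.
import numpy as np

rng = np.random.default_rng(3)
n, P = 64, (2.0, 1.5)
sizes = (8, 8)                                  # hypothetical codebook sizes

def sphere_codebook(k, n, power, rng):
    gauss = rng.normal(size=(k, n))
    return np.sqrt(n * power) * gauss / np.linalg.norm(gauss, axis=1, keepdims=True)

books = [sphere_codebook(sizes[i], n, P[i], rng) for i in range(2)]
true_idx = (int(rng.integers(sizes[0])), int(rng.integers(sizes[1])))
y = books[0][true_idx[0]] + books[1][true_idx[1]] + rng.normal(0.0, 1.0, size=n)

residual, decoded = y, []
for i in range(2):                              # successive minimum-distance decoding
    i_hat = int(np.argmin(np.linalg.norm(residual - books[i], axis=1)))
    decoded.append(i_hat)
    residual = residual - books[i][i_hat]       # cancel the decoded codeword
print("true:", true_idx, "decoded:", tuple(decoded))
```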
Average probability of error: We have
$$e(\mathcal{C}_n, s^n) \leq \mathbb{P}\big[\hat{M} \neq M \text{ or } \hat{\tilde{M}} \neq \tilde{M} \,\big|\, \mathcal{C}_n\big] = \sum_{i \in \mathcal{L}(\Lambda)} e_i\Big(\mathcal{C}_n, s^n, \sum_{j=i+1}^{|\mathcal{L}(\Lambda)|} x_j^n(M_j, \tilde{M}_j)\Big),$$
where, for $i \in \mathcal{L}(\Lambda)$,
$$e_i(\mathcal{C}_n, s^n, x) \triangleq \frac{1}{2^{nR_i} 2^{n\tilde{R}_i}} \sum_{m_i} \sum_{\tilde{m}_i} \mathbb{P}\Big[\big\|x_i^n(m_i, \tilde{m}_i) + s^n + x + N_Y^n - x_i^n(m_i', \tilde{m}_i')\big\|^2 \leq \big\|s^n + x + N_Y^n\big\|^2 \text{ for some } (m_i', \tilde{m}_i') \neq (m_i, \tilde{m}_i)\Big].$$
Assume that the receiver has reconstructed $(m_j, \tilde{m}_j)_{j \in \llbracket 1, i \rrbracket}$ for some $i \in \mathcal{L}(\Lambda)$, and assume first that $i + 1 \in \mathcal{I}^c$. Using minimum-distance decoding, on average over the codebooks, we show that the receiver can reconstruct $x_{i+1}^n$. We have
$$\mathbb{E}_{\mathcal{C}_n}\Big[e_i\Big(\mathcal{C}_n, s^n, \sum_{j=i+1}^{|\mathcal{L}(\Lambda)|} x_j^n(M_j, \tilde{M}_j)\Big)\Big] \leq \mathbb{E}_{\mathcal{C}_n}\Big[e_i\Big(\mathcal{C}_n, s^n, \sum_{j=i+1}^{|\mathcal{L}(\Lambda)|} x_j^n(M_j, \tilde{M}_j)\Big) \,\Big|\, \mathcal{C}_n^{(i)} \in \mathcal{C}_i^*\Big] + \mathbb{P}\big[\mathcal{C}_n^{(i)} \notin \mathcal{C}_i^*\big] \quad \text{(A6a)}$$
$$\xrightarrow[n \to \infty]{} 0, \quad \text{(A6b)}$$
where, in Equation (A6a), $\mathcal{C}_i^*$ represents all the sets of unit-norm vectors scaled by $\sqrt{nP_i}$ that satisfy the two conditions of Lemma A1 (in Appendix A); Equation (A6b) holds because $\mathbb{P}[\mathcal{C}_n^{(i)} \in \mathcal{C}_i^*] \xrightarrow[n \to \infty]{} 1$ by Lemma A1, and because $\mathbb{E}_{\mathcal{C}_n}\big[e_i\big(\mathcal{C}_n, s^n, \sum_{j=i+1}^{|\mathcal{L}(\Lambda)|} x_j^n(M_j, \tilde{M}_j)\big) \,\big|\, \mathcal{C}_n^{(i)} \in \mathcal{C}_i^*\big] \xrightarrow[n \to \infty]{} 0$ by Theorem A1 (in Appendix A), using the definition of $R_i + \tilde{R}_i$ and interpreting the signal of the transmitters in $\llbracket i+1, |\mathcal{L}(\Lambda)| \rrbracket$ as noise.
Similarly, when $i + 1 \in \mathcal{I}$, using minimum-distance decoding, on average over the codebooks, the receiver can reconstruct $x_{i+1}^n(\breve{m}_{i+1}, \mathring{m}_{i+1})$ with a vanishing average probability of error, because $\mathring{m}_{i+1}$ is known at the receiver and by the definition of $\breve{R}_{i+1}$; hence,
$$\mathbb{E}_{\mathcal{C}_n}[e(\mathcal{C}_n, s^n)] \xrightarrow[n \to \infty]{} 0.$$
Equivocation: We first study the average error probability of decoding $\tilde{m}$ given $(z^n, m)$ with the following procedure. From $i = |\mathcal{L}(\Lambda)|$ to $i = 1$, given $(z^n, m)$, determine $\dot{\tilde{m}}_i \triangleq \psi_i\big(z^n, \sum_{j=i+1}^{|\mathcal{L}(\Lambda)|} \sqrt{h_j}\, x_j^n(m_j, \dot{\tilde{m}}_j)\big)$, where, for $i \in \mathcal{L}(\Lambda)$,
$$\psi_i : (z^n, x) \mapsto \begin{cases} \tilde{m}_i & \text{if } \|z^n - x - \sqrt{h_i}\, x_i^n(m_i, \tilde{m}_i)\|^2 < \|z^n - x - \sqrt{h_i}\, x_i^n(m_i, \tilde{m}_i')\|^2 \text{ for all } \tilde{m}_i' \neq \tilde{m}_i, \\ 0 & \text{if no such } \tilde{m}_i \in \llbracket 1, 2^{n\tilde{R}_i} \rrbracket \text{ exists}. \end{cases}$$
We define $\tilde{e}(\mathcal{C}_n) \triangleq \mathbb{P}[\dot{\tilde{M}} \neq \tilde{M} | \mathcal{C}_n]$. We have
$$\tilde{e}(\mathcal{C}_n) = \sum_{i \in \mathcal{L}(\Lambda)} \tilde{e}_i\Big(\mathcal{C}_n, \sum_{j=1}^{i-1} \sqrt{h_j}\, x_j^n(M_j, \tilde{M}_j)\Big),$$
where, for $i \in \mathcal{L}(\Lambda)$,
$$\tilde{e}_i(\mathcal{C}_n, x) \triangleq \frac{1}{2^{n\tilde{R}_i}} \sum_{\tilde{m}_i} \mathbb{P}\Big[\big\|\sqrt{h_i}\, x_i^n(m_i, \tilde{m}_i) + x + N_Z^n - \sqrt{h_i}\, x_i^n(m_i, \tilde{m}_i')\big\|^2 \leq \big\|x + N_Z^n\big\|^2 \text{ for some } \tilde{m}_i' \neq \tilde{m}_i\Big].$$
Similar to the justifications for obtaining Equation (A6b), $\mathbb{E}_{\mathcal{C}_n}\big[\tilde{e}_i\big(\mathcal{C}_n, \sum_{j=1}^{i-1} \sqrt{h_j}\, x_j^n(M_j, \tilde{M}_j)\big)\big]$ vanishes as $n \to \infty$ by interpreting the signal of the transmitters in $\llbracket 1, i-1 \rrbracket$ as noise and by using the definition of $\tilde{R}_i$. We thus obtain
$$\mathbb{E}_{\mathcal{C}_n}[\tilde{e}(\mathcal{C}_n)] \xrightarrow[n \to \infty]{} 0. \quad \text{(A11)}$$
Let the superscript $T$ denote the transpose operation and define $\mathbf{X} \triangleq \big[\sqrt{h_1}(X_1^n)^T\ \sqrt{h_2}(X_2^n)^T\ \cdots\ \sqrt{h_{|\mathcal{L}(\Lambda)|}}(X_{|\mathcal{L}(\Lambda)|}^n)^T\big]^T \in \mathbb{R}^{n|\mathcal{L}(\Lambda)| \times 1}$, such that
$$Z^n = \mathbf{G}\mathbf{X} + N_Z^n,$$
with $\mathbf{G} \triangleq [I_n, I_n, \ldots, I_n] \in \mathbb{R}^{n \times n|\mathcal{L}(\Lambda)|}$ and $I_n$ the identity matrix of dimension $n$. Let $\mathbf{K}_{\mathbf{X}}$ denote the covariance matrix of $\mathbf{X}$. Similar to Equation (50), we have
$$\mathbf{K}_{\mathbf{X}} = \mathrm{diag}\big(h_1 P_1 I_n, \ldots, h_{|\mathcal{L}(\Lambda)|} P_{|\mathcal{L}(\Lambda)|} I_n\big). \quad \text{(A13)}$$
Then, we have
$$I(M; Z^n | \mathcal{C}_n) \leq I(\mathbf{X}; Z^n) - H(\tilde{M} | \mathcal{C}_n) + H(\tilde{M} | Z^n M \mathcal{C}_n) \quad \text{(A14a)}$$
$$\leq \tfrac{1}{2} \log \big|\mathbf{G}\mathbf{K}_{\mathbf{X}}\mathbf{G}^T + I_n\big| - H(\tilde{M} | \mathcal{C}_n) + H(\tilde{M} | Z^n M \mathcal{C}_n) \quad \text{(A14b)}$$
$$= \tfrac{n}{2} \log \Big(1 + \sum_{l \in \mathcal{L}(\Lambda)} h_l P_l\Big) - H(\tilde{M} | \mathcal{C}_n) + H(\tilde{M} | Z^n M \mathcal{C}_n) \quad \text{(A14c)}$$
$$\leq n I(X_{\mathcal{L}(\Lambda)}; Z) - n\big(I(X_{\mathcal{L}(\Lambda)}; Z) - 2|\mathcal{L}(\Lambda)|\delta\big) + O\big(n\,\mathbb{E}_{\mathcal{C}_n}[\tilde{e}(\mathcal{C}_n)]\big) \quad \text{(A14d)}$$
$$= 2n|\mathcal{L}(\Lambda)|\delta + o(n), \quad \text{(A14e)}$$
where Equation (A14a) holds similar to Equation (51d); Equation (A14b) holds similar to Equation (51f); Equation (A14c) holds by Equation (A13); in Equation (A14d), we used the definition of $\sum_{i \in \mathcal{L}(\Lambda)} \tilde{R}_i$ and the uniformity of $\tilde{M}$ to obtain the second term, and Fano's inequality to obtain the third term; and Equation (A14e) holds by Equation (A11).
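The determinant identity used in Equation (A14c) can be checked numerically; the sketch below (with hypothetical values of $n$, $(h_l)$, and $(P_l)$) verifies that $\frac{1}{2}\log\big|\mathbf{G}\mathbf{K}_{\mathbf{X}}\mathbf{G}^T + I_n\big| = \frac{n}{2}\log\big(1 + \sum_l h_l P_l\big)$.

```python
# Numerical check (hypothetical values) of the determinant identity behind (A14c):
# with G = [I_n, ..., I_n] and K_X = diag(h_1 P_1 I_n, ..., h_L P_L I_n),
# (1/2) log|G K_X G^T + I_n| = (n/2) log(1 + sum_l h_l P_l).
import numpy as np

n = 8
h = np.array([0.2, 0.3, 0.1])
P = np.array([2.0, 1.5, 1.0])
L = len(h)
I_n = np.eye(n)
G = np.hstack([I_n] * L)                      # n x (n L)
K_X = np.kron(np.diag(h * P), I_n)            # block diagonal, blocks h_l P_l I_n
lhs = 0.5 * np.linalg.slogdet(G @ K_X @ G.T + I_n)[1]
rhs = 0.5 * n * np.log(1.0 + np.sum(h * P))
print(round(lhs, 6), round(rhs, 6))           # the two quantities agree
```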
The proof of joint secrecy for Step 1 and the repetitions of Step 2 is similar to the proof of Theorem 2.

Appendix C. Proof of Theorem 5

The proof that Equation (20) is an upper bound on the secrecy sum-rate is similar to the case L = 2 in Theorem 3.
Remark that, from the statement of Corollary 1, it is unclear whether the sum-rate of Theorem 5 is achievable. However, by inspecting the proof of Theorem 4, observe that we achieve a point in $\mathcal{D}^+(\Lambda) \triangleq \mathcal{D}(\Lambda) \cap \mathbb{R}_+^{|\mathcal{L}(\Lambda)|}$, where $\mathcal{D}(\Lambda)$ is defined in Equation (A2). Hence, the sum-rate of Theorem 5 is indeed achievable.

References

1. Csiszár, I.; Narayan, P. Capacity of the Gaussian arbitrarily varying channel. IEEE Trans. Inf. Theory 1991, 37, 18–26.
2. Leung-Yan-Cheong, S.; Hellman, M. The Gaussian wire-tap channel. IEEE Trans. Inf. Theory 1978, 24, 451–456.
3. Tekin, E.; Yener, A. The general Gaussian multiple-access and two-way wiretap channels: Achievable rates and cooperative jamming. IEEE Trans. Inf. Theory 2008, 54, 2735–2751.
4. Bagherikaram, G.; Motahari, A.S.; Khandani, A.K. Secure broadcasting: The secrecy rate region. In Proceedings of the 46th Annual Allerton Conference on Communication, Control, and Computing, Monticello, IL, USA, 23–26 September 2008; pp. 834–841.
5. Grant, A.; Rimoldi, B.; Urbanke, R.; Whiting, P. Rate-splitting multiple access for discrete memoryless channels. IEEE Trans. Inf. Theory 2001, 47, 873–890.
6. MolavianJazi, E.; Bloch, M.; Laneman, J. Arbitrary jamming can preclude secure communication. In Proceedings of the 47th Annual Allerton Conference on Communication, Control, and Computing, Monticello, IL, USA, 1–3 October 2009; pp. 1069–1075.
7. Bjelaković, I.; Boche, H.; Sommerfeld, J. Capacity results for arbitrarily varying wiretap channels. In Information Theory, Combinatorics, and Search Theory; Springer: Berlin/Heidelberg, Germany, 2013; pp. 123–144.
8. Nötzel, J.; Wiese, M.; Boche, H. The arbitrarily varying wiretap channel: Secret randomness, stability, and super-activation. IEEE Trans. Inf. Theory 2016, 62, 3504–3531.
9. Wiese, M.; Nötzel, J.; Boche, H. A channel under simultaneous jamming and eavesdropping attack—Correlated random coding capacities under strong secrecy criteria. IEEE Trans. Inf. Theory 2016, 62, 3844–3862.
10. Goldfeld, Z.; Cuff, P.; Permuter, H.H. Arbitrarily varying wiretap channels with type constrained states. IEEE Trans. Inf. Theory 2016, 62, 7216–7244.
11. Chou, R. Explicit wiretap channel codes via source coding, universal hashing, and distribution approximation, when the channels statistics are uncertain. IEEE Trans. Inf. Forensics Secur. 2022, 17.
12. Csiszár, I. Almost independence and secrecy capacity. Probl. Inf. Transm. 1996, 32, 40–47.
13. Yassaee, M.; Aref, M.; Gohari, A. Achievability proof via output statistics of random binning. IEEE Trans. Inf. Theory 2014, 60, 6760–6786.
14. Hayashi, M. General nonasymptotic and asymptotic formulas in channel resolvability and identification capacity and their application to the wiretap channel. IEEE Trans. Inf. Theory 2006, 52, 1562–1575.
15. Bloch, M.; Laneman, J.N. Strong secrecy from channel resolvability. IEEE Trans. Inf. Theory 2013, 59, 8077–8098.
16. He, X.; Yener, A. MIMO wiretap channels with unknown and varying eavesdropper channel states. IEEE Trans. Inf. Theory 2014, 60, 6844–6869.
17. Mukherjee, A.; Swindlehurst, A.L. Jamming games in the MIMO wiretap channel with an active eavesdropper. IEEE Trans. Signal Process. 2013, 61, 82–91.
18. Banawan, K.; Ulukus, S. Achievable secrecy rates in the multiple access wiretap channel with deviating users. In Proceedings of the IEEE International Symposium on Information Theory (ISIT), Barcelona, Spain, 10–15 July 2016; pp. 2814–2818.
19. Amariucai, G.T.; Wei, S. Half-duplex active eavesdropping in fast-fading channels: A block-Markov Wyner secrecy encoding scheme. IEEE Trans. Inf. Theory 2012, 58, 4660–4677.
20. Basciftci, Y.O.; Gungor, O.; Koksal, C.E.; Ozguner, F. On the secrecy capacity of block fading channels with a hybrid adversary. IEEE Trans. Inf. Theory 2015, 61, 1325–1343.
21. Zhang, Y.; Vatedka, S.; Jaggi, S.; Sarwate, A.D. Quadratically constrained myopic adversarial channels. IEEE Trans. Inf. Theory 2022, 68, 4901–4948.
22. Zhang, Y.; Vatedka, S.; Jaggi, S. Quadratically constrained two-way adversarial channels. In Proceedings of the IEEE International Symposium on Information Theory (ISIT), Los Angeles, CA, USA, 21–26 June 2020; pp. 1587–1592.
23. Li, T.; Dey, B.K.; Jaggi, S.; Langberg, M.; Sarwate, A.D. Quadratically constrained channels with causal adversaries. In Proceedings of the IEEE International Symposium on Information Theory (ISIT), Vail, CO, USA, 17–22 June 2018; pp. 621–625.
24. La, R.; Anantharam, V. A game-theoretic look at the Gaussian multiaccess channel. DIMACS Ser. Discret. Math. Theor. Comput. Sci. 2004, 66, 87–106.
25. Chou, R.A.; Yener, A. The degraded Gaussian multiple access wiretap channel with selfish transmitters: A coalitional game theory perspective. In Proceedings of the IEEE International Symposium on Information Theory (ISIT), Aachen, Germany, 25–30 June 2017; pp. 1703–1707.
26. Ekrem, E.; Ulukus, S. Secrecy capacity region of the Gaussian multi-receiver wiretap channel. In Proceedings of the IEEE International Symposium on Information Theory (ISIT), Seoul, Korea, 28 June–3 July 2009; pp. 2612–2616.
27. Sarwate, A.D.; Gastpar, M. Randomization bounds on Gaussian arbitrarily varying channels. In Proceedings of the IEEE International Symposium on Information Theory (ISIT), Seattle, WA, USA, 9–14 July 2006; pp. 2161–2165.
28. Sato, H. The capacity of the Gaussian interference channel under strong interference. IEEE Trans. Inf. Theory 1981, 27, 786–788.
29. Chou, R.; Yener, A. Polar coding for the multiple access wiretap channel via rate-splitting and cooperative jamming. IEEE Trans. Inf. Theory 2018, 64, 7903–7921.
30. Ekrem, E.; Ulukus, S. On the secrecy of multiple access wiretap channel. In Proceedings of the Annual Allerton Conference on Communication, Control, and Computing, Monticello, IL, USA, 23–26 September 2008; pp. 1014–1021.
31. El Gamal, A.; Kim, Y.H. Network Information Theory; Cambridge University Press: Cambridge, UK, 2011.
32. Wyner, A. The wire-tap channel. Bell Syst. Tech. J. 1975, 54, 1355–1387.
33. Tekin, E.; Yener, A. The Gaussian multiple access wire-tap channel. IEEE Trans. Inf. Theory 2008, 54, 5747–5755.
34. Rana, V.; Chou, R.A. Short blocklength wiretap channel codes via deep learning: Design and performance evaluation. arXiv 2022, arXiv:2206.03477.
35. Schrijver, A. Combinatorial Optimization: Polyhedra and Efficiency; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2003; Volume 24.
Figure 1. The Gaussian wiretap channel in the presence of a jammer-aided eavesdropper.
Figure 2. The Gaussian multiple-access wiretap channel in the presence of a jammer-aided eavesdropper.
Figure 3. The Gaussian broadcast wiretap channel in the presence of a jammer-aided eavesdropper.
Figure 4. The shaded area represents $\mathcal{R}_{1,2}^{\mathrm{MAC}}(P_1, P_2)$, where $(P_1, P_2, \Lambda, h_1, h_2) = (4, 3.3, 1.5, 0.12, 0.11)$. The solid segments represent the rate pairs achievable with Proposition 1.
Figure 5. Region $\mathcal{R}_{1,2}(P_1, P_2)$.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
