Article

Bhattacharyya Parameter of Monomial Codes for the Binary Erasure Channel: From Pointwise to Average Reliability

by Vlad-Florin Drăgoi 1,2,* and Gabriela Cristescu 1
1 Faculty of Exact Sciences, Aurel Vlaicu University of Arad, 2 Elena Dragoi Street, 310130 Arad, Romania
2 LITIS, University of Rouen Normandie, Avenue de l’Université, 76801 Saint-Etienne-du-Rouvray, France
* Author to whom correspondence should be addressed.
Sensors 2021, 21(9), 2976; https://doi.org/10.3390/s21092976
Submission received: 25 January 2021 / Revised: 15 April 2021 / Accepted: 21 April 2021 / Published: 23 April 2021
(This article belongs to the Section Communications)

Abstract: Monomial codes were recently equipped with partial order relations, a fact that allowed researchers to discover structural properties and efficient algorithms for constructing polar codes. Here, we refine the existing order relations in the particular case of the binary erasure channel. The new order relation takes us closer to the ultimate order relation induced by the pointwise evaluation of the Bhattacharyya parameter of the synthetic channels, which is still a partial order relation. To overcome this issue, we appeal to a related technique from network theory. Reliability network theory was recently used in the context of polar coding and, more generally, in connection with decreasing monomial codes. In this article, we investigate how the concept of average reliability is applied to polar codes designed for the binary erasure channel. Instead of minimizing the error probability of the synthetic channels for a particular value of the erasure parameter p, our codes minimize the average error probability of the synthetic channels. By means of basic network theory results, we determine a closed formula for the average reliability of a particular synthetic channel that has recently gained the attention of researchers.

1. Introduction

One of the most striking developments in coding theory in the last two decades is probably the theory around polar codes. In his seminal article [1], Arikan demonstrated, for the first time, that one can achieve the capacity of binary discrete memoryless channels (BDMC) using both efficient encoding and efficient decoding algorithms. The so-called polar codes are now present in the fifth generation (5G) technology [2]. Indeed, polar codes were selected as the standard coding technique for the control channel in support of the enhanced mobile broadband service, one of the major components of 5G wireless network technology. Getting back to the three principal directions along which coding theory evolved, polar coding seemed to be unrelated to classical algebraic coding. Typically, the construction of polar codes does not come from any particular structure in the code, but rather from the process of channel polarization. However, polar codes are closely related to Reed–Muller codes, as pointed out even by Arikan [1]. Hence, polar and Reed–Muller codes share a common algebraic description [3,4]. More precisely, they are sub-classes of a larger family of algebraic codes called decreasing monomial codes (DMC). The structure underlying DMCs and their algebraic formalism were applied in conjunction with other fields, e.g., in the context of quantum error correcting codes [5,6,7], post-quantum cryptography [8,9,10] and network reliability [11,12,13].
Several challenges remain in polar coding, among which is the efficient construction of polar codes for a specific BDMC. Arikan's initial technique [1] was improved by several authors [14,15,16,17,18,19,20,21,22]. Let W denote a BDMC, m a fixed integer and $\mathbf{u}$ a binary vector of length m. The main idea of the construction of polar codes is to estimate the reliability of the synthetic channels $\{W^{\mathbf{u}} \mid \mathbf{u}\in\{0,1\}^m\}$. For that, one might use the Bhattacharyya parameter $B_{W^{\mathbf{u}}}(p)$, where p denotes the error probability of the channel W. The message parts of a polar code of length $2^m$ and dimension k are allocated to the k sub-channels $W^{\mathbf{u}}$ having the smallest $B_{W^{\mathbf{u}}}$. Hence, one might classify the set of $W^{\mathbf{u}}$ into "good" (reliable) or "bad" (non-reliable) channels. For a fixed value of p, the values $B_{W^{\mathbf{u}}}(p)$ can always be put in order. In other words, when the parameter p is fixed, any distinct pair of channels $W^{\mathbf{u}}, W^{\mathbf{v}}$ satisfies either $B_{W^{\mathbf{u}}}(p) \le B_{W^{\mathbf{v}}}(p)$ or $B_{W^{\mathbf{v}}}(p) \le B_{W^{\mathbf{u}}}(p)$. In this case, we say that one channel is point-wise more reliable than the other. However, when considering the whole interval $p\in[0,1]$, ranking the synthetic channels becomes complicated. In this case, we say that $W^{\mathbf{u}}$ is globally more reliable than $W^{\mathbf{v}}$, and write $\mathbf{u} \le \mathbf{v}$, if and only if $B_{W^{\mathbf{u}}}(p) \le B_{W^{\mathbf{v}}}(p)$ for all $p\in[0,1]$.
Estimating how reliable a synthetic channel is can be done in several ways, Monte Carlo simulations being among the most common ones. Arikan, in his seminal paper [1], proposed such a method for determining the error probability of each synthetic channel, which yields a relatively efficient generating algorithm. It could be possible to employ recent developments in Monte Carlo methods, such as those in [23,24], in order to improve Arikan's idea. However, the most efficient techniques for constructing polar codes exploit order relations between the synthetic channels. One of the most efficient techniques that orders the set of synthetic channels (with respect to the concept of being globally more reliable) provides a sub-linear complexity construction [14]. It exploits the existence of a partial order (denoted by ⪯) on the set of synthetic channels [4]. This partial order is compatible with the notion of being globally more reliable, i.e., $\mathbf{u} \preceq \mathbf{v} \Rightarrow \mathbf{u} \le \mathbf{v}$. Even though ⪯ provided a contribution to understanding polar codes, i.e., their structure and construction, simulations show that ⪯ is far from ordering the $B_{W^{\mathbf{u}}}$ optimally. Hence, in a recent article, ⪯ was refined [25]. Compared to Monte Carlo methods, the techniques based on order relations valid for polar codes require a small number of computations, only for a fraction of the synthetic channels, as they exploit the rules induced by the order relations.
In the analysis of the performance of several families of codes, among which are polar, Reed–Muller, cyclic and BCH codes, the communication channel that has received a lot of attention is the binary erasure channel (BEC). When polar codes are designed for BEC(p) (in this particular case, p denotes the erasure probability), all the synthetic channels $\{W^{\mathbf{u}} \mid \mathbf{u}\in\{0,1\}^m\}$ are also BECs. In this case, the erasure probability of $W^{\mathbf{u}}$ is equal to the Bhattacharyya parameter of $W^{\mathbf{u}}$. Here, we analyze this particular channel. Our choice is motivated by several results and methods. First of all, the simplicity of this channel makes the theoretical proofs significantly simpler. Moreover, many of the properties that hold for the BEC turn out to be valid for more general channel models. For example, the proof of Reed–Muller codes achieving the capacity of a communication channel started with the BEC [26,27]. Codes that admit a doubly-transitive automorphism group or have large orbits under the action of their permutation group achieve the capacity of the BEC [27,28]. In [29], the authors analyze threshold points for $W^{\mathbf{u}}$ in the case of the BEC, a fact that allows them to propose sets of asymptotically "good" channels. Recently, in [30] the authors analyzed the Bhattacharyya parameter of polar codes for the BEC using network reliability theory. They proposed simple approximations of $B_{W^{\mathbf{u}}}$. These were used to determine sub-intervals of [0,1] where polar codes coincide with Reed–Muller codes. They also managed to determine new sets of asymptotically "good" channels.

1.1. Polar Codes Are Strongly Decreasing Monomial Codes

Polar codes over the BEC satisfy an order relation that is finer than ⪯. Hence, we define another order relation $\preceq_d$ on the set of monomials in m variables $\mathcal{M}_m = \{1, x_0,\dots,x_{m-1}, x_0x_1,\dots, x_0x_1\cdots x_{m-1}\}$, coming closer to the ≤ relation, i.e., we have
$$\forall f, g \in \mathcal{M}_m: \quad f \preceq g \Rightarrow f \preceq_d g \Rightarrow f \le g.$$
The relation $\preceq_d$ allows one to compare monomials of equal degree that were not comparable with respect to ⪯, e.g., $x_1x_2 \preceq_d x_0x_3$. The idea of $\preceq_d$ came from the link between the set of monomials of degree d in $\mathcal{M}_m$ and the set of partitions/Young diagrams inside the $d\times(m-d)$ grid (see Proposition 3.7.8 in [3]). From that, looking at order relations on partitions came as a natural idea, and the most common one is the dominance order [31]. The order $\preceq_d$ is exactly the dominance order on partitions inside a fixed grid.
The main result in this section can be stated as follows
Theorem 1.
Polar codes over the binary erasure channel are strongly decreasing monomial codes.
In the proof of this theorem, we will need two useful properties of this new order:
  • Given two monomials $f, g \in \mathcal{M}_m$ such that $f \preceq_d g$, then for any multiples $fh, gh$ with $\gcd(h,f) = 1$ and $\gcd(g,h) = 1$ we have $fh \preceq_d gh$, where $\gcd(f,g)$ denotes the greatest common divisor of f and g.
  • Two particular monomials are the key ingredient in the proof, namely $x_1x_2 \preceq_d x_0x_3$. We show that $B_{W^{x_1x_2}}(p) \le B_{W^{x_0x_3}}(p)$ for all $p\in[0,1]$, and more generally that any pair of degree-2 monomials $f, g$ satisfying $f \preceq_d g$ has the property $B_{W^f}(p) \le B_{W^g}(p)$ for all $p\in[0,1]$.
Even though $\preceq_d$ gets us closer to the ultimate order relation ≤, $\preceq_d$ is still a partial order relation. It seems to perform as well as the order relation from [25], while being much simpler to describe and analyze. Furthermore, in [25] the authors determine new order relations based on hypotheses that are not easy to express algebraically, and which have to be tested each time the parameters of the code change. Figure 1 illustrates how our results fit into state-of-the-art order relations in conjunction with polar coding.
Now, as $\preceq_d$ is finer than ⪯, one could use it to determine new sets of comparable monomials and thus generate polar codes in a more efficient manner. Indeed, as more monomials are comparable with respect to $\preceq_d$, they induce new chains in the poset $\{\mathcal{M}_m, \preceq_d\}$. This enables us to reduce the number of non-comparable elements and also the number of strongly decreasing monomial codes compared to decreasing monomial codes. These two ideas are illustrated as possible methods for reducing the complexity of the construction algorithm, hence pointing towards possible practical applications.

1.2. Average Reliability of Synthetic Channels

Hence, we are still left with elements that are not comparable and for which we need to compute $B_{W^{\mathbf{u}}}$. In order to overcome this issue, we propose an alternative solution. Suppose that the erasure probability p of the channel varies according to the uniform distribution over the closed interval [0,1]. Instead of constructing, for each p, the corresponding polar code, we propose to construct the best polar code on average. More exactly, we consider the average reliability of the synthetic channels $W^{\mathbf{u}}$, $\mathrm{Avr}(W^{\mathbf{u}}) = \int_0^1 B_{W^{\mathbf{u}}}(p)\,dp$, and choose those $\mathbf{u}$ that minimize this quantity. As the average reliability induces a total order relation (see Figure 2), there is only one such polar code for a given dimension and length. It is the linear code that minimizes the average error probability over $p\in[0,1]$. Hence, it might be less efficient than a polar code designed for a particular value of p, but it has the best performance on average.
The preorder $\mathbf{u} \preceq_{\mathrm{Avr}} \mathbf{v} \iff \mathrm{Avr}(W^{\mathbf{u}}) \le \mathrm{Avr}(W^{\mathbf{v}})$ satisfies a complementarity property with respect to the integral operator over [0,1], as defined in [32,33] in the case of two-terminal networks. We retrieve a similar property, i.e., $\mathrm{Avr}(W^{\mathbf{u}}) = 1 - \mathrm{Avr}(W^{\bar{\mathbf{u}}})$, where $\bar{\mathbf{u}}$ is the bit-wise complement of $\mathbf{u}$, in the context of monomial codes. Our simulations have shown that, considering the relation $\preceq_{\mathrm{Avr}}$ on the set of synthetic channels, each sub-interval $(i/10, (i+1)/10)$, for $0\le i\le 9$, contains roughly $2^m/10$ binary vectors $\mathbf{u}$. So, roughly speaking, a uniform distribution could be used to approximate the number of $\mathbf{u}$ inside each sub-interval, with respect to $\preceq_{\mathrm{Avr}}$. However, our result is not constructive, in the sense that it does not fully characterize the $\mathbf{u}$ that belong to a specific interval. An answer to this question might provide an extremely efficient method for constructing polar codes and give much more insight into the synthetic channels $W^{\mathbf{u}}$.

Threshold Points for Sharp Transitions

Determining the threshold point of $B_{W^{\mathbf{u}}}$ is in general a difficult task [29,34]. In [29], the authors analyze a particular synthetic channel, $W^{(1^i0^{m-i})}$, for which asymptotic threshold points were determined. The conditions on i and m were further improved in [25]. Based on some basic notions and facts from network theory, we determine an exact formula for the average reliability of $W^{(1^i0^{m-i})}$. The main result is:
Theorem 2.
Let $\mathbf{u} = (1^i 0^{m-i})$. Then
$$\mathrm{Avr}\left(W^{\mathbf{u}}\right) = 1 - \frac{1}{\binom{2^i + 2^{i-m}}{2^i}}.$$
This allows us to determine the exact threshold point of this particular channel. Moreover, we demonstrate that for any $i \le m - \log_2(m) - \log_2(\log_2(m))$, the channel $W^{(1^i0^{m-i})}$ has an average Bhattacharyya parameter that tends to zero when m goes to infinity, i.e., $W^{(1^i0^{m-i})}$ is asymptotically "good" on average. Another consequence of our formula is that any monomial $g \preceq_d x_{m-i}\cdots x_{m-1}$ with $i \le \log_2(\log_2(m))$ is such that $\mathrm{Avr}(W^g)$ tends to zero when m goes to infinity.
Another significant implication of our result is that any synthetic channel in RM(i, m) is asymptotically "good" on average, for any $i \le \log_2(\log_2(m))$.

1.3. Outline of the Article

The main concepts, notations and properties that are used all over the paper are introduced in Section 2. We introduce the background on coding, referring to the monomial codes in Section 2.1, the polar codes in Section 2.2 and to various manners of comparing them in Section 2.3. The two-terminal networks and their reliability are discussed in Section 2.4. The similarity between the behavior of the Bhattacharyya parameters and the reliability polynomial in assessing the monomial codes is emphasized in Section 2.5. The relationship between the order relations and the corresponding order structures over the set of polar codes over the binary erasure channel is studied in Section 3. The main result in this section proves that the polar codes over the binary erasure channel are strongly decreasing monomial codes. In the same section, two ideas pointing to possible practical applications of the order relation d are developed. The concept of average reliability of a synthetic channel is introduced in Section 4. The properties of this operator are studied in Section 4.1 and the relation to the β -expansion is presented in Section 4.2. Finally, the threshold points of the binary erasure polarization sub-channels are determined in Section 4.3. Simulations and numerical examples are included to illustrate all the new results. We conclude our article in Section 5.

2. Background and Preliminary Results

Let us begin by listing some of the usual notations from coding theory that will be used in this article. $\mathbb{F}_2$ will denote the finite field with two elements $\{0,1\}$. Let k, n be two strictly positive integers with $k \le n$. A code $\mathcal{C}$ of length n and dimension k is a vector subspace of $\mathbb{F}_2^n$ of dimension k. In this article, we focus our attention on a particular family of linear codes, namely monomial codes. W will be used to denote a communication channel with binary input $x\in\mathbb{F}_2$ and output from an alphabet $y\in\mathcal{Y}$. In particular, we will focus on BEC(p), where the output alphabet is $\mathcal{Y} = \{0, 1, ?\}$, ? denoting an erasure and p being the erasure probability. For a more detailed reading on the subject, we recommend [35,36].

2.1. Monomial Codes

Monomial codes are a special class of structured codes. Informally, any code that admits a basis, in which each vector is the evaluation of a monomial, is called a monomial code. In general, monomial codes have a predefined length, i.e., 2 m . Many of the notations, definitions, properties, and results presented in this section are taken from [3].
In this article, binary vectors of length m will be denoted using bold small letters, e.g., $\mathbf{u} = (u_0,\dots,u_{m-1})\in\mathbb{F}_2^m$, with the convention that bits are ordered from left to right, $u_0$ being the least significant bit. We also define the bit-wise complement of $\mathbf{u}\in\{0,1\}^m$ by $\bar{\mathbf{u}} = \mathbf{1}_m - \mathbf{u}$ (as in [4]), where $\mathbf{1}_m$ is the all-ones vector. The set $\{\mathbf{u}\in\mathbb{F}_2^m\}$ will be ordered in a natural manner, using the mapping
$$(u_0,\dots,u_{m-1}) \mapsto u = \sum_{i=0}^{m-1} u_i 2^i,$$
and the natural order on the integers. Notice that we compute the value u over the integers, regardless of the fact that $u_i\in\mathbb{F}_2$. Notice also that the relation between $\mathbf{u}$ and $\bar{\mathbf{u}}$ induces $u + \bar{u} = 2^m - 1$.
We consider multivariate polynomials and monomials defined over the polynomial ring $R_m = \mathbb{F}_2[x_0, x_1, \dots, x_{m-1}]/(x_0^2 - x_0, \dots, x_{m-1}^2 - x_{m-1})$. The usual operators will be employed, i.e., for $f, g \in R_m$, we denote by $\deg f$ the degree of f, by $\gcd(f,g)$ the greatest common divisor of f and g, and by $f/g$ the quotient of f by g.
Notation 1.
Let m be a strictly positive integer. We denote
  • monomials: $x^{\mathbf{u}} = x_0^{u_0}\cdots x_{m-1}^{u_{m-1}}$, where $\mathbf{u}\in\mathbb{F}_2^m$.
  • support of a monomial: $\mathrm{ind}(g) = \{l_1,\dots,l_s\}$, where $g = x_{l_1}\cdots x_{l_s}$ and $0\le l_1 < l_2 < \dots < l_s \le m-1$.
  • a subset of the support of a monomial: $g_{[0,s]} = \gcd\left(g, \prod_{i=0}^{s} x_i\right)$.
  • the set of monomials: $\mathcal{M}_m \stackrel{\mathrm{def}}{=} \left\{x^{\mathbf{u}} \mid \mathbf{u} = (u_0,\dots,u_{m-1})\in\mathbb{F}_2^m\right\}$.
Proposition 1
([37]). Let g R m and order the elements in F 2 m with respect to the decreasing index order. Define the evaluation function
$$\mathrm{ev}: R_m \to \mathbb{F}_2^{2^m}, \qquad g \mapsto \mathrm{ev}(g) = \left(g(\mathbf{u})\right)_{\mathbf{u}\in\mathbb{F}_2^m}.$$
Then, ev is a bijection defining an isomorphism between the vector spaces ( R m , + , · ) and ( F 2 n , + , · ) .
Now, we are ready to define the concept of monomial codes.
Definition 1
(Monomial code). Let $I\subseteq\mathcal{M}_m$ be a finite set of monomials in m variables. The linear code defined by I is the vector subspace $\mathcal{C}(I)\subseteq\mathbb{F}_2^{2^m}$ generated by $\{\mathrm{ev}(f) \mid f\in I\}$; it is called a monomial code.
Proposition 2
([3]). For all I M m , the dimension of the monomial code C ( I ) is equal to | I | .
Remark 1.
The r-th order Reed–Muller code $\mathrm{RM}(r,m) \stackrel{\mathrm{def}}{=} \left\{\mathrm{ev}(g) \mid g\in R_m,\ \deg g \le r\right\}$ is a monomial code with dimension $k = \sum_{i=0}^{r}\binom{m}{i}$.
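To make Definition 1 and Proposition 2 concrete, the following minimal Python sketch (the helper names are ours, purely illustrative) evaluates monomials over $\mathbb{F}_2^m$ and checks that the generator matrix of a monomial code $\mathcal{C}(I)$ has rank $|I|$, here for I the monomials of degree at most 1, i.e., RM(1,3).

```python
from itertools import product

def ev(v, m):
    """Evaluation vector of the monomial x^v (v is a 0/1 tuple) at all points of F_2^m."""
    return [int(all(u[i] for i in range(m) if v[i])) for u in product((0, 1), repeat=m)]

def rank_f2(rows):
    """Rank of a binary matrix over F_2 via Gaussian elimination."""
    rows, rank = [r[:] for r in rows], 0
    for c in range(len(rows[0])):
        piv = next((i for i in range(rank, len(rows)) if rows[i][c]), None)
        if piv is None:
            continue
        rows[rank], rows[piv] = rows[piv], rows[rank]
        for i in range(len(rows)):
            if i != rank and rows[i][c]:
                rows[i] = [a ^ b for a, b in zip(rows[i], rows[rank])]
        rank += 1
    return rank

m, r = 3, 1
I = [v for v in product((0, 1), repeat=m) if sum(v) <= r]   # monomials of degree <= r
G = [ev(v, m) for v in I]                                   # generator matrix of RM(r, m)
print(len(I), rank_f2(G))                                   # both equal 4 = C(3,0) + C(3,1)
```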

2.2. Polar Codes

In order to define polar codes, we have to introduce the concept of synthetic channels. Consider the channel transformation $W \mapsto (W^{(0)}, W^{(1)})$ defined in the following manner.
Definition 2
(Synthetic channels). Let W be a BDMC with output alphabet Y and x 1 , x 2 F 2 be the inputs and y 1 , y 2 Y be the outputs of two copies of W . Define two new channels
$$W^{(1)}(y_1,y_2 \mid x_2) \stackrel{\mathrm{def}}{=} \frac{1}{2}\sum_{x_1\in\mathbb{F}_2} W(y_1\mid x_1)\,W(y_2\mid x_1\oplus x_2), \qquad W^{(0)}(y_1,y_2,x_2 \mid x_1) \stackrel{\mathrm{def}}{=} \frac{1}{2}\, W(y_1\mid x_1)\,W(y_2\mid x_1\oplus x_2).$$
For any $\mathbf{u} = (u_0,\dots,u_{m-1})\in\{0,1\}^m$, we define $W^{\mathbf{u}} = \left(\cdots\left(W^{(u_{m-1})}\right)\cdots\right)^{(u_0)}$ as in [4]. Moreover, we extend the notation to monomials by $W_m^f = W^{\mathbf{u}}$, where $f = x^{\mathbf{u}}\in\mathcal{M}_m$. We use the index m in $W_m^f$ to precisely identify the number of variables on which f is expressed. For example, if $\mathbf{u} = (1,0,0,1,1)$, we have $W_5^f = W_5^{x_0x_3x_4} = \left(W_1^{x_4}\right)_4^{x_0x_3} = \left(\left(W_1^{x_4}\right)_1^{x_3}\right)_3^{x_0}$.
Definition 3.
Let W be a BDMC with output alphabet Y . Then, the Bhattacharyya parameter of the channel W is
$$B(W) = \sum_{y\in\mathcal{Y}} \sqrt{W(y\mid 0)\,W(y\mid 1)}.$$
Remark 2.
Let W be a BEC(p); then we have $B_{W^{x_0}} = B\left(W^{(1)}\right) = 2p - p^2$ and $B_{W^{1}} = B\left(W^{(0)}\right) = p^2$.
Definition 4.
The polar code of length $n = 2^m$ and dimension k devised for the channel W is the linear code obtained by selecting the k synthetic channels with the smallest $B_{W^{\mathbf{u}}}$ values among all $\mathbf{u}\in\{0,1\}^m$.
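For W = BEC(p), the two elementary transforms of Remark 2, $B(W^{(0)}) = p^2$ and $B(W^{(1)}) = 2p - p^2$, make $B_{W^{\mathbf{u}}}(p)$ computable by a simple recursion over the bits of $\mathbf{u}$. The following sketch (illustrative only; the function names are ours) ranks all synthetic channels at a given p and keeps the k most reliable ones, i.e., the selection of Definition 4 for the BEC.

```python
from itertools import product

def bhatta_bec(u, p):
    """B_{W^u}(p) for W = BEC(p); bits are applied from u_{m-1} down to u_0,
    matching W^u = ((W^(u_{m-1}))...)^(u_0)."""
    b = p
    for bit in reversed(u):
        b = 2 * b - b * b if bit else b * b
    return b

def polar_code(m, k, p):
    """The k bit-vectors u with the smallest B_{W^u}(p) (Definition 4)."""
    return sorted(product((0, 1), repeat=m), key=lambda u: bhatta_bec(u, p))[:k]

# Example: a [16, 8] polar code designed for BEC(0.5)
for u in polar_code(4, 8, 0.5):
    print(u, round(bhatta_bec(u, 0.5), 4))
```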
Moreover, we define the relation
$$f \le g \iff \mathbf{u} \le \mathbf{v} \iff B_{W^{\mathbf{u}}}(p) \le B_{W^{\mathbf{v}}}(p), \quad \forall p\in[0,1]. \qquad (3)$$
The relation (3) is called universal, i.e., two monomials f, g satisfying $f \le g$ are always comparable for any p and any m. This property can be used when constructing polar codes, by storing a table with all such pairs of monomials. However, there might be several monomials bigger than f which are not pairwise comparable (see, for example, [4,14]). Indeed, one can easily verify that ≤ is a well-defined order relation (reflexive, anti-symmetric, and transitive), and thus induces a poset on the set of monomials. In some particular cases, the order ≤ becomes total (all elements are ordered in a chain), e.g., when W = BEC and $m \le 4$. However, in general, ≤ is a partial order, even in the case of W = BEC (starting from m = 5), as pointed out in [3,11,30].
Proposition 3.
Let $m \ge 5$ be an integer and W = BEC. Then, $\{\mathcal{M}_m, \le\}$ is a poset.
For simplicity, when we refer to ordering the Bhattacharyya parameters, we will just write $B_{W^{\mathbf{u}}} \le B_{W^{\mathbf{v}}}$.

2.3. Weakly Decreasing and Decreasing Monomial Codes

Definition 5.
Let f and g be two monomials in $\mathcal{M}_m$.
  • The $\preceq_w$ order between f and g is defined as
    $$f \preceq_w g \iff f \mid g.$$
  • The ⪯ order between f and g is defined as
    - when $\deg(f) = \deg(g) = s$ and $f = x_{i_1}\cdots x_{i_s}$, $g = x_{j_1}\cdots x_{j_s}$ we have
      $$f \preceq g \iff i_\ell \le j_\ell \ \text{for all}\ \ell\in\{1,\dots,s\};$$
    - when $\deg(f) < \deg(g)$ we have
      $$f \preceq g \iff \exists\, g^*\in\mathcal{M}_m \ \text{s.t.}\ f \preceq g^* \preceq_w g.$$
The two order relations $\preceq_w$ and ⪯ are well defined. $\preceq_w$ was already used in the case of polar codes, but in a completely different context, by Mori and Tanaka in [17]. In their case, the purpose was to tighten the bounds on the block error probability of a polar code designed for the BEC family.
Notice that $\preceq_w$ is weaker than ⪯, meaning that $\forall f,g\in\mathcal{M}_m$, $f \preceq_w g \Rightarrow f \preceq g$. The converse is not always true: taking, for example, $f = x_0x_2$ and $g = x_1x_2$, it follows by definition that $f \preceq g$ but $f \not\preceq_w g$. We also remark that 1 is the smallest element both for ⪯ and for $\preceq_w$, and we have
$$1 \preceq x_0 \preceq x_0x_1 \preceq \dots \preceq x_0x_1\cdots x_{m-1}.$$
Definition 6.
Let f and g be two monomials in $\mathcal{M}_m$ such that $f \preceq g$, and let $I\subseteq\mathcal{M}_m$.
  • We define the closed interval $[f,g] = \{h\in\mathcal{M}_m \mid f \preceq h \preceq g\}$.
  • $I\subseteq\mathcal{M}_m$ is called a decreasing set if and only if ($f\in I$ and $g \preceq f$) implies $g\in I$.
  • Let $I\subseteq\mathcal{M}_m$ be a decreasing set. Then, $\mathcal{C}(I)$ is called a decreasing monomial code.
Polar codes were recently related to network theory. In [30], the authors make a connection between the Bhattacharyya parameter of a synthetic channel and the reliability polynomial of a two-terminal network. Following the same path, we introduce in the next subsection all the required preliminaries in reliability and network theory.

2.4. Two-Terminal Networks

Definition 7.
Let n be a strictly positive integer. We say that N is a two-terminal network (2TN) of size n if N is a network made of n identical devices, that has two distinct terminals: an input S, and an output T.
To any network $\mathcal{N}$ made of n devices we associate two parameters: the width w and the length l, where w is the cardinality of a minimal cut separating S from T, and l is the cardinality of a minimal path from S to T. They satisfy
$$n \ge w\,l$$
(see Theorem 3 in [38]). The number of devices n is known in the literature as the size of the network. When $n = w\,l$, we say that $\mathcal{N}$ is a minimal 2TN [38].
The composition of $N_1$ and $N_2$ can be defined as in [38]. The resulting network is obtained by replacing each device in $N_1$ by a copy of $N_2$. We will denote a composition by C, the simplest possible being two devices in series, $C_{(0)}$, and two devices in parallel, $C_{(1)}$. The composition of $C_{(0)}$ with $C_{(1)}$ is $C_{\mathbf{u}} = C_{(0)}\circ C_{(1)}$, where $\mathbf{u} = (0,1)$. The set of all compositions of size $2^m$ will be denoted by $\mathcal{C}_{2^m}$, and the set of all compositions of width $2^i$ and length $2^{m-i}$ by $\mathcal{C}_{2^i, 2^{m-i}}$ (see Figure 3b).
Proposition 4
([39]). Let $m > 0$ and $C_{\mathbf{u}}\in\mathcal{C}_{2^m}$. Then, $C_{\mathbf{u}}$ is a minimal 2TN of size $2^m$, length $l = 2^{m-|\mathbf{u}|}$ and width $w = 2^{|\mathbf{u}|}$. We also have $\mathcal{C}_{2^m} = \bigcup_{i=0}^{m}\mathcal{C}_{2^i,2^{m-i}}$.
Theorem 3
([11]). There is a natural bijection between C 2 m and the set of all W u , for any fixed positive integer m .
In Figure 3, we illustrate the bijection between the two aforementioned sets. More significant is the equality between the reliability polynomial of a composition C u and the Bhattacharyya parameter of W u , a fact that is visible from Figure 3b and proven in the next paragraph.

Reliability Polynomial

The reliability of $\mathcal{N}$ is defined as the probability that S and T are connected (also known as the s,t-connectivity) [40]. One of the most common hypotheses considered in network theory is that the devices close independently, each with the same probability $p\in[0,1]$. Hence, the reliability of $\mathcal{N}$, denoted by $\mathrm{Rel}(\mathcal{N};p)$, can be expressed as a polynomial
$$\mathrm{Rel}(\mathcal{N};p) = \sum_{i=0}^{n} N_i(\mathcal{N})\, p^i (1-p)^{n-i}.$$
The coefficient $N_i(\mathcal{N})$ counts the pathsets of cardinality i, i.e., the subsets of i closed devices that connect S to T. Several properties of the coefficients $N_i(\mathcal{N})$, as well as complementarity relations between a 2TN $\mathcal{N}$ and its dual, are detailed in [32,41] in the case of hammock networks.

2.5. Bhattacharyya Parameters and Reliability Polynomials

Theorem 4
([11]). Let $m > 0$, $\mathbf{u}\in\{0,1\}^m$, and W = BEC(p). Then
$$B_{W^{\mathbf{u}}}(p) = \mathrm{Rel}(C_{\mathbf{u}}; p),$$
where $\mathrm{Rel}(C_{(0)};p) = p^2$ and $\mathrm{Rel}(C_{(1)};p) = 1-(1-p)^2$.
Proposition 5
([25]). Let $m > 0$ and $\mathbf{u}\in\{0,1\}^m$. Then
$$B_{W^{\bar{\mathbf{u}}}}(p) = 1 - B_{W^{\mathbf{u}}}(1-p). \qquad (7)$$
This condition expresses the duality of the two corresponding networks, namely $C_{\bar{\mathbf{u}}}$ is the dual of $C_{\mathbf{u}}$ (see [11,32]). Notice that by (7), one only has to analyze the vectors $\mathbf{u}$ with $|\mathbf{u}| \le m/2$.
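As an illustration of Theorem 4, one can build $C_{\mathbf{u}}$ as a nested series/parallel structure, tally the coefficients $N_i(C_{\mathbf{u}})$ by brute force over all device states (feasible for m = 4, i.e., n = 16 devices), and compare $\mathrm{Rel}(C_{\mathbf{u}};p)$ with the recursive $B_{W^{\mathbf{u}}}(p)$. This is only a sketch with our own helper names; for $\mathbf{u} = (0,1,1,0)$ it should also reproduce the coefficient list used in Appendix A.2.

```python
from itertools import product

def connected(u, state):
    """Is the composition C_u closed for a 0/1 tuple 'state' of its 2^m devices?
    The outer level is C_(u_0): series (AND) if u_0 = 0, parallel (OR) if u_0 = 1."""
    if not u:
        return bool(state[0])
    half = len(state) // 2
    left, right = connected(u[1:], state[:half]), connected(u[1:], state[half:])
    return (left and right) if u[0] == 0 else (left or right)

def coefficients_N(u):
    """N_i(C_u): number of closed-device subsets of size i that connect S to T."""
    n = 2 ** len(u)
    N = [0] * (n + 1)
    for state in product((0, 1), repeat=n):
        if connected(u, state):
            N[sum(state)] += 1
    return N

def rel(u, p):
    n = 2 ** len(u)
    return sum(Ni * p**i * (1 - p)**(n - i) for i, Ni in enumerate(coefficients_N(u)))

def bhatta_bec(u, p):
    b = p
    for bit in reversed(u):
        b = 2 * b - b * b if bit else b * b
    return b

u, p = (0, 1, 1, 0), 0.3
print(coefficients_N(u))              # e.g. N_4 = 16 and N_16 = 1 for this composition
print(rel(u, p), bhatta_bec(u, p))    # the two values agree (Theorem 4)
```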

3. Polar Codes Are Strongly Decreasing Monomial Codes Over the BEC

3.1. Definitions and Results

Definition 8.
The $\preceq_d$ order between f and g is defined as
  • when $\deg(f) = \deg(g) = s$ and $f = x_{i_1}\cdots x_{i_s}$, $g = x_{j_1}\cdots x_{j_s}$ we have
    $$f \preceq_d g \iff \forall \ell\in\{1,\dots,s\}\ \text{we have}\ \sum_{k=0}^{\ell-1} i_{s-k} \le \sum_{k=0}^{\ell-1} j_{s-k};$$
  • when $\deg(f) < \deg(g)$ we have
    $$f \preceq_d g \iff \exists\, g^*\in\mathcal{M}_m \ \text{s.t.}\ f \preceq_d g^* \preceq_w g.$$
Definition 9.
Let f and g be two monomials in $\mathcal{M}_m$ such that $f \preceq_d g$, and let $I\subseteq\mathcal{M}_m$.
  • We define the closed interval $[f,g]_d = \{h\in\mathcal{M}_m \mid f \preceq_d h \preceq_d g\}$.
  • $I\subseteq\mathcal{M}_m$ is called a strongly decreasing set if, and only if, ($f\in I$ and $g\preceq_d f$) implies $g\in I$.
  • Let $I\subseteq\mathcal{M}_m$ be a strongly decreasing set. Then, $\mathcal{C}(I)$ is called a strongly decreasing monomial code.
Lemma 1.
The order $\preceq_d$ is a well-defined order relation and $\{\mathcal{M}_m, \preceq_d\}$ forms a poset.
The proof of this lemma comes directly from the definition of d .
Remark 3.
Notice that $x_{i_1}\cdots x_{i_s} \preceq x_{j_1}\cdots x_{j_s}$ implies $x_{i_1}\cdots x_{i_s} \preceq_d x_{j_1}\cdots x_{j_s}$. The converse is no longer true; take, for example, the monomials $x_0x_3$ and $x_1x_2$.
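The two equal-degree conditions can be checked with a few lines of code (a sketch, with our own function names): ⪯ compares the sorted index lists position by position, while $\preceq_d$ compares the partial sums of the largest indices (the dominance order). For $x_1x_2$ and $x_0x_3$ the first test fails and the second succeeds, as in Remark 3.

```python
def leq_std(f, g):
    """f ⪯ g for equal-degree monomials, given as increasing index lists (Definition 5)."""
    return len(f) == len(g) and all(a <= b for a, b in zip(f, g))

def leq_dom(f, g):
    """f ⪯_d g for equal-degree monomials: partial sums over the largest indices (Definition 8)."""
    if len(f) != len(g):
        return False
    sf = sg = 0
    for a, b in zip(sorted(f, reverse=True), sorted(g, reverse=True)):
        sf, sg = sf + a, sg + b
        if sf > sg:
            return False
    return True

f, g = [1, 2], [0, 3]                 # x_1 x_2 and x_0 x_3
print(leq_std(f, g), leq_dom(f, g))   # False True
```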
Proposition 6.
Let f and g be two monomials with the same degree and let $x_h$ be such that $x_h \nmid f$ and $x_h \nmid g$. Then, we have
$$f \preceq_d g \iff x_h f \preceq_d x_h g.$$
The proof of Proposition 6 can be found in Appendix A. In particular, notice that if f, g are not co-prime, i.e., $\gcd(f,g)\ne 1$, then
$$f \preceq_d g \iff f/\gcd(f,g) \preceq_d g/\gcd(f,g).$$
Remark that when the condition on the variable $x_h$ is not satisfied, the result does not hold, e.g., $x_2x_3 \preceq_d x_1x_4$, but $x_1x_2x_3$ and $x_1x_4$ are not comparable with respect to $\preceq_d$. Before we get to the main theorem of this section, the following lemma is required.
Lemma 2.
Let $f, g\in\mathcal{M}_m$ with $\deg f = \deg g = 2$, such that $f \preceq_d g$. Then, $B_{W_m^f} \le B_{W_m^g}$.
Corollary 1.
Let $f = x_{i_1}x_{i_2}$ and $g = x_{j_1}x_{j_2}$ s.t. $f \preceq_d g$. Then, for any monomial $h = x_{l_1}\cdots x_{l_t}$ satisfying $i_1 < l_1 < \dots < l_t < i_2$, we have $fh \preceq_d gh$ and $B_{W_m^{fh}} \le B_{W_m^{gh}}$.
Theorem 5.
Polar codes over the binary erasure channel are strongly decreasing monomial codes.
The proof of Theorem 5 is given in Appendix A.
Theorem 6.
Reed–Muller codes are strongly decreasing monomial codes, i.e.,
$$\mathrm{RM}(i,m) = \mathcal{C}\left([1, x_{m-i}\cdots x_{m-1}]_d\right).$$
Proof. 
The proof follows from $[1, x_{m-i}\cdots x_{m-1}]_d = [1, x_{m-i}\cdots x_{m-1}]$ and $\mathrm{RM}(i,m) = \mathcal{C}([1, x_{m-i}\cdots x_{m-1}])$ (see Proposition 3.3.12 in [3]). □

3.2. Perspectives of Application of $\preceq_d$ in the Construction of Polar Codes

State-of-the-art algorithms for constructing polar codes [14] use the structure induced by the existing partial order relations on the set of monomials. As explained in [14], the complexity of the construction algorithm is dominated by the cardinality of the largest set of monomials that are non-comparable with respect to ⪯. Hence, a finer order relation than ⪯ could potentially decrease the complexity of such an algorithm. As $\preceq_d$ is finer than ⪯, we will investigate, through examples, how many monomials that are non-comparable with respect to ⪯ become comparable with respect to $\preceq_d$. Typically, our procedure can be used for a more efficient enumeration of the sets of non-comparable elements in the new poset $\{\mathcal{M}_m, \preceq_d\}$. The longest antichain in the poset gives a direct intuition on how efficient the construction algorithm can be. Indeed, when estimating the reliability of the synthetic channels in order to construct a polar code, one needs to estimate the reliability of the non-comparable elements. Hence, having a finer poset, where the maximum-length antichain becomes smaller, induces a more efficient construction algorithm. In order to make things clear, we will explain, from two distinct perspectives (reducing the number of non-comparable elements and reducing the number of codes of fixed dimension), how $\preceq_d$ induces more efficient construction rules for polar codes.

3.2.1. Reducing the Number of Non-Comparable Monomials

For $\{\mathcal{M}_m, \preceq\}$, the middle of the poset is also a maximum-length antichain. Let us explain the concept of the middle of $\{\mathcal{M}_m, \preceq\}$. A monomial g is situated in the middle of the poset if any chain from g to 1 (the infimum of the poset) has length equal to any chain from g to $x_0\cdots x_{m-1}$ (the supremum of the poset). Clearly, two distinct monomials f, g that are in the middle of the poset are non-comparable. Moreover, notice that not every poset admits a middle with respect to our definition. For example, $\{\mathcal{M}_m, \preceq\}$ has a middle, since it is a graded poset (see [11,14,31] for more details). However, $\{\mathcal{M}_m, \preceq_d\}$ is not graded and it does not admit a middle. When m = 4 (see Figure 4), the middle of $\{\mathcal{M}_m, \preceq\}$ is the set $\{x_1x_2, x_0x_3\}$. As $\preceq_d$ is finer than ⪯, it could be possible to reduce the number of non-comparable elements from the middle of $\{\mathcal{M}_m, \preceq\}$. Indeed, this fact can be validated through simulations, as we point out in Table 1.

Simulations

We have implemented an algorithm for generating all elements in the middle of $\{\mathcal{M}_m, \preceq\}$. For each value of $m\in\{6,\dots,8\}$, the elements in this set are displayed in Table 1. We choose to display $\mathrm{Shift}(\mathrm{ind}(g))$ instead of g, where for $g = x_{i_1}\cdots x_{i_l}$, $\mathrm{Shift}(\mathrm{ind}(g)) = (i_1+1,\dots,i_l+1)$. The main reason for this convention is that the elements in the middle of $\{\mathcal{M}_m, \preceq\}$ are solutions of a well-known problem in computer science, the perfect subset sum problem. Indeed, if we carefully check the elements for each value of m, we discover that $\sum_{j\in\mathrm{Shift}(\mathrm{ind}(g))} j = \binom{m+1}{2}/2$ for any g in the middle of the poset $\{\mathcal{M}_m, \preceq\}$.
As one can notice from Table 1, the number of non-comparable elements from the middle of $\{\mathcal{M}_m, \preceq\}$ decreases rapidly when the order relation $\preceq_d$ is used. For example, when m = 8, this number drops from 14 non-comparable monomials, with respect to ⪯, to 3 distinct non-comparable chains, with respect to $\preceq_d$.
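The subset-sum observation above can be turned directly into an enumeration of the candidates for the middle of $\{\mathcal{M}_m, \preceq\}$: list the subsets of $\{1,\dots,m\}$ whose sum equals $\binom{m+1}{2}/2$ (an integer for, e.g., m = 4, 7, 8). The sketch below (our own names) is purely illustrative of that empirical observation.

```python
from itertools import combinations
from math import comb

def middle_candidates(m):
    """Subsets of {1,...,m} whose sum is C(m+1,2)/2, i.e., the Shift(ind(g)) of the
    monomials observed in the middle of {M_m, ⪯} (empirical observation above)."""
    target = comb(m + 1, 2) / 2
    return [s for r in range(1, m + 1)
            for s in combinations(range(1, m + 1), r) if sum(s) == target]

for m in (4, 7, 8):
    sols = middle_candidates(m)
    print(m, len(sols), sols)    # for m = 4: the two sets (1,4) and (2,3)
```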

3.2.2. Reducing the Number of Codes

Another pertinent aspect when dealing with decreasing and strongly decreasing monomial codes is counting the codes of fixed length $2^m$ and dimension $k\in\{1,\dots,2^m\}$. It seems quite natural, in view of the relation between ⪯ and $\preceq_d$, to state that there are fewer strongly decreasing monomial codes of fixed length and dimension than decreasing monomial codes. Formally, we have
Lemma 3.
Let m be a strictly positive integer and $0 \le k \le 2^m$. Then the number of decreasing monomial codes $\mathcal{C}(I)$ with $I\subseteq\mathcal{M}_m$ and $|I| = k$ is greater than or equal to the number of strongly decreasing monomial codes $\mathcal{C}(J)$ with $J\subseteq\mathcal{M}_m$ and $|J| = k$.
The proof of this lemma is immediate and comes directly from the fact that any strongly decreasing monomial set is necessarily a decreasing monomial set. However, the converse is not always true.

Simulations

We have written an algorithm that computes the number of decreasing monomial codes for a fixed length and dimension. Our algorithm works recursively, by adding new monomials from $\{\mathcal{M}_m, \preceq\}$ to the previously generated sets of monomials of cardinality k − 1, in such a manner that the cardinality of the new sets does not exceed k. The algorithm outputs all possible decreasing monomial sets of cardinality k and then checks which of them are also strongly decreasing monomial sets.
For small values of k, i.e., k = O(1) when $n\to\infty$, there are almost no differences between strongly decreasing and decreasing sets. However, for larger values of k, we observe a significant reduction in the number of strongly decreasing monomial codes compared to decreasing monomial codes. In Table 2, we illustrate, on a small example with m = 5 and $k\in\{9,10,11,12\}$, the difference between the two sets. We choose to represent each monomial set I by $\mathrm{ind}(I) = \{\mathrm{ind}(g) \mid g\in I\}$.
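A brute-force counterpart of this procedure, for very small m only, is sketched below (our own helper names). It checks every k-subset of $\mathcal{M}_m$ for the decreasing and strongly decreasing closure conditions; it does not scale to the m = 5 case of Table 2, but it illustrates the two conditions being compared.

```python
from itertools import combinations, product

def dom(f, g):
    """Equal-degree dominance test f ⪯_d g (partial sums of the largest indices)."""
    sf = sg = 0
    for a, b in zip(sorted(f, reverse=True), sorted(g, reverse=True)):
        sf, sg = sf + a, sg + b
        if sf > sg:
            return False
    return True

def std(f, g):
    """Equal-degree test f ⪯ g (index-wise comparison)."""
    return all(a <= b for a, b in zip(sorted(f), sorted(g)))

def leq(f, g, same_deg):
    """f ⪯ g (resp. f ⪯_d g): reduce to an equal-degree comparison with a divisor of g."""
    if len(f) > len(g):
        return False
    if len(f) == len(g):
        return same_deg(f, g)
    return any(same_deg(f, gs) for gs in combinations(sorted(g), len(f)))

def count_down_sets(m, k, same_deg):
    monos = [tuple(i for i in range(m) if v[i]) for v in product((0, 1), repeat=m)]
    count = 0
    for I in combinations(monos, k):
        S = set(I)
        if all(g in S for f in I for g in monos if leq(g, f, same_deg)):
            count += 1
    return count

m = 4
for k in (4, 5, 6):
    print(k, count_down_sets(m, k, std), count_down_sets(m, k, dom))
```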

4. Average Reliability of the Synthetic Channels

The geometric approach to the properties of a function by means of its subgraph and/or epigraph has generated useful mathematical tools since the very beginning of the theory of functions. Measure, intersection, support and shape properties lead to applications in various domains: optimization, shape description and recognition, etc. Here, we propose a geometric approach in the field of polar coding. Recently, the concept of average reliability was introduced and analyzed in the context of all-terminal reliability [42]. In view of Theorem 4, the Bhattacharyya parameter of a synthetic channel can be mapped into the reliability polynomial of a minimal two-terminal network. As a consequence, almost all constructive and efficient methods from network reliability can be applied to polar codes over the BEC, by means of the Bhattacharyya parameter.
As the set of the synthetic channels cannot be totally ordered [4,25], we propose a different method to define the optimality of a synthetic channel. For that, we will check how reliable a channel is on average, i.e., we define
Definition 10.
Let m be a strictly positive integer and u { 0 , 1 } m . The average reliability of W u is
$$\mathrm{Avr}\left(W^{\mathbf{u}}\right) = \int_0^1 B_{W^{\mathbf{u}}}(p)\,dp.$$
Moreover, we define the relation $\preceq_{\mathrm{Avr}}$ by
$$\mathbf{u} \preceq_{\mathrm{Avr}} \mathbf{v} \iff \mathrm{Avr}\left(W^{\mathbf{u}}\right) \le \mathrm{Avr}\left(W^{\mathbf{v}}\right).$$
This notion of optimality is meaningful in the following context. Imagine that the communication channel is a BEC with variable erasure probability, varying for different physical reasons. This means that either we choose a different polar code as a function of p, in which case we obtain the best performance for each instance, or we choose a single polar code and hope that on average it performs optimally. The former strategy comes with the cost of computing, for each value of p, the corresponding polar code; for the latter, the cost is minimal, since we construct only one polar code.
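Since for the BEC $B_{W^{\mathbf{u}}}(p)$ is a polynomial in p, the integral in Definition 10 can be computed exactly. The following sketch (our own names; exact rational arithmetic) reproduces, e.g., the kind of values reported in Table 3 for small m.

```python
from fractions import Fraction
from itertools import product

def poly_mul(a, b):
    c = [Fraction(0)] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            c[i + j] += ai * bj
    return c

def bhatta_poly(u):
    """Coefficients of B_{W^u}(p) as a polynomial in p, for W = BEC(p)."""
    b = [Fraction(0), Fraction(1)]                       # B(p) = p
    for bit in reversed(u):
        sq = poly_mul(b, b)
        if bit:                                          # B <- 2B - B^2
            b = [2 * x - y for x, y in zip(b + [Fraction(0)] * (len(sq) - len(b)), sq)]
        else:                                            # B <- B^2
            b = sq
    return b

def avr(u):
    """Avr(W^u): exact integral of B_{W^u}(p) over [0, 1]."""
    return sum(c / (k + 1) for k, c in enumerate(bhatta_poly(u)))

for u in product((0, 1), repeat=3):
    print(u, avr(u), float(avr(u)))
```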

4.1. Properties

Lemma 4.
The relation Avr is reflexive and transitive. In other words, Avr is a preorder relation.
Our simulations have shown that, up to m = 13, $\preceq_{\mathrm{Avr}}$ is also antisymmetric. However, this property might not be true in general. Indeed, one can easily find two distinct polynomials with integer coefficients, defined over [0,1] with values in [0,1], whose integrals are equal.
Conjecture 1.
The relation $\preceq_{\mathrm{Avr}}$ is an order relation (i.e., it is also antisymmetric) on the set of all synthetic channels.
All the same, we can overcome this by applying the following procedure.
Remark 4.
Let $\mathbf{u} \equiv_{\mathrm{Avr}} \mathbf{v}$ if, and only if, $\mathrm{Avr}(W^{\mathbf{u}}) = \mathrm{Avr}(W^{\mathbf{v}})$. Let us extend the relation $\preceq_{\mathrm{Avr}}$ to the factor set $\mathcal{M}_m/\equiv_{\mathrm{Avr}}$ naturally, using the relation between class representatives. Then, $\preceq_{\mathrm{Avr}}$ is a total order relation over $\mathcal{M}_m/\equiv_{\mathrm{Avr}}$. Indeed, one can easily check that $\preceq_{\mathrm{Avr}}$ is antisymmetric over $\mathcal{M}_m/\equiv_{\mathrm{Avr}}$.
Lemma 5.
Let m be a strictly positive integer, $n = 2^m$, and $\mathbf{u}\in\{0,1\}^m$. Then,
$$\mathrm{Avr}\left(W^{\mathbf{u}}\right) = \frac{1}{n+1}\sum_{i=2^{m-|\mathbf{u}|}}^{n} \frac{N_i(C_{\mathbf{u}})}{\binom{n}{i}},$$
$$\mathrm{Avr}\left(W^{\mathbf{u}}\right) + \mathrm{Avr}\left(W^{\bar{\mathbf{u}}}\right) = 1.$$
Proposition 7.
Let m be a strictly positive integer and $\mathbf{u}, \mathbf{v}$ be two binary vectors of length m such that $\mathbf{u} \preceq \mathbf{v}$. Then,
$$\mathbf{u} \preceq \mathbf{v} \Rightarrow \mathrm{Avr}\left(W^{\mathbf{u}}\right) \le \mathrm{Avr}\left(W^{\mathbf{v}}\right).$$
In Table 3, we compute $\mathrm{Avr}(W^{\mathbf{u}})$ for all binary vectors $\mathbf{u}\in\{0,1\}^m$ with $m\in\{2,3,4\}$. Notice that in this case Proposition 7 applies, since we know that up to m = 4 the synthetic channels can be totally ordered over the BEC [3,25]. Starting from m = 5, this property is no longer true. When $\mathbf{u}$ and $\mathbf{v}$ are not comparable, i.e., there is $p_0\in(0,1)$ such that $B_{W^{\mathbf{u}}}(p_0) = B_{W^{\mathbf{v}}}(p_0)$, we can still decide whether $\mathbf{u}$ is optimal on average compared with $\mathbf{v}$. The set of non-comparable pairs (u, v) for m = 5 is {(3,16), (12,17), (7,20), (7,24), (11,24), (14,19), (15,28)}. Notice that half of the pairs come from duality, i.e., if (u, v) are not comparable, then $(\bar{u}, \bar{v})$ are also non-comparable. However, these pairs are ordered with respect to average reliability. The average reliabilities for the first four non-comparable pairs are (0.221, 0.216), (0.396, 0.383), (0.4712, 0.4710), (0.4712, 0.5288). Hence, for m = 5 the ordering with respect to the average reliability is 0, 1, 2, 4, 8, 16, 3, 5, 6, 9, 10, 17, 12, 18, 20, 7, 24, and the rest can be completed by symmetry.
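The non-comparable pairs listed above can be recovered numerically by checking whether two Bhattacharyya curves cross inside (0,1); the sketch below (a heuristic grid test, with our own names) should output the seven pairs for m = 5, printed with the integer labels $u = \sum_i u_i 2^i$.

```python
from itertools import combinations, product

def bhatta_bec(u, p):
    b = p
    for bit in reversed(u):
        b = 2 * b - b * b if bit else b * b
    return b

def crossing_pairs(m, grid=2000):
    """Pairs of channels whose Bhattacharyya curves cross on (0, 1) (grid heuristic)."""
    ps = [(t + 0.5) / grid for t in range(grid)]
    curves = {u: [bhatta_bec(u, p) for p in ps] for u in product((0, 1), repeat=m)}
    pairs = []
    for u, v in combinations(curves, 2):
        diffs = [a - b for a, b in zip(curves[u], curves[v])]
        if min(diffs) < 0 < max(diffs):
            pairs.append((u, v))
    return pairs

for u, v in crossing_pairs(5):
    print(sum(b << i for i, b in enumerate(u)), sum(b << i for i, b in enumerate(v)))
```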
Example 1.
The ordering induced by the average reliability.
  • m = 5
    0 , 1 , 2 , 4 , 8 , 16 RM ( 1 , 5 ) , 3 , 5 , 6 , 9 , 10 , 17 , 12 , 18 , 20 RM ( 2 , 5 ) , 7 RM ( 3 , 5 ) , 24 RM ( 2 , 5 )
  • m = 6
    0 , 1 , 2 , 4 , 8 , 16 RM ( 1 , 6 ) , 3 , 5 RM ( 2 , 6 ) , 32 RM ( 1 , 6 ) , 6 , 9 , 10 , 17 , 12 , 18 , 33 , 20 RM ( 2 , 6 ) , 7 RM ( 3 , 6 ) , 34 , 24 RM ( 2 , 6 ) , 11 RM ( 3 , 6 ) , 36 RM ( 2 , 6 ) , 13 , 19 , 14 RM ( 3 , 6 ) , 40 RM ( 2 , 6 ) ,
    21 RM ( 3 , 6 ) , 48 RM ( 2 , 6 ) , 22 , 35 , 25 , 37 , 26 , 38 , 28 , 41 RM ( 3 , 6 )
Our simulations have shown that, considering the relation Avr in the set of the synthetic channels, in each sub-interval ( i / 10 , ( i + 1 ) / 10 ) , for 0 i 9 , we have a rough proportion of 2 m / 10 binary vectors u . So, roughly speaking, a uniform distribution could be used to approximate the number of u inside each sub-interval (illustrated in Figure 5), with respect to Avr (see Table 4 for 5 m 11 . )

4.2. Relation to β -Expansion

β-expansion [15] is a well-known method for the efficient construction of polar codes. Hence, it is no surprise that our results on average reliability suggest possibly more refined choices of the variable β. Let us begin by defining the method:
$$\beta(\mathbf{u}) = \sum_{i=0}^{m-1} u_i\,\beta^i.$$
In [15], the authors proved that for any $\beta\in(1,\infty)$, the order induced by β on the set of synthetic channels respects the order relation ⪯. In particular, this means that if $\mathbf{u} \preceq \mathbf{v}$ then $\beta(\mathbf{u}) \le \beta(\mathbf{v})$, and this for any value of β > 1. Some values of β are of high interest, in particular $\beta = 2^{1/4}$ when W is an additive white Gaussian noise (AWGN) channel. In the case of AWGN, the authors in [15] proposed a procedure in which an interval for β is determined, an interval that converges to a value close to $2^{1/4}$. Notice that in [15], the order induced by β is not valid for every signal-to-noise ratio value, but it tries to cover the interval [0,1] as much as possible. A natural question one could raise is whether there is a β-expansion for the average reliability, i.e., is there a real value β such that the order induced by β and $\preceq_{\mathrm{Avr}}$ are identical over the set of binary vectors of length m? There is a significant difference between the two settings: in our case, not only is W a BEC, but the preorder induced by the average reliability is total over $\mathcal{M}_m/\equiv_{\mathrm{Avr}}$ and holds for the entire interval [0,1].
Remark 5.
By computer simulations, one can easily verify that for $m \le 5$ there is $\beta\in(1,\infty)$ such that the order induced by β and the preorder induced by the average reliability coincide. It suffices to take β = 1.22.
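A quick numerical experiment in the spirit of Remark 5 (a sketch, with our own names and a midpoint-rule approximation of the integral): count the pairs of channels that the β-expansion with β = 1.22 and the average reliability rank differently; for m ≤ 5 the count should be zero.

```python
from itertools import product

def bhatta_bec(u, p):
    b = p
    for bit in reversed(u):
        b = 2 * b - b * b if bit else b * b
    return b

def avr_num(u, grid=20000):
    """Midpoint-rule approximation of Avr(W^u)."""
    return sum(bhatta_bec(u, (t + 0.5) / grid) for t in range(grid)) / grid

def beta_val(u, beta):
    return sum(ui * beta ** i for i, ui in enumerate(u))

def disagreements(m, beta):
    us = list(product((0, 1), repeat=m))
    a = {u: avr_num(u) for u in us}
    b = {u: beta_val(u, beta) for u in us}
    return sum(1 for i in range(len(us)) for j in range(i + 1, len(us))
               if (a[us[i]] - a[us[j]]) * (b[us[i]] - b[us[j]]) < 0)

print(disagreements(5, 1.22))   # expected: 0 (the two orders coincide for m <= 5)
```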
Conjecture 2.
For m > 6, we did not find a value of β for which the two aforementioned relations are equal. Moreover, for β ≈ 1.22, the number of pairs on which the two relations disagree is minimized (see Table 5).

4.3. Threshold Points of the Binary Erasure Polarization Sub-Channels

The fact that, when m goes to infinity, the Bhattacharyya polynomial has a sharp transition from zero to one has already been proven [34]. More exactly, for any $\mathbf{u}$, there exists a point $p_0(\mathbf{u})\in(0,1)$ in whose vicinity $B_{W^{\mathbf{u}}}$ passes from very small values (close to zero) to very high values (close to one). Formally speaking, we have
Lemma 6
([34]).
$$\lim_{m\to\infty} B_{W^{\mathbf{u}}}(p) = \begin{cases} 0, & p\in[0, p_0(\mathbf{u})), \\ 1, & p\in(p_0(\mathbf{u}), 1]. \end{cases}$$
However, finding the point p 0 ( u ) where this transition holds is not trivial (see [15,29]). Here, we will use the average reliability to determine this point for some specific channels.
Lemma 7.
$$\lim_{m\to\infty} \mathrm{Avr}\left(W^{\mathbf{u}}\right) = 1 - p_0(\mathbf{u}).$$
A particularly interesting channel, analyzed in [25,29], is the synthetic channel $W^{(1^i0^{m-i})}$. More exactly, the authors analyze the sharp transition of $W^{(1^i0^{m-i})}$ from 0 to 1 when m tends to infinity, as a function of the ratio $i/(m-i)$. Here, we will give an exact formula for the average reliability of $W^{(1^i0^{m-i})}$. This result, combined with Lemma 7, will allow us to obtain a finer approximation of $p_0(\mathbf{u})$. To achieve our goal, we will look at the corresponding 2TN, namely $C_{(1^i0^{m-i})}$. For simplification, we use $l = 2^{m-i}$, $w = 2^i$ and $n = 2^m$. Notice that
$$\mathrm{Rel}\left(C_{(1^i0^{m-i})}; p\right) = 1 - \left(1 - p^l\right)^w.$$
Theorem 7.
$$\mathrm{Rel}\left(C_{(1^i0^{m-i})}; p\right) = \sum_{i=l}^{n}\left[\sum_{j=1}^{\lfloor i/l\rfloor} (-1)^{j+1}\binom{w}{j}\binom{n-jl}{n-i}\right] p^i(1-p)^{n-i}.$$
Proof. 
In order to prove our result, we need to demonstrate that, for $l \le i \le n$,
$$N_i\left(C_{(1^i0^{m-i})}\right) = \sum_{j=1}^{\lfloor i/l\rfloor}(-1)^{j+1}\binom{w}{j}\binom{n-jl}{n-i}.$$
The proof is based on an inclusion–exclusion argument. Denote by $\mathcal{P}_i$ the set of pathsets of cardinality i of $C_{(1^i0^{m-i})}$, i.e., the subsets of i closed devices that connect S to T. This gives $|\mathcal{P}_i| = N_i(C_{(1^i0^{m-i})})$.
Any pathset of cardinality $i \ge l$ contains at least one complete path of length l; hence we have w choices for fixing such a path and $\binom{n-l}{i-l}$ choices for the remaining positions. However, among these choices we might count pathsets containing other length-l paths several times. Hence, we need to subtract the over-counting, which is the number of combinations of two length-l paths, i.e., $\binom{w}{2}$, times the number of choices for the remaining positions, i.e., $\binom{n-2l}{i-2l}$. Next, we need to add back all the pathsets that contain at least three length-l paths, which equals $\binom{w}{3}\binom{n-3l}{i-3l}$, and so on until we reach the last level, i.e., $\binom{w}{\lfloor i/l\rfloor}\binom{n-\lfloor i/l\rfloor l}{i-\lfloor i/l\rfloor l}$. □
Theorem 8.
$$\mathrm{Avr}\left(W^{(1^i0^{m-i})}\right) = 1 - \frac{1}{\binom{2^i + 2^{i-m}}{2^i}}.$$
Proof. 
$$\begin{aligned}
\mathrm{Avr}\left(W^{\mathbf{u}}\right) &= \frac{1}{n+1}\sum_{i=l}^{n}\frac{N_i(C_{\mathbf{u}})}{\binom{n}{i}} = \frac{1}{n+1}\sum_{i=l}^{n}\sum_{j=1}^{\lfloor i/l\rfloor}(-1)^{j+1}\binom{w}{j}\frac{\binom{n-jl}{n-i}}{\binom{n}{i}}\\
&= \frac{1}{n+1}\sum_{j=1}^{w}\sum_{i=jl}^{n}(-1)^{j+1}\binom{w}{j}\frac{\binom{n-jl}{n-i}}{\binom{n}{i}} = \frac{1}{n+1}\sum_{j=1}^{w}\sum_{i=jl}^{n}(-1)^{j+1}\binom{w}{j}\frac{\binom{i}{jl}}{\binom{n}{jl}}\\
&= \frac{1}{n+1}\sum_{j=1}^{w}(-1)^{j+1}\binom{w}{j}\frac{1}{\binom{n}{jl}}\sum_{i=jl}^{n}\binom{i}{jl} = \frac{1}{n+1}\sum_{j=1}^{w}(-1)^{j+1}\binom{w}{j}\frac{\binom{n+1}{jl+1}}{\binom{n}{jl}}\\
&= \sum_{j=1}^{w}(-1)^{j+1}\binom{w}{j}\frac{1}{jl+1} = 1 - \sum_{j=0}^{w}(-1)^{j}\binom{w}{j}\frac{1}{jl+1} = 1 - \frac{1}{\binom{\frac{n+1}{l}}{w}}. \qquad \square
\end{aligned}$$
By duality (Lemma 5), we immediately have
Corollary 2.
$$\mathrm{Avr}\left(W^{(0^i1^{m-i})}\right) = \frac{1}{\binom{2^i + 2^{i-m}}{2^i}}.$$
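The closed formula of Theorem 8 can be cross-checked in exact arithmetic against the alternating sum obtained in its proof (a sketch, with our own names; the generalized binomial coefficient is evaluated as the product $\prod_{j=1}^{w}(j+2^{i-m})/j$).

```python
from fractions import Fraction
from math import comb

def avr_closed(m, i):
    """1 - 1 / C(2^i + 2^(i-m), 2^i), via the generalized binomial coefficient."""
    w, l = 2 ** i, 2 ** (m - i)
    eps = Fraction(1, l)                              # 2^(i-m)
    binom = Fraction(1)
    for j in range(1, w + 1):
        binom *= (j + eps) / j
    return 1 - 1 / binom

def avr_sum(m, i):
    """Sum_{j=1}^{w} (-1)^(j+1) C(w, j) / (j*l + 1), the penultimate line of the proof."""
    w, l = 2 ** i, 2 ** (m - i)
    return sum(Fraction((-1) ** (j + 1) * comb(w, j), j * l + 1) for j in range(1, w + 1))

for m, i in [(4, 1), (4, 2), (5, 2), (6, 3)]:
    print(m, i, float(avr_closed(m, i)), avr_closed(m, i) == avr_sum(m, i))
```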
Based on Theorem 8, we can establish new classes of asymptotically “good” channels. For that, we will need the following result.
Lemma 8.
$$\lim_{n\to\infty}\binom{\frac{n}{\log_2(n)\log_2(\log_2(n))}+\frac{1}{\log_2(n)\log_2(\log_2(n))}}{\frac{n}{\log_2(n)\log_2(\log_2(n))}} = 1. \qquad (22)$$
$$\lim_{n\to\infty}\binom{\frac{n}{\log_2(\log_2(n))}+\frac{1}{\log_2(\log_2(n))}}{\frac{n}{\log_2(\log_2(n))}} = \infty. \qquad (23)$$
$$\lim_{n\to\infty}\binom{\frac{n}{\log_2(n)}+\frac{1}{\log_2(n)}}{\frac{n}{\log_2(n)}} = 2. \qquad (24)$$
Theorem 8, Lemma 8 and Lemma 7 imply the following result.
Corollary 3.
Let m be a strictly positive integer and $\mathbf{u} = (1^i0^{m-i})$. Then,
  • for any $i \le m - \log_2(m) - \log_2(\log_2(m))$ we have $p_0(\mathbf{u}) \to 1$ and $p_0(\bar{\mathbf{u}}) \to 0$;
  • for any $i \ge m - \log_2(\log_2(m))$ we have $p_0(\mathbf{u}) \to 0$ and $p_0(\bar{\mathbf{u}}) \to 1$.
Another direct consequence of our results is that, for any $i \le m - \log_2(m) - \log_2(\log_2(m))$, the monomial $f = x_0\cdots x_{i-1}$ is highly reliable on average. Hence, all the monomials $g \preceq_d f$ are also highly reliable on average, as their average reliability tends to zero when m goes to infinity. Moreover, f becomes unreliable on average for $i \ge m - \log_2(\log_2(m))$. The values $m - \log_2(m) - \log_2(\log_2(m)) < i < m - \log_2(\log_2(m))$ remain to be considered in more detail.
Corollary 4.
Let m be a strictly positive integer and $i \le \log_2(\log_2(m))$. Then, for any $f\in\mathcal{M}_m$ with $f \preceq_d x_{m-i}\cdots x_{m-1}$, we have that $\mathrm{Avr}(W^f) \to 0$ when $m\to\infty$. In other words, any synthetic channel in RM(i, m) is asymptotically "good" on average.

5. Conclusions and Perspectives

A complete characterization of the Bhattacharyya parameter of the synthetic channels of a monomial code is an open problem that has attracted a lot of attention in the last decade. Even for the particular case of the binary erasure channel, the question remains unanswered. However, the implications of such a result are of high importance in coding theory, especially in polar coding. In this article, we make a step forward by proposing an order relation $\preceq_d$ that decreases the gap between the state of the art and the ultimate partial order relation for the Bhattacharyya parameter of synthetic channels. The advantage of this approach is that our algebraic description is rather easy to implement and analyze, compared to other order relations such as the one in [25]. Simulations show that $\preceq_d$ is a valid order relation for the binary symmetric channel as well, and a deeper inspection of [25] and of our work could potentially determine an algebraic description that fits the latest results.
The order relation proposed here could be employed as a new construction rule for polar codes. As $\preceq_d$ is finer than ⪯, it enables two reductions:
  • the number of non-comparable monomials in the middle of $\{\mathcal{M}_m, \preceq\}$ is significantly reduced by means of $\preceq_d$;
  • the number of strongly decreasing monomial codes is smaller than the number of decreasing monomial codes for fixed length and dimension.
Hence, these two properties open the perspectives of a more efficient construction algorithm for strongly decreasing monomial codes, and hence for polar codes.
As the relations on the Bhattacharyya parameter are all partial orders, we have proposed an alternative solution for ordering the synthetic channels. For that, we have used the concept of average reliability, borrowed from network theory. Instead of the local evaluation of the Bhattacharyya parameter, we propose a global one, by evaluating the integral, i.e., by measuring its global average behavior. Hence, we rank the synthetic channels using a preorder relation Avr , given by the value of the integral. Our result is not constructive, in the sense that it does not fully characterize the channels that belong to a specific interval. An answer to this question might provide an extremely efficient method for constructing polar codes and give much more insight into the synthetic channels W u .

Author Contributions

Conceptualization, V.-F.D. and G.C.; methodology, V.-F.D.; software, V.-F.D.; validation, V.-F.D. and G.C.; formal analysis, V.-F.D. and G.C.; investigation, V.-F.D.; resources, V.-F.D. and G.C.; writing—original draft preparation, V.-F.D. and G.C. All authors have read and agreed to the published version of the manuscript.

Funding

V-F. Dragoi is supported by a grant of the Romanian Ministry of Education and Research, CNCS- UEFISCDI, project number PN-III-P1-1.1-PD-2019-0285, within PNCDI III.

Data Availability Statement

The data presented in this study are available within the article.

Acknowledgments

The authors are thankful to anonymous reviewers for their valuable suggestions and comments to improve the quality of the article.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Proofs of Results from Section 3

Appendix A.1. Proof of Proposition 6

Proposition 
(6). Let f and g be two monomials having the same degree and let $x_h$ be such that $x_h \nmid f$ and $x_h \nmid g$. Then we have
$$f \preceq_d g \iff x_h f \preceq_d x_h g.$$
Proof. 
Let $f = x_{i_1}\cdots x_{i_s}$ and $g = x_{j_1}\cdots x_{j_s}$ with $f \preceq_d g$. Also, let $f^* = f\,x_h$ and $g^* = g\,x_h$. There are several cases to be examined here:
  • If $x_{j_s} \prec x_h$ or $x_h \prec x_{i_1}$, then the relation $f^* \preceq_d g^*$ can easily be verified by using the definition of $\preceq_d$.
  • If there is an integer $r\in\{1,\dots,s\}$ s.t. $x_{i_r} \prec x_h \prec x_{i_{r+1}}$ and $x_{j_r} \prec x_h \prec x_{j_{r+1}}$, then the relation $f^* \preceq_d g^*$ can easily be verified as in the previous step.
  • If there are two distinct integers $r, t\in\{1,\dots,s\}$ s.t. $x_{i_r} \prec x_h \prec x_{i_{r+1}}$ and $x_{j_t} \prec x_h \prec x_{j_{t+1}}$, then two cases have to be considered.
    -
    If $t < r$, then we have $x_h \prec x_{j_{t+1}} \preceq \dots \preceq x_{j_r}$ and $x_{i_{t+1}} \preceq \dots \preceq x_{i_r} \prec x_h$, which implies the following ordering
    $$x_{i_{t+1}} \preceq \dots \preceq x_{i_r} \prec x_h \prec x_{j_{t+1}} \preceq \dots \preceq x_{j_r}. \qquad (A2)$$
    Combining Equation (A2) with the definition of $\preceq_d$, we obtain the desired result, i.e., $f^* \preceq_d g^*$.
    -
    If $t > r$, then we have $x_h \prec x_{i_{r+1}} \preceq \dots \preceq x_{i_t}$ and $x_{j_{r+1}} \preceq \dots \preceq x_{j_t} \prec x_h$, which implies the following ordering
    $$x_{j_{r+1}} \preceq \dots \preceq x_{j_t} \prec x_h \prec x_{i_{r+1}} \preceq \dots \preceq x_{i_t}. \qquad (A3)$$
    Now, since $x_h \prec x_{i_t}$, it might be possible to have $j_s + \dots + j_{t+1} + h < i_s + \dots + i_{t+1} + i_t$, which would imply a violation of the partial sum conditions in the definition of $\preceq_d$. If the next partial sum changes the sign, i.e., $j_s + \dots + j_{t+1} + h + j_t \ge i_s + \dots + i_{t+1} + i_t + i_{t-1}$, by setting $\Delta_{t+1} = (j_s - i_s) + \dots + (j_{t+1} - i_{t+1})$ we have the following inequalities
    $$\Delta_{t+1} + h - i_t < 0, \qquad (A4)$$
    $$\Delta_{t+1} + h - i_t + j_t - i_{t-1} \ge 0. \qquad (A5)$$
    This implies
    $$\Delta_{t+1} + h < i_t \le \Delta_{t+1} + h + j_t - i_{t-1}. \qquad (A6)$$
    However, since $j_t < i_{t-1}$, Equation (A6) becomes impossible, which completes the proof. □

Appendix A.2. Proof of Lemma 2

Lemma 
(2). Let $f, g\in\mathcal{M}_m$ with $\deg f = \deg g = 2$, such that $f \preceq_d g$. Then $B_{W_m^f} \le B_{W_m^g}$.
Proof. 
Let $f = x_{i_1}x_{i_2}$ and $g = x_{j_1}x_{j_2}$. If $\gcd(f,g)\ne 1$ then $f \preceq g$, which implies $B_{W_m^f} \le B_{W_m^g}$.
Now suppose $\gcd(f,g) = 1$, and let $j_1 < i_1 < i_2 < j_2$. Denote $\epsilon = i_1 - j_1$. By the definition of $\preceq_d$ and ⪯ we have
$$f = x_{i_1}x_{i_2} \preceq_d x_{i_1-1}x_{i_2+1} \preceq_d \dots \preceq_d x_{j_1}x_{i_2+\epsilon} \preceq x_{j_1}x_{j_2} = g.$$
If we prove that $x_{i_1}x_{i_2} \preceq_d x_{i_1-1}x_{i_2+1} \Rightarrow B_{W_m^{x_{i_1}x_{i_2}}} \le B_{W_m^{x_{i_1-1}x_{i_2+1}}}$, the proof is finished. By definition, one can easily notice that $B_{W_4^{x_1x_2}} \le B_{W_4^{x_0x_3}} \Rightarrow B_{W_m^{x_{i_1}x_{i_2}}} \le B_{W_m^{x_{i_1-1}x_{i_2+1}}}$. Hence, we are left to prove that $B_{W_4^{x_1x_2}} \le B_{W_4^{x_0x_3}}$. We have $B\left(W_4^{x_0x_3}\right)(p) = 1 - \left(1 - \left(1 - (1-p)^2\right)^4\right)^2$ and $B\left(W_4^{x_1x_2}\right)(p) = \left(1 - \left(1 - p^2\right)^4\right)^2$.
By writing the two polynomials in the Bernstein basis and using Theorem 4, we have $B_{W_4^{x_1x_2}} = \mathrm{Rel}(C_{(0,1,1,0)})$ and $B_{W_4^{x_0x_3}} = \mathrm{Rel}(C_{(1,0,0,1)})$, with
$$N_i\left(C_{(0,1,1,0)}\right) = 0, 0, 0, 0, 16, 192, 1008, 3040, 5828, 7456, 6552, 4048, 1788, 560, 120, 16, 1,$$
$$N_i\left(C_{(1,0,0,1)}\right) = 0, 0, 0, 0, 32, 320, 1456, 3984, 7042, 8400, 7000, 4176, 1804, 560, 120, 16, 1,$$
for $i = 0,\dots,16$. As $N_i(C_{(0,1,1,0)}) \le N_i(C_{(1,0,0,1)})$ for all $i\in\{0,\dots,16\}$, we conclude the proof. □

Appendix A.3. Proof of Theorem 5

Theorem 
(5). Polar codes over the Binary Erasure Channel are strongly decreasing monomial codes.
Proof. 
The proof is based on two induction steps. First, the parameter m is fixed and we prove that the result holds for any $1\le s\le m$. Secondly, we use induction on m.
Firstly, fix m and use an induction argument on the degree s of the monomials. We also suppose that $\gcd(f,g) = 1$. For s = 1 we have $\preceq\,=\,\preceq_d$, so the result is obvious. For s = 2, use Lemma 2.
Now suppose that for any $f \preceq_d g$ with $\deg f = \deg g = s-1$ we have $B_{W_m^f} \le B_{W_m^g}$. Let $f = x_{i_1}\cdots x_{i_s}$ and $g = x_{j_1}\cdots x_{j_s}$ be such that $f \preceq_d g$, with the usual convention $i_1 < \dots < i_s$ and $j_1 < \dots < j_s$. Then we have two cases: if $f/x_{i_s} \preceq_d g/x_{j_s}$, or if $x_{i_1} \preceq x_{j_1}$, then $B_{W_m^f} \le B_{W_m^g}$. Indeed, in the first case we have that
B W m f = B W m j s 1 1 x i s j s 1 + 1 f / x i s B W m j s 1 1 x j s j s 1 + 1 f / x i s B W m j s 1 1 x j s j s 1 + 1 g / x j s = B W m g .
In the second case, when $x_{i_1} \preceq x_{j_1}$, the proof works in the same way. If we are not in one of the previous cases, it means that $j_1 < i_1$ and $i_s < j_s$. We know that there is $l\in\{1,\dots,s-1\}$ for which $i_l > j_l$. First we treat the two extreme cases, l = 1 and l = s − 1. If l = 1, this implies that $j_k \ge i_k$ for all k > 1. Let $\delta = \min\{j_2 - i_2,\, i_1 - j_1\}$. Then
f = x i 1 x i 2 x i s d x i 1 δ x i 2 + δ x i 3 x i s d x j 1 x j s = g .
In this case, either x i 1 δ = x j 1 or x i 2 + δ = x j 2 . Hence, by Lemma 2 and using the order relation ⪯ we obtain
B W m f = B W m j 2 1 f / ( x i 1 x i 2 ) j 2 + 1 x i 2 x i 1 B W m j 2 1 f / ( x i 1 x i 2 ) j 2 + 1 x i 2 + δ x i 1 δ B W m j 2 1 x i s x i 3 j 2 + 1 x j 2 x j 1 B W m j 2 1 x j s x j 3 j 2 + 1 x j 2 x j 1 = B W m g .
If l = s 1 , by putting δ = i s 1 j s 1 and taking into account that δ j s i s we obtain
B W m f = B W m j s 2 1 x i s x i s 1 ) j s 2 + 1 f / ( x i s x i s 1 ) B W m j s 2 1 x i s + δ x i s 1 δ ) j s 2 + 1 f / ( x i s x i s 1 ) = B W m j s 2 1 x i s + δ x j s 1 ) j s 2 + 1 x i s 2 x i 1 B W m j s 2 1 x j s x j s 1 ) j s 2 + 1 x j s 2 x j 1 = B W m g .
When 1 < l < s 1 suppose that i s j l < j l + 1 i s and denote by δ l , s = i s j l . Using the definition of d we have
h = x j 1 x j l + δ l , s x j l + 1 δ l , s x j s d d x j 1 x j l + 1 x j l + 1 1 x j s d x j 1 x j s = g .
Notice that h = x j 1 x j l 1 x i s x j l + 1 i s + j l x j s . Next, we prove that f d h . Since gcd ( f , h ) = x i s , we can use Lemma 6 and demonstrate f / x i s d h / x i s , i.e., x i 1 x i s 1 d x j 1 x j l 1 x j l + 1 i s + j l x j l + 2 x j s . As,
x i l + 1 x i s x j l + 1 x j s ,
we obtain x i l + 1 x i s 1 d x j l + 2 x j s , simply by verifying
t { 0 , , s l 2 } k = 0 t i s 1 k k = 0 t j s k .
The next partial sums inequalities,
( i k + + i l 1 ) + i l + i l + 1 + + i s 1 < ( j k + + j l 1 ) + j l + 1 i s + j l + j l + 2 + + j s
are verified from the relation f d g . So we check the partial sums step by step:
  • i s 1 < i s < j l + 1 < j s and thus x i s 1 d x j s
  • i s 1 + i s 1 < j s 1 + j s and thus x i s 2 x i s 1 d x j s 1 x j s
  • i l + 1 + + i s 1 < j l + 2 + + j s and thus x i l + 1 x i s 1 d x j l + 2 x j s
  • i l + i l + 1 + + i s 1 < j l + 1 i s + j l + j l + 2 + + j s by definition of f d g and thus x i l x i s 1 d x j l + 1 i s + j l x j s
Hence we have that f / x i s d h / x i s which implies, using the induction hypothesis, that B W m f / x i s B W m h / x i s , from which we deduce B W m f B W m h . Also,
B W m h = B W x j s x j l + 2 x j l + 1 δ l , s x j l + δ l , s x j l 1 x j 1 = B ( W * ) x j l + 1 δ l , s x j l + δ l , s x j l 1 x j 1 B ( W * ) x j l + 1 x j l x j l 1 x j 1 = B W x j s x j l + 2 x j l + 1 x j l x j l 1 x j 1 = B W m g .
Secondly, we use induction on the number of variables m. For the first values of m, i.e., $m \le 4$, it is straightforward to check the result.
Let $f = x_{i_1}\cdots x_{i_s}$ and $g = x_{j_1}\cdots x_{j_s}$ be such that $f \preceq_d g$, with the usual convention $i_1 < \dots < i_s$ and $j_1 < \dots < j_s$. The following cases are possible:
  • If $i_s = j_s = m$, then we have $W_{m+1}^f = \left(W_1^{x_m}\right)_m^{f_{[0,m-1]}}$ and $W_{m+1}^g = \left(W_1^{x_m}\right)_m^{g_{[0,m-1]}}$. Since $f_{[0,m-1]} \preceq_d g_{[0,m-1]}$, we have by the induction hypothesis
    $$B\left(\left(W_1^{x_m}\right)_m^{f_{[0,m-1]}}\right) \le B\left(\left(W_1^{x_m}\right)_m^{g_{[0,m-1]}}\right).$$
  • Else, by the definition of the order we necessarily have $j_s > i_s$. We also have that
    $$h = x_{j_1}\cdots x_{j_{s-2}}\, x_{j_{s-1}+1}\, x_{j_s-1} \preceq_d x_{j_1}\cdots x_{j_{s-1}}\, x_{j_s} = g,$$
    which implies that $B_{W_{m+1}^h} \le B_{W_{m+1}^g}$. At the same time, notice that $f \preceq_d h$ and $\mathrm{ind}(f), \mathrm{ind}(h) \subseteq \{0,\dots,m-1\}$. Therefore, we obtain
    $$B_{W_{m+1}^f} \le B_{W_{m+1}^h} \le B_{W_{m+1}^g}. \qquad \square$$

References

  1. Arıkan, E. Channel polarization: A method for constructing capacity-achieving codes for symmetric binary-input memoryless channels. IEEE Trans. Inform. Theory 2009, 55, 3051–3073.
  2. Bioglio, V.; Condo, C.; Land, I. Design of Polar Codes in 5G New Radio. IEEE Commun. Surv. Tutor. 2020, 23, 29–40.
  3. Dragoi, V. Algebraic Approach for the Study of Algorithmic Problems Coming from Cryptography and the Theory of Error Correcting Codes. Ph.D. Thesis, Université de Rouen Normandie, France, 2017.
  4. Bardet, M.; Dragoi, V.; Otmani, A.; Tillich, J. Algebraic properties of polar codes from a new polynomial formalism. In Proceedings of the 2016 IEEE International Symposium on Information Theory (ISIT), Barcelona, Spain, 10–15 July 2016; pp. 230–234.
  5. Rengaswamy, N.; Calderbank, R.; Newman, M.; Pfister, H.D. Classical Coding Problem from Transversal T Gates. In Proceedings of the 2020 IEEE International Symposium on Information Theory (ISIT), Los Angeles, CA, USA, 21–26 June 2020; pp. 1891–1896.
  6. Rengaswamy, N. Classical Coding Approaches to Quantum Applications. arXiv 2020, arXiv:2004.06834.
  7. Krishna, A.; Tillich, J.P. Magic state distillation with punctured polar codes. arXiv 2019, arXiv:1811.03112.
  8. Bardet, M.; Chaulet, J.; Dragoi, V.; Otmani, A.; Tillich, J.P. Cryptanalysis of the McEliece Public Key Cryptosystem Based on Polar Codes. In Post-Quantum Cryptography, PQCrypto 2016; Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2016; Volume 9606, pp. 118–143.
  9. Drăgoi, V.; Beiu, V.; Bucerzan, D. Vulnerabilities of the McEliece Variants Based on Polar Codes. In Innovative Security Solutions for Information Technology and Communications, SecITC 2018; Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2018; Volume 11359, pp. 376–390.
  10. Bucerzan, D.; Dragoi, V.; Kalachi, H.T. Evolution of the McEliece Public Key Encryption Scheme. In Innovative Security Solutions for Information Technology and Communications, SecITC 2017; Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2017; Volume 10543, pp. 129–149.
  11. Drăgoi, V.F.; Beiu, V. Fast Reliability Ranking of Matchstick Minimal Networks. arXiv 2019, arXiv:1911.01153.
  12. Dragoi, V.; Cowell, S.; Beiu, V. Ordering series and parallel compositions. In Proceedings of the 2018 IEEE 18th International Conference on Nanotechnology (IEEE-NANO), Cork, Ireland, 23–26 July 2018; pp. 1–4.
  13. Beiu, V.; Cowell, S.R.; Drăgoi, V.F. On Posets for Reliability: How Fine Can They Be? In Soft Computing Applications, SOFA 2018; Advances in Intelligent Systems and Computing; Springer: Cham, Switzerland, 2021; Volume 1221, pp. 115–129.
  14. Mondelli, M.; Hassani, S.H.; Urbanke, R. Construction of polar codes with sublinear complexity. In Proceedings of the 2017 IEEE International Symposium on Information Theory (ISIT), Aachen, Germany, 25–30 June 2017; pp. 1853–1857.
  15. He, G.; Belfiore, J.; Land, I.; Yang, G.; Liu, X.; Chen, Y.; Li, R.; Wang, J.; Ge, Y.; Zhang, R.; et al. Beta-Expansion: A Theoretical Framework for Fast and Recursive Construction of Polar Codes. In Proceedings of the GLOBECOM 2017-2017 IEEE Global Communications Conference, Singapore, 4–8 December 2017; pp. 1–6.
  16. Tal, I.; Vardy, A. How to Construct Polar Codes. IEEE Trans. Inform. Theory 2013, 59, 6562–6582.
  17. Mori, R.; Tanaka, T. Performance and construction of polar codes on symmetric binary-input memoryless channels. In Proceedings of the 2009 IEEE International Symposium on Information Theory, Seoul, Korea, 28 June–3 July 2009; pp. 1496–1500.
  18. Mahdavifar, H.; El-Khamy, M.; Lee, J.; Kang, I. On the construction and decoding of concatenated polar codes. In Proceedings of the 2013 IEEE International Symposium on Information Theory, Istanbul, Turkey, 7–12 July 2013; pp. 952–956.
  19. Korada, S.B.; Sasoglu, E.; Urbanke, R.L. Polar Codes: Characterization of Exponent, Bounds, and Constructions. IEEE Trans. Inform. Theory 2010, 56, 6253–6264.
  20. Afşer, H.; Deliç, H. On the Channel-Specific Construction of Polar Codes. IEEE Commun. Lett. 2015, 19, 1480–1483.
  21. Trifonov, P.; Trofimiuk, G. A randomized construction of polar subcodes. In Proceedings of the 2017 IEEE International Symposium on Information Theory (ISIT), Aachen, Germany, 25–30 June 2017; pp. 1863–1867.
  22. Huang, L.; Zhang, H.; Li, R.; Ge, Y.; Wang, J. AI Coding: Learning to Construct Error Correction Codes. IEEE Trans. Commun. 2020, 68, 26–39.
  23. Romano, G.; Ciuonzo, D. Minimum-Variance Importance-Sampling Bernoulli Estimator for Fast Simulation of Linear Block Codes over Binary Symmetric Channels. IEEE Trans. Wirel. Commun. 2014, 13, 486–496.
  24. Minja, A.; Šenk, V. Quasi-Analytical Simulation Method for Estimating the Error Probability of Star Domain Decoders. IEEE Trans. Commun. 2019, 67, 3101–3113.
  25. Wu, W.; Siegel, P.H. Generalized Partial Orders for Polar Code Bit-Channels. IEEE Trans. Inf. Theory 2019, 65, 7114–7130.
  26. Saptharishi, R.; Shpilka, A.; Volk, B.L. Efficiently Decoding Reed–Muller Codes From Random Errors. IEEE Trans. Inf. Theory 2017, 63, 1954–1960.
  27. Kudekar, S.; Kumar, S.; Mondelli, M.; Pfister, H.D.; Sasoglu, E.; Urbanke, R. Reed-Muller Codes Achieve Capacity on Erasure Channels. IEEE Trans. Inf. Theory 2017, 63, 4298–4316.
  28. Kumar, S.; Calderbank, R.; Pfister, H.D. Beyond double transitivity: Capacity-achieving cyclic codes on erasure channels. In Proceedings of the 2016 IEEE Information Theory Workshop (ITW), Cambridge, UK, 11–14 September 2016; pp. 241–245.
  29. Ordentlich, E.; Roth, R.M. On the Pointwise Threshold Behavior of the Binary Erasure Polarization Subchannels. IEEE Trans. Inf. Theory 2019, 65, 6044–6055.
  30. Drăgoi, V.F.; Beiu, V. Studying the Binary Erasure Polarization Subchannels Using Network Reliability. IEEE Commun. Lett. 2020, 24, 62–66.
  31. Stanley, R.P. Enumerative Combinatorics; Cambridge University Press: Cambridge, NY, USA, 2012.
  32. Cristescu, G.; Drăgoi, V.F. Cubic Spline Approximation of the Reliability Polynomials of Two Dual Hammock Networks. Transylv. J. Math. Mech. 2019, 11, 77–90.
  33. Cristescu, G.; Drăgoi, V.F. Efficient approximation of two-terminal networks reliability polynomials using cubic splines. IEEE Trans. Reliab. 2021, 1–11.
  34. Mondelli, M. From Polar to Reed-Muller Codes: Unified Scaling, Non-standard Channels, and a Proven Conjecture. Ph.D. Thesis, Ecole Polytechnique Fédérale de Lausanne, Lausanne, Switzerland, 2016.
  35. Richardson, T.; Urbanke, R. Modern Coding Theory; Cambridge University Press: New York, NY, USA, 2008.
  36. Roth, R.M. Introduction to Coding Theory; Cambridge University Press: New York, NY, USA, 2006.
  37. Carlet, C. Boolean functions for cryptography and error correcting codes. In Boolean Models and Methods in Mathematics, Computer Science, and Engineering; Cambridge University Press: Cambridge, NY, USA, 2010; Chapter 8; pp. 257–397.
  38. Moore, E.F.; Shannon, C.E. Reliable circuits using less reliable relays - Part I. J. Frankl. Inst. 1956, 262, 191–208.
  39. Drăgoi, V.; Cowell, S.R.; Beiu, V.; Hoară, S.; Gaşpar, P. How Reliable are Compositions of Series and Parallel Networks Compared with Hammocks? Int. J. Comput. Commun. Control 2018, 13, 772–791.
  40. Colbourn, C.J. The Combinatorics of Network Reliability; Oxford University Press: New York, NY, USA, 1987.
  41. Dăuş, L.; Jianu, M. The shape of the reliability polynomial of a hammock network. In Intelligent Methods in Computing, Communication and Control, ICCC 2020; Advances in Intelligent Systems and Computing; Springer: Berlin/Heidelberg, Germany, 2021; Volume 1243, pp. 93–105.
  42. Brown, J.; Cox, D.; Ehrenborg, R. The average reliability of a graph. Discret. Appl. Math. 2014, 177, 19–33.
Figure 1. Order relations ( ⪯_w , ⪯ , ⪯_d , ≤ ) and the preorder relation ( ⪯_Avr ) for monomial codes over the BEC. The connections in red are the results coming from this article. The dotted edge from β to ≤ represents an order relation that is valid only for a sub-interval of [ 0 , 1 ].
Figure 2. Average Bhattacharyya parameter. On the x-axis are the integer values of the binary vectors u ∈ {0, 1}^m, and on the y-axis are the values Avr(B(W^u)).
Figure 3. Combined circuit as defined by Arikan [1], the Bhattacharyya parameter of the corresponding synthetic channels and the compositions in C_3.
Figure 4. The two order relations ⪯ and ⪯_d for m = 4.
Figure 5. Sorted Avr(B(W^u)) for all u ∈ {0, 1}^m.
Table 1. Non-comparable elements in the middle of { M_m , ⪯ }.
m = 6
Non-comparable w.r.t. ⪯: (4, 3, 2, 1), (5, 3, 2), (5, 4, 1), (6, 3, 1), (6, 4)
Chains w.r.t. ⪯_d: (5, 3, 2) ⪯_d (5, 4, 1) ⪯_d (6, 3, 1) ⪯_d (4, 3, 2, 1); (6, 4)
m = 7
Non-comparable w.r.t. ⪯: (5, 4, 3, 2), (6, 4, 3, 1), (6, 5, 2, 1), (6, 5, 3), (7, 4, 2, 1), (7, 4, 3), (7, 5, 2), (7, 6, 1)
Chains w.r.t. ⪯_d: (6, 5, 3) ⪯_d (7, 4, 3) ⪯_d (7, 5, 2) ⪯_d (7, 6, 1) ⪯_d (5, 4, 3, 2) ⪯_d (6, 4, 3, 1) ⪯_d (6, 5, 2, 1) ⪯_d (7, 4, 2, 1)
m = 8
Non-comparable w.r.t. ⪯: (8, 4, 3, 2, 1), (7, 5, 3, 2, 1), (6, 5, 4, 2, 1), (8, 7, 3), (8, 6, 4), (7, 6, 5)
Chains w.r.t. ⪯_d: (7, 6, 5) ⪯_d (8, 6, 4) ⪯_d (8, 7, 3) ⪯_d (6, 5, 4, 2, 1) ⪯_d (7, 5, 3, 2, 1) ⪯_d (8, 4, 3, 2, 1)
Non-comparable w.r.t. ⪯: (6, 5, 4, 3), (7, 5, 4, 2), (7, 6, 3, 2), (7, 6, 4, 1), (8, 5, 3, 2), (8, 5, 4, 1), (8, 6, 3, 1), (8, 7, 2, 1)
Chains w.r.t. ⪯_d: (6, 5, 4, 3) ⪯_d (7, 5, 4, 2) ⪯_d (7, 6, 3, 2) ⪯_d (7, 6, 4, 1) ⪯_d (8, 5, 4, 1) ⪯_d (8, 6, 3, 1) ⪯_d (8, 7, 2, 1); (8, 5, 3, 2)
Table 2. Decreasing and strongly decreasing monomial sets for m = 5.
ind(I) for ⪯ | ind(J) for ⪯_d
k = 9
{0, 1, 2, 3, 4, 01, 02, 03, 04} | —
{0, 1, 2, 3, 4, 01, 02, 03, 012} | —
{0, 1, 2, 3, 4, 01, 02, 03, 12} | {0, 1, 2, 3, 4, 01, 02, 03, 12}
{0, 1, 2, 3, 01, 02, 03, 12, 012} | {0, 1, 2, 3, 01, 02, 03, 12, 012}
{0, 1, 2, 3, 01, 02, 03, 12, 13} | {0, 1, 2, 3, 01, 02, 03, 12, 13}
k = 10
{0, 1, 2, 3, 4, 01, 02, 03, 04, 12} | —
{0, 1, 2, 3, 4, 01, 02, 03, 12, 012} | {0, 1, 2, 3, 4, 01, 02, 03, 12, 012}
{0, 1, 2, 3, 4, 01, 02, 03, 12, 13} | {0, 1, 2, 3, 4, 01, 02, 03, 12, 13}
{0, 1, 2, 3, 01, 02, 03, 12, 13, 012} | {0, 1, 2, 3, 01, 02, 03, 12, 13, 012}
{0, 1, 2, 3, 01, 02, 03, 12, 13, 23} | —
k = 11
{0, 1, 2, 3, 4, 01, 02, 03, 04, 12, 012} | —
{0, 1, 2, 3, 4, 01, 02, 03, 04, 12, 13} | {0, 1, 2, 3, 4, 01, 02, 03, 04, 12, 13}
{0, 1, 2, 3, 4, 01, 02, 03, 12, 13, 012} | {0, 1, 2, 3, 4, 01, 02, 03, 12, 13, 012}
{0, 1, 2, 3, 4, 01, 02, 03, 12, 13, 23} | —
{0, 1, 2, 3, 01, 02, 03, 12, 13, 23, 012} | —
{0, 1, 2, 3, 01, 02, 03, 12, 13, 012, 013} | {0, 1, 2, 3, 01, 02, 03, 12, 13, 012, 013}
k = 12
{0, 1, 2, 3, 4, 01, 02, 03, 04, 12, 13, 012} | {0, 1, 2, 3, 4, 01, 02, 03, 04, 12, 13, 012}
{0, 1, 2, 3, 4, 01, 02, 03, 04, 12, 13, 23} | {0, 1, 2, 3, 4, 01, 02, 03, 04, 12, 13, 23}
{0, 1, 2, 3, 4, 01, 02, 03, 04, 12, 13, 14} | —
{0, 1, 2, 3, 4, 01, 02, 03, 12, 13, 012, 013} | {0, 1, 2, 3, 4, 01, 02, 03, 12, 13, 012, 013}
{0, 1, 2, 3, 4, 01, 02, 03, 12, 13, 23, 012} | —
{0, 1, 2, 3, 01, 02, 03, 12, 13, 23, 012, 013} | —
Table 3. Average reliability of the synthetic channels.
m = 2
u: 0, 1, 2, 3
Avr(B(W^u)): 0.20, 0.47, 0.53, 0.80
m = 3
u: 0, 1, 2, 4, 3, 5, 6, 7
Avr(B(W^u)): 0.11, 0.29, 0.34, 0.41, 0.59, 0.66, 0.71, 0.89
m = 4
u: 0, 1, 2, 4, 8, 3, 5, 6, 9, 10, 12, 7, 11, 13, 14, 15
Avr(B(W^u)): 0.06, 0.16, 0.20, 0.24, 0.30, 0.38, 0.44, 0.48, 0.52, 0.56, 0.62, 0.70, 0.76, 0.80, 0.84, 0.94
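The entries of Table 3 can be reproduced numerically. The following sketch is our own illustration: starting from B(W) = p for the BEC(p), each bit of u applies one of the two transforms z ↦ 2z − z² or z ↦ z², and the resulting polynomial in p is averaged over [0, 1]. The assignment of 0/1 to the two transforms and the bit order of u are assumptions on our part, so only the multiset of averages is compared with the m = 2 row.

import numpy as np

def avr_bhattacharyya(u_bits, num=100001):
    """Average of B(W^u) over the erasure probability p in [0, 1] for the BEC.
    Assumed convention: bit 1 applies z -> z**2, bit 0 applies z -> 2z - z**2."""
    p = np.linspace(0.0, 1.0, num)
    z = p.copy()                      # B(W) = p for BEC(p)
    for b in u_bits:
        z = z * z if b else 2 * z - z * z
    return np.trapz(z, p)             # numerical average over [0, 1]

# m = 2: the four averages match the multiset {0.20, 0.47, 0.53, 0.80} of Table 3.
vals = sorted(round(avr_bhattacharyya((b0, b1)), 2) for b0 in (0, 1) for b1 in (0, 1))
print(vals)                           # [0.2, 0.47, 0.53, 0.8]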
Table 4. Number of u ∈ {0, 1}^m that satisfy Avr(B(W^u)) ∈ (i/10, (i + 1)/10], for 0 ≤ i < 5; ϵ_m = 2^(m−4)/10.
m | (0, 0.1] | (0.1, 0.2] | (0.2, 0.3] | (0.3, 0.4] | (0.4, 0.5] | [2^m/10 − ϵ_m, 2^m/10 + ϵ_m]
5 | 2 | 3 | 4 | 4 | 3 | [3, 4]
6 | 5 | 7 | 6 | 8 | 6 | [6, 7]
7 | 11 | 13 | 14 | 13 | 13 | [12, 14]
8 | 23 | 25 | 27 | 27 | 26 | [24, 28]
9 | 49 | 51 | 50 | 55 | 51 | [48, 55]
10 | 99 | 104 | 98 | 107 | 104 | [97, 109]
11 | 199 | 209 | 204 | 204 | 208 | [194, 218]
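For completeness, the bin counts of Table 4 can be recomputed with the same routine; the sketch below is again our own illustration and reuses avr_bhattacharyya from the previous sketch. Values falling numerically very close to a bin boundary could shift a count by one, so the output should only be compared qualitatively with the rows above.

from itertools import product

def bin_counts(m, bins=5):
    """Count the u in {0,1}^m with Avr(B(W^u)) in (i/10, (i+1)/10] for i = 0,...,bins-1."""
    counts = [0] * bins
    for u in product((0, 1), repeat=m):
        a = avr_bhattacharyya(u)
        for i in range(bins):
            if i / 10 < a <= (i + 1) / 10:
                counts[i] += 1
                break
    return counts

print(bin_counts(5))                  # to be compared with the m = 5 row of Table 4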
Table 5. Number of pairs (u, v) satisfying Avr(B(W^u)) ≤ Avr(B(W^v)) for which the β-expansion gives the opposite ordering of β(u) and β(v).
m | β | Number of incompatible pairs of elements | 2^m
4 | (1, 1.32] | — | 16
5 | (1.18, 1.22] | — | 32
6 | 1.22 | 2 | 64
7 | 1.22 | 10 | 128
8 | 1.22 | 36 | 256
9 | 1.22 | 99 | 512