
Simple Majority Consensus in Networks with Unreliable Communication †

1 Signal and Information Processing Laboratory, ETH Zürich, 8092 Zürich, Switzerland
2 The Andrew and Erna Viterbi Faculty of Electrical and Computer Engineering, Technion-Israel Institute of Technology, Technion City, Haifa 3200003, Israel
* Author to whom correspondence should be addressed.
This paper is an extended version of our paper published in the proceedings of the 2021 International Symposium on Distributed Computing.
Entropy 2022, 24(3), 333; https://doi.org/10.3390/e24030333
Submission received: 24 January 2022 / Revised: 16 February 2022 / Accepted: 24 February 2022 / Published: 25 February 2022
(This article belongs to the Section Multidisciplinary Applications)

Abstract

In this work, we analyze the performance of a simple majority-rule protocol solving a fundamental coordination problem in distributed systems (binary majority consensus) in the presence of probabilistic message loss. Using probabilistic analysis for a large-scale, fully-connected network of $2n$ agents, we prove that the Simple Majority Protocol (SMP) reaches consensus in only three communication rounds, with probability approaching 1 as $n$ grows to infinity. Moreover, if the difference between the numbers of agents that hold different opinions grows at a rate of $\sqrt{n}$, then the SMP with only two communication rounds attains consensus on the majority opinion of the network, and if this difference grows faster than $\sqrt{n}$, then the SMP reaches consensus on the majority opinion of the network in a single round, with probability converging to 1 exponentially fast in $n$. We also provide some converse results, showing that these requirements are not only sufficient, but also necessary.

1. Introduction

The digital age has driven the need for easy and fast access to information. The World Wide Web has facilitated the existence of many useful multiagent systems, from messaging apps to cryptocurrency [1] and distributed data storage (or cloud services) [2,3]. However, the design of multiagent systems inherently requires agents to communicate and coordinate according to a prescribed shared protocol to achieve a common goal. For example, messaging apps must always show messages in the same order to all participants in a conversation, which is challenging when user clocks are not necessarily synchronized [4,5]. Cryptocurrencies employ decentralized data structures to register currency transactions, which require a vast majority of users to agree upon their current state [6]. Distributed data storage services must show consistent views of stored files in the presence of multiple concurrent reading and writing operations [7,8].
In the pursuit of developing such distributed protocols, much of the literature routinely makes two powerful assumptions. The first is that communication links are reliable [9,10,11], i.e., all messages between agents are eventually delivered. The second is that there exists an upper bound on the transmission delay of messages from one agent to another (usually the maximum propagation time of links) [12]. Nonetheless, communication networks are notoriously unreliable [13,14,15]. In fact, actual communication links may suffer from sudden crashes, resulting in messages in transit being lost forever. In an effort to ensure reliability, distributed applications are generally built upon a reliable broadcast layer implemented by the Transmission Control Protocol (TCP) [16], one of the main protocols in the internet protocol suite. However, while TCP guarantees eventual delivery of all sent messages, it does not provide any upper bound on delivery time [17] (p. 9). In practice, these assumptions do not hold simultaneously.
In this work, we assume no such underlying structure exists and analyze the performance of a simple majority-rule protocol solving a fundamental coordination problem in distributed systems (binary majority consensus) in the presence of probabilistic message loss. Using probabilistic analysis for a large-scale, fully-connected network of $2n$ agents, we prove that the Simple Majority Protocol (SMP) converges rapidly to a consensus on the majority opinion of the network with probability approaching 1 as $n \to \infty$, given that the difference between the numbers of agents that hold different opinions grows at least as fast as $\sqrt{n}$. Otherwise, if the difference between the numbers of agents that hold different opinions is relatively close to zero, then the SMP still converges extremely fast to a consensus, but not necessarily on the initial majority opinion of the network.

1.1. Importance of Reliable Communication

Reliability of communication is essential to guarantee coordination in almost all cases. The pitfalls and design challenges of coordination when communication is unreliable are best illustrated by the two generals’ problem, which was popularized by Jim Gray [18].
Consider two generals who must coordinate a joint attack on an enemy. Both generals must attack simultaneously for the attack to succeed. While the two generals have agreed that they will attack, they have not agreed upon a time for the attack. To coordinate, they can send messages to one another by running messengers. However, the messengers can be captured by the enemy, and their messages will therefore not reach their destination.
Due to the uncertainty of message delivery, there exists no deterministic joint communication protocol that guarantees a coordinated attack. To see this, assume, by contradiction, that such a protocol exists. Since a deterministic protocol must solve the problem in a finite number of steps, it prescribes a fixed number of message exchanges between the two generals, after which both must attack together. Some of these messages are successfully delivered and some are lost. Consider the last successfully delivered message in a run of the protocol, after which the recipient is confident enough to attack without the need for any further correspondence. Suppose this message was lost instead; then the recipient will hold off and not attack. However, the sender does not know about this last communication failure. By the protocol's definition, he must attack anyway, despite his counterpart's reluctance, contradicting the assumption that the protocol was a solution to the problem.

1.2. Majority Consensus

The impossibility result of the two generals’ problem had far-reaching implications in the field of distributed protocols and databases, including the study of binary consensus [19]. In the binary consensus problem, every agent is initially assigned some binary value, referred to as the agent’s initial opinion. The goal of a protocol that solves consensus is to have every agent eventually decide on the same opinion, thus reaching agreement throughout the system. More formally, given any initial assignment of agent opinions, a run of a protocol which solves consensus must exhibit the following three properties:
  • Decision: every agent eventually decides on some opinion $v \in \{0, 1\}$;
  • Agreement: if some agent decided on $v$, no opinion other than $v$ can be decided on by any other agent;
  • Nontriviality: if some agent has decided on $v$, then $v$ was an opinion initially assigned to some agent.
Consensus is a fundamental problem in distributed systems, as many other coordination problems were shown to be directly reducible to and from consensus. The list includes agreeing on what transactions to commit to a database [20], state machine replication [21], atomic snapshots [22], total ordering of concurrent events [23], and the two generals’ problem, implying that no protocol can guarantee all three properties when communication is unreliable [24].
In light of this, it is interesting to consider a variation of the two generals’ problem where the probability of a messenger getting captured is $p$ (independently of other messengers) [25,26]. While a coordinated attack is still deterministically impossible, it is straightforward to design a protocol that guarantees success with probability at least $q$, which can be as close as desired to 1. The first general simply sends $\lceil \log_p(1 - q) \rceil$ messengers, then attacks at the specified time without waiting for a reply, and the second general attacks if any messenger from the first general arrives.
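As a worked example of this messenger calculation, here is a minimal Python sketch of our own; the values of $p$ and the target success probability are illustrative, not from the paper:

```python
import math

# Each messenger is captured independently with probability p. Sending k
# messengers succeeds (at least one arrives) with probability 1 - p**k,
# so we need 1 - p**k >= q_target, i.e., k >= log_p(1 - q_target).
p, q_target = 0.5, 0.999
k = math.ceil(math.log(1 - q_target, p))
assert 1 - p ** k >= q_target
print(k)  # -> 10 messengers suffice for these values
```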
In this work, we investigate whether leveraging such an assumption helps to solve binary majority consensus, in which the nontriviality clause stipulates that if a majority of agents initially hold the same opinion, then all agents must decide on this opinion. This variant of consensus is utilized when the agreed upon opinion holds importance beyond facilitating agreement. For example, a distributed system of sensors capable of detecting natural gas could use majority consensus to answer the question “Is the amount of gas in the air greater than 10,000 ppm?” to help detect a gas leak in a gas processing center.
We analyze the performance of the SMP in a complete graph of communication, i.e., where each agent has an active communication channel to every other agent in the system. In SMP, agents communicate in equal-length time intervals called rounds. All messages are sent at the beginning of a communication round, and they either arrive by the end of the round or are considered lost. We assume that all message loss events are statistically independent and identically distributed with some constant probability.
The SMP can be briefly described as follows: in each round, every agent sends its current opinion to all other agents. Then, it waits to receive all messages from other agents proposing their own opinions. If a majority of received messages propose the same opinion, then the agent adopts this opinion for the next round. All ties are reconciled by readopting the agent’s own opinion. After a fixed number of rounds r, each agent decides on its currently adopted opinion.
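To make the protocol concrete, here is a minimal simulation sketch of one SMP communication round under the i.i.d. message-loss model described above (the function name, seed, and parameter values are our own choices, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)

def smp_round(x, q):
    """One SMP round: every agent broadcasts its opinion, each message is
    lost independently with probability q, and each agent adopts the
    majority among its own opinion and the messages it received,
    keeping its own opinion on a tie."""
    m = len(x)
    delivered = (rng.random((m, m)) >= q).astype(int)  # [i, j]: j's message reached i
    np.fill_diagonal(delivered, 0)                     # agents do not message themselves
    ones_rx = delivered @ x                            # '1'-messages seen by each agent
    zeros_rx = delivered.sum(axis=1) - ones_rx         # '0'-messages seen by each agent
    ones = ones_rx + (x == 1)                          # include the agent's own opinion
    zeros = zeros_rx + (x == 0)
    return np.where(ones > zeros, 1, np.where(zeros > ones, 0, x))

# Example: 2n = 1000 agents with a modest initial majority of '0's.
x = np.array([0] * 530 + [1] * 470)
for _ in range(3):
    x = smp_round(x, q=0.2)
print(x.min() == x.max())  # True on typical runs: consensus reached
```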
Similarly to the probabilistic protocol for the two generals’ problem discussed above, the SMP does not solve consensus deterministically, but rather provides probabilistic guarantees instead. The Decision and Nontriviality properties of classical consensus are assured, since all agents decide by the end of round r and any opinion that was decided on, was proposed by some agent. However, Agreement is not assured, since there always exists a nonzero probability of a run of the protocol in which message losses cause one agent to see only one opinion and another agent to see only the other, thus making them disagree. Likewise, Nontriviality of majority consensus is not guaranteed, since the majority opinion could be hidden from some agent. We will show in this article that the probability of these runs is negligible as the number of agents, n, tends to infinity, thus demonstrating that unreliable communication is not an insurmountable obstacle for coordination.
Specifically, we prove that the SMP with $r = 3$ reaches classical consensus with probability converging to 1 as $n$ tends to infinity. In a system of $2n$ agents, let $\delta_n$ be the number of agents that are initially assigned the majority opinion, minus $n$. For simplicity, assume the majority opinion is always the same for all $n$. We show that if $\delta_n$ grows at a rate of $\sqrt{n}$, then the SMP with $r = 2$ reaches majority consensus with probability approaching 1 as $n \to \infty$. We also show that if $\delta_n$ grows at a rate faster than $\sqrt{n}$, then the SMP with $r = 1$ reaches majority consensus with probability that converges to 1 exponentially fast.
We also show that these achievability results are, in fact, tight. We will prove that if $\delta_n = 0$, then $r = 3$ communication rounds are a necessary condition, since the probability of reaching consensus with only $r = 2$ rounds converges to 0 as $n \to \infty$. Similarly, if $\delta_n$ grows as slowly as $\sqrt{n}$, then $r = 2$ rounds are a necessary condition to reach majority consensus.

1.3. Related Work

The problem of binary majority consensus was extensively researched in many different fields and contexts, including autonomous systems [27,28,29,30], distributed systems [31,32,33], and information theory [34,35,36]. Almost always, the problem is studied in the context of possible failure of some aspect of the network. In distributed systems, failure most often arises from agents behaving maliciously, failing to follow the protocol, or outright crashing. Consequently, protocols that solve consensus (and majority consensus by extension) are designed to tolerate a certain fraction of the set of agents failing [37,38]. Transmission faults (i.e., message loss, erasure, or addition) can be considered an extension of agent failure, but doing so may lead to false conclusions. For example, in a system of $n$ agents, the entire system may be considered faulty even if only one message from each agent is lost. However, as shown by Santoro and Widmayer [39], the system may tolerate up to $n - 1$ message losses in a round and still reach consensus. Additionally, assuming a probability distribution on message loss is consistent with how network protocols are analyzed. The most notable example is that TCP throughput was shown to be inversely proportional to the square root of the link's average packet (i.e., message) loss probability [40].
In [27,29,30,34], the authors studied the effects of message loss, random topology, Gaussian noise, and faulty agents on the SMP's convergence rate, i.e., the fraction of initial assignments of agent opinions (out of $2^{2n}$) resulting in successful agreement. Specifically, in [30], computer simulations showed an improvement in the convergence rate of the SMP as the message loss probability increased up to 0.8, after which the rate begins to decrease to zero. In contrast, we are interested in the maximal probability of failure over any initial assignment of agent opinions, since we cannot assume any distribution or frequency on the input to the consensus problem.
Mustafa and Pekeč [28] studied the requirements on the connectivity of the network such that, under the assumption of reliable communication, the SMP achieves consensus on any initial assignment of agent opinions. Their main result is that the SMP computes the majority consensus successfully only in highly-connected networks. This conclusion led us to analyze the SMP under the assumption of a fully-connected network. However, message loss may actually improve the chances of consensus in graphs with lesser degrees of connectivity, as shown in [30]. We leave the proof of this hypothesis to future work. Additionally, the complete graph assumption is a valid approximation for unstructured overlays in peer-to-peer networks, e.g., Freenet, Gnutella, and Fast Track [41].
Our work closely resembles the work performed in [35,36]. These articles have shown that in a lossless fully-connected network where agents poll a portion of their neighbors uniformly at random, the SMP converges quickly to majority consensus with a probability of error (in the sense that agreement was reached, but not on the majority opinion) that decays exponentially with $n$. While assuming the existence of infinitely many agents in a system may initially seem ludicrous and impractical, our own computer simulations of the SMP showed that these kinds of results hold true even if the number of agents is on the order of $10^6$, which is already the case in cryptocurrency protocols. We add another assumption of unreliable communication and show that this, essentially, does not change the outcome.
Yet another line of relatively recent work deserves special attention. In [42], a local polling protocol is proposed, and it is proved that it reaches consensus on the initial global majority in general graphs with certain degree properties. An estimate of the number of required steps to reach consensus is provided. In [43], similar results were given for random regular graphs. In both of these papers, it is assumed that a clear bias exists between the two initial opinions, in contrast to our main assumption in the current work, that the initial condition may be completely unbiased. In [44], the binary consensus problem was tackled from a different angle. For a random graph $G(n, p)$ with a connectivity parameter $p \in (0, 1)$ and any given $\epsilon \in (0, 1)$, this work reveals what the initial difference between the two camps should be, such that the larger camp will eventually win with probability at least as high as $1 - \epsilon$. In [45], the binary consensus problem was solved for relatively sparse random graphs but with random initial states, which is slightly different from the assumptions in the current work. A remarkable result was proved in [45], stating that consensus can be reached in at most four communication rounds.
The remaining part of the paper is organized as follows. In Section 2, we establish notation conventions. In Section 3, we formalize the model, the protocol, and the objectives of this work. In Section 4, we provide and discuss the main results of this work, and in Section 5, we prove them.

2. Notation Conventions

Throughout the paper, random variables will be denoted by capital letters, realizations will be denoted by the corresponding lower case letters, and their alphabets will be denoted by calligraphic letters. Random vectors and their realizations will be denoted, respectively, by boldface capital and lower case letters. Their alphabets will be superscripted by their dimensions. The binary Kullback–Leibler divergence between two binary probability distributions with parameters $\alpha, \beta \in [0, 1]$ is defined as:
$$D(\alpha \| \beta) = \alpha \log\frac{\alpha}{\beta} + (1 - \alpha)\log\frac{1 - \alpha}{1 - \beta}, \tag{1}$$
where logarithms, here and throughout the sequel, are understood to be taken to the natural base. The cumulative distribution function of a standard normal random variable is defined by:
$$\Phi(t) = \int_{-\infty}^{t} \frac{1}{\sqrt{2\pi}} \exp\left(-\frac{s^2}{2}\right) ds. \tag{2}$$
The probability of an event $\mathcal{E}$ will be denoted by $P\{\mathcal{E}\}$, and the expectation operator with respect to a probability distribution $Q$ will be denoted by $\mathbb{E}_Q[\cdot]$, where the subscript will often be omitted. The variance of a random variable $X$ is denoted by $\mathrm{Var}[X]$. The indicator function of an event $\mathcal{A}$ will be denoted by $\mathbb{1}\{\mathcal{A}\}$. The set $\{1, 2, \ldots, n\}$ will often be denoted by $[1:n]$. For $\mathbf{x} = (x_1, x_2, \ldots, x_n) \in \mathcal{X}^n$ and for any $a \in \mathcal{X}$, let us denote:
$$N(\mathbf{x}; a) = \sum_{i=1}^{n} \mathbb{1}\{x_i = a\}. \tag{3}$$
For two non-negative sequences $\{a_n\}$ and $\{b_n\}$, the sequence $A_n = n + a_n$ is called asymmetric of exact order of $b_n$ if there exists some $\alpha > 0$, such that $\lim_{n\to\infty} a_n/b_n = \alpha$. Moreover, the sequence $A_n = n + a_n$ is called asymmetric of order larger than $b_n$ if $\lim_{n\to\infty} a_n/b_n = \infty$.
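For later experimentation, the quantities defined above translate directly into code; the following helper functions are a small sketch of our own (the names are not from the paper):

```python
import math

def binary_kl(alpha, beta):
    """Binary Kullback-Leibler divergence D(alpha || beta) of Equation (1), in nats."""
    return (alpha * math.log(alpha / beta)
            + (1 - alpha) * math.log((1 - alpha) / (1 - beta)))

def std_normal_cdf(t):
    """Standard normal CDF Phi(t) of Equation (2), via the error function."""
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

def enumerator(x, a):
    """N(x; a): the number of coordinates of x equal to a."""
    return sum(1 for xi in x if xi == a)
```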

3. Model, Protocol, and Objectives

Assume a set of $2n$ agents, and denote their assignment of initial opinions by $\mathbf{x}_{0,n} \in \{0,1\}^{2n}$. The vector $\mathbf{x}_{0,n}$ is called the initial state. Denote the numbers of zeros and ones in $\mathbf{x}_{0,n}$ by $I_0$ and $I_1$, respectively. At each round, each agent transmits its current state to all other agents. If a message sent between any pair of agents arrives, then it is assumed to be delivered correctly. Otherwise, if $x \in \{0,1\}$ is transmitted between any pair of agents but gets lost, then the designated receiver receives the default symbol $e$. This assumption is only made for the purpose of making the definitions that follow clearer. For a sent message $x \in \{0,1\}$ and a received message $Y \in \{0, e, 1\}$, we assume that all message losses are statistically independent and identically distributed according to $P(Y = 0 \mid x = 0) = P(Y = 1 \mid x = 1) = 1 - q$ and $P(Y = e \mid x = 0) = P(Y = e \mid x = 1) = q$, where $q \in [0, 1]$ is the loss parameter of the network. The binary erasure channel is characterized by a similar conditional distribution, but note that the actual faults in our model are message losses, not to be confused with erasures, which are a different kind of fault. The two extreme cases of a reliable network (i.e., with $q = 0$) and a completely unreliable network (i.e., with $q = 1$) are of less interest, for obvious reasons; hence, we assume throughout that $q \in (0, 1)$.
At round $\ell \ge 1$, agent $i \in [1:2n]$ receives the (random) vector:
$$\mathbf{y}_{\ell,i} = \left(y_{\ell,i}(1), y_{\ell,i}(2), \ldots, y_{\ell,i}(i-1), y_{\ell,i}(i+1), \ldots, y_{\ell,i}(2n)\right) \in \{0, e, 1\}^{2n-1}, \tag{4}$$
and for $a \in \{0,1\}$, he calculates the enumerators:
$$N_{\ell,i}(a) = \mathbb{1}\{x_{\ell-1}(i) = a\} + \sum_{j \ne i} \mathbb{1}\{y_{\ell,i}(j) = a\}. \tag{5}$$
In the SMP, each agent updates (note that we use this terminology even if the value of an agent does not change between two consecutive rounds) its value according to the more common value at hand, i.e., agent $i$ chooses:
$$x_{\ell}(i) = \begin{cases} 0 & \text{if } N_{\ell,i}(0) > N_{\ell,i}(1) \\ 1 & \text{if } N_{\ell,i}(0) < N_{\ell,i}(1) \\ x_{\ell-1}(i) & \text{if } N_{\ell,i}(0) = N_{\ell,i}(1). \end{cases} \tag{6}$$
The vector $\mathbf{x}_{\ell} \in \{0,1\}^{2n}$ is called the state at the end of round $\ell$.
A specific SMP defines a priori the number of rounds until termination. Let us denote by SMP($r$) the SMP with $r$ rounds of communication until termination. We say that the SMP($r$) attains consensus if:
$$x_r(1) = x_r(2) = \cdots = x_r(2n), \tag{7}$$
and denote this event by $C_n$. Similarly, we say that the SMP($r$) attains majority consensus if the following holds:
$$I_0 > I_1 \implies x_r(1) = x_r(2) = \cdots = x_r(2n) = 0, \tag{8}$$
$$I_0 < I_1 \implies x_r(1) = x_r(2) = \cdots = x_r(2n) = 1, \tag{9}$$
$$I_0 = I_1 \implies x_r(1) = x_r(2) = \cdots = x_r(2n), \tag{10}$$
and denote this event by $C_n^m$.
For a specific initial state $\mathbf{x}_{0,n}$, the probability of error in achieving consensus is defined as $P_e(\mathbf{x}_{0,n}) = P[C_n^c]$. The maximal error probability with respect to the initial state is defined by:
$$P_{e,\max} = \max_{\mathbf{x}_{0,n} \in \{0,1\}^{2n}} P_e(\mathbf{x}_{0,n}). \tag{11}$$
The error probability in achieving majority consensus is defined similarly and denoted by $P_e^m(\mathbf{x}_{0,n})$.
Now, the first objective of this work is to prove that the SMP requires only very few rounds of communication to attain consensus, with a maximal error probability that converges to 0 as $n \to \infty$. The second objective is to determine for which initial states it is possible to also achieve majority consensus with a small probability of error.
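Both error probabilities lend themselves to straightforward Monte Carlo estimation. The sketch below is our own (it reuses smp_round from the earlier sketch, and the parameters are illustrative); it estimates $P_e(\mathbf{x}_{0,n})$ for the SMP($r$):

```python
import numpy as np

def estimate_consensus_error(n, delta, q, r, trials=1000):
    """Monte Carlo estimate of P_e(x_{0,n}) for an initial state with
    n + delta zeros and n - delta ones, under SMP(r) with loss parameter q.
    Requires smp_round from the earlier sketch."""
    failures = 0
    for _ in range(trials):
        x = np.array([0] * (n + delta) + [1] * (n - delta))
        for _ in range(r):
            x = smp_round(x, q)
        failures += int(x.min() != x.max())  # some pair of agents disagrees
    return failures / trials

# e.g., estimate_consensus_error(n=500, delta=0, q=0.2, r=3)
```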

4. Main Results

Our first main result is the following, which is proved in Section 5.1.
Theorem 1.
Let $\{\mathbf{x}_{0,n}\}_{n \ge 1}$ be a sequence of initial states over $2n$ agents. Assume that the $2n$ agents communicate over a network with a loss parameter $q \in (0,1)$. Then:
  • If $\{\mathbf{x}_{0,n}\}_{n \ge 1}$ is asymmetric of order larger than $\sqrt{n}$, the SMP(1) attains $P[C_n^m] \to 1$ as $n \to \infty$.
  • If $\{\mathbf{x}_{0,n}\}_{n \ge 1}$ is asymmetric of exact order of $\sqrt{n}$, the SMP(2) attains $P[C_n^m] \to 1$ as $n \to \infty$.
  • For any $\{\mathbf{x}_{0,n}\}_{n \ge 1}$, the SMP(3) attains $P[C_n] \to 1$ as $n \to \infty$.
We now provide a short discussion on the results of Theorem 1.
Theorem 1 shows that the SMP requires at most three rounds of communication to attain consensus, in the limit of an infinite number of agents. Consensus on the majority cannot be ensured for all possible initial states, but only for those initial states that have a significant majority to one of the sides. To understand this fact better, consider the following special case. Assume a network with $2n$ agents, such that $I_0 = n + \log(n)$ and $I_1 = n - \log(n)$. Since this majority in favor of the zeros is so weak, it is most likely that the random losses in the network will completely hide it; we expect that about half of the agents will have $N_{1,i}(0) > N_{1,i}(1)$, thus updating their current opinion to ‘0’, while the other half will update their current opinion to ‘1’. We conclude that the state at the end of round 1 is probabilistically equivalent to a sequence of $2n$ fair coin tosses, and hence, with a probability of about one half, the majority at the end of round 1 will be different from the initial majority.
More quantitatively, let $I_0 = n + a_n$ and $I_1 = n - a_n$, where $\{a_n\}_{n \ge 1}$ is a non-negative, nondecreasing sequence. Moreover, for an agent with an initial opinion ‘0’, let $\{p_n\}$ denote the sequence of probabilities of the events that such an agent updates its opinion to ‘0’. Then, the following trichotomy is seen inside the proof of Theorem 1.
Lemma 1.
The following trichotomy holds:
  • If $\lim_{n\to\infty} a_n/\sqrt{n} = 0$, then $p_n \to \frac{1}{2}$ as $n \to \infty$.
  • If $\lim_{n\to\infty} a_n/\sqrt{n} = \alpha \in (0, \infty)$, then $p_n \to \beta(\alpha, q) \in (\frac{1}{2}, 1)$ as $n \to \infty$.
  • If $\lim_{n\to\infty} a_n/\sqrt{n} = \infty$, then $p_n \to 1$ as $n \to \infty$.
One of the most surprising facts, at least to the authors of this work, is the following. For highly symmetric initial states, although $p_n \to \frac{1}{2}$ (which is proved in Appendix C), it turns out (see Proposition 3 in Section 5.1) that after a single round of communication, the initial symmetry breaks equiprobably into one of the sides. Moreover, for the symmetric case of $I_0 = I_1 = n$, we prove in Propositions 3 and 4 that with a probability converging to 1, the state at the end of round 1 will be asymmetric of exact order of $\sqrt{n}$. Then, according to the second point in Lemma 1, the state at the end of round two is going to have a significant majority to one of the sides, and thus, according to the third point in Lemma 1, only one more round of communication is required to achieve consensus. If the initial state is already asymmetric of exact order of $\sqrt{n}$, then only two rounds of communication are needed for attaining consensus, and in this case, it is guaranteed (with high probability) that all agents agree on the initial majority opinion.
The phenomenon that the initial symmetry breaks into a sufficient majority after the first round is of key importance, since it makes the convergence of the SMP so rapid. In fact, we also conclude that the faulty communication between the agents even helps in attaining consensus, by breaking the symmetry in some extreme cases; e.g., consider the case of $I_0 = I_1 = n$ and a reliable network (i.e., the case of $q = 0$). Then, ad infinitum, the state at the end of any round will be symmetric. Otherwise, when losses exist according to some $q \in (0,1)$, this will not be the case, even if the percentage of losses is extremely small (but fixed for all $n$).
A significant difference exists between the first point of Theorem 1 and its last two points, which is the following. The first point of Theorem 1 is based on Proposition 1 in Section 5.1, which is mainly proved by using the Chernoff bound. Since the Chernoff bound is a nonasymptotic tool, we acquire a large-deviations result, i.e., for a given sequence $\{a_n\}_{n \ge 1}$ (with the condition $\lim_{n\to\infty} a_n/\sqrt{n} = \infty$), we propose a tight upper bound on $P_e^m(\mathbf{x}_{0,n})$, which holds for any finite $n$ (this tightness follows from the fact that a lower bound with a matching exponent can be derived as well). This result is obviously stronger than just $P[C_n^m] \to 1$ as $n \to \infty$. On the other hand, the second and the third points of Theorem 1 are based on Propositions 2 and 3 in Section 5.1, respectively. Since the proofs of these propositions involve central limit theorems, we merely arrive at asymptotic results. As a consequence, we do not know at what rates the probabilities in the second and the third points of Theorem 1 converge to one.
Since the results of the second and the third points of Theorem 1 are merely asymptotic, a few words on finite-$n$ effects are in order. We base the following facts on computer simulations of the SMP. On the one hand, convergence to consensus in more than three rounds is definitely possible, but only when the initial state is symmetric or almost symmetric. The reason for that is the fact mentioned above, according to which the state at round 1 is probabilistically equivalent to a sequence of $2n$ fair coin tosses, and hence, the probability that the state at round 1 is again symmetric behaves asymptotically as $1/\sqrt{n}$ (upper and lower bounds can be derived using Stirling's bounds on $n!$), which is not negligible at all, even for a relatively large number of agents. For relatively small values of $n$, we observed several realizations with even more than a single return to a fully symmetric state. Although quite rare, these events should be taken into consideration in practical implementations.
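To quantify this effect, the probability that a coin-toss-like state of $2n$ agents is perfectly symmetric is $\binom{2n}{n}2^{-2n} \approx 1/\sqrt{\pi n}$; the following small numeric check is our own (computed via log-gamma to avoid overflow):

```python
import math

def prob_symmetric(n):
    """P{Binomial(2n, 1/2) = n}: the chance that a coin-toss-like state of
    2n agents is perfectly symmetric."""
    log_p = math.lgamma(2 * n + 1) - 2 * math.lgamma(n + 1) - 2 * n * math.log(2)
    return math.exp(log_p)

for n in (10, 1000, 10 ** 6):
    print(n, prob_symmetric(n), 1 / math.sqrt(math.pi * n))
# The two columns agree closely and decay only like 1/sqrt(n).
```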
All the results provided in Theorem 1 are, in fact, achievability results, i.e., they only tell under what conditions consensus can be attained. Hence, it is worth investigating whether consensus may be attained by the SMP with even fewer communication rounds than required in Theorem 1. In the following result, which is the second main result of this work and is proved in Section 5.2, we show that for highly symmetric initial states, three rounds of communication are not only sufficient, but also necessary.
Theorem 2.
Let $\{\mathbf{x}_{0,n}\}_{n \ge 1}$ be a sequence of symmetric initial states over $2n$ agents, i.e., $N(\mathbf{x}_{0,n}; 0) = N(\mathbf{x}_{0,n}; 1) = n$ for all $n$. Assume that the $2n$ agents communicate over a network with a loss parameter $q \in (0,1)$. Then, the SMP(2) attains $P[C_n] \to 0$ as $n \to \infty$.
While Theorem 2 provides a converse result with regard to the third point of Theorem 1, a similar converse result can also be established with regard to the second point of Theorem 1. If the initial state is asymmetric of exact order of $\sqrt{n}$, then the SMP will likely not attain consensus after only a single round of communication; furthermore, the probability of reaching consensus tends to 0 as $n \to \infty$. We omit the proof of this negative result.

5. Proofs

5.1. Proof of Theorem 1

The first point of Theorem 1 is proved via the following result, which is proved in Appendix A.
Proposition 1.
Let $\{A_n\}_{n=1}^{\infty}$ be a sequence such that $\lim_{n\to\infty} A_n/\sqrt{n} = \infty$. For an initial state $\mathbf{x}_{0,n} \in \{0,1\}^{2n}$ with at least $n + A_n$ zeros or at least $n + A_n$ ones and a channel parameter $q \in [0, 1)$, the SMP(1) attains $P[C_n^m] \to 1$ as $n \to \infty$. Specifically, if $\lim_{n\to\infty} A_n/n < 1$, then:
$$P_e^m(\mathbf{x}_{0,n}) \le 2n\sqrt{\frac{n + A_n}{n - A_n}} \cdot \exp\left(-(1 - q) \cdot \frac{A_n^2}{n}\right). \tag{12}$$
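To get a feel for the bound of Proposition 1 at finite $n$, one can evaluate it directly; the following sketch is our own, with illustrative parameters, and shows that the bound only becomes meaningful once $A_n$ grows well beyond $\sqrt{n}$:

```python
import math

def prop1_bound(n, A, q):
    """The upper bound (12) of Proposition 1 on the majority-consensus error
    probability of SMP(1), for an initial gap A_n = A."""
    return 2 * n * math.sqrt((n + A) / (n - A)) * math.exp(-(1 - q) * A ** 2 / n)

n, q = 10 ** 6, 0.5
for c in (0.55, 0.6, 0.7):
    print(c, prop1_bound(n, int(n ** c), q))
# For A_n = n^0.55 or n^0.6 the bound is still trivial (> 1); for A_n = n^0.7
# it is astronomically small, reflecting the exponential decay in A_n^2 / n.
```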
To prove the second point of Theorem 1, we rely on the following result, which is proved in Appendix B.
Proposition 2.
Let $q \in [0, 1)$ be a channel parameter. Let $\alpha > 0$ be fixed and let $0 < \epsilon < \Phi(t_0) - \frac{1}{2}$, where $t_0 = \alpha\sqrt{2(1-q)/q}$. Then, the SMP(1) attains the following.
  • If $\mathbf{x}_{0,n} \in \{0,1\}^{2n}$ has at least $n + \alpha\sqrt{n}$ zeros, then:
$$P\left\{N(\mathbf{X}_1; 0) \ge 2n(\Phi(t_0) - \epsilon)\right\} \xrightarrow[n\to\infty]{} 1. \tag{13}$$
  • If $\mathbf{x}_{0,n} \in \{0,1\}^{2n}$ has at least $n + \alpha\sqrt{n}$ ones, then:
$$P\left\{N(\mathbf{X}_1; 1) \ge 2n(\Phi(t_0) - \epsilon)\right\} \xrightarrow[n\to\infty]{} 1. \tag{14}$$
Then, combining the results of Propositions 1 and 2 using the law of total probability, the second point of Theorem 1 follows immediately.
To prove the third point of Theorem 1, we provide one more result. The following proposition shows that if the initial state is symmetric, then, with high probability, the state at round one will be asymmetric of order at least $\sqrt{n}$. This result is proved in Appendix C.
Proposition 3.
Let $\mathbf{x}_{0,n} \in \{0,1\}^{2n}$ be an initial state with $n$ zeros and $n$ ones and let $q \in (0,1)$ be a channel parameter. Let $\epsilon > 0$ be given. Then, there exist $\delta = \delta(\epsilon)$, with $\delta(\epsilon) \to 0$ as $\epsilon \to 0$, and $M(\epsilon)$, such that for all $n \ge M(\epsilon)$:
$$P\left(\{N(\mathbf{X}_1; 0) \le n - \delta\sqrt{n}\} \cup \{N(\mathbf{X}_1; 0) \ge n + \delta\sqrt{n}\}\right) \ge 1 - \epsilon. \tag{15}$$
We are now able to prove the third point of Theorem 1. Let $\epsilon_1, \epsilon_3 > 0$ be given, and let $\delta$ be as in Proposition 3, corresponding to $\epsilon_3$. Also, let $t_0 = \delta\sqrt{2(1-q)/q}$, choose $\epsilon_2 > 0$ such that $\Phi(t_0) - \epsilon_2 > \frac{1}{2}$, and denote $\beta = 2(\Phi(t_0) - \epsilon_2) - 1$. Define the following events:
$$A_n = \left\{N(\mathbf{X}_1; 0) \le n - \delta\sqrt{n} \ \text{or} \ N(\mathbf{X}_1; 0) \ge n + \delta\sqrt{n}\right\}, \tag{16}$$
and
$$B_n = \left\{N(\mathbf{X}_2; 0) \le (1 - \beta)n \ \text{or} \ N(\mathbf{X}_2; 0) \ge (1 + \beta)n\right\}. \tag{17}$$
Then, consider the following:
$$\begin{aligned}
P\{C_n\} &= P\left\{N(\mathbf{X}_3; 0) = 0 \ \text{or} \ N(\mathbf{X}_3; 0) = 2n\right\} && (18) \\
&= P\left\{N(\mathbf{X}_3; 0) = 0 \ \text{or} \ N(\mathbf{X}_3; 0) = 2n \mid B_n\right\} \cdot P\{B_n\} \\
&\quad + P\left\{N(\mathbf{X}_3; 0) = 0 \ \text{or} \ N(\mathbf{X}_3; 0) = 2n \mid B_n^c\right\} \cdot P\{B_n^c\} && (19) \\
&\ge P\left\{N(\mathbf{X}_3; 0) = 0 \ \text{or} \ N(\mathbf{X}_3; 0) = 2n \mid B_n\right\} \cdot P\{B_n\} && (20) \\
&\ge (1 - \epsilon_1) \cdot P\{B_n\}, && (21)
\end{aligned}$$
where (19) follows from the law of total probability and (21) holds for all large enough $n$, due to Proposition 1. Furthermore,
$$\begin{aligned}
P\{B_n\} &= P\{B_n \mid A_n\} \cdot P\{A_n\} + P\{B_n \mid A_n^c\} \cdot P\{A_n^c\} && (22) \\
&\ge P\{B_n \mid A_n\} \cdot P\{A_n\} && (23) \\
&\ge (1 - \epsilon_2) \cdot P\{A_n\} && (24) \\
&\ge (1 - \epsilon_2) \cdot (1 - \epsilon_3), && (25)
\end{aligned}$$
where (22) is again due to the law of total probability, (24) follows from Proposition 2 for all $n$ sufficiently large, and (25) follows from Proposition 3, also for all $n$ sufficiently large. Substituting (25) back into (21), we conclude that $P\{C_n\}$ can be made arbitrarily close to 1, which implies the result in the third point of Theorem 1.

5.2. Proof of Theorem 2

The following proposition, which is proved in Appendix D, shows that if the initial state is symmetric, then the state at round one cannot be asymmetric of order larger than $\sqrt{n}$.
Proposition 4.
Let $\{B_n\}_{n=1}^{\infty}$ be a sequence such that $\lim_{n\to\infty} B_n/\sqrt{n} = \infty$. For an initial state $\mathbf{x}_{0,n} \in \{0,1\}^{2n}$ with $n$ zeros and $n$ ones and a channel parameter $q \in (0,1)$, the following holds:
$$P\left(\{N(\mathbf{X}_1; 0) \le n - B_n\} \cup \{N(\mathbf{X}_1; 0) \ge n + B_n\}\right) \le 2\exp\left(-\frac{B_n^2}{n}\right). \tag{26}$$
We also have the following result, which is proved in Appendix E.
Proposition 5.
Let $\{C_n\}_{n=1}^{\infty}$ be a sequence such that $\lim_{n\to\infty} C_n/n = 0$. Let $\mathbf{x}_{0,n} \in \{0,1\}^{2n}$ be an initial state with $n + C_n$ zeros or $n + C_n$ ones. Let $q \in (0,1)$ be a channel parameter and denote the constant $f_q = 32/\min\{q, 1-q\}$. Then, the SMP(1) is characterized by:
$$P\{C_n\} \le \exp\left(-C_n^2 \cdot \exp\left(-f_q \cdot \frac{C_n^2}{n - C_n}\right)\right). \tag{27}$$
We are now in a good position to prove Theorem 2. Let $C(q) = \frac{1}{2} f_q^{-1}$, choose the sequence:
$$\Theta_n = \sqrt{C(q) \cdot n\log(n)}, \tag{28}$$
and define the sequence of events:
$$F_n = \{N(\mathbf{X}_1; 0) \le n - \Theta_n\} \cup \{N(\mathbf{X}_1; 0) \ge n + \Theta_n\}. \tag{29}$$
According to Proposition 4, we have that:
$$\begin{aligned}
P(F_n) &\le 2\exp\left(-\frac{C(q) \cdot n\log(n)}{n}\right) && (30) \\
&= 2\exp\left(-C(q)\log(n)\right) && (31) \\
&= 2n^{-C(q)}, && (32)
\end{aligned}$$
which converges to zero as $n \to \infty$. In addition, it follows from Proposition 5 that:
$$\begin{aligned}
P\{C_n \mid F_n^c\} &\le \exp\left(-C(q) \cdot n\log(n) \cdot \exp\left(-\frac{f_q \cdot C(q) \cdot n\log(n)}{n - \sqrt{C(q) \cdot n\log(n)}}\right)\right) && (33) \\
&\le \exp\left(-C(q) \cdot n\log(n) \cdot \exp\left(-\frac{\frac{1}{2} n\log(n)}{n - \frac{1}{2}n}\right)\right) && (34) \\
&= \exp\left(-C(q) \cdot n\log(n) \cdot \exp(-\log(n))\right) && (35) \\
&= \exp\left(-C(q) \cdot n\log(n) \cdot n^{-1}\right) && (36) \\
&= \exp\left(-C(q)\log(n)\right) && (37) \\
&= n^{-C(q)}, && (38)
\end{aligned}$$
where (34) holds for all large enough $n$. Then, consider the following:
$$\begin{aligned}
P\{C_n\} &= P\{C_n \mid F_n\} \cdot P\{F_n\} + P\{C_n \mid F_n^c\} \cdot P\{F_n^c\} && (39) \\
&\le P\{F_n\} + P\{C_n \mid F_n^c\} && (40) \\
&\le 2n^{-C(q)} + n^{-C(q)} && (41) \\
&= 3n^{-C(q)} \xrightarrow[n\to\infty]{} 0, && (42)
\end{aligned}$$
where (39) is due to the law of total probability and (41) follows from (32) and (38). The proof of Theorem 2 is complete.

Author Contributions

All authors contributed equally to this research work. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Proof of Proposition 1

Due to symmetry, we only analyze the case $I_0 > I_1$. It follows from the union bound that:
$$\begin{aligned}
P_e^m(\mathbf{x}_{0,n}) &= P\left\{\bigcup_{i=1}^{2n} \{X_1(i) = 1\}\right\} && (A1) \\
&\le \sum_{i=1}^{2n} P\{X_1(i) = 1\}. && (A2)
\end{aligned}$$
In the following, let us denote by $\mathrm{Ber}(p)$ a Bernoulli random variable with success probability $p$ and by $\mathrm{Bin}(n, p)$ a binomial random variable with $n$ independent experiments, each with success probability $p$. We adopt the following convention: if an event contains at least two binomial random variables, then we assume that they are statistically independent.
Let us denote $\bar{q} = 1 - q$. If an agent starts with a ‘0’, then the probability to decide in favor of ‘1’ is upper-bounded by:
$$\begin{aligned}
P&\left\{\mathrm{Bin}(n - A_n, \bar{q}) \ge \mathrm{Bin}(n + A_n - 1, \bar{q}) + 1 + 1\right\} && (A3) \\
&\le P\left\{\mathrm{Bin}(n - A_n, \bar{q}) \ge \mathrm{Bin}(n + A_n - 1, \bar{q}) + \mathrm{Bin}(1, \bar{q})\right\} && (A4) \\
&= P\left\{\mathrm{Bin}(n - A_n, \bar{q}) \ge \mathrm{Bin}(n + A_n, \bar{q})\right\}, && (A5)
\end{aligned}$$
where the addition of the second 1 in (A3) follows from the need to strictly break the tie in order to adopt ‘1’, and (A4) is due to the fact that $\mathrm{Bin}(1, \bar{q}) \le 2$ with probability one.
If an agent starts with a ‘1’, then the probability to decide ‘1’ is upper-bounded by:
$$P\left\{\mathrm{Bin}(n - A_n - 1, \bar{q}) + 1 \ge \mathrm{Bin}(n + A_n, \bar{q})\right\} \le P\left\{\mathrm{Bin}(n - A_n, \bar{q}) + 1 \ge \mathrm{Bin}(n + A_n, \bar{q})\right\}. \tag{A6}$$
Since (A6) cannot be smaller than (A5), we continue with (A6). From now on, we prove that the probability in (A6), to be denoted by $P_n$, converges to zero as $n \to \infty$. Let:
$$X_n = \sum_{\ell=1}^{n - A_n} I_\ell, \quad Y_n = \sum_{k=1}^{n + A_n} J_k, \tag{A7}$$
where $I_\ell \sim \mathrm{Ber}(\bar{q})$ for all $\ell \in \{1, 2, \ldots, n - A_n\}$, $J_k \sim \mathrm{Ber}(\bar{q})$ for all $k \in \{1, 2, \ldots, n + A_n\}$, and all of these binary random variables are independent. Now:
$$\begin{aligned}
P_n &= P\{X_n + 1 \ge Y_n\} && (A8) \\
&= P\left\{e^{\lambda(X_n - Y_n + 1)} \ge 1\right\} && (A9) \\
&\le \mathbb{E}\left[e^{\lambda(X_n - Y_n + 1)}\right], && (A10)
\end{aligned}$$
where (A10) is due to Markov's inequality. Since (A10) holds for every $\lambda \ge 0$, it follows that:
$$P_n \le \inf_{\lambda \ge 0} \mathbb{E}\left[e^{\lambda(X_n - Y_n + 1)}\right]. \tag{A11}$$
We obtain that:
$$\begin{aligned}
\mathbb{E}\left[e^{\lambda(X_n - Y_n + 1)}\right] &= e^{\lambda} \cdot \mathbb{E}\left[\exp\left(\lambda\left(\sum_{\ell=1}^{n - A_n} I_\ell - \sum_{k=1}^{n + A_n} J_k\right)\right)\right] && (A12) \\
&= e^{\lambda} \cdot \mathbb{E}\left[\prod_{\ell=1}^{n - A_n} e^{\lambda I_\ell} \cdot \prod_{k=1}^{n + A_n} e^{-\lambda J_k}\right] && (A13) \\
&= e^{\lambda} \cdot \prod_{\ell=1}^{n - A_n} \mathbb{E}\left[e^{\lambda I_\ell}\right] \cdot \prod_{k=1}^{n + A_n} \mathbb{E}\left[e^{-\lambda J_k}\right] && (A14) \\
&= e^{\lambda} \cdot \left(1 + \bar{q}(e^{\lambda} - 1)\right)^{n - A_n} \cdot \left(1 + \bar{q}(e^{-\lambda} - 1)\right)^{n + A_n} && (A15) \\
&\le e^{\lambda} \cdot \exp\left\{\bar{q}(e^{\lambda} - 1)\right\}^{n - A_n} \cdot \exp\left\{\bar{q}(e^{-\lambda} - 1)\right\}^{n + A_n} && (A16) \\
&= \exp\left(\lambda + \bar{q}(e^{\lambda} - 1)(n - A_n) + \bar{q}(e^{-\lambda} - 1)(n + A_n)\right), && (A17)
\end{aligned}$$
where (A14) is due to the independence of all binary random variables and (A16) follows from the inequality $1 + x \le e^x$. Upon defining:
$$f(\lambda) = \lambda + \bar{q}(e^{\lambda} - 1)(n - A_n) + \bar{q}(e^{-\lambda} - 1)(n + A_n), \tag{A18}$$
we find that:
$$f'(\lambda) = 1 + \bar{q}e^{\lambda}(n - A_n) - \bar{q}e^{-\lambda}(n + A_n). \tag{A19}$$
To facilitate expressions, we solve for $f'(\lambda) = 1$ and find that:
$$e^{\lambda^*} = \sqrt{\frac{n + A_n}{n - A_n}}. \tag{A20}$$
Substituting it back into (A17) yields that:
$$\begin{aligned}
P_n &\le \exp\left(\lambda^* + \bar{q}(e^{\lambda^*} - 1)(n - A_n) + \bar{q}(e^{-\lambda^*} - 1)(n + A_n)\right) && (A21) \\
&= \sqrt{\frac{n + A_n}{n - A_n}} \cdot \exp\left(\bar{q}\left(\sqrt{\frac{n + A_n}{n - A_n}} - 1\right)(n - A_n) + \bar{q}\left(\sqrt{\frac{n - A_n}{n + A_n}} - 1\right)(n + A_n)\right) && (A22) \\
&= \sqrt{\frac{n + A_n}{n - A_n}} \cdot \exp\left(\bar{q}\left(\sqrt{(n + A_n)(n - A_n)} - (n - A_n)\right)\right) \times \exp\left(\bar{q}\left(\sqrt{(n - A_n)(n + A_n)} - (n + A_n)\right)\right) && (A23) \\
&= \sqrt{\frac{n + A_n}{n - A_n}} \cdot \exp\left(-2\bar{q}\left(n - \sqrt{n^2 - A_n^2}\right)\right). && (A24)
\end{aligned}$$
Consider the following:
$$\begin{aligned}
\sqrt{n^2 - A_n^2} - n &= \sqrt{n^2\left(1 - \frac{A_n^2}{n^2}\right)} - n && (A25) \\
&= n\sqrt{1 - \frac{A_n^2}{n^2}} - n && (A26) \\
&\le n\left(1 - \frac{A_n^2}{2n^2}\right) - n && (A27) \\
&= -\frac{A_n^2}{2n}, && (A28)
\end{aligned}$$
where (A27) follows from the inequality $\sqrt{1 - t} \le 1 - t/2$. Continuing from (A2), we arrive at:
$$P_e^m(\mathbf{x}_{0,n}) \le 2n\sqrt{\frac{n + A_n}{n - A_n}} \cdot \exp\left(-(1 - q) \cdot \frac{A_n^2}{n}\right), \tag{A29}$$
which converges to zero when $n \to \infty$, as long as $\lim_{n\to\infty} A_n/\sqrt{n} = \infty$ and $\lim_{n\to\infty} A_n/n < 1$.
For the case of $\lim_{n\to\infty} A_n/n = 1$, consider the following. Let $\{A_n\}_{n=1}^{\infty}$ be any sequence with $\lim_{n\to\infty} A_n/n = 1$ and let $\{A'_n\}_{n=1}^{\infty}$ be a sequence with $\lim_{n\to\infty} A'_n/n = \alpha$, for some $\alpha \in (0, 1)$. Then, for sufficiently large $n$, $A_n \ge A'_n$, and thus, it follows that:
$$P\left\{\mathrm{Bin}(n - A_n, \bar{q}) + 1 \ge \mathrm{Bin}(n + A_n, \bar{q})\right\} \le P\left\{\mathrm{Bin}(n - A'_n, \bar{q}) + 1 \ge \mathrm{Bin}(n + A'_n, \bar{q})\right\} \xrightarrow[n\to\infty]{} 0, \tag{A30}$$
which completes the proof of Proposition 1.

Appendix B. Proof of Proposition 2

  • Step 1: The Limit of the Probability to Decide ‘1’
If an agent starts with a ‘1’, then the probability to decide in favor of ‘1’ is given by:
$$P\left\{\mathrm{Bin}(n - \alpha\sqrt{n} - 1, \bar{q}) + 1 \ge \mathrm{Bin}(n + \alpha\sqrt{n}, \bar{q})\right\}, \tag{A31}$$
and if an agent starts with a ‘0’, then the probability to decide in favor of ‘1’ is given by:
$$P\left\{\mathrm{Bin}(n - \alpha\sqrt{n}, \bar{q}) \ge \mathrm{Bin}(n + \alpha\sqrt{n} - 1, \bar{q}) + 2\right\} = P\left\{\mathrm{Bin}(n - \alpha\sqrt{n}, \bar{q}) \ge \mathrm{Bin}(n, \bar{q}) + \mathrm{Bin}(\alpha\sqrt{n} - 1, \bar{q}) + 2\right\}. \tag{A32}$$
From now on, we prove that the probability in (A32), to be denoted by $P_n$, converges to a value which is strictly smaller than $\frac{1}{2}$ for all sufficiently large $n$. An identical result also holds for the probability in (A31), the proof of which is very similar and hence omitted.
Let $I_\ell \sim \mathrm{Ber}(\bar{q})$ for all $\ell \in \{1, 2, \ldots, n - \alpha\sqrt{n}\}$, $J_s \sim \mathrm{Ber}(\bar{q})$ for all $s \in \{1, 2, \ldots, n\}$, as well as $K_m \sim \mathrm{Ber}(\bar{q})$ for all $m \in \{1, 2, \ldots, \alpha\sqrt{n} - 1\}$, where all of these binary random variables are independent. Consider the following:
$$\begin{aligned}
P_n &= P\left\{\sum_{\ell=1}^{n-\alpha\sqrt{n}} I_\ell \ge \sum_{s=1}^{n} J_s + \sum_{m=1}^{\alpha\sqrt{n}-1} K_m + 2\right\} && (A33) \\
&= P\left\{\sum_{\ell=1}^{n-\alpha\sqrt{n}} I_\ell - (n - \alpha\sqrt{n} + \alpha\sqrt{n})\bar{q} \ge \sum_{s=1}^{n} J_s - n\bar{q} + \sum_{m=1}^{\alpha\sqrt{n}-1} K_m + 2\right\} && (A34) \\
&= P\left\{\sum_{\ell=1}^{n-\alpha\sqrt{n}} (I_\ell - \bar{q}) - \alpha\sqrt{n}\,\bar{q} \ge \sum_{s=1}^{n} (J_s - \bar{q}) + \sum_{m=1}^{\alpha\sqrt{n}-1} K_m + 2\right\} && (A35) \\
&= P\left\{\frac{1}{\sqrt{n}}\sum_{\ell=1}^{n-\alpha\sqrt{n}} (I_\ell - \bar{q}) \ge \frac{1}{\sqrt{n}}\sum_{s=1}^{n} (J_s - \bar{q}) + \frac{1}{\sqrt{n}}\left(\sum_{m=1}^{\alpha\sqrt{n}-1} K_m + 2\right) + \alpha\bar{q}\right\}. && (A36)
\end{aligned}$$
Let us denote:
$$X_n = \frac{1}{\sqrt{n}}\sum_{\ell=1}^{n-\alpha\sqrt{n}} (I_\ell - \bar{q}), \quad Y_n = \frac{1}{\sqrt{n}}\sum_{s=1}^{n} (J_s - \bar{q}), \quad Z_n = \frac{1}{\sqrt{n}}\left(\sum_{m=1}^{\alpha\sqrt{n}-1} K_m + 2\right). \tag{A37}$$
It follows directly from the central limit theorem [46] (p. 112, Theorem 2.4.1.) that $Y_n$ converges in distribution to $Y \sim \mathcal{N}(0, \sigma^2)$, where $\sigma^2 = q(1-q)$. Concerning the sequence $X_n$, we first write it as follows:
$$\begin{aligned}
X_n &= \frac{1}{\sqrt{n}}\sum_{\ell=1}^{n-\alpha\sqrt{n}} (I_\ell - \bar{q}) && (A38) \\
&= \sqrt{\frac{n-\alpha\sqrt{n}}{n}} \cdot \frac{1}{\sqrt{n-\alpha\sqrt{n}}}\sum_{\ell=1}^{n-\alpha\sqrt{n}} (I_\ell - \bar{q}) && (A39) \\
&= \sqrt{\frac{n-\alpha\sqrt{n}}{n}} \cdot \tilde{X}_n, && (A40)
\end{aligned}$$
where $\tilde{X}_n$ converges in distribution to $X \sim \mathcal{N}(0, \sigma^2)$, again by the central limit theorem. To conclude that $X_n$ itself converges in distribution to $X \sim \mathcal{N}(0, \sigma^2)$, we only need to prove that $|\tilde{X}_n - X_n|$ converges in distribution to 0. We have that:
$$\begin{aligned}
\lim_{n\to\infty} \mathbb{E}\left[\left(\tilde{X}_n - X_n\right)^2\right] &= \lim_{n\to\infty} \mathbb{E}\left[\left(\left(1 - \sqrt{\frac{n-\alpha\sqrt{n}}{n}}\right) \cdot \frac{1}{\sqrt{n-\alpha\sqrt{n}}}\sum_{\ell=1}^{n-\alpha\sqrt{n}} (I_\ell - \bar{q})\right)^2\right] && (A41) \\
&= \lim_{n\to\infty} \left(1 - \sqrt{\frac{n-\alpha\sqrt{n}}{n}}\right)^2 \cdot \frac{1}{n-\alpha\sqrt{n}} \cdot \mathbb{E}\left[\left(\sum_{\ell=1}^{n-\alpha\sqrt{n}} (I_\ell - \bar{q})\right)^2\right] && (A42) \\
&= \lim_{n\to\infty} \left(1 - \sqrt{\frac{n-\alpha\sqrt{n}}{n}}\right)^2 \cdot \frac{1}{n-\alpha\sqrt{n}} \cdot \sum_{\ell=1}^{n-\alpha\sqrt{n}} \mathbb{E}\left[(I_\ell - \bar{q})^2\right] && (A43) \\
&= \lim_{n\to\infty} \left(1 - \sqrt{\frac{n-\alpha\sqrt{n}}{n}}\right)^2 \cdot \mathbb{E}\left[(I_1 - \bar{q})^2\right] && (A44) \\
&= 0, && (A45)
\end{aligned}$$
which proves that $|\tilde{X}_n - X_n|$ converges in $L^2$ to 0, and thus also in distribution. It then follows from [47] (Theorem 3.1) that $X_n$ converges in distribution to $X \sim \mathcal{N}(0, \sigma^2)$.
Concerning the sequence $Z_n$, consider the following:
$$\begin{aligned}
\lim_{n\to\infty} \mathbb{E}[Z_n] &= \lim_{n\to\infty} \mathbb{E}\left[\frac{1}{\sqrt{n}}\left(\sum_{m=1}^{\alpha\sqrt{n}-1} K_m + 2\right)\right] && (A46) \\
&= \lim_{n\to\infty} \frac{1}{\sqrt{n}}\left((\alpha\sqrt{n}-1)\bar{q} + 2\right) && (A47) \\
&= \alpha\bar{q}, && (A48)
\end{aligned}$$
and furthermore:
$$\begin{aligned}
\lim_{n\to\infty} \mathrm{Var}[Z_n] &= \lim_{n\to\infty} \mathrm{Var}\left[\frac{1}{\sqrt{n}}\left(\sum_{m=1}^{\alpha\sqrt{n}-1} K_m + 2\right)\right] && (A49) \\
&= \lim_{n\to\infty} \frac{1}{n}\sum_{m=1}^{\alpha\sqrt{n}-1} q(1-q) && (A50) \\
&= 0. && (A51)
\end{aligned}$$
It follows that $Z_n$ converges in $L^2$ to $Z = \alpha\bar{q}$, i.e., to a deterministic random variable. Hence, $Z_n$ also converges to $Z = \alpha\bar{q}$ in probability [46] (Lemma 1.3.5). Now, for $\epsilon > 0$ arbitrarily small, consider the following:
$$\begin{aligned}
P_n &= P\{X_n \ge Y_n + Z_n + \alpha\bar{q}\} && (A52) \\
&= P\{X_n \ge Y_n + Z_n + \alpha\bar{q} \mid Z_n \ge \alpha\bar{q} - \epsilon\} \cdot P\{Z_n \ge \alpha\bar{q} - \epsilon\} \\
&\quad + P\{X_n \ge Y_n + Z_n + \alpha\bar{q} \mid Z_n < \alpha\bar{q} - \epsilon\} \cdot P\{Z_n < \alpha\bar{q} - \epsilon\} && (A53) \\
&\le P\{X_n \ge Y_n + \alpha\bar{q} - \epsilon + \alpha\bar{q} \mid Z_n \ge \alpha\bar{q} - \epsilon\} \cdot P\{Z_n \ge \alpha\bar{q} - \epsilon\} + P\{Z_n < \alpha\bar{q} - \epsilon\} && (A54) \\
&= P\{X_n \ge Y_n + 2\alpha\bar{q} - \epsilon\} \cdot P\{Z_n \ge \alpha\bar{q} - \epsilon\} + P\{Z_n < \alpha\bar{q} - \epsilon\}, && (A55)
\end{aligned}$$
where (A53) is due to the law of total probability and (A55) follows from the fact that $(X_n, Y_n)$ are independent of $Z_n$. Since $\{I_\ell\}$ and $\{J_s\}$ are all independent, the joint law of the pair $(X_n, Y_n)$ converges to the joint law of $(X, Y)$, and $X, Y$ are independent. Hence, by the Portmanteau theorem [47] (p. 16, Theorem 2.1), and the fact that $Z_n$ converges to $Z = \alpha\bar{q}$ in probability:
$$\begin{aligned}
\limsup_{n\to\infty} P_n &\le P\{X \ge Y + 2\alpha\bar{q} - \epsilon\} && (A56) \\
&= P\{\mathcal{N}(0, 2q(1-q)) \ge 2\alpha\bar{q} - \epsilon\} && (A57) \\
&= Q\left(t_0^-(\epsilon)\right), && (A58)
\end{aligned}$$
where
$$t_0^-(\epsilon) = \frac{2\alpha\bar{q} - \epsilon}{\sqrt{2q(1-q)}}, \tag{A59}$$
and
$$Q(t) = \int_t^{\infty} \frac{1}{\sqrt{2\pi}} \exp\left(-\frac{s^2}{2}\right) ds. \tag{A60}$$
In a similar fashion:
$$\begin{aligned}
P_n &= P\{X_n \ge Y_n + Z_n + \alpha\bar{q}\} && (A61) \\
&= P\{X_n \ge Y_n + Z_n + \alpha\bar{q} \mid Z_n \le \alpha\bar{q} + \epsilon\} \cdot P\{Z_n \le \alpha\bar{q} + \epsilon\} \\
&\quad + P\{X_n \ge Y_n + Z_n + \alpha\bar{q} \mid Z_n > \alpha\bar{q} + \epsilon\} \cdot P\{Z_n > \alpha\bar{q} + \epsilon\} && (A62) \\
&\ge P\{X_n \ge Y_n + \alpha\bar{q} + \epsilon + \alpha\bar{q} \mid Z_n \le \alpha\bar{q} + \epsilon\} \cdot P\{Z_n \le \alpha\bar{q} + \epsilon\} && (A63) \\
&= P\{X_n \ge Y_n + 2\alpha\bar{q} + \epsilon\} \cdot P\{Z_n \le \alpha\bar{q} + \epsilon\}, && (A64)
\end{aligned}$$
and thus:
$$\begin{aligned}
\liminf_{n\to\infty} P_n &\ge P\{X \ge Y + 2\alpha\bar{q} + \epsilon\} && (A65) \\
&= P\{\mathcal{N}(0, 2q(1-q)) \ge 2\alpha\bar{q} + \epsilon\} && (A66) \\
&= Q\left(t_0^+(\epsilon)\right), && (A67)
\end{aligned}$$
where
$$t_0^+(\epsilon) = \frac{2\alpha\bar{q} + \epsilon}{\sqrt{2q(1-q)}}. \tag{A68}$$
From the continuity of the $Q$-function and the fact that $\epsilon > 0$ is arbitrarily small, we conclude that:
$$\limsup_{n\to\infty} P_n \le Q(t_0) \le \liminf_{n\to\infty} P_n, \tag{A69}$$
where
$$t_0 = \alpha\sqrt{\frac{2(1-q)}{q}}, \tag{A70}$$
and hence
$$\lim_{n\to\infty} P_n = Q(t_0). \tag{A71}$$
Now, for any $\alpha > 0$ and $q \in (0, 1)$, the expression in (A70) is strictly positive, and thus $\lim_{n\to\infty} P_n = Q(t_0) < \frac{1}{2}$. We conclude that for all $0 < \delta < \frac{1}{2} - Q(t_0)$, $P_n \le Q(t_0) + \delta < \frac{1}{2}$ holds for all sufficiently large $n$.
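The limit (A71) is easy to corroborate numerically. The following Monte Carlo sketch is our own (the parameter values are illustrative); it compares the empirical flip probability of a ‘0’-agent with $Q(t_0)$:

```python
import math
import numpy as np

rng = np.random.default_rng(1)

alpha, q, n, trials = 1.0, 0.3, 200_000, 100_000
qbar = 1 - q
a = int(alpha * math.sqrt(n))
# A '0'-agent flips iff Bin(n - a, qbar) >= Bin(n + a - 1, qbar) + 2, cf. (A32).
ones_rx = rng.binomial(n - a, qbar, size=trials)
zeros_rx = rng.binomial(n + a - 1, qbar, size=trials)
p_flip = np.mean(ones_rx >= zeros_rx + 2)
t0 = alpha * math.sqrt(2 * (1 - q) / q)
q_t0 = 0.5 * math.erfc(t0 / math.sqrt(2))  # Gaussian tail Q(t0)
print(p_flip, q_t0)                        # both close to 0.015 here
```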
  • Step 2: Many Zeros with High Probability
Let $0 < \delta < \frac{1}{2} - Q(t_0)$ be given. Let $Q_n^0, Q_n^1$ denote the probabilities of deciding ‘0’ for the two possible initial opinions. Since $P_n \le Q(t_0) + \delta < \frac{1}{2}$ for all sufficiently large $n$, it follows that $\min\{Q_n^0, Q_n^1\} \ge \Phi(t_0) - \delta > \frac{1}{2}$ for all sufficiently large $n$, where $\Phi(t)$ is defined in (2).
Let $\epsilon > \delta > 0$ be such that $\Phi(t_0) - \delta > \Phi(t_0) - \epsilon > \frac{1}{2}$. We now prove that the probability of drawing a relatively small number of zeros tends to 0 as $n \to \infty$. Denote $N_0 = N(\mathbf{X}_1; 0)$ and consider the following for $s \ge 0$:
$$\begin{aligned}
P\{N_0 \le 2n(\Phi(t_0) - \epsilon)\} &= P\left\{e^{-sN_0} \ge e^{-2ns(\Phi(t_0) - \epsilon)}\right\} && (A72) \\
&\le \mathbb{E}\left[e^{-sN_0}\right] \cdot e^{2ns(\Phi(t_0) - \epsilon)}, && (A73)
\end{aligned}$$
where (A73) is due to Markov's inequality. Since (A73) holds for every $s \ge 0$, it follows that:
$$P\{N_0 \le 2n(\Phi(t_0) - \epsilon)\} \le \inf_{s \ge 0} \mathbb{E}\left[e^{-sN_0}\right] \cdot e^{2ns(\Phi(t_0) - \epsilon)}. \tag{A74}$$
Note that:
$$N_0 = \sum_{\ell=1}^{n+\alpha\sqrt{n}} I_\ell + \sum_{k=1}^{n-\alpha\sqrt{n}} J_k, \tag{A75}$$
where $I_\ell \sim \mathrm{Ber}(Q_n^0)$ for all $\ell \in \{1, 2, \ldots, n+\alpha\sqrt{n}\}$, $J_k \sim \mathrm{Ber}(Q_n^1)$ for all $k \in \{1, 2, \ldots, n-\alpha\sqrt{n}\}$, and all of these binary random variables are independent. We obtain that:
$$\begin{aligned}
\mathbb{E}\left[e^{-sN_0}\right] &= \mathbb{E}\left[\exp\left(-s\left(\sum_{\ell=1}^{n+\alpha\sqrt{n}} I_\ell + \sum_{k=1}^{n-\alpha\sqrt{n}} J_k\right)\right)\right] && (A76) \\
&= \mathbb{E}\left[\prod_{\ell=1}^{n+\alpha\sqrt{n}} e^{-sI_\ell} \cdot \prod_{k=1}^{n-\alpha\sqrt{n}} e^{-sJ_k}\right] && (A77) \\
&= \prod_{\ell=1}^{n+\alpha\sqrt{n}} \mathbb{E}\left[e^{-sI_\ell}\right] \cdot \prod_{k=1}^{n-\alpha\sqrt{n}} \mathbb{E}\left[e^{-sJ_k}\right] && (A78) \\
&= \left(1 + Q_n^0(e^{-s} - 1)\right)^{n+\alpha\sqrt{n}} \cdot \left(1 + Q_n^1(e^{-s} - 1)\right)^{n-\alpha\sqrt{n}} && (A79) \\
&\le \left(1 + (\Phi(t_0) - \delta)(e^{-s} - 1)\right)^{n+\alpha\sqrt{n}} \cdot \left(1 + (\Phi(t_0) - \delta)(e^{-s} - 1)\right)^{n-\alpha\sqrt{n}} && (A80) \\
&= \left(1 + (\Phi(t_0) - \delta)(e^{-s} - 1)\right)^{2n}, && (A81)
\end{aligned}$$
where (A78) is due to the independence of all binary random variables and (A80) is true since $\min\{Q_n^0, Q_n^1\} \ge \Phi(t_0) - \delta$ for all sufficiently large $n$ and $e^{-s} - 1 \le 0$. Substituting (A81) back into (A74) yields that:
$$\begin{aligned}
P\{N_0 &\le 2n(\Phi(t_0) - \epsilon)\} \\
&\le \inf_{s \ge 0} \exp\left(2n\log\left(1 + (\Phi(t_0) - \delta)(e^{-s} - 1)\right) + 2ns(\Phi(t_0) - \epsilon)\right) && (A82) \\
&= \exp\left(2n \cdot \inf_{s \ge 0}\left\{\log\left(1 + (\Phi(t_0) - \delta)(e^{-s} - 1)\right) + s(\Phi(t_0) - \epsilon)\right\}\right). && (A83)
\end{aligned}$$
Upon defining:
$$g(s) = \log\left(1 + (\Phi(t_0) - \delta)(e^{-s} - 1)\right) + s(\Phi(t_0) - \epsilon), \tag{A84}$$
we find that the solution to $g'(s) = 0$ is given by:
$$s^* = \log\frac{(\Phi(t_0) - \delta)\left[1 - (\Phi(t_0) - \epsilon)\right]}{\left[1 - (\Phi(t_0) - \delta)\right](\Phi(t_0) - \epsilon)}. \tag{A85}$$
Substituting it back into (A84) yields that:
$$\begin{aligned}
g(s^*) &= \log\left(1 + (\Phi(t_0) - \delta)\left[\frac{\left[1 - (\Phi(t_0) - \delta)\right](\Phi(t_0) - \epsilon)}{(\Phi(t_0) - \delta)\left[1 - (\Phi(t_0) - \epsilon)\right]} - 1\right]\right) + (\Phi(t_0) - \epsilon)\log\frac{(\Phi(t_0) - \delta)\left[1 - (\Phi(t_0) - \epsilon)\right]}{\left[1 - (\Phi(t_0) - \delta)\right](\Phi(t_0) - \epsilon)} && (A86) \\
&= \log\frac{1 - (\Phi(t_0) - \delta)}{1 - (\Phi(t_0) - \epsilon)} + (\Phi(t_0) - \epsilon)\log\frac{\Phi(t_0) - \delta}{\Phi(t_0) - \epsilon} + (\Phi(t_0) - \epsilon)\log\frac{1 - (\Phi(t_0) - \epsilon)}{1 - (\Phi(t_0) - \delta)} && (A87) \\
&= -(\Phi(t_0) - \epsilon)\log\frac{\Phi(t_0) - \epsilon}{\Phi(t_0) - \delta} - \left(1 - (\Phi(t_0) - \epsilon)\right)\log\frac{1 - (\Phi(t_0) - \epsilon)}{1 - (\Phi(t_0) - \delta)} && (A88) \\
&= -D\left(\Phi(t_0) - \epsilon \,\|\, \Phi(t_0) - \delta\right). && (A89)
\end{aligned}$$
We upper-bound the expression in (A89) using Pinsker's inequality [48,49]. Recall that the total variation distance between two probability distributions $P$ and $Q$ is defined by:
$$|P - Q| = \frac{1}{2}\sum_{x \in \mathcal{X}} |P(x) - Q(x)|, \tag{A90}$$
and the Kullback–Leibler divergence is defined by:
$$D(P \| Q) = \sum_{x \in \mathcal{X}} P(x)\log\frac{P(x)}{Q(x)}. \tag{A91}$$
Then, Pinsker's inequality asserts that:
$$D(P \| Q) \ge 2|P - Q|^2. \tag{A92}$$
Thus, we arrive at:
$$\begin{aligned}
P\{N_0 \le 2n(\Phi(t_0) - \epsilon)\} &\le \exp\left(-2n D\left(\Phi(t_0) - \epsilon \,\|\, \Phi(t_0) - \delta\right)\right) && (A93) \\
&\le \exp\left(-4n(\epsilon - \delta)^2\right). && (A94)
\end{aligned}$$
Hence, we conclude that for all $n$ sufficiently large:
$$P\{N_0 \ge 2n(\Phi(t_0) - \epsilon)\} \ge 1 - \exp\left(-4n(\epsilon - \delta)^2\right), \tag{A95}$$
which converges to 1 as $n \to \infty$. Proposition 2 is now proved.
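As a quick numeric sanity check of the Pinsker step (A92)–(A94) for binary distributions, where the total variation distance is simply the difference of the parameters, here is a small sketch of our own (reusing binary_kl from the notation sketch):

```python
# Spot-check Pinsker's inequality D(P || Q) >= 2|P - Q|^2 for binary
# distributions Ber(a), Ber(b), whose total variation distance is |a - b|.
for a, b in [(0.6, 0.5), (0.9, 0.5), (0.51, 0.49)]:
    assert binary_kl(a, b) >= 2 * (a - b) ** 2
```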

Appendix C. Proof of Proposition 3

Denote $N_0 = N(\mathbf{X}_1; 0)$. Let $\{p_n\}$ denote the sequence of probabilities of the events that an agent with an initial value ‘0’ updates its value to ‘0’ after a single round of communication.
  • Step 1: An Upper Bound on the PMF of the Binomial Distribution
We start by upper-bounding the probability mass function (PMF) of the binomial random variable $X = \mathrm{Bin}(n, p)$, which is given by:
$$P_X(k) = \binom{n}{k} p^k (1-p)^{n-k}, \quad k \in [0:n]. \tag{A96}$$
To upper-bound the binomial coefficient in (A96), we invoke the following Stirling's bounds:
$$\sqrt{2\pi n} \cdot n^n \cdot e^{-n} \le n! \le e\sqrt{n} \cdot n^n \cdot e^{-n}, \tag{A97}$$
and obtain the following:
$$\begin{aligned}
\binom{n}{k} &= \frac{n!}{k! \cdot (n-k)!} && (A98) \\
&\le \frac{e\sqrt{n} \cdot n^n \cdot e^{-n}}{\sqrt{2\pi k} \cdot k^k \cdot e^{-k} \cdot \sqrt{2\pi(n-k)} \cdot (n-k)^{n-k} \cdot e^{-(n-k)}} && (A99) \\
&= \frac{e\sqrt{n} \cdot n^n}{2\pi\sqrt{k(n-k)} \cdot k^k \cdot (n-k)^{n-k}} && (A100) \\
&= \frac{e}{2\pi}\sqrt{\frac{n}{k(n-k)}} \cdot \frac{n^n}{k^k \cdot (n-k)^{n-k}} && (A101) \\
&= \frac{e}{2\pi}\sqrt{\frac{n}{k(n-k)}} \exp\left(-k\log\frac{k}{n} - (n-k)\log\frac{n-k}{n}\right) && (A102) \\
&= \frac{e}{2\pi}\sqrt{\frac{n}{k(n-k)}} \exp\left(-n\left[\frac{k}{n}\log\frac{k}{n} + \left(1 - \frac{k}{n}\right)\log\left(1 - \frac{k}{n}\right)\right]\right). && (A103)
\end{aligned}$$
Substituting (A103) back into (A96) yields:
$$\begin{aligned}
P_X(k) &\le \frac{e}{2\pi}\sqrt{\frac{n}{k(n-k)}} \exp\left(-n\left[\frac{k}{n}\log\frac{k}{n} + \left(1 - \frac{k}{n}\right)\log\left(1 - \frac{k}{n}\right)\right]\right) \cdot p^k(1-p)^{n-k} && (A104) \\
&= \frac{e}{2\pi}\sqrt{\frac{n}{k(n-k)}} \exp\left(-n\left[\frac{k}{n}\log\frac{k}{n} + \left(1 - \frac{k}{n}\right)\log\left(1 - \frac{k}{n}\right)\right]\right) \\
&\quad \times \exp\left(-n\left[\frac{k}{n}\log\frac{1}{p} + \left(1 - \frac{k}{n}\right)\log\frac{1}{1-p}\right]\right) && (A105) \\
&= \frac{e}{2\pi}\sqrt{\frac{n}{k(n-k)}} \exp\left(-n\left[\frac{k}{n}\log\frac{k/n}{p} + \left(1 - \frac{k}{n}\right)\log\frac{1 - k/n}{1-p}\right]\right) && (A106) \\
&= \frac{e}{2\pi}\sqrt{\frac{n}{k(n-k)}} \exp\left(-n D\left(\frac{k}{n} \,\Big\|\, p\right)\right), && (A107)
\end{aligned}$$
where $D(\alpha \| \beta)$, for $\alpha, \beta \in [0, 1]$, is defined in (1).
  • Step 2: The Limit of $\{p_n\}$ is $\frac{1}{2}$
First, we show that $\{p_n\}$ is lower-bounded by $\frac{1}{2}$. For $\bar{q} = 1 - q$, denote:
$$Z \sim \mathrm{Bin}(n-1, \bar{q}), \quad X, Y \sim \mathrm{Bin}(n, \bar{q}). \tag{A108}$$
We have that:
$$\begin{aligned}
p_n &= P\{Z + 1 \ge Y\} && (A109) \\
&\ge P\left\{\mathrm{Bin}(n-1, \bar{q}) + \mathrm{Bin}(1, \bar{q}) \ge Y\right\} && (A110) \\
&= P\{X \ge Y\}, && (A111)
\end{aligned}$$
where (A110) is true since $\mathrm{Bin}(1, \bar{q}) \le 1$ with probability one. It follows by symmetry that:
$$\begin{aligned}
1 &= P\{X > Y\} + P\{X < Y\} + P\{X = Y\} && (A112) \\
&= 2P\{X > Y\} + P\{X = Y\}, && (A113)
\end{aligned}$$
or,
$$P\{X > Y\} = \frac{1}{2} - \frac{1}{2} \cdot P\{X = Y\}, \tag{A114}$$
which implies that:
$$\begin{aligned}
p_n &\ge P\{X \ge Y\} && (A115) \\
&= P\{X > Y\} + P\{X = Y\} && (A116) \\
&= \frac{1}{2} + \frac{1}{2} \cdot P\{X = Y\} && (A117) \\
&\ge \frac{1}{2}. && (A118)
\end{aligned}$$
Next, we upper-bound the sequence $\{p_n\}$. Note that:
$$\begin{aligned}
p_n &= P\left\{\mathrm{Bin}(n-1, \bar{q}) + 1 \ge \mathrm{Bin}(n, \bar{q})\right\} && (A119) \\
&\le P\left\{\mathrm{Bin}(n, \bar{q}) + 1 \ge \mathrm{Bin}(n, \bar{q})\right\} && (A120) \\
&= P\{X + 1 \ge Y\} && (A121) \\
&= P\{X \ge Y\} + P\{X + 1 = Y\} && (A122) \\
&= \frac{1}{2} + \frac{1}{2} \cdot P\{X = Y\} + P\{X + 1 = Y\}. && (A123)
\end{aligned}$$
As for the last term in (A123), we have that:
$$\begin{aligned}
P\{X + 1 = Y\} &= \sum_{\ell=0}^{n-1} P\{X = \ell\} \cdot P\{Y = \ell + 1\} && (A124) \\
&\le \sqrt{\sum_{\ell=0}^{n-1} P\{X = \ell\}^2} \cdot \sqrt{\sum_{\ell=0}^{n-1} P\{Y = \ell + 1\}^2} && (A125) \\
&= \sqrt{\sum_{\ell=0}^{n-1} P\{X = \ell\}^2} \cdot \sqrt{\sum_{\ell=1}^{n} P\{Y = \ell\}^2} && (A126) \\
&\le \sqrt{\sum_{\ell=0}^{n} P\{X = \ell\}^2} \cdot \sqrt{\sum_{\ell=0}^{n} P\{Y = \ell\}^2} && (A127) \\
&= \sum_{\ell=0}^{n} P\{X = \ell\}^2 && (A128) \\
&= \sum_{\ell=0}^{n} P\{X = \ell\} \cdot P\{Y = \ell\} && (A129) \\
&= P\{X = Y\}, && (A130)
\end{aligned}$$
where (A125) follows from the Cauchy–Schwarz inequality. Substituting (A130) back into (A123) yields that:
$$p_n \le \frac{1}{2} + \frac{3}{2} \cdot P\{X = Y\}. \tag{A131}$$
Now, consider the following:
$$\begin{aligned}
P\{X = Y\} &= \sum_{\ell=0}^{n} P\{X = \ell\}^2 && (A132) \\
&= \sum_{\ell=0}^{n} \left[\binom{n}{\ell}(1-q)^{\ell} q^{n-\ell}\right]^2 && (A133) \\
&= \left[\binom{n}{0}(1-q)^0 q^n\right]^2 + \sum_{\ell=1}^{n-1} \left[\binom{n}{\ell}(1-q)^{\ell} q^{n-\ell}\right]^2 + \left[\binom{n}{n}(1-q)^n q^0\right]^2 && (A134) \\
&= q^{2n} + \sum_{\ell=1}^{n-1} \left[\binom{n}{\ell}(1-q)^{\ell} q^{n-\ell}\right]^2 + (1-q)^{2n}. && (A135)
\end{aligned}$$
As for the middle term in (A135), it follows from (A107) that:
$$\sum_{\ell=1}^{n-1} \left[\binom{n}{\ell}(1-q)^{\ell} q^{n-\ell}\right]^2 \le \sum_{\ell=1}^{n-1} \frac{e^2}{(2\pi)^2} \cdot \frac{n}{\ell(n-\ell)} \exp\left(-2n D\left(\frac{\ell}{n} \,\Big\|\, 1-q\right)\right). \tag{A136}$$
To upper-bound (A136), let $\epsilon_n = n^{-1/4}$, for $n = 1, 2, \ldots$, and define the set of numbers:
$$\mathcal{N}_n = \left\{n(1 - q - \epsilon_n), n(1 - q - \epsilon_n) + 1, \ldots, n(1-q), \ldots, n(1 - q + \epsilon_n)\right\}, \tag{A137}$$
whose cardinality is given by:
$$|\mathcal{N}_n| = 2n\epsilon_n + 1. \tag{A138}$$
Denote $\mathcal{M}_n = \{1, 2, \ldots, n-1\} \cap \mathcal{N}_n^c$. For any $\ell \in \mathcal{M}_n$, it follows from Pinsker's inequality that:
$$\begin{aligned}
D\left(\frac{\ell}{n} \,\Big\|\, 1-q\right) &\ge D\left(1 - q + \epsilon_n \,\|\, 1-q\right) && (A139) \\
&\ge 2\epsilon_n^2. && (A140)
\end{aligned}$$
We now continue from (A136) and arrive at:
$$\begin{aligned}
\sum_{\ell=1}^{n-1} &\frac{e^2}{(2\pi)^2} \cdot \frac{n}{\ell(n-\ell)} \exp\left(-2n D\left(\frac{\ell}{n} \,\Big\|\, 1-q\right)\right) \\
&\le \sum_{\ell \in \mathcal{M}_n} \frac{e^2}{(2\pi)^2} \cdot \frac{n}{\ell(n-\ell)} \exp\left(-4n\epsilon_n^2\right) + \sum_{\ell \in \mathcal{N}_n} \frac{e^2}{(2\pi)^2} \cdot \frac{n}{\ell(n-\ell)} && (A141) \\
&\le \sum_{\ell \in \mathcal{M}_n} \frac{e^2}{(2\pi)^2} \cdot \frac{n}{n-1} \exp\left(-4n\epsilon_n^2\right) + \sum_{\ell \in \mathcal{N}_n} \frac{e^2}{(2\pi)^2} \cdot \frac{n}{n(1-q-\epsilon_n)\left[n - n(1-q-\epsilon_n)\right]} && (A142) \\
&\le \frac{e^2}{(2\pi)^2} \cdot n\exp\left(-4n\epsilon_n^2\right) + \frac{e^2}{(2\pi)^2} \cdot \frac{2n\epsilon_n + 1}{n(q+\epsilon_n)(1-q-\epsilon_n)} && (A143) \\
&= \frac{e^2}{(2\pi)^2} \cdot n\exp\left(-4n^{1/2}\right) + \frac{e^2}{(2\pi)^2} \cdot \frac{2n^{3/4} + 1}{n(q + n^{-1/4})(1 - q - n^{-1/4})}, && (A144)
\end{aligned}$$
where (A141) follows from (A140) and the fact that $D(\alpha\|\beta) \ge 0$ in general. The inequality in (A142) is because of the following reasons. First, the minimizers of $\ell(n-\ell)$ in $\mathcal{M}_n$ are 1 or $n-1$. Second, the minimizer of $\ell(n-\ell)$ in $\mathcal{N}_n$ is the endpoint of $\mathcal{N}_n$ which is the most distant from $n/2$; for simplicity, we assumed without loss of generality that $q \in (1/2, 1)$. The passage to (A143) is due to the fact that $|\mathcal{M}_n| \le n - 1$, as well as (A138), and in (A144), we substituted $\epsilon_n = n^{-1/4}$. Denote the expression in (A144) by $G_n$ and notice that this expression converges to zero as $n \to \infty$. We substitute $G_n$ back into (A135) and then into (A131). Since $\{p_n\}$ is lower-bounded by $\frac{1}{2}$, we conclude that:
$$\frac{1}{2} \le p_n \le \frac{1}{2} + \frac{3}{2} \cdot \left(q^{2n} + G_n + (1-q)^{2n}\right). \tag{A145}$$
Thus, $\{p_n\}$ converges to $\frac{1}{2}$, as long as $q \in (0, 1)$.
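The convergence $p_n \to \frac{1}{2}$ is also easy to observe empirically; the following short Monte Carlo sketch is our own (the parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

# p_n = P{Bin(n-1, 1-q) + 1 >= Bin(n, 1-q)}: the probability that a
# '0'-agent keeps '0' from a symmetric start (the '+1' is its own opinion).
q, trials = 0.4, 200_000
for n in (10, 100, 10_000):
    zeros = rng.binomial(n - 1, 1 - q, size=trials) + 1
    ones = rng.binomial(n, 1 - q, size=trials)
    print(n, np.mean(zeros >= ones))  # approaches 1/2 as n grows
```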
  • Step 3: Asymptotic Behavior of the Number of Zeros
We would like to prove that the random variable $|N_0 - n|/\sqrt{n}$ is bounded away from zero with an overwhelmingly high probability at large $n$. Note that:
$$N_0 = \sum_{\ell=1}^{n} I_{n,\ell} + \sum_{\ell=1}^{n} J_{n,\ell}, \tag{A146}$$
where $I_{n,\ell} \sim \mathrm{Ber}(p_n)$ and $J_{n,\ell} \sim \mathrm{Ber}(1 - p_n)$ for all $\ell \in \{1, 2, \ldots, n\}$, and all of these binary random variables are independent. Let $\epsilon > 0$ and $\delta(\epsilon) > 0$, to be specified later on, with the property that $\delta(\epsilon) \to 0$ as $\epsilon \to 0$. Consider the following:
$$P\left\{|N_0 - n| \ge \sqrt{n}\,\delta(\epsilon)\right\} = P\left\{\left|\frac{1}{\sqrt{n}}\sum_{\ell=1}^{n}(I_{n,\ell} - p_n) + \frac{1}{\sqrt{n}}\sum_{\ell=1}^{n}(J_{n,\ell} - (1 - p_n))\right| \ge \delta(\epsilon)\right\}. \tag{A147}$$
To conclude that the two normalized sums inside the probability in (A147) converge in distribution to normal random variables, we invoke the Lindeberg–Feller central limit theorem [46] (p. 116, Theorem 2.4.5.). First, we introduce the concept of a "triangular array" of random variables: a triangular array is of the form $\{X_{n,i}\}$, $n \ge 1$, $1 \le i \le n$, where for every $n$, the random variables $X_{n,1}, X_{n,2}, \ldots, X_{n,n}$ are independent, have zero mean, and have finite variance. Then, one has the following result.
Theorem A1.
(Lindeberg–Feller CLT) Suppose $\{X_{n,i}\}$ is a triangular array such that:
$$Z_n = \frac{1}{n}\sum_{i=1}^{n} X_{n,i}, \tag{A148}$$
$$s_n^2 = \frac{1}{n}\sum_{i=1}^{n} \mathrm{Var}[X_{n,i}], \tag{A149}$$
and $s_n^2 \to s^2 \ge 0$. If the Lindeberg condition holds, namely, for every $\epsilon > 0$:
$$\frac{1}{n}\sum_{i=1}^{n} \mathbb{E}\left[X_{n,i}^2 \mathbb{1}\left\{|X_{n,i}| \ge \epsilon\sqrt{n}\right\}\right] \to 0, \tag{A150}$$
then $\sqrt{n}\,Z_n \xrightarrow{d} \mathcal{N}(0, s^2)$.
Now, concerning the left-hand-side normalized sum inside the probability in (A147), notice that:
$$\begin{aligned}
s_n^2 &= \frac{1}{n}\sum_{\ell=1}^{n} \mathrm{Var}[I_{n,\ell} - p_n] && (A151) \\
&= \frac{1}{n}\sum_{\ell=1}^{n} p_n(1 - p_n) && (A152) \\
&= p_n(1 - p_n), && (A153)
\end{aligned}$$
which converges to $s^2 = \frac{1}{4}$ as $n \to \infty$. In addition, Lindeberg's condition in (A150) is trivially satisfied, since all the random variables in our setting are bounded. Thus, it follows from the Lindeberg–Feller CLT that:
$$\frac{1}{\sqrt{n}}\sum_{\ell=1}^{n}(I_{n,\ell} - p_n) \xrightarrow{d} X \sim \mathcal{N}\left(0, \tfrac{1}{4}\right). \tag{A154}$$
From exactly the same considerations:
$$\frac{1}{\sqrt{n}}\sum_{\ell=1}^{n}(J_{n,\ell} - (1 - p_n)) \xrightarrow{d} Y \sim \mathcal{N}\left(0, \tfrac{1}{4}\right), \tag{A155}$$
and $X, Y$ are independent, since $\{I_{n,\ell}\}$ and $\{J_{n,\ell}\}$ are all independent. We continue from (A147) and arrive at:
$$\begin{aligned}
\lim_{n\to\infty} P\left\{|N_0 - n| \ge \sqrt{n}\,\delta(\epsilon)\right\} &= P\left\{|X + Y| \ge \delta(\epsilon)\right\} && (A156) \\
&= P\left\{\left|\mathcal{N}\left(0, \tfrac{1}{2}\right)\right| \ge \delta(\epsilon)\right\} && (A157) \\
&= 1 - \frac{\epsilon}{2}, && (A158)
\end{aligned}$$
which can obviously be satisfied by a proper choice of $\delta(\epsilon)$. We conclude that for any $\epsilon > 0$, there exists some $M(\epsilon)$, such that for all $n \ge M(\epsilon)$:
$$P\left\{|N_0 - n| \ge \sqrt{n}\,\delta(\epsilon)\right\} \ge 1 - \epsilon, \tag{A159}$$
which completes the proof of Proposition 3.

Appendix D. Proof of Proposition 4

Let us denote $N = N(\mathbf{X}_1; 0)$. For any $\mu \ge 0$, it follows from Markov's inequality that:
$$\begin{aligned}
P\{N \ge n + B_n\} &= P\left\{e^{\mu N} \ge e^{\mu(n + B_n)}\right\} && (A160) \\
&\le \mathbb{E}\left[e^{\mu N}\right] e^{-\mu(n + B_n)}, && (A161)
\end{aligned}$$
and thus, since (A161) holds for every $\mu \ge 0$, it follows that:
$$P\{N \ge n + B_n\} \le \inf_{\mu \ge 0} \mathbb{E}\left[e^{\mu N}\right] e^{-\mu(n + B_n)}. \tag{A162}$$
Note that:
$$N = \sum_{m=1}^{n} I_m + \sum_{m=1}^{n} J_m, \tag{A163}$$
where $I_m \sim \mathrm{Ber}(p_n)$ and $J_m \sim \mathrm{Ber}(1 - p_n)$ for all $m \in \{1, 2, \ldots, n\}$, and all of these binary random variables are independent. We obtain that:
$$\begin{aligned}
\mathbb{E}\left[e^{\mu N}\right] &= \mathbb{E}\left[\exp\left(\mu\left(\sum_{m=1}^{n} I_m + \sum_{m=1}^{n} J_m\right)\right)\right] && (A164) \\
&= \mathbb{E}\left[\prod_{m=1}^{n} e^{\mu I_m} \cdot \prod_{m=1}^{n} e^{\mu J_m}\right] && (A165) \\
&= \prod_{m=1}^{n} \mathbb{E}\left[e^{\mu I_m}\right] \cdot \prod_{m=1}^{n} \mathbb{E}\left[e^{\mu J_m}\right] && (A166) \\
&= \left(1 - p_n + p_n e^{\mu}\right)^n \cdot \left(p_n + (1 - p_n)e^{\mu}\right)^n && (A167) \\
&= \left[\left(1 + p_n(e^{\mu} - 1)\right) \cdot \left(1 + (1 - p_n)(e^{\mu} - 1)\right)\right]^n && (A168) \\
&\le \left[\left(1 + \tfrac{1}{2}(e^{\mu} - 1)\right) \cdot \left(1 + \tfrac{1}{2}(e^{\mu} - 1)\right)\right]^n && (A169) \\
&= \left(1 + \tfrac{1}{2}(e^{\mu} - 1)\right)^{2n}, && (A170)
\end{aligned}$$
where (A166) is due to the independence of all binary random variables and (A169) follows from the fact that the expression in (A168) is maximized for $p_n = \frac{1}{2}$.
Substituting (A170) back into (A162) yields that:
$$\begin{aligned}
P\{N \ge n + B_n\} &\le \inf_{\mu \ge 0} \left(1 + \tfrac{1}{2}(e^{\mu} - 1)\right)^{2n} \exp\{-\mu(n + B_n)\} && (A171) \\
&= \inf_{\mu \ge 0} \exp\left(2n\log\left(1 + \tfrac{1}{2}(e^{\mu} - 1)\right) - \mu(n + B_n)\right) && (A172) \\
&= \exp\left(\inf_{\mu \ge 0}\left\{2n\log\left(1 + \tfrac{1}{2}(e^{\mu} - 1)\right) - \mu(n + B_n)\right\}\right). && (A173)
\end{aligned}$$
Upon defining:
$$f(\mu) = 2n\log\left(1 + \tfrac{1}{2}(e^{\mu} - 1)\right) - \mu(n + B_n), \tag{A174}$$
we find that the solution to $f'(\mu) = 0$ is given by:
$$\mu^* = \log\frac{n + B_n}{n - B_n}. \tag{A175}$$
Substituting it back into (A173) provides that:
$$\begin{aligned}
P\{N \ge n + B_n\} &\le \exp\left(2n\log\left(1 + \tfrac{1}{2}(e^{\mu^*} - 1)\right) - \mu^*(n + B_n)\right) && (A176) \\
&= \exp\left(2n\log\left(1 + \tfrac{1}{2}\left(\frac{n + B_n}{n - B_n} - 1\right)\right) - (n + B_n)\log\frac{n + B_n}{n - B_n}\right) && (A177) \\
&= \exp\left(2n\log\left(1 + \frac{B_n}{n - B_n}\right) - (n + B_n)\log\frac{n + B_n}{n - B_n}\right) && (A178) \\
&= \exp\left(2n\log\frac{n}{n - B_n} - (n + B_n)\log\frac{n + B_n}{n - B_n}\right) && (A179) \\
&= \exp\left((n - B_n)\log\frac{n}{n - B_n} + (n + B_n)\log\frac{n}{n + B_n}\right) && (A180) \\
&= \exp\left(-n \cdot \left[\left(1 - \frac{B_n}{n}\right)\log\left(1 - \frac{B_n}{n}\right) + \left(1 + \frac{B_n}{n}\right)\log\left(1 + \frac{B_n}{n}\right)\right]\right). && (A181)
\end{aligned}$$
Consider the function:
$$g(t) = (1 - t)\log(1 - t) + (1 + t)\log(1 + t), \tag{A182}$$
which is symmetric around $t = 0$. Its first-order and second-order derivatives are given by:
$$g'(t) = \log\frac{1 + t}{1 - t}, \tag{A183}$$
and
$$g''(t) = \frac{2}{(1 + t)(1 - t)}. \tag{A184}$$
Hence, we conclude that $g(t) \ge t^2$, and thus:
$$P\{N \ge n + B_n\} \le \exp\left(-n \cdot \left(\frac{B_n}{n}\right)^2\right) = \exp\left(-\frac{B_n^2}{n}\right), \tag{A185}$$
which completes the proof of Proposition 4.
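A quick numeric spot-check of the inequality $g(t) \ge t^2$ used in the last step (a sketch of our own):

```python
import math

# g(t) = (1 - t)log(1 - t) + (1 + t)log(1 + t) satisfies g(t) >= t^2 on (-1, 1),
# since g(0) = g'(0) = 0 and g''(t) = 2 / ((1 + t)(1 - t)) >= 2.
for t in (0.001, 0.1, 0.5, 0.9, 0.999):
    g = (1 - t) * math.log(1 - t) + (1 + t) * math.log(1 + t)
    assert g >= t * t
```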

Appendix E. Proof of Proposition 5

  • Step 1: A Simplification for the Consensus Probability
Due to symmetry, we only analyze the case $I_0 > I_1$. It follows that:
$$\begin{aligned}
P\{C_n\} &= P\{N(\mathbf{X}_1; 0) = 2n\} && (A186) \\
&= P\left\{\bigcap_{i=1}^{2n} \{X_1(i) = 0\}\right\} && (A187) \\
&= \prod_{i=1}^{2n} P\{X_1(i) = 0\} && (A188) \\
&= \prod_{i=1}^{2n} \left(1 - P\{X_1(i) = 1\}\right). && (A189)
\end{aligned}$$
  • Step 2: A Lower Bound on $P\{X_1(i) = 1\}$
If an agent starts with a ‘0’, then the probability to decide in favor of ‘1’ is lower-bounded by:
$$P\left\{\mathrm{Bin}(n - C_n, \bar{q}) \ge \mathrm{Bin}(n + C_n - 1, \bar{q}) + 2\right\} \ge P\left\{\mathrm{Bin}(n - C_n, \bar{q}) \ge \mathrm{Bin}(n + C_n, \bar{q}) + 2\right\}. \tag{A190}$$
If an agent starts with a ‘1’, then the probability to decide in favor of ‘1’ is lower-bounded by:
$$\begin{aligned}
P\left\{\mathrm{Bin}(n - C_n - 1, \bar{q}) + 1 \ge \mathrm{Bin}(n + C_n, \bar{q})\right\} &\ge P\left\{\mathrm{Bin}(n - C_n - 1, \bar{q}) + \mathrm{Bin}(1, \bar{q}) \ge \mathrm{Bin}(n + C_n, \bar{q})\right\} && (A191) \\
&= P\left\{\mathrm{Bin}(n - C_n, \bar{q}) \ge \mathrm{Bin}(n + C_n, \bar{q})\right\}. && (A192)
\end{aligned}$$
Since (A190) cannot be larger than (A192), we continue with (A190). From now on, we lower-bound the probability in (A190), to be denoted by $Q_n$. The probability in (A190) can be written explicitly as:
$$Q_n = \sum_{\ell=0}^{n - C_n} \sum_{k=0}^{n + C_n} \binom{n - C_n}{\ell}(1-q)^{\ell} q^{n - C_n - \ell} \binom{n + C_n}{k}(1-q)^{k} q^{n + C_n - k} \cdot \mathbb{1}\{\ell \ge k + 2\}. \tag{A193}$$
We continue by lower-bounding the PMF of the binomial random variable $X = \mathrm{Bin}(n, p)$, which is given by:
$$P_X(k) = \binom{n}{k} p^k (1-p)^{n-k}, \quad k \in [0:n]. \tag{A194}$$
To lower-bound the binomial coefficient in (A194), we use the Stirling's bounds in (A97) and obtain that:
$$\begin{aligned}
\binom{n}{k} &= \frac{n!}{k! \cdot (n-k)!} && (A195) \\
&\ge \frac{\sqrt{2\pi}}{e^2}\sqrt{\frac{n}{k(n-k)}} \exp\left(-n\left[\frac{k}{n}\log\frac{k}{n} + \left(1 - \frac{k}{n}\right)\log\left(1 - \frac{k}{n}\right)\right]\right). && (A196)
\end{aligned}$$
Substituting (A196) back into (A194) yields:
$$P_X(k) \ge \frac{\sqrt{2\pi}}{e^2}\sqrt{\frac{n}{k(n-k)}} \exp\left(-n D\left(\frac{k}{n} \,\Big\|\, p\right)\right), \tag{A197}$$
where $D(\alpha\|\beta)$, for $\alpha, \beta \in [0, 1]$, is defined in (1). Substituting this lower bound twice into (A193), we arrive at:
$$Q_n \ge \frac{2\pi}{e^4} \sum_{\ell=0}^{n - C_n} \sum_{k=0}^{\ell - 2} \sqrt{\frac{n - C_n}{\ell(n - C_n - \ell)}} \exp\left(-(n - C_n) D\left(\frac{\ell}{n - C_n} \,\Big\|\, 1-q\right)\right) \times \sqrt{\frac{n + C_n}{k(n + C_n - k)}} \exp\left(-(n + C_n) D\left(\frac{k}{n + C_n} \,\Big\|\, 1-q\right)\right). \tag{A198}$$
As for the square-root factors in (A198), we have the following:
n C n ( n C n ) · n + C n k ( n + C n k )
n C n 1 2 ( n C n ) ( n C n 1 2 ( n C n ) ) · n + C n 1 2 ( n + C n ) ( n + C n 1 2 ( n + C n ) )
= 4 ( n C n ) ( n C n ) 2 · 4 ( n + C n ) ( n + C n ) 2
= 4 1 ( n C n ) ( n + C n )
= 4 1 n 2 C n 2
4 n ,
where (A199) is due to the fact that a square has the maximal area among all rectangles with a fixed perimeter. Lower-bounding (A198) using (A203) yields:
\begin{align}
Q_n &\geq \frac{8\pi}{e^4 n}\sum_{\ell=0}^{n - C_n}\sum_{k=0}^{\ell - 2}\exp\left\{-(n - C_n) D\left(\frac{\ell}{n - C_n}\,\Big\|\,1 - q\right)\right\}\times\exp\left\{-(n + C_n) D\left(\frac{k}{n + C_n}\,\Big\|\,1 - q\right)\right\} \tag{A204}\\
&\geq \frac{8\pi}{e^4 n}\sum_{\ell=(n - C_n)(1 - q)}^{(n + C_n)(1 - q)}\sum_{k=0}^{\ell - 2}\exp\left\{-(n - C_n) D\left(\frac{\ell}{n - C_n}\,\Big\|\,1 - q\right)\right\}\times\exp\left\{-(n + C_n) D\left(\frac{k}{n + C_n}\,\Big\|\,1 - q\right)\right\} \tag{A205}\\
&\geq \frac{8\pi}{e^4 n}\sum_{\ell=(n - C_n)(1 - q)}^{(n + C_n)(1 - q)}\sum_{k=\ell - C_n - 2}^{\ell - 2}\exp\left\{-(n - C_n) D\left(\frac{\ell}{n - C_n}\,\Big\|\,1 - q\right)\right\}\times\exp\left\{-(n + C_n) D\left(\frac{k}{n + C_n}\,\Big\|\,1 - q\right)\right\} \tag{A206}\\
&= \frac{8\pi}{e^4 n}\sum_{\ell=(n - C_n)(1 - q)}^{(n + C_n)(1 - q)}\sum_{j=0}^{C_n}\exp\left\{-(n - C_n) D\left(\frac{\ell}{n - C_n}\,\Big\|\,1 - q\right)\right\}\times\exp\left\{-(n + C_n) D\left(\frac{\ell - 2 - j}{n + C_n}\,\Big\|\,1 - q\right)\right\} \tag{A207}\\
&= \frac{8\pi}{e^4 n}\sum_{m=0}^{2C_n}\sum_{j=0}^{C_n}\exp\left\{-(n - C_n) D\left(\frac{(n - C_n + m)(1 - q)}{n - C_n}\,\Big\|\,1 - q\right)\right\}\times\exp\left\{-(n + C_n) D\left(\frac{(n - C_n + m)(1 - q) - 2 - j}{n + C_n}\,\Big\|\,1 - q\right)\right\}, \tag{A208}
\end{align}
where (A205) follows from the condition $\lim_{n \to \infty} C_n/n = 0$, which implies that for all large enough $n$, both $(n - C_n)(1 - q) \geq 0$ and $(n + C_n)(1 - q) \leq n - C_n$ hold. The inequality in (A206) also follows from the condition $\lim_{n \to \infty} C_n/n = 0$, since for all $\ell \in [(n - C_n)(1 - q), (n + C_n)(1 - q)]$, it holds that $\ell - C_n - 2 \geq 0$ for all sufficiently large $n$. In (A207) we changed the summation index from $k$ to $j$ according to $k = \ell - j - 2$, with $j \in \{0, 1, \ldots, C_n\}$, and in (A208) we changed the summation index from $\ell$ to $m$ according to $\ell = (n - C_n + m)(1 - q)$, with $m \in \{0, 1, \ldots, 2C_n\}$. To upper-bound the divergence terms in (A208), we invoke the following reverse Pinsker inequality [50] (p. 5974, Eq. (23)):
\begin{equation}
D(P \| Q) \leq \frac{2}{Q_{\min}}\cdot|P - Q|^2, \tag{A209}
\end{equation}
where
\begin{equation}
Q_{\min} = \min_{x \in \mathcal{X}} Q(x). \tag{A210}
\end{equation}
Let us define $\Delta_q = \min\{q, 1 - q\}$. Then, after some algebraic work, we arrive at:
\begin{align}
Q_n &\geq \frac{8\pi}{e^4 n}\sum_{m=0}^{2C_n}\sum_{j=0}^{C_n}\exp\left\{-(n - C_n)\cdot\frac{2}{\Delta_q}\cdot\frac{(1 - q)^2 m^2}{(n - C_n)^2}\right\}\times\exp\left\{-(n + C_n)\cdot\frac{2}{\Delta_q}\cdot\frac{\left[(1 - q)(2C_n - m) + 2 + j\right]^2}{(n + C_n)^2}\right\} \tag{A211}\\
&= \frac{8\pi}{e^4 n}\sum_{m=0}^{2C_n}\sum_{j=0}^{C_n}\exp\left\{-\frac{2}{\Delta_q}\cdot\frac{(1 - q)^2 m^2}{n - C_n}\right\}\cdot\exp\left\{-\frac{2}{\Delta_q}\cdot\frac{\left[(1 - q)(2C_n - m) + 2 + j\right]^2}{n + C_n}\right\} \tag{A212}\\
&\geq \frac{8\pi}{e^4 n}\sum_{m=0}^{2C_n}\sum_{j=0}^{C_n}\exp\left\{-\frac{2}{\Delta_q}\cdot\frac{m^2}{n - C_n}\right\}\cdot\exp\left\{-\frac{2}{\Delta_q}\cdot\frac{\left[(2C_n - m) + 2C_n\right]^2}{n - C_n}\right\} \tag{A213}\\
&\geq \frac{8\pi C_n}{e^4 n}\sum_{m=0}^{2C_n}\exp\left\{-\frac{2}{\Delta_q}\cdot\frac{m^2}{n - C_n}\right\}\cdot\exp\left\{-\frac{2}{\Delta_q}\cdot\frac{(4C_n - m)^2}{n - C_n}\right\} \tag{A214}\\
&= \frac{8\pi C_n}{e^4 n}\sum_{m=0}^{2C_n}\exp\left\{-\frac{2}{\Delta_q}\cdot\frac{m^2 + (4C_n - m)^2}{n - C_n}\right\} \tag{A215}\\
&= \frac{8\pi C_n}{e^4 n}\sum_{m=0}^{2C_n}\exp\left\{-\frac{4}{\Delta_q}\cdot\frac{(2C_n - m)^2 + 4C_n^2}{n - C_n}\right\}, \tag{A216}
\end{align}
where (A213) is true since $1 - q \leq 1$, $n - C_n \leq n + C_n$, and since $2 + j \leq 2C_n$ for all sufficiently large $n$. Now, the absolute value of the exponent in (A216) is maximized at $m = 0$, so every summand is lower-bounded by the $m = 0$ term, and thus:
\begin{align}
Q_n &\geq \frac{8\pi C_n}{e^4 n}\sum_{m=0}^{2C_n}\exp\left\{-\frac{4}{\Delta_q}\cdot\frac{8C_n^2}{n - C_n}\right\} \tag{A217}\\
&\geq \frac{16\pi}{e^4}\cdot\frac{C_n^2}{n}\exp\left\{-\frac{32}{\Delta_q}\cdot\frac{C_n^2}{n - C_n}\right\}. \tag{A218}
\end{align}
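The binary instance of the reverse Pinsker inequality (A209), which drives the step (A211), is also easy to verify numerically; for two distributions on a binary alphabet, the total variation $|P - Q|$ reduces to $|\alpha - \beta|$. The grid below is ours and arbitrary.

import math

def binary_kl(a, b):
    # D(a || b) between Bernoulli(a) and Bernoulli(b), natural logarithm
    d = 0.0
    if a > 0:
        d += a * math.log(a / b)
    if a < 1:
        d += (1 - a) * math.log((1 - a) / (1 - b))
    return d

for a in [i / 20 for i in range(21)]:          # a in [0, 1]
    for b in [j / 20 for j in range(1, 20)]:   # b in (0, 1)
        rhs = 2 / min(b, 1 - b) * (a - b)**2   # (A209) with Q_min = min{b, 1-b}
        assert binary_kl(a, b) <= rhs + 1e-12  # tolerance for rounding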
  • Step 3: Wrapping Up
We denote the constant $f_q = 32/\Delta_q$. Continuing from (A189), and using the fact that $\Pr(X_1(i) = 1) \geq Q_n$ together with (A218), we finally arrive at:
\begin{align}
\Pr\{\mathcal{C}_n\} &\leq \prod_{i=1}^{2n}\left(1 - \frac{16\pi}{e^4}\cdot\frac{C_n^2}{n}\cdot\exp\left\{-f_q\cdot\frac{C_n^2}{n - C_n}\right\}\right) \tag{A219}\\
&= \left(1 - \frac{16\pi}{e^4}\cdot\frac{C_n^2}{n}\cdot\exp\left\{-f_q\cdot\frac{C_n^2}{n - C_n}\right\}\right)^{2n} \tag{A220}\\
&= \exp\left\{2n\cdot\log\left(1 - \frac{16\pi}{e^4}\cdot\frac{C_n^2}{n}\cdot\exp\left\{-f_q\cdot\frac{C_n^2}{n - C_n}\right\}\right)\right\} \tag{A221}\\
&\leq \exp\left\{-\frac{32\pi}{e^4}\cdot C_n^2\cdot\exp\left\{-f_q\cdot\frac{C_n^2}{n - C_n}\right\}\right\} \tag{A222}\\
&\leq \exp\left\{-C_n^2\cdot\exp\left\{-f_q\cdot\frac{C_n^2}{n - C_n}\right\}\right\}, \tag{A223}
\end{align}
where (A222) follows from the inequality $\log(1 - y) \leq -y$, and (A223) holds since $32\pi/e^4 \geq 1$. This completes the proof of Proposition 5.
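To illustrate the converse bound (A223) (our illustration, with arbitrary parameters), the snippet below evaluates it for a bias $C_n = n^{1/4}$, which grows strictly slower than $\sqrt{n}$, and loss probability $q = 0.2$; the resulting upper bound on the consensus probability vanishes as $n$ grows.

import math

q = 0.2
f_q = 32 / min(q, 1 - q)  # the constant f_q = 32 / Delta_q from Step 3
for n in [10**3, 10**4, 10**5, 10**6]:
    Cn = n**0.25          # a bias growing strictly slower than sqrt(n)
    bound = math.exp(-Cn**2 * math.exp(-f_q * Cn**2 / (n - Cn)))
    print(n, bound)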

References

  1. Vujičić, D.; Jagodić, D.; Ranđić, S. Blockchain technology, Bitcoin, and Ethereum: A brief overview. In Proceedings of the 2018 17th International Symposium Infoteh-Jahorina (Infoteh), IEEE, East Sarajevo, Bosnia, 21–23 March 2018; pp. 1–6. [Google Scholar]
  2. Yang, C.-T.; Shih, W.-C.; Huang, C.-L.; Jiang, F.-C.; Chu, W.C.-C. On construction of a distributed data storage system in cloud. Computing 2016, 98, 93–118. [Google Scholar] [CrossRef]
  3. Dingledine, R.; Freedman, M.J.; Molnar, D. The free haven project: Distributed anonymous storage service. In Designing Privacy Enhancing Technologies; Springer: Berlin/Heidelberg, Germany, 2001; pp. 67–95. [Google Scholar]
  4. Fidge, C.J. Timestamps in message-passing systems that preserve the partial ordering. Proc. Aust. Comput. Sci. Conf. 1987, 10, 56–66. [Google Scholar]
  5. Mattern, F. Virtual Time and Global States of Distributed Systems; Department of Computer Science, University of Kaiserslautern: Kaiserslautern, Germany, 1989. [Google Scholar]
  6. Waldo, J. A hitchhiker’s guide to the blockchain universe. In Communications of the ACM; ACM: New York, NY, USA, 2019; Volume 62, pp. 38–42. [Google Scholar]
  7. Liu, Q.; Wang, G.; Wu, J. Consistency as a service: Auditing cloud consistency. In Proceedings of the IEEE Transactions on Network and Service Management; IEEE: New York, NY, USA, 2014; Volume 11, pp. 25–35. [Google Scholar]
  8. Kraska, T.; Hentschel, M.; Alonso, G.; Kossmann, D. Consistency rationing in the cloud: Pay only when it matters. In Proceedings of the VLDB Endowment, Lyon, France, 24–28 August 2009; Volume 2, pp. 253–264. [Google Scholar]
  9. Chandra, T.D.; Toueg, S. Unreliable failure detectors for reliable distributed systems. J. ACM (JACM) 1996, 43, 225–267. [Google Scholar] [CrossRef]
  10. Hurfin, M.; Raynal, M. A simple and fast asynchronous consensus protocol based on a weak failure detector. Distrib. Comput. 1999, 12, 209–223. [Google Scholar] [CrossRef]
  11. Schiper, A. Early consensus in an asynchronous system with a weak failure detector. Distrib. Comput. 1997, 10, 149–157. [Google Scholar] [CrossRef] [Green Version]
  12. Aguilera, M.K. Stumbling over consensus research: Misunderstandings and issues. In Replication; Springer: Berlin/Heidelberg, Germany, 2010; pp. 59–72. [Google Scholar]
  13. Borran, F.; Prakash, R.; Schiper, A. Consensus Problem in Wireless Ad Hoc Networks: Addressing the Right Issues. Technical Reports. 2007. Available online: https://infoscience.epfl.ch/record/114619 (accessed on 14 January 2022).
  14. Zieliński, P. Indirect Channels: A Bandwidth-Saving Technique for Fault-Tolerant Protocols; Tech. Rep.; University of Cambridge, Computer Laboratory: Cambridge, UK, 2007. [Google Scholar]
  15. Guerraoui, R.; Hurfinn, M.; Mostéfaoui, A.; Oliveira, R.; Raynal, M.; Schiper, A. Consensus in asynchronous distributed systems: A concise guided tour. In Advances in Distributed Systems; Springer: Berlin/Heidelberg, Germany, 2000; pp. 33–47. [Google Scholar]
  16. Tanenbaum, A.S.; Wetherall, D. Computer Networks, 5th ed.; Prentice Hall: Hoboken, NJ, USA, 2011. [Google Scholar]
  17. Freiling, F.C.; Guerraoui, R.; Kuznetsov, P. The failure detector abstraction. ACM Comput. Surv. (CSUR) 2011, 43, 1–40. [Google Scholar] [CrossRef] [Green Version]
  18. Gray, J.N. Notes on data base operating systems. In Operating Systems; Springer: Berlin/Heidelberg, Germany, 1978; pp. 393–481. [Google Scholar]
  19. Fischer, M.J. The consensus problem in unreliable distributed systems (a brief survey). In International Conference on Fundamentals of Computation Theory; Springer: Berlin/Heidelberg, Germany, 1983; pp. 127–140. [Google Scholar]
  20. Gray, J.; Lamport, L. Consensus on transaction commit. ACM Trans. Database Syst. (TODS) 2006, 31, 133–160. [Google Scholar] [CrossRef]
  21. Antoniadis, K.; Guerraoui, R.; Malkhi, D.; Seredinschi, D.-A. State Machine Replication Is More Expensive than Consensus; Technical Reports. 2018. Available online: https://infoscience.epfl.ch/record/256238 (accessed on 14 January 2022).
  22. Attiya, H.; Rachman, O. Atomic snapshots in O(n log n) operations. SIAM J. Comput. 1998, 27, 319–340. [Google Scholar] [CrossRef]
  23. Lamport, L. Time, clocks, and the ordering of events in a distributed system. In Concurrency: The Works of Leslie Lamport; ACM Books: New York, NY, USA, 2019; pp. 179–196. [Google Scholar]
  24. Lynch, N.A. Distributed Algorithms; Elsevier: Amsterdam, The Netherlands, 1996. [Google Scholar]
  25. Halpern, J.Y.; Tuttle, M.R. Knowledge, probability, and adversaries. J. ACM (JACM) 1993, 40, 917–960. [Google Scholar] [CrossRef] [Green Version]
  26. Rubinstein, A. The electronic mail game: Strategic behavior under almost common knowledge. Am. Econ. Rev. 1989, 79, 385–391. [Google Scholar]
  27. Gács, P.; Kurdyumov, G.L.; Levin, L.A. One-dimensional uniform arrays that wash out finite islands. Probl. Peredachi Inform. 1978, 14, 92–96. [Google Scholar]
  28. Mustafa, N.H.; Pekeč, A. Majority consensus and the local majority rule. In International Colloquium on Automata, Languages, and Programming; Springer: Berlin/Heidelberg, Germany, 2001; pp. 530–542. [Google Scholar]
  29. Moreira, A.A.; Mathur, A.; Diermeier, D.; Amaral, L.A. Efficient system-wide coordination in noisy environments. Proc. Natl. Acad. Sci. USA 2004, 101, 12085–12090. [Google Scholar] [CrossRef] [Green Version]
  30. Gogolev, A.; Marchenko, N.; Marcenaro, L.; Bettstetter, C. Distributed binary consensus in networks with disturbances. ACM Trans. Auton. Adapt. Syst. (TAAS) 2015, 10, 1–17. [Google Scholar] [CrossRef]
  31. Thomas, R.H. A majority consensus approach to concurrency control for multiple copy databases. ACM Trans. Database Syst. (TODS) 1979, 4, 180–209. [Google Scholar] [CrossRef]
  32. Breitwieser, H.; Leszak, M. A distributed transaction processing protocol based on majority consensus. In Proceedings of the First ACM SIGACT-SIGOPS Symposium on Principles of Distributed Computing, Ottawa, ON, Canada, 18–20 August 1982; pp. 224–237. [Google Scholar]
  33. Kanrar, S.; Chattopadhyay, S.; Chaki, N. A new hybrid mutual exclusion algorithm in the absence of majority consensus. In Advanced Computing and Systems for Security; Springer: Berlin/Heidelberg, Germany, 2016; pp. 201–214. [Google Scholar]
  34. Mostofi, Y. Binary consensus with Gaussian communication noise: A probabilistic approach. In Proceedings of the 2007 46th IEEE Conference on Decision and Control, IEEE, New Orleans, LA, USA, 12–14 December 2007; pp. 2528–2533. [Google Scholar]
  35. Perron, E.; Vasudevan, D.; Vojnovic, M. Using three states for binary consensus on complete graphs. In Proceedings of the IEEE INFOCOM 2009, IEEE, Rio de Janeiro, Brazil, 19–25 April 2009; pp. 2527–2535. [Google Scholar]
  36. Cruise, J.; Ganesh, A. Probabilistic consensus via polling and majority rules. Queueing Syst. 2014, 78, 99–120. [Google Scholar] [CrossRef]
  37. Wensley, J.H.; Lamport, L.; Goldberg, J.; Green, M.W.; Levitt, K.N.; Melliar-Smith, P.M.; Shostak, R.E.; Weinstock, C.B. Sift: Design and analysis of a fault-tolerant computer for aircraft control. Proc. IEEE 1978, 66, 1240–1255. [Google Scholar] [CrossRef]
  38. Pease, M.; Shostak, R.; Lamport, L. Reaching agreement in the presence of faults. J. ACM (JACM) 1980, 27, 228–234. [Google Scholar] [CrossRef] [Green Version]
  39. Santoro, N.; Widmayer, P. Time is not a healer. In Annual Symposium on Theoretical Aspects of Computer Science; Springer: Berlin/Heidelberg, Germany, 1989; pp. 304–313. [Google Scholar]
  40. Padhye, J.; Firoiu, V.; Towsley, D.; Kurose, J. Modeling TCP throughput: A simple model and its empirical validation. In Proceedings of the ACM SIGCOMM’98 Conference on Applications, Technologies, Architectures, and Protocols for Computer Communication, Vancouver, BC, Canada, 31 August–4 September 1998; pp. 303–314. [Google Scholar]
  41. Lua, E.K.; Crowcroft, J.; Pias, M.; Sharma, R.; Lim, S. A survey and comparison of peer-to-peer overlay network schemes. IEEE Commun. Surv. Tutor. 2005, 7, 72–93. [Google Scholar]
  42. Abdullah, M.A.; Draief, M. Global majority consensus by local majority polling on graphs of a given degree sequence. Discret. Appl. Math. 2015, 180, 1–10. [Google Scholar] [CrossRef]
  43. Gärtner, B.; Zehmakan, A.N. Majority model on random regular graphs. In Proceedings of the Latin American Symposium on Theoretical Informatics, Buenos Aires, Argentina, 16–19 April 2018; pp. 572–583. [Google Scholar]
  44. Tran, L.; Vu, V. Reaching a consensus on random networks: The power of few. Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2020). Leibniz Int. Proc. Inform. 2020, 176, 20:1–20:15. [Google Scholar]
  45. Fountoulakis, N.; Kang, M.; Makai, T. Resolution of a conjecture on majority dynamics: Rapid stabilization in dense random graphs. Random Struct. Algorithms 2020, 57, 1134–1156. [Google Scholar] [CrossRef]
  46. Durrett, R. Probability: Theory and Examples, 2nd ed.; Cambridge University Press: Cambridge, UK, 1996. [Google Scholar]
  47. Billingsley, P. Convergence of Probability Measures, 2nd ed.; John Wiley & Sons Inc.: New York, NY, USA, 1999. [Google Scholar]
  48. Csiszár, I. Information-type measures of difference of probability distributions and indirect observations. Stud. Sci. Math. Hung. 1967, 2, 299–318. [Google Scholar]
  49. Kullback, S. A lower bound for discrimination information in terms of variation. IEEE Trans. Inf. Theory 1967, 13, 126–127. [Google Scholar] [CrossRef]
  50. Sason, I.; Verdú, S. f-divergence inequalities. IEEE Trans. Inf. Theory 2016, 62, 5973–6006. [Google Scholar] [CrossRef]