
Tripartite Dynamic Zero-Sum Quantum Games

Information Security and National Computing Grid Laboratory, Southwest Jiaotong University, Chengdu 610031, China
* Author to whom correspondence should be addressed.
Entropy 2021, 23(2), 154; https://doi.org/10.3390/e23020154
Submission received: 22 December 2020 / Revised: 15 January 2021 / Accepted: 22 January 2021 / Published: 27 January 2021
(This article belongs to the Section Quantum Information)

Abstract: The Nash equilibrium plays a crucial role in game theory. Most results, however, are based on classical resources. Our goal in this paper is to explore multipartite zero-sum games in quantum settings. We find that in two different settings there is no strategy that makes a tripartite classical game fair. Interestingly, this is resolved by providing dynamic zero-sum quantum games using a single quantum state. Moreover, the gains of some players may be changed dynamically in terms of the committed state. Both quantum games are robust against preparation noise and measurement errors.

1. Introduction

The quantum state, as an important resource, has been widely used for accomplishing tasks that are difficult or impossible with classical resources [1]. Quantum game theory, as one of its important applications, investigates the strategic behavior of agents using quantum resources. It is closely related to quantum computing and Bell theory [2,3,4]. In most cases, distributed tasks can be regarded as equivalent quantum games [5,6]. So far, it has been widely used in Bell tests [3,7], quantum network verification [8], distributed computation [8,9,10,11,12,13], parallel testing [14,15,16,17], and device-independent quantum key distribution [18,19,20,21].
Different from those applications, Marinatto and Weber present a two-player quantum game using a Nash strategy which gives more reward than the classical one [22]. Eisert et al. resolve the prisoner's dilemma with quantum settings by providing higher gains than with classical settings [23]. Sekiguchi et al. have proved the uniqueness of Nash equilibria in the quantum Cournot duopoly game [24]. Brassard et al. recast Mermin's multi-player game in terms of quantum pseudo-telepathy [25]. Meyer introduces a quantum strategy for the coin flipping game [26]. In the absence of a fair third party, Zhang et al. prove that a space-separated two-party game can achieve fairness by combining Nash equilibrium theory with quantum game theory [27]. All of these quantum games show different advantages over classical games. Nevertheless, there are specific games without quantum advantage. One typical example is the guessing your neighbor's input (GYNI) game [28] or its generalization [8]. Hence, it is interesting to find games with different features.
Every game contains three elements: players, strategies, and gain functions [6,22]. Games can be classified in various ways according to different benchmarks. According to the participants' knowledge of other participants, a game may be a complete information game [3] or an incomplete information game [28]. From the time sequence of behavior, it may be divided into static games [22] and dynamic games [27,29]. Another distinction is between cooperative games [3] and competing (non-cooperative) games [22]. Non-cooperative games can be further divided into complete information static games, complete information dynamic games, incomplete information static games, and incomplete information dynamic games [22,29]. As the basis of non-cooperative games [29], Nash theory is composed of the optimal strategies of all participants such that no one is willing to break the equilibrium. Moreover, a game is a zero-sum game if the total gain is zero for any combination of strategies [27]; otherwise, it is a non-zero-sum game. So far, most games have focused on cooperative games such as the Bell game [3]. Our goal in this paper is to find dynamic zero-sum games with Nash equilibria. A dynamic game is one in which the actions of different players have a sequence, and a later actor can observe the actions chosen by the actors before him [29]. Different from the bipartite zero-sum game [27], we provide tripartite quantum fair zero-sum games which cannot be realized in classical scenarios, even though it is in general difficult to evaluate Nash equilibria [30]. An interesting feature is that the present quantum games use only a clean qubit without entanglement.
The rest of the paper is organized as follows. In Section 2, we introduce a tripartite zero-sum game with two different settings inspired by the bipartite game [27]. We show that there is no strategy for achieving a fair game using classical resources. Both models can be regarded as complete information dynamic zero-sum games. In Section 3, we present quantum zero-sum games with the same settings using single-photon quantum states. Both games are asymptotically fair in terms of some free parameters. Although any quantum pure state is unitarily equivalent to a classical state, our results show that this kind of resource is also useful for special quantum tasks going beyond classical scenarios. In Section 4, we show the robustness of the two quantum games. The last section concludes the paper.

2. Classical Tripartite Dynamic Zero-Sum Games

2.1. Game Model

We firstly present some definitions as follows.
Definition 1.
Dynamic zero-sum game means that the actions of different players have a sequence, that a later participant can observe the former participants' actions, and that the sum of payments of all players is zero for any combination of strategies.
Definition 2.
Fair game means that the game does not favor any player. Games in this paper are all zero-sum games; thus, fairness in this paper means that everyone's average gain is zero.
Definition 3.
Asymptotically fair game means that, under the given initial conditions, the game may not be fair, but with the increase of a variable parameter the degree of deviation from the fair game becomes smaller and smaller, and the game is fair when the variable parameter approaches infinity.
Inspired by bipartite game [27], we present a tripartite game G , as shown in Figure 1.
In the present games, we assume that the actions of different participants have a sequence, where a later participant can observe the former participants' actions. There are four stages in the present model, as follows.
S1.
Alice randomly puts a ball into one of three black boxes, $A$, $B$ and $C$. Alice sends box $C$ to Rice, and sends box $B$ to Bob.
S2.
Bob gives his own choice, i.e., he chooses to open $B$ or to ask Alice to open $A$, but he does not take any action yet.
S3.
Rice chooses her own strategy and takes action, i.e., she opens $C$ or lets Alice open $A$. If Rice opens $C$ and there is no ball in the box, the game enters the fourth stage.
S4.
It is Bob's turn to take action according to the strategy he has chosen in S2, i.e., he opens box $B$ or asks Alice to open box $A$.
Moreover, for any strategy combination, we assume that the total payment of all participants is zero. In the following, we will explore this kind of games with two settings with different gains.
The classical game tree is shown in Figure 2. In the stages of $G$, the winning rules of the game are given by
W1.
Bob and Rice win if Rice chooses to open A and finds the ball in S3.
W2.
Alice wins if Rice does not find the ball in box A in S3.
W3.
Rice wins if Rice opens C and finds the ball in S3.
W4.
Bob and Rice win if Bob opens A and finds the ball in S4.
W5.
Alice wins if Bob opens A and does not find the ball in A in S4.
W6.
Bob wins if Bob opens box B and finds the ball in S4.
W7.
Alice wins if Bob opens box B and does not find the ball in S4.
Now, for convenience, we define the following probabilities.
P1.
Denote $P_1$ as the probability that Alice puts the ball into $A$.
P2.
Denote $P_2$ as the probability that Alice puts the ball into $B$ or $C$.
P3.
Denote $P_C$ as the probability that Rice chooses to open $C$.
P4.
Denote $P_B$ as the probability that Bob chooses to open $B$.
Here, we assume that Alice puts the ball into $B$ or $C$ with equal probability. It follows that $P_1 + 2P_2 = 1$, with $P_B, P_C \in [0,1]$.
From the winning rules W1–W7 of $G$, it is easy to get that the winning probability $P_{Rice}^1$ for Rice from W3 (i.e., Rice finds the ball after opening $C$) is given by
$P_{Rice}^1 = P_2 P_C$
Moreover, the winning probability for Bob from W6 (i.e., Rice chooses to open $C$ and does not find the ball, but Bob finds the ball after opening $B$) is given by
$P_{Bob}^1 = P_2 P_C P_B (1-P_2)$
For Alice, there are three subcases W2, W5 and W7 for winning. It follows that
$P_{Alice} = 1 - P_1 + P_B P_C P_2^2 - P_B P_C P_2 - P_C P_2 + P_B P_C P_1 + P_C P_1 P_2 - P_B P_C P_1 P_2$
The winning probability for Bob from W1 and W4 is given by
$P_{Bob}^2 = P_1 - P_B P_C P_1 - P_C P_1 P_2 + P_B P_C P_1 P_2$
The same result holds for Rice from W1 and W4, i.e., $P_{Rice}^2 = P_{Bob}^2$.
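The four closed-form probabilities above cover the disjoint outcome classes exactly once (with W1/W4 counted once, since $P_{Rice}^2 = P_{Bob}^2$ describes the same event), so they must sum to one for any strategies. A minimal numerical sketch of ours (the function name and sample values are illustrative):

```python
def outcome_probs(P1, PB, PC):
    """Closed-form winning probabilities of the classical game G."""
    P2 = (1 - P1) / 2   # Alice puts the ball into B or C with equal probability
    p_rice1 = P2 * PC                                  # W3: Rice finds the ball in C
    p_bob1 = P2 * PC * PB * (1 - P2)                   # W6: Bob finds the ball in B
    p_alice = (1 - P1 + PB * PC * P2**2 - PB * PC * P2 - PC * P2
               + PB * PC * P1 + PC * P1 * P2 - PB * PC * P1 * P2)   # W2 + W5 + W7
    p_bob2 = P1 - PB * PC * P1 - PC * P1 * P2 + PB * PC * P1 * P2   # W1 + W4
    return p_rice1, p_bob1, p_alice, p_bob2

# the four outcome classes exhaust the game tree
assert abs(sum(outcome_probs(0.3, 0.6, 0.8)) - 1) < 1e-12
```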
Here, we analyze the players' strategies in the present game $G$ to show the main idea. Alice does not know which box Rice or Bob will choose to open before she prepares, and she may lose the game if she puts the ball in one of the three boxes with a higher probability. Thus, there is a tradeoff for Alice in choosing her strategy (the probability $P_1$). Bob does not know which box Rice will choose to open when he sets the probability $P_B$, so he should consider the known parameter $P_1$ given by Alice and an unknown parameter $P_C$ given by Rice. A similar analysis applies to Rice's strategy: Rice needs to choose her own parameter based on what she knows before she takes action.

2.2. The First Tripartite Classical Game

It is well known that every player maximizes his or her own interest in a non-cooperative game. In this section, we present the first game $G_1$ with the gain setting given in Table 1. Our goal is to show the unfairness of this game with classical resources.

2.2.1. The Average Gain of Rice

From Table 1, the average gain of Rice is given by
$G_{Rice} = 2P_{Rice}^1 - P_{Bob}^1 - P_{Alice} + \eta P_{Rice}^2 = 3P_2 P_C + (\eta+1)P_1 - (\eta+1)P_1 P_B P_C - (\eta+1)P_1 P_2 P_C + (\eta+1)P_1 P_2 P_B P_C - 1$
From Equation (6), we get that the partial derivative of $G_{Rice}$ with respect to $P_C$ is given by
$\frac{\partial G_{Rice}}{\partial P_C} = 3P_2 - (\eta+1)P_1 P_B + (\eta+1)P_B P_1 P_2 - (\eta+1)P_1 P_2$
If $\frac{\partial G_{Rice}}{\partial P_C} < 0$, $G_{Rice}$ is a decreasing function of $P_C$. In this case, Rice has to set $P_C = 0$ to maximize her gain. If $\frac{\partial G_{Rice}}{\partial P_C} > 0$, i.e., $G_{Rice}$ is an increasing function of $P_C$, Rice will choose $P_C = 1$ to maximize her gain. Moreover, when $\frac{\partial G_{Rice}}{\partial P_C} = 0$, i.e., $G_{Rice}$ is constant in $P_C$, $P_C$ can be any probability.
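Since $G_{Rice}$ is affine in $P_C$, the finite difference between its endpoint values equals the partial derivative, which is exactly what drives the three cases above. A small check of ours (names and sample values are illustrative), assuming the expressions for $G_{Rice}$ and its derivative as written:

```python
def g_rice(P1, PB, PC, eta):
    """Rice's average gain in G_1 as a function of the strategies."""
    P2 = (1 - P1) / 2
    return (3 * P2 * PC + (eta + 1) * P1
            - (eta + 1) * P1 * PB * PC
            - (eta + 1) * P1 * P2 * PC
            + (eta + 1) * P1 * P2 * PB * PC - 1)

def dg_rice_dPC(P1, PB, eta):
    """Partial derivative of G_Rice with respect to P_C."""
    P2 = (1 - P1) / 2
    return (3 * P2 - (eta + 1) * P1 * PB
            + (eta + 1) * PB * P1 * P2 - (eta + 1) * P1 * P2)

# G_Rice is affine in P_C, so the endpoint finite difference equals the slope
P1, PB, eta = 0.4, 0.7, 2.0
slope = g_rice(P1, PB, 1.0, eta) - g_rice(P1, PB, 0.0, eta)
assert abs(slope - dg_rice_dPC(P1, PB, eta)) < 1e-9
```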

2.2.2. The Average Gain of Bob

Similar to Equation (6), we get that the average gain of Bob is given by
$G_{Bob} = -P_{Rice}^1 + 2P_{Bob}^1 - P_{Alice} + P_{Bob}^2 = 3P_B P_C P_2 - 3P_B P_C P_2^2 + 2P_1 - 2P_B P_C P_1 - 2P_C P_1 P_2 + 2P_B P_C P_1 P_2 - 1$
There are several cases for maximizing $G_{Bob}$. We only present one case in the following to explain the main idea; the other cases are included in Appendix A.
For $\frac{\partial G_{Rice}}{\partial P_C} \le 0$, i.e., $\frac{3P_2-(\eta+1)P_1P_2}{(\eta+1)P_1(1-P_2)} \le P_B \le 1$, we get that $P_C = 0$. From Equation (8), we obtain
$G_{Bob}^1 = 2P_1 - 1$
C1.
If $0 \le \frac{3P_2-(\eta+1)P_1P_2}{(\eta+1)P_1(1-P_2)} \le 1$, we get that $\frac{3}{2\eta+5} \le P_1 \le \frac{3}{\eta+1}$. In this case, Bob chooses $G_{Bob}^1$ such that $P_B = X_0$, where $X_0 > \frac{3P_2-(\eta+1)P_1P_2}{(\eta+1)P_1(1-P_2)}$.
C2.
If $\frac{3P_2-(\eta+1)P_1P_2}{(\eta+1)P_1(1-P_2)} < 0$, i.e., $\frac{3}{\eta+1} < P_1 \le 1$, then owing to $P_B \ge 0$, $P_B$ can be any probability.
All the results are given in Table 2.

2.2.3. The Average Gain of Alice

In this subsection, we calculate the average gain of Alice, denoted by $G_{Alice}$. It is easy to write the expression for $G_{Alice}$ according to Table 1 as follows.
$G_{Alice} = -P_{Rice}^1 - P_{Bob}^1 + 2P_{Alice} - (\eta+1)P_{Rice}^2 = -3P_C P_2 - 3P_B P_C P_2 + 3P_B P_C P_2^2 + (\eta+3)P_C P_1 P_2 + (\eta+3)P_B P_C P_1 - (\eta+3)P_1 - (\eta+3)P_B P_C P_1 P_2 + 2$
For the case of $0 \le P_1 < \frac{3}{2\eta+5}$, it follows from Table 2 that $P_C = 1$ and $P_B = 1$. Equation (9) is then rewritten as
$G_{Alice} = 3P_2^2 - 6P_2 + 2$
where $P_2 = \frac{1-P_1}{2}$. It is easy to prove that $G_{Alice}$ achieves its maximum when $P_1 = \frac{3}{2\eta+5}$. Denote $P_1 = \frac{3}{2\eta+5} - \epsilon$, where $\epsilon$ is a small constant satisfying $\epsilon > 0$. It follows from Equation (10) that
$G_{Alice} = \frac{-\eta^2+4\eta+23}{(2\eta+5)^2} + O(\epsilon)$
By using the same method for the rest of the cases (see Appendix B for details), we get Table 3. From Table 3, we get that $\frac{-\eta^2+4\eta+23}{(2\eta+5)^2} + O(\epsilon) > \frac{\eta+1}{2\eta+5}$ for $1 \le \eta < 2$, and $\frac{-\eta^2+4\eta+23}{(2\eta+5)^2} + O(\epsilon) < \frac{\eta+1}{2\eta+5}$ for $\eta \ge 2$. Thus, Alice will choose $P_1 = \frac{3}{2\eta+5} - \epsilon$ to maximize her gain when $1 \le \eta < 2$, and $P_1 = \frac{3}{2\eta+5}$ when $\eta \ge 2$.

2.2.4. Fair Zero-Sum Game

From Section 2.2.3, we get that $P_1 = \frac{3}{2\eta+5} - \epsilon$ for $1 \le \eta < 2$. By induction we know that $P_B = 1$ and $P_C = 1$ from Table 2. From Equations (6), (8) and (9), the expressions of $G_{Alice}$, $G_{Bob}$ and $G_{Rice}$ with respect to $\eta$ are shown in Table 4.
Result 1. The tripartite game $G_1$ is unfair for any $\eta \ge 1$.
Proof. 
The numeric evaluations of $G_{Alice}$, $G_{Bob}$ and $G_{Rice}$ are shown in Figure 3 for $\eta \le 100$ and $\epsilon = 10^{-5}$. They show that the tripartite classical game $G_1$ is unfair. Formally, since the tripartite game $G_1$ is a zero-sum game, the summation of the average gains of all players is zero, and it is sufficient to prove that the gain of one player is strictly greater than that of another. The proof is completed by two cases.
C1.
For $1 \le \eta < 2$, from Table 4, we get that
$G_{Alice} - G_{Bob} = \frac{9\eta+36}{(2\eta+5)^2} + O(\epsilon) > 0$
C2.
For $\eta \ge 2$, from Table 4, we obtain that
$G_{Alice} - G_{Bob} = \frac{3\eta}{2\eta+5} > 0$
if ϵ is very small. This completes the proof. □
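The case $1 \le \eta < 2$ can be spot-checked by evaluating Equations (6), (8) and (9) directly at $P_1 = \frac{3}{2\eta+5} - \epsilon$ and $P_B = P_C = 1$. A sketch of ours (names and values are illustrative):

```python
def gains(P1, PB, PC, eta):
    """Average gains of Alice, Bob and Rice in G_1 (Equations (6), (8), (9))."""
    P2 = (1 - P1) / 2
    g_rice = (3*P2*PC + (eta+1)*P1 - (eta+1)*P1*PB*PC
              - (eta+1)*P1*P2*PC + (eta+1)*P1*P2*PB*PC - 1)
    g_bob = (3*PB*PC*P2 - 3*PB*PC*P2**2 + 2*P1 - 2*PB*PC*P1
             - 2*PC*P1*P2 + 2*PB*PC*P1*P2 - 1)
    g_alice = (-3*PC*P2 - 3*PB*PC*P2 + 3*PB*PC*P2**2 + (eta+3)*PC*P1*P2
               + (eta+3)*PB*PC*P1 - (eta+3)*P1 - (eta+3)*PB*PC*P1*P2 + 2)
    return g_alice, g_bob, g_rice

eta, eps = 1.0, 1e-6
ga, gb, gr = gains(3 / (2*eta + 5) - eps, 1.0, 1.0, eta)
assert abs(ga + gb + gr) < 1e-12                              # zero-sum game
assert abs((ga - gb) - (9*eta + 36) / (2*eta + 5)**2) < 1e-4  # case C1, up to O(eps)
assert ga > gb                                                # the game favours Alice
```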

2.3. The Second Tripartite Classical Game

In this section, we present the second game $G_2$ with the different settings shown in Table 5. The first game and the second game are the same except for the gain table, i.e., both games adopt the game model in Section 2.1. Similar to the first game $G_1$, our goal is to prove its unfairness.
Similar to Section 2.2, we can evaluate the gains of all parties, as shown in Table 6. The details are given in Appendix C, Appendix D and Appendix E. From Table 6, we can prove the following result.
Result 2. The tripartite classical game $G_2$ is unfair for any $\mu \ge 2$.
Proof. 
The numeric evaluations of $G_{Alice}$, $G_{Bob}$ and $G_{Rice}$ are shown in Figure 4 for $\mu \le 100$. They show that the tripartite classical game $G_2$ is unfair. This can be proved formally as follows. By assumption, the second classical game $G_2$ is zero-sum; hence it is sufficient to prove that there is no $\mu$ for which $G_{Alice}$, $G_{Bob}$ and $G_{Rice}$ are all equal to zero. The proof is completed by two cases.
C1.
For $2 \le \mu \le \frac{41+\sqrt{1345}}{14}$, from Table 6, we get that
$G_{Bob} = \frac{19}{49} \neq 0$
C2.
For $\mu > \frac{41+\sqrt{1345}}{14}$, from Table 6, we obtain that
$G_{Bob} = \frac{\mu-3}{\mu+5} \neq 0$
This completes the proof. □

3. Zero-Sum Quantum Games

In this section, by quantizing the classical game shown in Figure 1, we obtain a quantum game that also has four stages, as shown in Figure 5. The correspondence between the classical and quantum games is as follows: the classical game puts a ball into one of three ordinary black boxes, while the quantum game puts a particle into one of three quantum boxes. In the classical games, Bob and Rice can selectively let Alice open box $A$ to deter Alice from putting the ball into box $A$, where Bob and Rice cannot find it. In the quantum game, they prevent the same problem by setting the committed state before the game, i.e., Alice, Bob and Rice agree on which state Alice should prepare the photon in. Alice has three quantum boxes, $A$, $B$ and $C$, used to store a photon. The states of the photon in the boxes are denoted $|a\rangle$, $|b\rangle$ and $|c\rangle$. The quantum game is given by the following four stages S1–S4.
S1.
Alice randomly puts the single photon into one of the three quantum boxes, A , B and C . Alice sends the box C to Rice, and sends B to Bob.
S2.
Bob gives his own strategy, and he opens box B .
S3.
Rice chooses her own strategy and takes action, i.e., she opens C . If neither Rice nor Bob finds the photon, the game enters the fourth stage.
S4.
Bob (Rice) asks Alice to send him (her) box $A$ to verify whether the state of the photon prepared by Alice is the same as the committed state.
Moreover, for any strategy combination, we assume that the total payment of all participants is zero and the latter participant can observe the former’s action. In the following, we will explore this kind of games with two settings with different gains.
The quantum game tree is shown in Figure 6. In the stages of $G$, the winning rules of the game are given by
W1.
Rice wins if Rice finds the photon after opening C .
W2.
Bob wins if Bob finds the photon after opening B .
W3.
Bob and Rice win if neither Rice nor Bob finds the photon but Alice is not honest.
W4.
Alice wins if neither Rice nor Bob finds the photon and Alice is honest.
We consider the dynamic zero-sum quantum games with the same setting parameters given in Table 1 and Table 5, but each symbol has a slightly different meaning, i.e., in the quantum games, $R_{succ}(C)$ means that Rice finds the photon after opening box $C$; $B_{succ}(B)$ means that Bob finds the photon after opening box $B$; $R/B_{fail}(\cdot)$ means that neither Rice nor Bob finds the photon and Alice is honest; and $R/B_{succ}(A)$ means that neither Rice nor Bob finds the photon but Alice is not honest.

3.1. The Winning Rules of The Quantum Game

The winning rules of the quantum game are similar to W1–W4. For convenience, we define the following probabilities.
P1.
Denote $\alpha_1$ as the probability that Alice puts the photon into box $A$.
P2.
Denote $\alpha_2$ as the probability that Alice puts the photon into box $B$ or box $C$.
P3.
Denote $1-\gamma$ as the probability that Rice chooses to open box $C$ when she receives $C$.
P4.
Denote $1-\beta$ as the probability that Bob chooses to open box $B$ when he receives $B$.
Similar to the classical game shown in Figure 1, we have $\alpha_1 + 2\alpha_2 = 1$ and $\gamma, \beta \in [0,1]$.
In quantum scenarios, box $A$, $B$, or $C$ is realized by a quantum state $|a\rangle$, $|b\rangle$ or $|c\rangle$. The statement that one party finds the photon by opening one box ($A$, for example) means that the party finds the photon in the state $|a\rangle$ after a projection measurement in the basis $\{|a\rangle, |b\rangle, |c\rangle\}$. With these assumptions, we get an experimental quantum game as follows.
Alice's preparation. Alice prepares the single photon in the following superposition state
$|\psi\rangle = \sqrt{\alpha_1}\,|a\rangle + \sqrt{\alpha_2}\,|b\rangle + \sqrt{\alpha_2}\,|c\rangle$
where $|a\rangle$, $|b\rangle$ and $|c\rangle$ can be realized by using different paths, $\alpha_1 + 2\alpha_2 = 1$, and $\alpha_1 \in [0,1]$ is a parameter controlled by Alice.
Bob's operation. Bob splits the box $B$ into box $B$ and box $B'$ according to the following transformation
$|b\rangle \rightarrow \sqrt{1-\beta}\,|b\rangle + \sqrt{\beta}\,|b'\rangle$
where $\beta \in [0,1]$ is a parameter controlled by Bob.
Rice's operation. Rice splits the box $C$ into two parts $C$ and $C'$ according to the following transformation
$|c\rangle \rightarrow \sqrt{1-\gamma}\,|c\rangle + \sqrt{\gamma}\,|c'\rangle$
where $\gamma \in [0,1]$ is a parameter controlled by Rice.
Similar to the classical games, Alice may choose a large $\alpha_1$ to increase the probability of the photon appearing in box $A$, which reduces the probability of Bob and Rice finding the photon. However, this strategy may cause Alice to lose the game with high probability in the verification stage. A similar intuitive analysis holds for the others. Hence, it is important to find reasonable parameters for all players. We give the detailed analysis in the following.
Suppose that Alice prepares the photon in the following state
$|\psi\rangle = \sqrt{\alpha_1}\,|a\rangle + \sqrt{\alpha_2}\,|b\rangle + \sqrt{\alpha_2}\,|c\rangle$
using path encoding. When Bob (Rice) receives his (her) box $B$ ($C$) and splits it into two parts, the final state of the photon is given by
$|\psi_0\rangle = \sqrt{\alpha_1}\,|a\rangle + \sqrt{\alpha_2(1-\beta)}\,|b\rangle + \sqrt{\alpha_2\beta}\,|b'\rangle + \sqrt{\alpha_2(1-\gamma)}\,|c\rangle + \sqrt{\alpha_2\gamma}\,|c'\rangle$
From Equation (20), the probability that Rice finds the photon in $C$ (using a single-photon detector) is
$P_{Rice}^1 = |\langle c|\psi_0\rangle|^2 = \alpha_2(1-\gamma)$
Moreover, the probability that Bob finds the photon in $B$ is given by
$P_{Bob}^1 = |\langle b|\psi_0\rangle|^2 = \alpha_2(1-\beta)$
If neither Rice nor Bob finds the photon, the state in Equation (20) collapses into
$|\psi_1\rangle = \sqrt{\frac{\alpha_1}{\alpha_1+\alpha_2\beta+\alpha_2\gamma}}\,|a\rangle + \sqrt{\frac{\alpha_2\beta}{\alpha_1+\alpha_2\beta+\alpha_2\gamma}}\,|b'\rangle + \sqrt{\frac{\alpha_2\gamma}{\alpha_1+\alpha_2\beta+\alpha_2\gamma}}\,|c'\rangle$
If Alice did prepare the photon in the committed state $|\phi\rangle = \sqrt{\omega_1}\,|a\rangle + \sqrt{\omega_2}\,|b\rangle + \sqrt{\omega_2}\,|c\rangle$ initially, it is easy to prove that the state at this stage should be
$|\psi_C\rangle = \sqrt{\frac{\omega_1}{\omega_1+\omega_2\beta+\omega_2\gamma}}\,|a\rangle + \sqrt{\frac{\omega_2\beta}{\omega_1+\omega_2\beta+\omega_2\gamma}}\,|b'\rangle + \sqrt{\frac{\omega_2\gamma}{\omega_1+\omega_2\beta+\omega_2\gamma}}\,|c'\rangle$
where $\omega_1 + 2\omega_2 = 1$. By performing a projection measurement on $|\psi_1\rangle$, Rice or Bob obtains the ideal state $|\psi_C\rangle$ with probability
$|\langle\psi_1|\psi_C\rangle|^2 = \frac{(\sqrt{\alpha_1\omega_1} + x\sqrt{\alpha_2\omega_2})^2}{(\alpha_1+\alpha_2 x)(\omega_1+\omega_2 x)}$
where $x = \beta + \gamma$.
The probability that neither Rice nor Bob finds the photon when Alice did prepare the photon in the committed state $|\phi\rangle$ is given by
$P_{Alice} = (1 - P_{Rice}^1 - P_{Bob}^1)\,|\langle\psi_1|\psi_C\rangle|^2 = \frac{(\sqrt{\alpha_1\omega_1} + x\sqrt{\alpha_2\omega_2})^2}{\omega_1+\omega_2 x}$
Moreover, the probability that neither Rice nor Bob finds the photon but Rice or Bob detects a forged state prepared by Alice is denoted by $P_{Rice}^2$ or $P_{Bob}^2$, which is given by
$P_{Rice}^2 = (1 - P_{Rice}^1 - P_{Bob}^1)(1 - |\langle\psi_1|\psi_C\rangle|^2) = -\frac{(\sqrt{\alpha_1\omega_1} + x\sqrt{\alpha_2\omega_2})^2}{\omega_1+\omega_2 x} + \alpha_1 + \alpha_2 x$
with $P_{Rice}^2 = P_{Bob}^2$ from winning rule W3.
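The quantum probabilities above can be cross-checked numerically: the events behind $P_{Rice}^1$, $P_{Bob}^1$, $P_{Alice}$ and $P_{Rice}^2$ are exhaustive, so the four values must sum to one for any parameters. A minimal sketch of ours (names and sample values are illustrative):

```python
import math

def quantum_probs(a1, w1, beta, gamma):
    """Outcome probabilities of the quantum game for prepared (a) and committed (w) states."""
    a2, w2 = (1 - a1) / 2, (1 - w1) / 2
    x = beta + gamma
    # squared overlap of the collapsed state with the committed state
    overlap = (math.sqrt(a1 * w1) + x * math.sqrt(a2 * w2))**2 \
              / ((a1 + a2 * x) * (w1 + w2 * x))
    p_rice1 = a2 * (1 - gamma)            # Rice finds the photon in C
    p_bob1 = a2 * (1 - beta)              # Bob finds the photon in B
    p_alice = (1 - p_rice1 - p_bob1) * overlap          # verification succeeds
    p_rice2 = (1 - p_rice1 - p_bob1) * (1 - overlap)    # verification fails
    return p_rice1, p_bob1, p_alice, p_rice2

# the four outcomes W1-W4 are exhaustive
assert abs(sum(quantum_probs(0.5, 0.4, 0.3, 0.2)) - 1) < 1e-12
```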

3.2. The First Tripartite Quantum Game

In this subsection, we introduce the quantum implementation of the first game $G_1$ with the gain setting shown in Table 1. Here, each symbol in Table 1 has a slightly different meaning, i.e., in the quantum game, $R_{succ}(C)$ means that Rice finds the photon after opening box $C$; $B_{succ}(B)$ means that Bob finds the photon after opening box $B$; $R/B_{fail}(\cdot)$ means that neither Rice nor Bob finds the photon and Alice is honest; and $R/B_{succ}(A)$ means that neither Rice nor Bob finds the photon but Alice is not honest.

3.2.1. The Average Gain of Rice

Denote $G_{Rice}$ as Rice's average gain. We can easily get $G_{Rice}$ according to Table 1 as follows
$G_{Rice} = 2P_{Rice}^1 + \eta P_{Rice}^2 - P_{Bob}^1 - P_{Alice} = -\frac{(\eta+1)(\sqrt{\alpha_1\omega_1}+x\sqrt{\alpha_2\omega_2})^2}{\omega_1+\omega_2 x} + (\eta-2)\alpha_2\gamma + (\eta+1)\alpha_2\beta + \eta\alpha_1 + \alpha_2$
where $x = \beta + \gamma$.
The partial derivative of $G_{Rice}$ with respect to the variable $\gamma$ is given by
$\frac{\partial G_{Rice}}{\partial \gamma} = \frac{1}{(\omega_1+\omega_2 x)^2}\left(-3\omega_2^2\alpha_2 x^2 - 6\omega_1\omega_2\alpha_2 x + (\eta-2)\omega_1^2\alpha_2 + (\eta+1)\omega_1\omega_2\alpha_1 - 2\omega_1(\eta+1)\sqrt{\alpha_1\alpha_2\omega_1\omega_2}\right)$
Each participant has perfect knowledge before the game reaches this stage, i.e., each participant is exactly aware of what the previous participants have done. Hence, $\omega_1 + \omega_2 x \neq 0$. Let $\frac{\partial G_{Rice}}{\partial \gamma} = 0$. We get
$x_1^* = -\frac{\sqrt{\omega_1(\eta+1)}\,\big|\sqrt{\alpha_1\omega_2}-\sqrt{\alpha_2\omega_1}\big|}{\sqrt{3\alpha_2}\,\omega_2} - \frac{\omega_1}{\omega_2}, \qquad x_2^* = \frac{\sqrt{\omega_1(\eta+1)}\,\big|\sqrt{\alpha_1\omega_2}-\sqrt{\alpha_2\omega_1}\big|}{\sqrt{3\alpha_2}\,\omega_2} - \frac{\omega_1}{\omega_2}$
If Alice chooses $\alpha_1 < \omega_1$, the probability that Alice finds the photon will decrease, while the probability that Rice or Bob detects the difference between the prepared and the committed states will increase in the verification stage. This means that Alice gains no benefit from choosing $\alpha_1 < \omega_1$. It follows that $\alpha_1 \ge \omega_1$ and $\alpha_2 \le \omega_2$. Hence, from Equation (30) we get
$x_1^* = -\frac{\sqrt{\omega_1(\eta+1)}\,(\sqrt{\alpha_1\omega_2}-\sqrt{\alpha_2\omega_1})}{\sqrt{3\alpha_2}\,\omega_2} - \frac{\omega_1}{\omega_2}, \qquad x_2^* = \frac{\sqrt{\omega_1(\eta+1)}\,(\sqrt{\alpha_1\omega_2}-\sqrt{\alpha_2\omega_1})}{\sqrt{3\alpha_2}\,\omega_2} - \frac{\omega_1}{\omega_2}$
From Equation (31), we get $x_1^* \le 0$ and $x_2^* \ge x_1^*$. From Equations (30) and (31), $G_{Rice}$ is a decreasing function of $x$ when $x \le x_1^*$ and an increasing function when $x_1^* < x \le x_2^*$. Moreover, when $x > x_2^*$, $G_{Rice}$ is a decreasing function of $x$. Since $0 \le x \le 2$, $G_{Rice}$ attains its maximum at $x = 0$ for $x_2^* \le 0$, at $x = 2$ for $x_2^* \ge 2$, or at $x = x_2^*$ for $0 < x_2^* < 2$.
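The stationary point $x_2^*$ can be verified against Rice's gain numerically: a finite difference of $G_{Rice}$ in $\gamma$ should vanish there. A sketch of ours (the parameter values are illustrative and chosen with $\alpha_1 > \omega_1$, so the simplified roots in Equation (31) apply):

```python
import math

def g_rice_q(a1, w1, beta, gamma, eta):
    """Rice's average gain in the first quantum game."""
    a2, w2 = (1 - a1) / 2, (1 - w1) / 2
    x = beta + gamma
    F = (math.sqrt(a1 * w1) + x * math.sqrt(a2 * w2))**2 / (w1 + w2 * x)
    return (-(eta + 1) * F + (eta - 2) * a2 * gamma
            + (eta + 1) * a2 * beta + eta * a1 + a2)

a1, w1, eta = 0.6, 0.2, 2.0
a2, w2 = (1 - a1) / 2, (1 - w1) / 2
# x_2^* from Equation (31)
x2 = (math.sqrt(w1 * (eta + 1)) * (math.sqrt(a1 * w2) - math.sqrt(a2 * w1))
      / (math.sqrt(3 * a2) * w2) - w1 / w2)
assert 0 < x2 < 2   # interior stationary point for these parameters

# the central finite difference of G_Rice in gamma vanishes at x = x2 (beta = 0)
h = 1e-6
fd = (g_rice_q(a1, w1, 0.0, x2 + h, eta) - g_rice_q(a1, w1, 0.0, x2 - h, eta)) / (2 * h)
assert abs(fd) < 1e-6
```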

3.2.2. The Average Gain of Bob

Denote $G_{Bob}$ as the average gain of Bob. From Table 1, we get $G_{Bob}$ as
$G_{Bob} = 2P_{Bob}^1 + P_{Bob}^2 - P_{Rice}^1 - P_{Alice} = -\frac{2(\sqrt{\alpha_1\omega_1}+x\sqrt{\alpha_2\omega_2})^2}{\omega_1+\omega_2 x} + \alpha_1 + \alpha_2 - \alpha_2\beta + 2\alpha_2\gamma$
where $x = \beta + \gamma$.
The partial derivative of $G_{Bob}$ with respect to $\beta$ is given by
$\frac{\partial G_{Bob}}{\partial \beta} = \frac{1}{(\omega_1+\omega_2 x)^2}\left(-3\omega_2^2\alpha_2 x^2 - 6\omega_1\omega_2\alpha_2 x - \omega_1^2\alpha_2 - 4\omega_1\sqrt{\alpha_1\alpha_2\omega_1\omega_2} + 2\omega_1\omega_2\alpha_1\right)$
Similar to Equations (30) and (31), from $\frac{\partial G_{Bob}}{\partial \beta} = 0$ we can get
$x_3^* = -\frac{\sqrt{2\omega_1}\,(\sqrt{\alpha_1\omega_2}-\sqrt{\alpha_2\omega_1})}{\omega_2\sqrt{3\alpha_2}} - \frac{\omega_1}{\omega_2}, \qquad x_4^* = \frac{\sqrt{2\omega_1}\,(\sqrt{\alpha_1\omega_2}-\sqrt{\alpha_2\omega_1})}{\omega_2\sqrt{3\alpha_2}} - \frac{\omega_1}{\omega_2}$
From Equation (34), we obtain $x_3^* \le 0$ and $x_4^* \ge x_3^*$. From Equations (33) and (34), $G_{Bob}$ decreases for $x \le x_3^*$ and increases for $x_3^* < x \le x_4^*$. Moreover, $G_{Bob}$ decreases for $x > x_4^*$. Since $0 \le x \le 2$, $G_{Bob}$ attains its maximum at $x = 0$ for $x_4^* \le 0$, at $x = 2$ for $x_4^* \ge 2$, or at $x = x_4^*$ for $0 < x_4^* < 2$. From $\eta \ge 1$, we get $x_2^* \ge x_4^*$.
Similarly, we can carry out the detailed analysis for Rice, which is shown in Appendix F. The values of $\beta^*$ and $\gamma^*$, which depend on the $x_i^*$, such that $G_{Bob}$ achieves its maximum in the first quantum game are given in Table 7.

3.2.3. The Average Gain of Alice

Denote $G_{Alice}$ as the average gain of Alice. From Table 1, it is easy to evaluate $G_{Alice}$ as follows
$G_{Alice} = 2P_{Alice} - (\eta+1)P_{Rice}^2 - P_{Bob}^1 - P_{Rice}^1 = \frac{(\eta+3)(\sqrt{\alpha_1\omega_1}+x\sqrt{\alpha_2\omega_2})^2}{\omega_1+\omega_2 x} - \eta\alpha_2 x - 1 - \eta\alpha_1$
where $x = \beta + \gamma$.
From Table 7, we discuss Alice’s gain in five cases. Here, we only discuss one of them. Other cases are shown in Appendix G.
If $x_2^* \le 0$, i.e., $\sqrt{\alpha_1\omega_1(\eta+1)} - \sqrt{3\alpha_2\omega_2}\sqrt{\frac{\omega_1}{\omega_2}}\left(\sqrt{\frac{\eta+1}{3}}+1\right) \le 0$, we get that
$\omega_1 \le \alpha_1 \le \frac{\frac{\omega_1}{\omega_2}\left(\sqrt{\frac{\eta+1}{3}}+1\right)^2}{\frac{\omega_1}{\omega_2}\left(\sqrt{\frac{\eta+1}{3}}+1\right)^2 + \frac{2}{3}(\eta+1)}$
Denote
$D = \frac{\frac{\omega_1}{\omega_2}\left(\sqrt{\frac{\eta+1}{3}}+1\right)^2}{\frac{\omega_1}{\omega_2}\left(\sqrt{\frac{\eta+1}{3}}+1\right)^2 + \frac{2}{3}(\eta+1)}$
It follows that $\omega_1 \le \alpha_1 \le D$. In this case, we get $\beta = \gamma = 0$. From Equation (36), we get that the average gain of Alice is given by
$G_{Alice} = 3\alpha_1 - 1$
Note that $\frac{dG_{Alice}}{d\alpha_1} = 3$ from Equation (37), so $G_{Alice}$ increases with $\alpha_1$ when $\omega_1 \le \alpha_1 \le D$. Hence, $G_{Alice}$ attains its maximum at $\alpha_1 = D$; this maximum is denoted by $G_{Alice}^1$ and given by
$G_{Alice}^1 = 3D - 1$
where the corresponding point is denoted by $p_1$.
By using the same method for other cases (see Appendix G), we get Table 8.

3.2.4. Quantum Fair Game

In this subsection, we prove that the proposed quantum game is asymptotically fair. From Equations (29) and (32) and Table 8, we get that
$G_{Alice} = \max\{G_{Alice}^1, G_{Alice}^2, G_{Alice}^3, G_{Alice}^4, G_{Alice}^5\}$
$G_{Bob} = -\frac{2(\sqrt{\alpha_1\omega_1}+x\sqrt{\alpha_2\omega_2})^2}{\omega_1+\omega_2 x} + 2\alpha_2\gamma - \alpha_2\beta - \alpha_2 + 1$
$G_{Rice} = -\frac{(\eta+1)(\sqrt{\alpha_1\omega_1}+x\sqrt{\alpha_2\omega_2})^2}{\omega_1+\omega_2 x} + (\eta-2)\alpha_2\gamma + (\eta+1)\alpha_2\beta + \eta\alpha_1 + \alpha_2$
If $G_{Alice}^1 = \max\{G_{Alice}^1, G_{Alice}^2, G_{Alice}^3, G_{Alice}^4, G_{Alice}^5\}$, from Table 8 we obtain that $\alpha_1 = D$ (the point $p_1$), and $\beta = \gamma = 0$ from Table 7. The average gains of the three players are evaluated in terms of $\eta$ and $\omega_1$, as shown in Figure 7. Here, for each $\eta \ge 1$, the gains of Alice, Bob and Rice can be made to tend to zero by changing $\omega_1$.
From Figure 8a,b, we get that the degree of deviation from the fair game is very small even though the game is not completely fair. The game converges to the fair game when $\eta \rightarrow \infty$. To sum up, we get the following result.
Result 3. The first quantum game $G_1$ is asymptotically fair.
Proof. 
Note that the first quantum game $G_1$ is zero-sum, i.e., the summation of the average gains of all players is zero. From Equation (36), Alice will make $1 - |\langle\psi_1|\psi_C\rangle|^2 = 0$ when $\eta \rightarrow \infty$, i.e., $\alpha_1 = \omega_1$, in order to maximize her own gain, while Bob and Rice will choose $\beta = \gamma = 0$ accordingly. Hence, we get the three gains as follows
$G_{Alice} = 2\omega_1 - 2\omega_2, \qquad G_{Bob} = \omega_2 - \omega_1, \qquad G_{Rice} = \omega_2 - \omega_1$
Combining these with the assumption $\omega_1 + 2\omega_2 = 1$, we get $\omega_2 = \omega_1 = \frac{1}{3}$ from $G_{Alice} = G_{Bob} = G_{Rice} = 0$. Thus, when $\eta \rightarrow \infty$, the quantum game $G_1$ converges to a fair game, i.e., it is asymptotically fair. This completes the proof. □
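The limiting strategies in the proof can be checked directly from the three gain expressions. A sketch of ours (names are illustrative): at $\alpha_1 = \omega_1 = \frac{1}{3}$ and $\beta = \gamma = 0$, all three average gains vanish for every $\eta$.

```python
import math

def quantum_gains_g1(a1, w1, beta, gamma, eta):
    """Average gains of Alice, Bob and Rice in the first quantum game."""
    a2, w2 = (1 - a1) / 2, (1 - w1) / 2
    x = beta + gamma
    F = (math.sqrt(a1 * w1) + x * math.sqrt(a2 * w2))**2 / (w1 + w2 * x)
    g_rice = (-(eta + 1) * F + (eta - 2) * a2 * gamma
              + (eta + 1) * a2 * beta + eta * a1 + a2)
    g_bob = -2 * F + a1 + a2 - a2 * beta + 2 * a2 * gamma
    g_alice = (eta + 3) * F - eta * a2 * x - 1 - eta * a1
    return g_alice, g_bob, g_rice

# alpha_1 = omega_1 = 1/3 and beta = gamma = 0: every average gain vanishes
for eta in (1.0, 5.0, 50.0):
    assert all(abs(g) < 1e-10 for g in quantum_gains_g1(1/3, 1/3, 0.0, 0.0, eta))
```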

3.3. The Second Tripartite Quantum Game

In this section, we give the quantum realization of the second game $G_2$. According to Table 5, and similar to the first quantum game, the gain of Rice is given by
$G_{Rice} = -\frac{2(\sqrt{\alpha_1\omega_1}+x\sqrt{\alpha_2\omega_2})^2}{\omega_1+\omega_2 x} + (\mu-1)\alpha_2(1-\gamma) + \alpha_1 + 2\alpha_2\beta$
where $x = \beta + \gamma$. The detailed maximization of $G_{Rice}$ is shown in Appendix H.
Similarly, the gain of Bob is given by
$G_{Bob} = -\frac{2(\sqrt{\alpha_1\omega_1}+x\sqrt{\alpha_2\omega_2})^2}{\omega_1+\omega_2 x} + \alpha_1 + \alpha_2 - \alpha_2\beta + 2\alpha_2\gamma$
Its maximization is shown in Appendix I.
The gain of Alice is given by
$G_{Alice} = \frac{4(\sqrt{\alpha_1\omega_1}+x\sqrt{\alpha_2\omega_2})^2}{\omega_1+\omega_2 x} - \mu\alpha_2 + (\mu-3)\alpha_2\gamma - 2\alpha_1 - \alpha_2\beta$
Its maximization is shown in Appendix J.
The numeric evaluations of the average gains are shown in Figure 9 in terms of $\mu$ and $\omega_1$. For each $\mu$, we can make the gains of Alice, Bob and Rice as close to zero as possible by adjusting $\omega_1$. The deviation degree is shown in Figure 10a. The relationship between the appropriate $\omega_1$ and $\mu$ is shown in Figure 10b. From Figure 10a,b, the degree of deviation from the fair game is very small even though the game is not completely fair. The game converges to the fair game when $\mu \rightarrow \infty$. To sum up, we get the following result.
Result 4. The second quantum game $G_2$ is asymptotically fair.
Proof. 
Note that the second quantum game $G_2$ is zero-sum, i.e., the summation of the average gains of all players is zero. From Equation (46), Alice will make $\alpha_2 = 0$ when $\mu \rightarrow \infty$, i.e., $\alpha_1 = 1$, in order to maximize her own gain, while Bob and Rice will choose $\beta = \gamma = 1$ accordingly. Hence, we get the three gains as follows
$G_{Alice} = 4\omega_1 - 2, \qquad G_{Bob} = 1 - 2\omega_1, \qquad G_{Rice} = 1 - 2\omega_1$
We get $\omega_1 = \frac{1}{2}$ and $\omega_2 = \frac{1}{4}$ from $G_{Alice} = G_{Bob} = G_{Rice} = 0$. Thus, when $\mu \rightarrow \infty$, the quantum game $G_2$ converges to a fair game, i.e., it is asymptotically fair. This completes the proof. □
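Similarly for $G_2$: evaluating the gains of Equations (45)–(47) at $\alpha_1 = 1$, $\beta = \gamma = 1$ and $\omega_1 = \frac{1}{2}$ gives zero for every $\mu$. A sketch of ours (names are illustrative):

```python
import math

def quantum_gains_g2(a1, w1, beta, gamma, mu):
    """Average gains of Alice, Bob and Rice in the second quantum game."""
    a2, w2 = (1 - a1) / 2, (1 - w1) / 2
    x = beta + gamma
    F = (math.sqrt(a1 * w1) + x * math.sqrt(a2 * w2))**2 / (w1 + w2 * x)
    g_rice = -2 * F + (mu - 1) * a2 * (1 - gamma) + a1 + 2 * a2 * beta
    g_bob = -2 * F + a1 + a2 - a2 * beta + 2 * a2 * gamma
    g_alice = 4 * F - mu * a2 + (mu - 3) * a2 * gamma - 2 * a1 - a2 * beta
    return g_alice, g_bob, g_rice

# alpha_1 = 1, beta = gamma = 1, omega_1 = 1/2: every average gain vanishes
for mu in (2.0, 10.0, 100.0):
    ga, gb, gr = quantum_gains_g2(1.0, 0.5, 1.0, 1.0, mu)
    assert abs(ga) < 1e-12 and abs(gb) < 1e-12 and abs(gr) < 1e-12
```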

4. Quantum Game with Noises

In this section, we consider the quantum games with noise. One source is the experimental measurement error; the other is the preparation noise of the resource state.

4.1. Experimental Measurement Error

In the case of measurement error, from Equation (20), the probability that Rice opens box $C$ and finds the photon becomes
$P_{Rice}^1 = |\langle c|\psi_0\rangle|^2 + \varepsilon = \alpha_2(1-\gamma) + \varepsilon$
where $\varepsilon$ is the measurement error, which may be very small.
From Equation (20), the probability that Bob opens box $B$ and finds the photon is given by
$P_{Bob}^1 = |\langle b|\psi_0\rangle|^2 + \varepsilon = \alpha_2(1-\beta) + \varepsilon$
Rice or Bob makes a projection measurement onto $|\psi_C\rangle$ for the verification of the final state, with success probability
$|\langle\psi_1|\psi_C\rangle|^2 = \frac{(\sqrt{\alpha_1\omega_1}+x\sqrt{\alpha_2\omega_2})^2}{(\alpha_1+\alpha_2 x)(\omega_1+\omega_2 x)} + \varepsilon$
where $x = \beta + \gamma$.
$P_{Alice}$, the probability that neither Rice nor Bob finds the photon and Alice did prepare the photon in the committed state, is given by
$P_{Alice} = (1 - P_{Rice}^1 - P_{Bob}^1)\,|\langle\psi_1|\psi_C\rangle|^2 = \frac{(\sqrt{\alpha_1\omega_1}+x\sqrt{\alpha_2\omega_2})^2}{\omega_1+\omega_2 x} - 2\varepsilon\lambda_4 + \varepsilon\lambda_3 - 2\varepsilon^2$
where the $\lambda_i$ are given by
$\lambda_3 = \alpha_1 + \alpha_2 x \le 1, \qquad \lambda_4 = \frac{(\sqrt{\alpha_1\omega_1}+x\sqrt{\alpha_2\omega_2})^2}{(\omega_1+\omega_2 x)(\alpha_1+\alpha_2 x)} \le 1$
Since $\lambda_3 \le 1$ and $\lambda_4 \le 1$, and $\varepsilon$ is very small, denote $O(\varepsilon) = -2\varepsilon\lambda_4 + \varepsilon\lambda_3 - 2\varepsilon^2$, which can be treated as the measurement error. Thus Equation (49) can be rewritten as
$P_{Alice} = \frac{(\sqrt{\alpha_1\omega_1}+x\sqrt{\alpha_2\omega_2})^2}{\omega_1+\omega_2 x} + O(\varepsilon)$
The probability that neither Rice nor Bob finds the photon but Rice or Bob detects the forged preparation of Alice is denoted by $P_{Rice}^2$ or $P_{Bob}^2$, which is given by
$P_{Rice}^2 = (1 - P_{Rice}^1 - P_{Bob}^1)(1 - |\langle\psi_1|\psi_C\rangle|^2) = -\frac{(\sqrt{\alpha_1\omega_1}+x\sqrt{\alpha_2\omega_2})^2}{\omega_1+\omega_2 x} + \alpha_1 + \alpha_2 x - 2\varepsilon - O(\varepsilon) = -\frac{(\sqrt{\alpha_1\omega_1}+x\sqrt{\alpha_2\omega_2})^2}{\omega_1+\omega_2 x} + \alpha_1 + \alpha_2 x + O(\varepsilon)$
where $P_{Rice}^2 = P_{Bob}^2$ from winning rule W3.
From Equations (21), (22), (25)–(27), (45)–(47), (49) and (50), we get that the present quantum games are also asymptotically fair if the measurement error is small enough.
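Note that the $\varepsilon$ terms cancel in pairs, so the four perturbed probabilities above still sum exactly to one. A quick check of ours (names and sample values are illustrative):

```python
import math

def noisy_probs(a1, w1, beta, gamma, eps):
    """Outcome probabilities with measurement error eps added to each detection."""
    a2, w2 = (1 - a1) / 2, (1 - w1) / 2
    x = beta + gamma
    F = (math.sqrt(a1 * w1) + x * math.sqrt(a2 * w2))**2 / (w1 + w2 * x)
    overlap = F / (a1 + a2 * x) + eps     # perturbed verification probability
    p_rice1 = a2 * (1 - gamma) + eps
    p_bob1 = a2 * (1 - beta) + eps
    rest = 1 - p_rice1 - p_bob1
    return p_rice1, p_bob1, rest * overlap, rest * (1 - overlap)

# the epsilon shifts cancel: the four outcomes remain exhaustive
assert abs(sum(noisy_probs(0.5, 0.4, 0.3, 0.2, 1e-3)) - 1) < 1e-12
```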

4.2. White Noise

In this subsection, we consider that Alice prepares a noisy photon in the state
ρ 0 = v | ψ ψ | + 1 v 3 I
where I = | a a | + | b b | + | c c | denotes the identity operator, | ψ is given in Equation (16), and v [ 0 , 1 ] . After Bob’s and Rice’s splitting operation, the state of the photon becomes
ρ 1 = v | ψ 0 ψ 0 | + 1 v 3 ( | a a | + ( 1 β | b + β | b ) ( 1 β b | + β b | ) + ( 1 γ | c + γ | c ) ( 1 γ c | + γ c | ) )
where | ψ 0 is given by
| ψ 0 = α 1 | a + α 2 ( 1 β ) | b + α 2 β | b + α 2 ( 1 γ ) | c + α 2 γ | c
Denote P R i c e 1 as the probability that Rice finds the photon after opening box C . From Equation (53) it is given by
P R i c e 1 = v α 2 ( 1 γ ) + 1 v 3 ( 1 γ )
Denote P B o b 1 as the probability that Bob finds the photon after opening box B . From Equation (53) it is given by
P B o b 1 = v α 2 ( 1 β ) + 1 v 3 ( 1 β )
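The two detection probabilities above share one form: the noiseless value is mixed with the maximally mixed contribution ( 1 − v ) / 3 . A small sketch (function name and values are ours, purely illustrative) shows the interpolation and that v = 1 recovers the noiseless case:

```python
def p_find(v, alpha2, angle):
    # Probability that a player finds the photon under white noise of
    # visibility v; 'angle' is beta for Bob or gamma for Rice.
    return v * alpha2 * (1 - angle) + (1 - v) / 3 * (1 - angle)
```

At v = 0 the photon is maximally mixed over the three boxes, so the detection probability collapses to (1 − angle)/3.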
For the case that neither Bob nor Rice detects the photon, the density operator for the photon is given by
ρ 2 = v 1 P B o b 1 P R i c e 1 α 1 α 1 α 2 ( 1 β ) α 1 α 2 β α 1 α 2 ( 1 γ ) α 1 α 2 γ α 1 α 2 ( 1 β ) 0 α 2 β ( 1 β ) α 2 ( 1 β ) ( 1 γ ) α 2 ( 1 β ) γ α 1 α 2 β α 2 β ( 1 β ) α 2 β α 2 β ( 1 γ ) α 2 β γ α 1 α 2 ( 1 γ ) α 2 ( 1 β ) ( 1 γ ) α 2 β ( 1 γ ) 0 α 2 γ ( 1 γ ) α 1 α 2 γ α 2 ( 1 β ) γ α 2 β γ α 2 γ ( 1 γ ) α 2 γ + 1 v 3 ( 1 P B o b 1 P R i c e 1 ) 1 0 0 0 0 0 0 β ( 1 β ) 0 0 0 β ( 1 β ) β 0 0 0 0 0 0 γ ( 1 γ ) 0 0 0 γ ( 1 γ ) 1 γ
Now, Rice or Bob performs the projective measurement { | ψ C ψ C | , I − | ψ C ψ C | } on the photon for verifying the committed state of Alice, with success probability
ψ C | ρ 2 | ψ C = v α 1 ω 1 + x α 2 ω 2 2 ( ω 1 + ω 2 x ) ( 1 P B o b 1 P R i c e 1 ) + 1 v 3 ω 1 + ω 2 β 2 + ω 2 γ 2 ( ω 1 + ω 2 x ) ( 1 P B o b 1 P R i c e 1 )
Denote P A l i c e as the probability that neither Rice nor Bob finds the photon, and Alice did prepare the photon in the committed state. It is easy to obtain that
P A l i c e = ( 1 P R i c e 1 P B o b 1 ) ψ C | ρ 2 | ψ C = v ( α 1 ω 1 + x α 2 ω 2 ) 2 ω 1 + ω 2 x + ( 1 v ) ( ω 1 + ω 2 β 2 + ω 2 γ 2 ) 3 ( ω 1 + ω 2 x )
Denote P R i c e 2 or P B o b 2 as the probability that neither Rice nor Bob finds the photon but Rice or Bob detects the forged preparation. We obtain that
P R i c e 2 = ( 1 P R i c e 1 P B o b 1 ) ( 1 ψ C | ρ 2 | ψ C ) = 1 P R i c e 1 P B o b 1 P A l i c e
where P R i c e 2 = P B o b 2 from the winning rule W4.
Take the first quantum game as an example. Rice's average gain G R i c e is given by
G R i c e = 2 P R i c e 1 + η P R i c e 2 P B o b 1 P A l i c e = ( v α 2 + 1 v 3 ) ( 2 η ) ( 1 γ ) ( v α 2 + 1 v 3 ) ( η + 1 ) ( 1 β ) + η ( η + 1 ) ( v ( α 1 ω 1 + x α 2 ω 2 ) 2 ω 1 + ω 2 x + 1 v 3 ω 1 + ω 2 β 2 + ω 2 γ 2 ω 1 + ω 2 x )
The partial derivative of G R i c e with respect to γ is
G R i c e γ = v ( ω 1 + ω 2 x ) 2 ( 3 ω 2 2 α 2 x 2 6 ω 1 ω 2 α 2 x + ( η 2 ) ω 1 2 α 2 + ( η + 1 ) ω 1 ω 2 α 1 2 ω 1 ( η + 1 ) α 1 α 2 ω 1 ω 2 ) + 1 v 3 ϵ
where ϵ = ( η + 1 ) ( 2 ω 1 ω 2 γ + 2 ω 2 2 β γ + ω 2 2 γ 2 ω 1 ω 2 ω 2 2 β 2 ) ( ω 1 + ω 2 x ) 2 + η 2 .
Assume that v is very close to one, i.e., ( 1 − v ) / 3 ≈ 0 . It follows that ϵ is bounded. Thus, Equation (62) can be rewritten as
G R i c e γ = v ( ω 1 + ω 2 x ) 2 ( 3 ω 2 2 α 2 x 2 6 ω 1 ω 2 α 2 x + ( η 2 ) ω 1 2 α 2 + ( η + 1 ) ω 1 ω 2 α 1 2 ω 1 ( η + 1 ) α 1 α 2 ω 1 ω 2 ) + O ( ϵ )
Similarly, we get
G B o b = ( v α 2 + 1 v 3 ) ( 1 β ) 2 ( v α 2 + 1 v 3 ) ( 1 γ ) + 1 2 v ( α 1 ω 1 + x α 2 ω 2 ) 2 ω 1 + ω 2 x 2 ( 1 v ) ( ω 1 + ω 2 β 2 + ω 2 γ 2 ) 3 ( ω 1 + ω 2 x )
The partial derivative of G B o b with respect to β is given by
G B o b β = v ( ω 1 + ω 2 x ) 2 ( 3 ω 2 2 α 2 x 2 6 ω 1 ω 2 α 2 x ω 1 2 α 2 4 ω 1 α 1 α 2 ω 1 ω 2 + 2 ω 1 ω 2 α 1 ) + 1 v 3 ϵ = v ( ω 1 + ω 2 x ) 2 ( 3 ω 2 2 α 2 x 2 6 ω 1 ω 2 α 2 x ω 1 2 α 2 4 ω 1 α 1 α 2 ω 1 ω 2 + 2 ω 1 ω 2 α 1 ) + O ( ϵ )
where ϵ = 4 ω 1 ω 2 γ + 4 ω 2 2 β γ + 2 ω 2 2 γ 2 2 ω 1 ω 2 2 ω 2 2 β 2 ( ω 1 + ω 2 x ) 2 1 .
Alice's average gain G A l i c e is given by
G A l i c e = v ( η + 3 ) ( α 1 ω 1 + x α 2 ω 2 ) 2 ω 1 + ω 2 x + η v α 2 ( 2 x ) η 1 + 1 v 3 ( ( η + 3 ) ( ω 1 + ω 2 β 2 + ω 2 γ 2 ) ω 1 + ω 2 x + η ( 2 x ) )
                = v ( η + 3 ) ( α 1 ω 1 + x α 2 ω 2 ) 2 ω 1 + ω 2 x + v η η                 v η α 2 x v η α 1 1 + O ( ε )
From Equations (30), (33), (36), (63), (65) and (66), we get that the first quantum game is asymptotically fair if the white noise is small enough, i.e., v is close to 1.

5. Conclusions

It has been shown that the two present quantum games are asymptotically fair. Interestingly, these games can easily be changed into biased versions, as Figure 5 and Figure 8 show, by choosing different η and ω 1 . Schemes of this kind may be applicable in gambling theory. Similar to the bipartite scheme [27], a proof-of-principle optical demonstration may follow for each scheme.
In this paper, we present one tripartite zero-sum game under two different settings. This game is unfair if all parties use classical resources. Interestingly, this can be resolved by using only a pure state in similar quantum games. Compared with the classical games, the present quantum games are asymptotically fair. Moreover, they are robust against measurement errors and preparation noise. Protocols of this kind exhibit interesting features of pure states in resolving specific tasks. The present examples may be extended to multipartite games in theory. However, such extensions should be nontrivial because of the high complexity arising from the many free parameters.

Author Contributions

Methodology, Writing—original draft, Writing—review and editing, H.-M.C. and M.-X.L. Both authors have read and agreed to the published version of the manuscript.

Funding

This work is supported by the National Natural Science Foundation of China (No.61772437) and Sichuan Youth Science and Technique Foundation (No.2017JQ0048).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data from the numeric evaluations are available from the authors upon reasonable request.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. The Average Gain of Bob in the First Classical Game

In order to calculate Bob's average gain, we need to consider whether ∂ G R i c e / ∂ P C is greater than zero or not. The case of ∂ G R i c e / ∂ P C ≤ 0 has been given in Section 2.2.2. The case of ∂ G R i c e / ∂ P C > 0 is as follows.
For G R i c e P C > 0 , we have 0 P B < 3 P 2 P 1 P 2 ( η + 1 ) P 1 ( η + 1 ) ( 1 P 2 ) . In this case, P C = 1 . From Equation (8) we get
G B o b = P B ( − 3 P 2 2 + 3 P 2 − 2 P 1 + 2 P 1 P 2 ) + 2 P 1 − 2 P 1 P 2 − 1
The partial derivative of G B o b with respect to P B is given by
∂ G B o b / ∂ P B = ( 1 − P 2 ) ( 3 − 7 P 1 ) / 2
Two cases will be considered as follows.
(i)
If 0 < 3 P 2 P 1 P 2 ( η + 1 ) P 1 ( η + 1 ) ( 1 P 2 ) 1 , we get 3 2 η + 5 P 1 < 3 η + 1 . In this case, there are two subcases.
C1 
For 3 2 η + 5 P 1 3 7 , we get G B o b P B 0 , and G B o b increases with P B . So, Bob should make P B = 3 P 2 P 1 P 2 ( η + 1 ) P 1 ( η + 1 ) ( 1 P 2 ) in order to maximize his gains. From Equation (A1), we obtain
G B o b 2 = 3 P 2 ( 3 P 2 2 P 1 ) ( η + 1 ) P 1 3 P 2 2 + 2 P 1 1
C2 
For 3 7 < P 1 < 3 η + 1 , we get G B o b P B < 0 . Hence, G B o b decreases with P B . Thus, Bob will let P B = 0 in order to maximize his gains. From Equation (A1), we get
G B o b 3 = 2 P 1 − 2 P 1 P 2 − 1
(ii)
If 3 P 2 P 1 P 2 ( η + 1 ) P 1 ( η + 1 ) ( 1 P 2 ) > 1 , we get P 1 < 3 2 η + 5 . Since η 1 , it follows that 3 2 η + 5 3 7 . Note that G B o b P B 0 . It implies that G B o b increases with P B . Bob will set P B = 1 to maximize his gains. From Equation (A1), we obtain that
G B o b 4 = − 3 P 2 2 + 3 P 2 − 1
Bob will undoubtedly choose the strategy that yields the higher gain. There are two options for Bob.
Case 1. 3 2 η + 5 P 1 3 7 .
In this case, we get from Equations (8) and (A3) that
G B o b 2 G B o b 1 = 3 P 2 ( 3 P 2 2 P 1 P 1 P 2 ( η + 1 ) ) P 1 ( η + 1 ) = 3 P 2 ( P 1 2 ( η + 1 ) P 1 ( η + 8 ) + 3 ) 2 P 1 ( η + 1 )
Suppose that G B o b 2 G B o b 1 = 0 . We get P 1 2 ( η + 1 ) P 1 ( η + 8 ) + 3 = 0 . The solution of this equation is given by X 1 = η + 8 η 2 + 4 η + 52 2 η + 2 and X 2 = η + 8 + η 2 + 4 η + 52 2 η + 2 . It means that G B o b 2 G B o b 1 > 0 for P 1 [ 0 , X 1 ) and P 1 ( X 2 , 1 ) , and G B o b 2 G B o b 1 0 for P 1 [ X 1 , X 2 ] . Note that
3 7 X 2 = 3 7 η + 8 + η 2 + 4 η + 52 2 η + 2 = η 50 7 η 2 + 4 η + 52 14 ( η + 1 ) < 0
Moreover, it is easy to prove that
3 7 X 1 = 3 7 η + 8 η 2 + 4 η + 52 2 η + 2 = η 50 + 7 η 2 + 4 η + 52 14 ( η + 1 ) > 0
Similarly, we can get
3 2 η + 5 X 1 = 3 2 η + 5 η + 8 η 2 + 4 η + 52 2 η + 2 = 2 η 2 15 η 34 + ( 2 η + 5 ) η 2 + 4 η + 52 2 ( η + 1 ) ( 2 η + 5 )
From Equation (A9), we get 3 2 η + 5 X 1 0 for 1 η 2 , and 3 2 η + 5 X 1 < 0 for η > 2 .
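The roots X 1 and X 2 can be checked against the quadratic P 1 2 ( η + 1 ) − P 1 ( η + 8 ) + 3 = 0 directly. The sketch below (function name and the value of η are ours, purely illustrative) also confirms the ordering X 1 < 3 / 7 < X 2 used above:

```python
import math

def quad_roots(eta):
    # Roots X1 <= X2 of (eta+1)*P1**2 - (eta+8)*P1 + 3 = 0;
    # the discriminant (eta+8)^2 - 12*(eta+1) simplifies to eta^2+4*eta+52.
    d = math.sqrt(eta**2 + 4 * eta + 52)
    return (eta + 8 - d) / (2 * eta + 2), (eta + 8 + d) / (2 * eta + 2)

x1, x2 = quad_roots(3.0)
```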
To sum up, we get the following result:
  • For 1 η 2 , i.e., X 1 3 2 η + 5 < 3 7 < X 2 , we get
    P B = X 0
    for P 1 [ 3 2 η + 5 , 3 7 ]
  • For η > 2 , i.e., 3 2 η + 5 < X 1 < 3 7 < X 2 , we get
    P B = 3 P 2 ( η + 1 ) P 1 P 2 ( η + 1 ) P 1 ( 1 P 2 ) , P 1 [ 3 2 η + 5 , X 1 ) X 0 , P 1 [ X 1 , 3 7 ]
    where X 0 3 P 2 P 1 P 2 ( η + 1 ) P 1 ( η + 1 ) ( 1 P 2 ) .
Case 2. 3 7 < P 1 < 3 η + 1 .
In this case, from Equations (8) and (A4) we get
G B o b 3 G B o b 1 = 2 P 1 P 2 < 0 .
Thus P B = X 0 for 3 7 < P 1 < 3 η + 1 , where X 0 > 3 P 2 ( η + 1 ) P 1 P 2 ( η + 1 ) P 1 ( 1 P 2 ) .

Appendix B. The Average Gain of Alice in the First Classical Game

The case of 0 ≤ P 1 < 3 2 η + 5 has been given in Section 2.2.3. The remaining three cases are as follows.
Case 1. For 3 2 η + 5 P 1 < X 1 and 1 η 2 , we have P C = 0 and P B = X 0 . From Equation (9), we get
G A l i c e = 2 P 1 ( η + 3 )
So, G A l i c e takes the maximum value at P 1 = 3 2 η + 5 , which is then denoted by G A l i c e 2 given by
G A l i c e 2 = 1 + η 2 η + 5
Case 2. For 3 2 η + 5 P 1 < X 1 and η > 2 , we have P C = 1 and P B = 3 P 2 ( η + 1 ) P 1 P 2 ( η + 1 ) P 1 ( 1 P 2 ) . From Equation (9), we get
G A l i c e = 3 P 2 2 3 P 2 + 2 P 1 ( η + 3 ) + 3 P 2 ( P 1 ( η + 3 ) 3 P 2 ) P 1 ( η + 1 ) = 3 4 P 1 2 ( η + 9 2 ) P 1 21 P 1 + 30 4 ( η + 1 ) 9 4 P 1 ( η + 1 ) + 11 4
The partial derivative of G A l i c e with respect to P 1 is given by
G A l i c e P 1 = 9 4 P 1 2 ( η + 1 ) 21 4 ( η + 1 ) + 3 2 P 1 9 2 η
By setting G A l i c e P 1 = 0 , we get
6 P 1 3 ( η + 1 ) P 1 2 ( 4 η 2 + 22 η + 39 ) + 9 = 0
Let P 1 = y + 4 η 2 + 22 η + 39 18 ( η + 1 ) . From Equation (A15), we get
y 3 ( 4 η 2 + 22 η + 39 ) 2 108 ( η + 1 ) 2 y + 3 2 ( η + 1 ) ( 4 η 2 + 22 η + 39 ) 3 2916 ( η + 1 ) 3 = 0
The general solution of this equation is given by
r = ( 4 η 2 + 22 η + 39 ) 3 3 3 × 6 3 × ( η + 1 ) 3 θ = 1 3 arccos ( 1 4374 ( η + 1 ) 2 ( 4 η 2 + 22 η + 39 ) 3 ) y 1 = 2 r 3 cos θ y 2 = 2 r 3 cos ( θ + 2 3 π ) y 3 = 2 r 3 cos ( θ + 4 3 π )
By substituting y 1 , y 2 and y 3 into P 1 = y + 4 η 2 + 22 η + 39 18 ( η + 1 ) , we obtain that
p 1 * = y 1 + 4 η 2 + 22 η + 39 18 ( η + 1 ) = 4 η 2 + 22 η + 39 18 ( η + 1 ) ( 1 + 2 cos θ ) > 1 p 2 * = y 2 + 4 η 2 + 22 η + 39 18 ( η + 1 ) = 4 η 2 + 22 η + 39 18 ( η + 1 ) ( 1 + 2 cos ( θ + 2 3 π ) ) < 0 p 3 * = y 3 + 4 η 2 + 22 η + 39 18 ( η + 1 ) = 4 η 2 + 22 η + 39 18 ( η + 1 ) ( 1 + 2 cos ( θ + 4 3 π ) ) > 0
Equation (A17) is a cubic equation in P 1 . It has three real roots, i.e., p 1 * , p 2 * , p 3 * with p 2 * < 0 < p 3 * < 1 < p 1 * . So, G A l i c e decreases with P 1 for P 1 ∈ ( − ∞ , p 2 * ) and P 1 ∈ ( p 3 * , p 1 * ) , and increases with P 1 for P 1 ∈ ( p 2 * , p 3 * ) and P 1 ∈ ( p 1 * , ∞ ) . From Figure A1, we get 3 2 η + 5 > p 3 * . So, G A l i c e takes the maximum value at P 1 = 3 2 η + 5 , which is then denoted by G A l i c e 3 given by
G A l i c e 3 = η + 1 2 η + 5
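The trigonometric root formulas above can be verified numerically against Equation (A15). The sketch below (function name and the value of η are ours; the signs of the cubic are read off from the substitution step) reproduces all three roots of 6 ( η + 1 ) P 1 3 − ( 4 η 2 + 22 η + 39 ) P 1 2 + 9 = 0 :

```python
import math

def cubic_roots(eta):
    # Trigonometric solution of 6*(eta+1)*P^3 - A*P^2 + 9 = 0 with
    # A = 4*eta**2 + 22*eta + 39, via the shift P = y + A/(18*(eta+1)).
    A = 4 * eta**2 + 22 * eta + 39
    s = A / (18 * (eta + 1))                       # the shift (and scale) term
    theta = math.acos(1 - 4374 * (eta + 1)**2 / A**3) / 3
    return [s * (1 + 2 * math.cos(theta + 2 * math.pi * k / 3)) for k in range(3)]
```

The three values s(1 + 2 cos(θ + 2πk/3)) are exactly the p 1 * , p 2 * , p 3 * listed above.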
Case 3. For X 1 P 1 1 , we have P C = 0 . From Equation (9), we get
G A l i c e = 2 P 1 ( η + 3 )
From Equation (A20), G A l i c e decreases with P 1 . Hence, the maximum value G A l i c e 4 of G A l i c e is given by
G A l i c e 4 = 2 X 1 ( η + 3 )
Note that G A l i c e 2 , G A l i c e 3 > G A l i c e 4 . Thus, G A l i c e gets the maximum value η + 1 2 η + 5 at P 1 = 3 2 η + 5 for 3 2 η + 5 P 1 1 .

Appendix C. The Average Gain of Rice in the Second Classical Game

It is straightforward to calculate the average gain of Rice according to Table 5 as follows
G R i c e = μ P R i c e 1 P B o b 1 P A l i c e + P R i c e 2 = μ P C P 2 P B P C ( 1 P 2 ) + ( 1 P C ) ( 2 P 1 1 ) + P C ( 1 P 2 ) ( 1 P B ) ( 2 P 1 1 )
The partial derivative of G R i c e with respect to P C is given by
∂ G R i c e / ∂ P C = ( μ + 1 ) P 2 − 2 P 1 P B + 2 P 1 P B P 2 − 2 P 1 P 2
If ∂ G R i c e / ∂ P C > 0 , i.e., G R i c e increases with P C , Rice will choose P C = 1 to maximize her gains. However, if ∂ G R i c e / ∂ P C ≤ 0 , i.e., G R i c e decreases with P C , Rice will set P C = 0 in order to maximize her gains.
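Rice's best response is therefore a simple threshold rule in P B . A sketch (the function name and numerical values are ours; the sign of the derivative is read from the expression above):

```python
def rice_best_pc(mu, p1, p2, pb):
    # Rice plays P_C = 1 iff dG_Rice/dP_C > 0, else P_C = 0.
    d = (mu + 1) * p2 - 2 * p1 * pb + 2 * p1 * pb * p2 - 2 * p1 * p2
    return 1 if d > 0 else 0
```

The decision flips exactly at P B = ( ( μ + 1 ) P 2 − 2 P 1 P 2 ) / ( 2 P 1 ( 1 − P 2 ) ) , the threshold used in the case split of the next appendix.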

Appendix D. The Average Gain of Bob in the Second Classical Game

Similar to G R i c e , we can get Bob’s average gain G B o b as follows
G B o b = P R i c e 1 + 2 P B o b 1 P A l i c e + P B o b 2 = P B P C ( 1 P 2 ) ( 3 P 2 1 ) + P C ( 1 P 2 ) ( 1 P B ) ( 2 P 1 1 ) + ( 1 P C ) ( 2 P 1 1 ) P C P 2
According to the sign of ∂ G R i c e / ∂ P C , we consider two cases.
Case 1. ∂ G R i c e / ∂ P C > 0 , i.e., 0 ≤ P B < ( ( μ + 1 ) P 2 − 2 P 1 P 2 ) / ( 2 P 1 ( 1 − P 2 ) ) .
In this case, P C = 1 . From Equation (A25) we get
G B o b = P 2 + P B ( 1 P 2 ) ( 3 P 2 1 ) + ( 1 P 2 ) ( 1 P B ) ( 2 P 1 1 )
The partial derivative of G B o b with respect to P B is given by
∂ G B o b / ∂ P B = ( 1 − P 2 ) ( 3 − 7 P 1 ) / 2
Here, we discuss it in three subcases.
(i)
For 0 < ( ( μ + 1 ) P 2 − 2 P 1 P 2 ) / ( 2 P 1 ( 1 − P 2 ) ) ≤ 1 , we get ( μ + 1 ) / ( μ + 5 ) ≤ P 1 ≤ 1 . For μ ≥ 2 , it follows that ( μ + 1 ) / ( μ + 5 ) ≥ 3 / 7 . It means that ∂ G B o b / ∂ P B < 0 , i.e., G B o b decreases with P B . Thus, Bob will set P B = 0 to maximize his gains. From Equation (A24), we get
G B o b 1 = 2 P 1 2 P 1 P 2 1
(ii)
For ( μ + 1 ) P 2 2 P 1 P 2 2 P 1 ( 1 P 2 ) > 1 , we can get P 1 < μ + 1 μ + 5 , and when P 1 [ 0 , 3 7 ] , we get G B o b P B 0 , i.e., G B o b increases with P B . So, Bob makes P B = 1 in order to maximize his gains. From Equation (A24), we can get that
G B o b 2 = 3 P 2 2 + 3 P 2 1
(iii)
For ( μ + 1 ) P 2 2 P 1 P 2 2 P 1 ( 1 P 2 ) > 1 , i.e., P 1 < μ + 1 μ + 5 , when P 1 ( 3 7 , μ + 1 μ + 5 ) , we can get G B o b P B < 0 , i.e., G B o b decreases with P B . Thus, Bob will make P B = 0 to maximize his gains. From Equation (A24), we obtain that
G B o b 3 = 2 P 1 2 P 1 P 2 1
Case 2. ∂ G R i c e / ∂ P C ≤ 0 , i.e., ( ( μ + 1 ) P 2 − 2 P 1 P 2 ) / ( 2 P 1 ( 1 − P 2 ) ) ≤ P B ≤ 1 . For 0 < ( ( μ + 1 ) P 2 − 2 P 1 P 2 ) / ( 2 P 1 ( 1 − P 2 ) ) ≤ 1 , we get ( μ + 1 ) / ( μ + 5 ) ≤ P 1 ≤ 1 .
In this case, we have P C = 0 and P B = X 0 , where X 0 > ( μ + 1 ) P 2 2 P 1 P 2 2 P 1 ( 1 P 2 ) . From Equation (A24), we obtain that
G B o b 4 = 2 P 1 − 1
If μ + 1 μ + 5 P 1 1 , we have G B o b 4 > G B o b 1 . Bob will make P B = X 0 to get the maximal gains for μ + 1 μ + 5 P 1 < 1 , where X 0 > ( μ + 1 ) P 2 2 P 1 P 2 2 P 1 ( 1 P 2 ) .
Table A1. The values of P B and P C . The values of P B ( C ) depend on the different cases in the game.
Cases: 0 ≤ P 1 ≤ 3 / 7 | 3 / 7 < P 1 < ( μ + 1 ) / ( μ + 5 ) | ( μ + 1 ) / ( μ + 5 ) ≤ P 1 ≤ 1
P B : 1 | 0 | X 0
P C : 1 | 1 | 0
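Table A1 amounts to a three-branch lookup for Bob's optimal ( P B , P C ) . A sketch (the function name is ours; X 0 is the threshold value defined in the text and is supplied by the caller here):

```python
def bob_strategy(p1, mu, x0):
    # Optimal (P_B, P_C) per Table A1 of the second classical game.
    if p1 <= 3 / 7:
        return 1, 1
    if p1 < (mu + 1) / (mu + 5):
        return 0, 1
    return x0, 0
```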

Appendix E. The Average Gain of Alice in the Second Classical Game

According to Table 5, we can get the average gain of Alice.
G A l i c e = − ( μ − 1 ) P R i c e 1 − P B o b 1 + 2 P A l i c e − 2 P R i c e 2 = − P C P 2 ( μ − 1 ) + P B P C ( 1 − P 2 ) ( 2 − 3 P 2 ) + 2 ( 1 − P C ) ( 1 − 2 P 1 ) + 2 P C ( 1 − P 2 ) ( 1 − P B ) ( 1 − 2 P 1 )
Here, three cases will be considered as follows.
Case 1. For 0 P 1 3 7 , we get P C = 1 and P B = 1 . From Equation (A31), we get
G A l i c e = − P 2 ( μ − 1 ) + ( 1 − P 2 ) ( 2 − 3 P 2 )
which implies that
d G A l i c e / d P 1 = ( μ + 4 − 6 P 2 ) / 2
From μ ≥ 2 , we have d G A l i c e / d P 1 ≥ 0 , i.e., G A l i c e increases with P 1 . It follows that P 1 = 3 / 7 . We obtain the maximum value G A l i c e 1 of G A l i c e as
G A l i c e 1 = 54 / 49 − 2 μ / 7
Case 2. For 3 7 < P 1 < μ + 1 μ + 5 , we get P C = 1 and P B = 0 . From Equation (A31), we get
G A l i c e = − ( μ − 1 ) P 2 + 2 ( 1 − P 2 ) ( 1 − 2 P 1 ) = − 8 P 2 2 + ( 11 − μ ) P 2 − 2
The partial derivative of G A l i c e with respect to P 2 = ( 1 − P 1 ) / 2 is given by
d G A l i c e / d P 2 = − 16 P 2 + 11 − μ
From Equation (A36), G A l i c e increases with P 2 for P 2 ∈ [ 0 , ( 11 − μ ) / 16 ) , and decreases for P 2 ∈ ( ( 11 − μ ) / 16 , 1 ] ; thus, the following three cases need to be considered accordingly.
(i)
For ( 11 − μ ) / 16 ≤ 2 / ( μ + 5 ) , i.e., μ ≥ 3 + 4 √ 2 , G A l i c e decreases with P 2 . So, P 2 = 2 / ( μ + 5 ) and P 1 = ( μ + 1 ) / ( μ + 5 ) . From Equation (A35), we get
G A l i c e 2 = ( − 4 μ 2 − 8 μ + 28 ) / ( μ + 5 ) 2
(ii)
For ( 11 − μ ) / 16 ≥ 2 / 7 , i.e., 2 ≤ μ ≤ 45 / 7 , G A l i c e increases with P 2 . So, P 2 = 2 / 7 and P 1 = 3 / 7 . From Equation (A35), we get
G A l i c e 3 = 24 / 49 − 2 μ / 7
(iii)
For 2 / ( μ + 5 ) < ( 11 − μ ) / 16 < 2 / 7 , i.e., 45 / 7 < μ < 3 + 4 √ 2 , G A l i c e increases with P 2 for P 2 ∈ ( 2 / ( μ + 5 ) , ( 11 − μ ) / 16 ) , and decreases for P 2 ∈ ( ( 11 − μ ) / 16 , 2 / 7 ) . So, G A l i c e takes its maximum at the point P 2 = ( 11 − μ ) / 16 . From Equation (A35), we get
G A l i c e 4 = ( 11 − μ ) 2 / 32 − 2
Case 3. For μ + 1 μ + 5 P 1 1 , we have P C = 0 and P B = X 0 , where X 0 > ( μ + 1 ) P 2 2 P 1 P 2 2 P 1 ( 1 P 2 ) . From Equation (A31), we get
G A l i c e = 2 4 P 1
It is easy to prove that G A l i c e decreases with P 1 . So, the maximum G A l i c e 5 of G A l i c e achieves at P 1 = μ + 1 μ + 5 , which is given by
G A l i c e 5 = ( 6 − 2 μ ) / ( μ + 5 )
It is easy to prove that G A l i c e 2 < G A l i c e 5 for μ ≥ 3 + 4 √ 2 , G A l i c e 3 < G A l i c e 1 for 2 ≤ μ < 45 / 7 , and G A l i c e 4 < G A l i c e 1 for 45 / 7 < μ < 3 + 4 √ 2 .
To sum up we get Table A2.
Table A2. The maximum G A l i c e m of G A l i c e and the corresponding point p max .
Cases: G A l i c e 1 ≥ G A l i c e 5 | G A l i c e 1 < G A l i c e 5
p max : 3 / 7 | ( μ + 1 ) / ( μ + 5 )
G A l i c e m : G A l i c e 1 | G A l i c e 5
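Table A2 reduces Alice's problem to comparing the two candidate maxima G A l i c e 1 and G A l i c e 5 derived in this appendix. A numerical sketch (the function name is ours):

```python
def alice_max(mu):
    # Returns (maximal gain, maximizing P_1) per Table A2.
    g1 = 54 / 49 - 2 * mu / 7       # G_Alice1, attained at P_1 = 3/7
    g5 = (6 - 2 * mu) / (mu + 5)    # G_Alice5, attained at P_1 = (mu+1)/(mu+5)
    return (g1, 3 / 7) if g1 >= g5 else (g5, (mu + 1) / (mu + 5))
```

For small μ the 3/7 branch wins; for large μ the ( μ + 1 ) / ( μ + 5 ) branch takes over.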

Appendix F. Analysis of β and γ in the First Quantum Game

How should Rice and Bob choose their parameters to maximize their average gains? We discuss this in five cases.
Case 1. For x 2 * 0 , Rice should take x = 0 to maximize her gains. Since x 2 * x 4 * , i.e., x 4 * 0 , Bob should take x = 0 to maximize his gains. Hence, we have β = γ = 0 .
Case 2. For x 4 * 2 , Rice should take x = 2 to maximize her gains. Since x 2 * x 4 * , i.e., x 2 * 2 , Bob should take x = 2 to maximize his gains. Therefore, we get β = γ = 1 .
Case 3. For 0 < x 2 * 1 , if Bob takes β x 2 * , after knowing Bob’s actions, Rice will definitely choose γ = 0 to maximize her gains, i.e., x > x 2 * x 4 * . For Bob, when x > x 4 * , G B o b decreases with β . Hence, we have β = x 2 * and γ = 0 . If Bob takes β x 2 * , after knowing Bob’s actions, Rice will definitely choose γ = x 2 * β to maximize her gains. However, when x = x 2 * , G B o b decreases with β . In this case, we get β = 0 and γ = x 2 * .
Case 4. For x 4 * 1 < x 2 * , if Bob takes β < x 2 * 1 , after knowing Bob’s actions, Rice will definitely choose γ = 1 to maximize her gains, i.e., x 4 * x < x 2 * . The reason is that G B o b decreases with β for x x 4 * . From Bob’s own gains, it should be β = 0 and γ = 1 . If Bob takes β [ x 2 * 1 , 1 ] , Rice will definitely choose γ = x 2 * β to maximize her gains after knowing Bob’s actions, i.e., x = x 2 * x 4 * , because G B o b decreases with β for x x 4 * . In this case, from Bob’s own interests, we have β = x 2 * 1 and γ = 1 . Note that G B o b is decreasing function in β for x = x 2 * . So, we get β = 0 and γ = 1 .
Case 5. For 1 < x 4 * < 2 , if Bob takes 0 β x 4 * 1 , Rice will choose γ = 1 to maximize her gains after knowing Bob’s actions, i.e., 1 x x 4 * . It is because Bob gets the maximum value at x = x 4 * . We have β = x 4 * 1 and γ = 1 . If Bob takes 1 β > x 4 * 1 , Rice will choose γ as close as possible to x 2 * to maximize her gains after knowing Bob’s actions, i.e., x 4 * x x 2 * . Thus, we obtain β = x 4 * 1 and γ = 1 .

Appendix G. The Average Gain of Alice in the First Quantum Game

We can calculate Alice's average gain according to the values of x 2 * and x 4 * . The first case, x 2 * ≤ 0 , has been discussed in Section 3.2.3; the remaining four cases are as follows.
Case 1. For x 4 * 2 , i.e., 2 α 1 ω 1 3 α 2 ω 2 ω 1 ω 2 ( 2 3 + 1 ) 2 , we get
( 2 + ( 2 3 + 1 ) ω 1 ω 2 ) 2 ( 2 + ( 2 3 + 1 ) ω 1 ω 2 ) 2 + 4 ω 1 3 ω 2 α 1 1
Let E = ( 2 + ( 2 3 + 1 ) ω 1 ω 2 ) 2 ( 2 + ( 2 3 1 ) ω 1 ω 2 ) 2 + 4 ω 1 3 ω 2 , i.e., E α 1 1 . In this case, we have β = γ = 1 . From Equation (36) we get
G A l i c e = ( η + 3 ) ( α 1 ω 1 + 2 α 2 ω 2 ) 2 η 1
The derivative of G A l i c e with respect to α 1 is given by
d G A l i c e d α 1 = ( η + 3 ) ( 2 ω 1 ω 2 ( 1 2 α 1 ) α 1 α 1 2 + ω 1 2 ω 2 )
The second derivative of G A l i c e with respect to α 1 is given by
d 2 G A l i c e d α 1 2 = ω 1 ω 2 ( η + 3 ) 2 ( α 1 α 1 2 ) 3 2
From Equation (A45), we know that d 2 G A l i c e d α 1 2 < 0 , and d G A l i c e d α 1 has at most one zero when 0 α 1 1 . From d G A l i c e d α 1 = 0 , it follows that
1 2 α 1 α 1 α 1 2 = ( 2 ω 2 ω 1 ) 2 ω 1 ω 2 .
Denote F = ( 2 ω 2 − ω 1 ) / ( 2 √ ( ω 1 ω 2 ) ) . If F ≥ 0 , i.e., ω 1 ≤ 2 ω 2 , the solution of Equation (A46) is given by α = 0.5 − F / ( 2 √ ( F 2 + 4 ) ) = ω 1 . G A l i c e is an increasing function for α 1 ∈ ( 0 , ω 1 ) , while it is decreasing for α 1 ∈ ( ω 1 , 1 ) . From α 1 ≥ E > ω 1 , G A l i c e achieves the maximum value at α 1 = E . If F < 0 , i.e., ω 1 > 2 ω 2 , the solution of Equation (A46) is given by α = 0.5 − F / ( 2 √ ( F 2 + 4 ) ) = 2 ω 2 . G A l i c e is an increasing function for α 1 ∈ ( 0 , 2 ω 2 ) , and decreasing for α 1 ∈ ( 2 ω 2 , 1 ) . From α 1 ≥ E > ω 1 > 2 ω 2 , G A l i c e gets the maximum value at α 1 = E . It means that G A l i c e gets the maximum value G A l i c e 2 as
G A l i c e 2 = ( η + 3 ) ( ω 1 E + 2 ω 2 ( 1 E ) ) 2 η 1
for x 4 * ≥ 2 , where the corresponding point is denoted by p 2 .
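Each stationary-point condition in this appendix has the same form ( 1 − 2 α ) / √ ( α − α 2 ) = F with closed-form solution α = 0.5 − F / ( 2 √ ( F 2 + 4 ) ) . This identity can be checked numerically (the function name is ours):

```python
import math

def alpha_star(f):
    # Unique solution in (0, 1) of (1 - 2*alpha)/sqrt(alpha - alpha**2) = f.
    return 0.5 - f / (2 * math.sqrt(f**2 + 4))
```

Substituting back: 1 − 2α = f/√(f² + 4) and α − α² = 1/(f² + 4), so the ratio is exactly f.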
Case 2. For 0 < x 2 * 1 , i.e., 0 < α 1 ω 1 ( η + 1 ) 3 α 2 ω 2 ( η + 1 3 + 1 ) ω 1 ω 2 1 , we get
D < α 1 G
where G = ( 1 + ( η + 1 3 + 1 ) ω 1 ω 2 ) 2 ( 1 + ( η + 1 3 + 1 ) ω 1 ω 2 ) 2 + 2 ω 1 ( η + 1 ) 3 ω 2 . In this case, we have β = 0 and γ = x 2 * . From Equation (36) we have
G A l i c e = ( η + 3 ) ( α 1 ω 1 + x 2 * α 2 ω 2 ) 2 ω 1 + ω 2 x 2 * x 2 * η α 2 1 η α 1 = α 2 ( η + 3 ) ( 1 + η + 1 3 ) 2 ( α 1 ω 1 ω 2 ω 1 α 2 ω 2 ) 2 η + 1 3 ( α 1 ω 1 ω 2 ω 1 α 2 ω 2 ) η ( η + 1 ) α 1 α 2 ω 1 3 ω 2 + η ω 1 α 2 ( η + 1 3 + 1 ) ω 2 η α 1 1
From D < α 1 < G , we have α 1 ω 1 ω 2 ω 1 α 2 ω 2 0 . Equation (A49) is rewritten into
G A l i c e = ( 2 3 η + 4 3 η + 1 + 2 η + 6 ) ( ω 1 α 1 α 2 ω 2 ω 1 α 2 ω 2 ) η α 1 1 + η ω 1 α 2 ω 2
The derivative of G A l i c e with respect to α 1 is given by
d G A l i c e d α 1 = ( 2 3 η + 4 3 η + 1 + 2 η + 6 ) × ( ω 1 ( 1 2 α 1 ) 8 ω 2 ( α 1 α 2 2 ) + ω 1 2 ω 2 ) η ω 1 2 ω 2 η
The second derivative of G A l i c e with respect to α 1 is given by
d 2 G A l i c e d α 1 2 = ( 2 3 η + 4 3 η + 1 + 2 η + 6 ) × ω 1 32 ω 2 ( α 1 α 1 2 ) 3
From Equation (A52), we obtain that d 2 G A l i c e d α 1 2 < 0 , and d G A l i c e d α 1 = 0 has at most one solution when 0 α 1 1 . By letting d G A l i c e d α 1 = 0 , we get
1 2 α 1 α 1 α 1 2 = 2 2 η ω 2 2 ω 1 ( 2 3 η + 4 3 η + 1 + η + 6 ) ( 2 3 η + 4 3 η + 1 + 2 η + 6 ) ω 1 ω 2
Denote
H = 2 2 η ω 2 2 ω 1 ( 2 3 η + 4 3 η + 1 + η + 6 ) ( 2 3 η + 4 3 η + 1 + 2 η + 6 ) ω 1 ω 2
The solution of Equation (A53) is given by α = 0.5 − H / ( 2 √ ( H 2 + 4 ) ) . G A l i c e is an increasing function for α 1 ∈ ( 0 , α ) , and decreasing for α 1 ∈ ( α , 1 ) . Thus, G A l i c e gets the maximum value at α 1 = D if α ≤ D , or at α 1 = G if α ≥ G . Moreover, G A l i c e gets the maximum value at α 1 = α if D < α < G . The maximum gain of Alice is denoted by G A l i c e 3 when 0 < x 2 * ≤ 1 , where the corresponding point is denoted by p 3 .
Case 3. For x 4 * 1 < x 2 * , i.e., ( η + 1 ) α 1 ω 1 3 α 2 ω 2 ( η + 1 3 + 1 ) ω 1 ω 2 > 1 and 2 α 1 ω 1 3 α 2 ω 2 ( 2 3 + 1 ) ω 1 ω 2 1 , we get
G < α 1 K
where K = ( 1 + ( 2 3 + 1 ) ω 1 ω 2 ) 2 ( 1 + ( 2 3 + 1 ) ω 1 ω 2 ) 2 + 4 ω 1 3 ω 2 . In this case, we have β = 0 and γ = 1 . From Equation (36) we get
G A l i c e = ( η + 3 ) ( α 1 ω 1 + α 2 ω 2 ) 2 ω 1 + ω 2 η α 2 η α 1 1
The derivative of G A l i c e with respect to α 1 is given by
d G A l i c e d α 1 = ( η + 3 ) ( ω 1 ω 2 2 + ω 1 ω 2 ( 1 2 α 1 ) 2 ( α 1 α 1 2 ) ) ω 1 + ω 2 η 2
The second derivative of G A l i c e with respect to α 1 is given by
d 2 G A l i c e d α 1 2 = ( η + 3 ) ω 1 ω 2 2 ( ω 1 + ω 2 ) ( α 1 α 1 2 ) 3 2
From Equation (A58), we know that d 2 G A l i c e d α 1 2 < 0 , and d G A l i c e d α 1 has at most one zero when 0 α 1 1 . By letting d G A l i c e d α 1 = 0 , we get
1 2 α 1 α 1 α 1 2 = ( 2 η + 3 ) ω 2 ( η + 6 ) ω 1 2 ω 1 ω 2 ( η + 3 )
Let M = ( ( 2 η + 3 ) ω 2 − ( η + 6 ) ω 1 ) / ( 2 ( η + 3 ) √ ( ω 1 ω 2 ) ) . If M ≥ 0 , the solution of Equation (A59) is given by α = 0.5 − M / ( 2 √ ( M 2 + 4 ) ) . G A l i c e is an increasing function for α 1 ∈ ( 0 , α ) , while it is decreasing for α 1 ∈ ( α , 1 ) . Thus, G A l i c e gets the maximum value at α 1 = G if α ≤ G , or at α 1 = K if α ≥ K . Moreover, G A l i c e gets the maximum value at α 1 = α if G < α < K . The maximum gain of Alice is denoted by G A l i c e 4 when x 2 * > 1 ≥ x 4 * , where the corresponding point is denoted by p 4 .
Case 4. For 1 < x 4 * < 2 , i.e., 1 < 2 α 1 ω 1 3 α 2 ω 2 ( 2 3 + 1 ) ω 1 ω 2 < 2 , we get
K < α 1 < E
where E = ( 2 + ( 2 3 + 1 ) ω 1 ω 2 ) 2 ( 2 + ( 2 3 + 1 ) ω 1 ω 2 ) 2 + 4 ω 1 3 ω 2 . In this case, we have β = x 4 * 1 and γ = 1 . From Equation (36) we get
G A l i c e = ( η + 3 ) α 1 ω 1 + x 4 * α 2 ω 2 2 ω 1 + ω 2 x 4 * x 4 * η α 2 1 η α 1 = ( 3 η + 5 3 2 + 2 η + 6 ) ( ω 1 α 1 α 2 ω 2 ω 1 α 2 ω 2 ) η α 1 1 + η ω 1 α 2 ω 2
The derivative of G A l i c e with respect to α 1 is given by
d G A l i c e d α 1 = ( 3 η + 5 3 2 + 2 η + 6 ) × ( ω 1 ( 1 2 α 1 ) 8 ω 2 ( α 1 α 2 2 ) + ω 1 2 ω 2 ) η ω 1 2 ω 2 η
The second derivative of G A l i c e with respect to α 1 is given by
d 2 G A l i c e d α 1 2 = ( 3 ( η + 5 ) 2 + 2 η + 6 ) ω 1 32 ω 2 ( α 1 α 1 2 ) 3 2
From Equation (A63), we have that d 2 G A l i c e d α 1 2 < 0 , and d G A l i c e d α 1 has at most one zero when 0 α 1 1 . By setting d G A l i c e d α 1 = 0 , we get
1 2 α 1 α 1 α 1 2 = 4 η ω 2 6 ω 1 ( η + 5 ) 2 ω 1 ( η + 6 ) ( η + 5 ) 3 ω 1 ω 2 + ( 2 η + 6 ) 2 ω 1 ω 2
Let N = 4 η ω 2 6 ω 1 ( η + 5 ) 2 ω 1 ( η + 6 ) ( η + 5 ) 3 ω 1 ω 2 + ( 2 η + 6 ) 2 ω 1 ω 2 . The solution of Equation (A64) is given by α = 0.5 − N / ( 2 √ ( N 2 + 4 ) ) . G A l i c e is increasing for α 1 ∈ ( 0 , α ) , and decreasing for α 1 ∈ ( α , 1 ) . Thus, G A l i c e gets the maximum value at α 1 = K if α ≤ K , or at α 1 = E if α ≥ E . Moreover, G A l i c e gets the maximum value at α 1 = α if K < α < E . The maximum gain of Alice is denoted by G A l i c e 5 when 1 < x 4 * < 2 , where the corresponding point is denoted by p 5 .

Appendix H. The Average Gain of Rice in the Second Quantum Game

According to Table A2, we get
G R i c e = μ P R i c e 1 + P R i c e 2 P B o b 1 P A l i c e = 2 ( α 1 ω 1 + x α 2 ω 2 ) 2 ω 1 + ω 2 x + ( μ 1 ) α 2 ( 1 γ ) + α 1 + 2 α 2 β
where x = β + γ . The partial derivative of G R i c e with respect to γ is given by
G R i c e γ = ( μ + 1 ) ω 2 2 α 2 x 2 2 ( μ + 1 ) ω 1 ω 2 α 2 x ( ω 1 + ω 2 x ) 2 + ( 1 μ ) ω 1 2 α 2 4 ω 1 α 1 α 2 ω 1 ω 2 + 2 ω 1 ω 2 α 1 ( ω 1 + ω 2 x ) 2
By setting G R i c e γ = 0 , we get its solutions as
x 1 * = 2 ω 1 ( μ + 1 ) α 2 α 1 ω 2 α 2 ω 1 ω 2 ω 1 ω 2 x 2 * = 2 ω 1 ( μ + 1 ) α 2 α 1 ω 2 α 2 ω 1 ω 2 ω 1 ω 2
From Equation (A67), we know that x 1 * ≤ 0 and x 2 * ≥ x 1 * . From Equations (A66) and (A67), G R i c e is decreasing for x ≤ x 1 * or x > x 2 * , and increasing for x 1 * < x ≤ x 2 * . Since 0 ≤ x ≤ 2 , when x 2 * ≤ 0 , G R i c e achieves its maximum at x = 0 . When x 2 * ≥ 2 , G R i c e gets its maximum at x = 2 . When 0 < x 2 * < 2 , G R i c e gets the maximum at x = x 2 * .

Appendix I. The Average Gain of Bob in the Second Quantum Game

According to Table A2, we get that
G B o b = 2 P B o b 1 + P B o b 2 P R i c e 1 P A l i c e = 2 ( α 1 ω 1 + x α 2 ω 2 ) 2 ( ω 1 + ω 2 x ) + α 1 + α 2 α 2 β + 2 α 2 γ
The partial derivative of G B o b with respect to β is given by
G B o b β = 3 ω 2 2 α 2 x 2 6 ω 1 ω 2 α 2 x ω 1 2 α 2 ( ω 1 + ω 2 x ) 2 + 4 ω 1 α 1 α 2 ω 1 ω 2 + 2 ω 1 ω 2 α 1 ( ω 1 + ω 2 x ) 2
By setting G B o b β = 0 , we get two solutions as
x 3 * = 2 ω 1 3 α 2 | α 1 ω 2 α 2 ω 1 | ω 2 ω 1 ω 2 x 4 * = 2 ω 1 3 α 2 | α 1 ω 2 α 2 ω 1 | ω 2 ω 1 ω 2
It follows that α 1 ω 1 . Equation (A70) is then written into
x 3 * = 2 ω 1 3 α 2 α 1 ω 2 α 2 ω 1 ω 2 ω 1 ω 2 x 4 * = 2 ω 1 3 α 2 α 1 ω 2 α 2 ω 1 ω 2 ω 1 ω 2
From Equation (A71), we obtain x 3 * ≤ 0 and x 4 * ≥ x 3 * . From Equations (A69) and (A71), G B o b decreases for x ≤ x 3 * or x > x 4 * , and increases for x 3 * < x ≤ x 4 * . Since 0 ≤ x ≤ 2 , G B o b achieves its maximum at x = 0 when x 4 * ≤ 0 , at x = 2 when x 4 * ≥ 2 , and at x = x 4 * when 0 < x 4 * < 2 . From P 2 , we obtain x 4 * ≥ x 2 * .
Now, we make a brief analysis of the strategy of Rice and Bob as follows.
Case 1. For x 4 * ≤ 0 , both Rice and Bob should take x = 0 to maximize their own gains, since x 4 * ≥ x 2 * implies x 2 * ≤ 0 . It follows that β = γ = 0 .
Case 2. For x 2 * ≥ 2 , both Rice and Bob should take x = 2 to maximize their gains, since x 4 * ≥ x 2 * implies x 4 * ≥ 2 . Therefore, we have β = γ = 1 .
Case 3. For x 2 * 0 < x 4 * 1 , we have x 0 > x 2 * for any β being chosen by Bob. Rice will choose γ = 0 to maximize her gain. For Bob, G B o b increases with β when x < x 4 * , and decreases with β when x > x 4 * , So, we have β = x 4 * and γ = 0 .
Case 4. For 0 < x 2 * x 4 * 1 and x 2 * < β 1 , we have x > x 2 * for any β being chosen by Bob. Rice will choose γ = 0 to maximize her gain, i.e., β = x 4 * . From Equation (A68), we get that
G B o b 1 = α 1 + α 2 α 2 x 4 * 2 α 1 ω 1 + x 4 * α 2 ω 2 2 ω 1 + ω 2 x 4 *
Case 5. For 0 < x 2 * x 4 * 1 and 0 β x 2 * , we have x x 2 * for any β being chosen by Bob. Rice will choose γ = x 2 * β to maximize her gain from x = x 2 * . G B o b decreases with β . It implies β = 0 and γ = x 2 * . From Equation (A68), we get that
G B o b 2 = α 1 + α 2 + 2 α 2 x 2 * 2 α 1 ω 1 + x 2 * α 2 ω 2 2 ω 1 + ω 2 x 2 *
Thus if G B o b 1 G B o b 2 , we get that β = x 4 * and γ = 0 . Otherwise, β = 0 and γ = x 2 * .
Case 6. For x 2 * 0 and 1 < x 4 * , we get that x > 0 > x 2 * for any β being chosen by Bob. Rice will choose γ = 0 to maximize her gain. For Bob, G B o b increases with β when x < x 4 * . So, β = 1 and γ = 0 .
Case 7. For 0 < x 2 * < 1 < x 4 * and x 2 * < β 1 , we have x > x 2 * for any β being chosen by Bob. Rice will choose γ = 0 to maximize her gain. We get that β = 1 and γ = 0 . From Equation (A68), it implies that
G B o b 3 = α 1 2 ( α 1 ω 1 + α 2 ω 2 ) 2 ω 1 + ω 2
Case 8. For 0 < x 2 * < 1 < x 4 * and 0 β x 2 * , we obtain that x x 2 * for any β being chosen by Bob. Rice will choose γ = x 2 * β to maximize her gain because of x = x 2 * . G B o b decreases with β . In this case, we get that β = 0 and γ = x 2 * . From Equation (A68), we obtain that
G B o b 4 = α 1 + α 2 + 2 α 2 x 2 * 2 α 1 ω 1 + x 2 * α 2 ω 2 2 ω 1 + ω 2 x 2 *
Hence, if G B o b 3 G B o b 4 , we obtain that β = 1 and γ = 0 . Otherwise, β = 0 and γ = x 2 * .
Case 9. For 1 x 2 * < 2 , Rice will choose γ = 1 to maximize her gain if Bob chooses β x 2 * 1 , i.e., x < x 2 * . So, β = x 2 * 1 and γ = 1 . If Bob chooses β x 2 * 1 , Rice will choose γ = x 2 * β to maximize her gain. Note that G B o b decreases with β when x = x 2 * . It follows that β = x 2 * 1 and γ = 1 . Thus, β = x 2 * 1 and γ = 1 .
To sum up, we obtain that
( β , γ ) = ( 0 , 0 ) , if x 4 * 0 ; ( x 4 * , 0 ) , if x 2 * 0 < x 4 * 1 ; ( x 4 * , 0 ) , if 0 < x 2 * x 4 * 1 , G B o b 1 G B o b 2 ; ( 0 , x 2 * ) , if 0 < x 2 * x 4 * 1 , G B o b 1 < G B o b 2 ; ( 1 , 0 ) , if 1 < x 4 * , x 2 * 0 ; ( 1 , 0 ) , if 0 < x 2 * < 1 < x 4 * , G B o b 3 G B o b 4 ; ( 0 , x 2 * ) , if 0 < x 2 * < 1 < x 4 * , G B o b 3 < G B o b 4 ; ( x 2 * 1 , 1 ) , if 1 x 2 * < 2 ; ( 1 , 1 ) if x 2 * 2 .
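The nine cases of the summary above collapse into a small branching function. A sketch (the function name is ours; gb1..gb4 stand for the candidate gains G B o b 1 .. G B o b 4 computed from the formulas above and are supplied by the caller):

```python
def equilibrium_strategy(x2, x4, gb1=0.0, gb2=0.0, gb3=0.0, gb4=0.0):
    # (beta, gamma) per the case summary; x2 and x4 are the stationary
    # points x2*, x4*, with x4 >= x2 assumed as derived in the text.
    if x4 <= 0:
        return (0.0, 0.0)
    if x2 >= 2:
        return (1.0, 1.0)
    if 1 <= x2 < 2:
        return (x2 - 1, 1.0)
    if x2 <= 0:
        return (x4, 0.0) if x4 <= 1 else (1.0, 0.0)
    if x4 <= 1:                                     # 0 < x2 <= x4 <= 1
        return (x4, 0.0) if gb1 >= gb2 else (0.0, x2)
    return (1.0, 0.0) if gb3 >= gb4 else (0.0, x2)  # 0 < x2 < 1 < x4
```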

Appendix J. The Average Gain of Alice in the Second Quantum Game

According to Table A2, we have
$$G_{Alice} = 2P_{Alice} - 2P_{Rice} - 2P_{Bob}^{1} - (\mu-1)P_{Rice}^{1} = \frac{4\left(\sqrt{\alpha_1\omega_1} + \sqrt{x\,\alpha_2\omega_2}\right)^2}{\omega_1 + \omega_2 x} - \mu\alpha_2 + (\mu-3)\alpha_2\gamma - 2\alpha_1 - \alpha_2\beta$$
where $x = \beta + \gamma$.
The value of $G_{Alice}$ varies with the values of $x_2^*$ and $x_4^*$; accordingly, we calculate Alice's average gain in the following nine cases.
Case 1. For $x_4^* \le 0$, it follows that $\frac{2\sqrt{\alpha_1\omega_1} - \sqrt{3\alpha_2\omega_2}}{(2\sqrt{3}+1)\sqrt{\omega_1\omega_2}} \le 0$. We get
$$\omega_1 \le \alpha_1 \le D$$
where $D = \frac{(2\sqrt{3}+1)^2}{(2\sqrt{3}+1)^2 + \frac{4\omega_2}{3\omega_1}}$. In this case, $\beta = \gamma = 0$. From Equation (A76), we obtain
$$G_{Alice} = 2\alpha_1 - \mu\alpha_2$$
From $\frac{dG_{Alice}}{d\alpha_1} = 2 + \frac{\mu}{2} > 0$, we get that $G_{Alice}$ increases with $\alpha_1$ when $\omega_1 \le \alpha_1 \le D$. So, $G_{Alice}$ attains its maximum at $\alpha_1 = D$, which is denoted by $G_{Alice}^1$ and given by
$$G_{Alice}^1 = \frac{D(4+\mu) - \mu}{2}$$
where the corresponding point is denoted by p 1 .
Case 2. For $x_2^* \ge 2$, it follows that $\frac{2\sqrt{\alpha_1\omega_1} - \sqrt{(\mu+1)\alpha_2\omega_2}}{(2\sqrt{\mu+1}+1)\sqrt{\omega_1\omega_2}} \ge 2$. We get
$$E \le \alpha_1 \le 1$$
where $E = \frac{\left(2 + \sqrt{\frac{\omega_1}{\omega_2}}\,(2\sqrt{\mu+1}+1)\right)^2}{\left(2 + \sqrt{\frac{\omega_1}{\omega_2}}\,(2\sqrt{\mu+1}+1)\right)^2 + \frac{4\omega_1}{\omega_2(\mu+1)}}$. In this case, $\beta = \gamma = 1$. From Equation (A76) we get that
$$G_{Alice} = 4\left(\sqrt{\alpha_1\omega_1} + \sqrt{2\alpha_2\omega_2}\right)^2 - 2\sqrt{2}$$
The derivative of $G_{Alice}$ with respect to $\alpha_1$ is given by
$$\frac{dG_{Alice}}{d\alpha_1} = \frac{4\sqrt{2\omega_1\omega_2}\,(1-2\alpha_1)}{\sqrt{\alpha_1-\alpha_1^2}} + 4\omega_1 - 8\omega_2$$
The second derivative of $G_{Alice}$ with respect to $\alpha_1$ is given by
$$\frac{d^2G_{Alice}}{d\alpha_1^2} = -\frac{2\sqrt{2\omega_1\omega_2}}{(\alpha_1-\alpha_1^2)^{3/2}}$$
From Equation (A83), we have $\frac{d^2G_{Alice}}{d\alpha_1^2} < 0$, so $\frac{dG_{Alice}}{d\alpha_1} = 0$ has at most one solution when $0 \le \alpha_1 \le 1$. By setting $\frac{dG_{Alice}}{d\alpha_1} = 0$, we get
$$\frac{1-2\alpha_1}{\sqrt{\alpha_1-\alpha_1^2}} = \frac{2\omega_2-\omega_1}{\sqrt{2\omega_1\omega_2}}$$
Denote $F = \frac{2\omega_2-\omega_1}{\sqrt{2\omega_1\omega_2}}$. If $F \ge 0$, i.e., $\omega_1 \le 2\omega_2$, the solution of Equation (A84) is given by $\alpha = \frac{1}{2} - \frac{F}{2\sqrt{F^2+4}} = \omega_1$. $G_{Alice}$ increases with $\alpha_1 \in (0, \omega_1)$, and decreases with $\alpha_1 \in (\omega_1, 1)$. Since $\alpha_1 \ge E > \omega_1$, $G_{Alice}$ attains its maximum at $\alpha_1 = E$. If $F < 0$, i.e., $\omega_1 > 2\omega_2$, the solution of Equation (A84) is given by $\alpha = \frac{1}{2} - \frac{F}{2\sqrt{F^2+4}} = 2\omega_2$. $G_{Alice}$ increases when $\alpha_1 \in (0, 2\omega_2)$, and decreases when $\alpha_1 \in (2\omega_2, 1)$. Since $\alpha_1 \ge E > \omega_1 > 2\omega_2$, $G_{Alice}$ attains its maximum at $\alpha_1 = E$. It means that $G_{Alice}$ attains the maximum $G_{Alice}^2$ when $x_2^* \ge 2$, which is further given by
$$G_{Alice}^2 = 4\left(\sqrt{\omega_1 E} + \sqrt{2\omega_2(1-E)}\right)^2 - 2\sqrt{2}$$
where the corresponding point is denoted by p 2 .
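The closed-form stationary point used here (and again in the later cases) can be checked numerically. The sketch below (ours) verifies that $\alpha = \frac{1}{2} - \frac{F}{2\sqrt{F^2+4}}$ solves $\frac{1-2\alpha}{\sqrt{\alpha-\alpha^2}} = F$ for any real $F$.

```python
import math

# alpha = 1/2 - F / (2*sqrt(F^2 + 4)) is the unique root of
# (1 - 2*alpha) / sqrt(alpha - alpha^2) = F on (0, 1).
def stationary_alpha(F):
    return 0.5 - F / (2.0 * math.sqrt(F * F + 4.0))

for F in (-3.0, -0.5, 0.0, 0.5, 3.0):
    a = stationary_alpha(F)
    assert 0.0 < a < 1.0
    assert abs((1.0 - 2.0 * a) / math.sqrt(a - a * a) - F) < 1e-12
```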
Case 3. For $x_2^* \le 0 < x_4^* \le 1$, it follows that $0 < \frac{2\sqrt{\alpha_1\omega_1} - \sqrt{3\alpha_2\omega_2}}{(2\sqrt{3}+1)\sqrt{\omega_1\omega_2}} \le 1$ and $\frac{2\sqrt{\alpha_1\omega_1} - \sqrt{(\mu+1)\alpha_2\omega_2}}{(2\sqrt{\mu+1}+1)\sqrt{\omega_1\omega_2}} \le 0$. We get that
$$D < \alpha_1 \le I$$
where
$$I = \min(H, G), \qquad H = \frac{\left(1+(2\sqrt{3}+1)\sqrt{\frac{\omega_1}{\omega_2}}\right)^2}{\left(1+(2\sqrt{3}+1)\sqrt{\frac{\omega_1}{\omega_2}}\right)^2 + \frac{4\omega_1}{3\omega_2}}, \qquad G = \frac{(2\sqrt{\mu+1}+1)^2\,\frac{\omega_1}{\omega_2}}{(2\sqrt{\mu+1}+1)^2\,\frac{\omega_1}{\omega_2} + \frac{4}{\mu+1}}$$
In this case, we get $\beta = x_4^*$ and $\gamma = 0$. From Equation (A76), it follows that
$$G_{Alice} = \frac{4\left(\sqrt{\alpha_1\omega_1} + \sqrt{x_4^*\alpha_2\omega_2}\right)^2}{\omega_1 + \omega_2 x_4^*} - \mu\alpha_2 - 2\alpha_1 - \alpha_2\beta = (3\sqrt{6}+8)\left(\sqrt{\frac{\omega_1\alpha_1\alpha_2}{\omega_2}} - \frac{\omega_1\alpha_2}{\omega_2}\right) - 2\alpha_1 - \mu\alpha_2 + \frac{\omega_1\alpha_2}{\omega_2}$$
The derivative of $G_{Alice}$ with respect to $\alpha_1$ is given by
$$\frac{dG_{Alice}}{d\alpha_1} = (3\sqrt{6}+8)\left(\frac{\omega_1(1-2\alpha_1)}{8\omega_2\sqrt{\alpha_1\alpha_2}} + \frac{\omega_1}{2\omega_2}\right) - \frac{\omega_1}{2\omega_2} - 2 + \frac{\mu}{2}$$
The second derivative of $G_{Alice}$ with respect to $\alpha_1$ is given by
$$\frac{d^2G_{Alice}}{d\alpha_1^2} = -\frac{(3\sqrt{3}+4\sqrt{2})\,\omega_1}{4\omega_2\,(\alpha_1-\alpha_1^2)^{3/2}}$$
From Equation (A90), we know that $\frac{d^2G_{Alice}}{d\alpha_1^2} < 0$, which implies that $\frac{dG_{Alice}}{d\alpha_1} = 0$ has at most one solution when $0 \le \alpha_1 \le 1$. From $\frac{dG_{Alice}}{d\alpha_1} = 0$, we get
$$\frac{1-2\alpha_1}{\sqrt{\alpha_1-\alpha_1^2}} = \frac{2\omega_2(4-\mu) - \omega_1(6\sqrt{3}+7\sqrt{2})}{(3\sqrt{6}+8)\sqrt{\omega_1\omega_2}}$$
Denote $J = \frac{2\omega_2(4-\mu) - \omega_1(6\sqrt{3}+7\sqrt{2})}{(3\sqrt{6}+8)\sqrt{\omega_1\omega_2}}$. The solution of Equation (A91) is given by $\alpha = \frac{1}{2} - \frac{J}{2\sqrt{J^2+4}}$. $G_{Alice}$ increases with $\alpha_1 \in (0, \alpha)$, and decreases with $\alpha_1 \in (\alpha, 1)$. Thus, $G_{Alice}$ attains its maximum at $\alpha_1 = D$ if $\alpha \le D$. If $\alpha \ge I$, $G_{Alice}$ attains its maximum at $\alpha_1 = I$. If $D < \alpha < I$, $G_{Alice}$ attains its maximum at $\alpha_1 = \alpha$. The maximum gain of Alice when $x_2^* < 0 < x_4^* < 1$ is denoted by $G_{Alice}^3$, where the corresponding point is denoted by $p_3$.
Case 4. For $0 < x_2^* \le x_4^* \le 1$, i.e., $\frac{2\sqrt{\alpha_1\omega_1} - \sqrt{3\alpha_2\omega_2}}{(2\sqrt{3}+1)\sqrt{\omega_1\omega_2}} \le 1$ and $\frac{2\sqrt{\alpha_1\omega_1} - \sqrt{(\mu+1)\alpha_2\omega_2}}{(2\sqrt{\mu+1}+1)\sqrt{\omega_1\omega_2}} > 0$, we get
$$G < \alpha_1 \le H$$
If $G_{Bob}^1 \ge G_{Bob}^2$, then $\beta = x_4^*$ and $\gamma = 0$, and the maximum of Alice's gain $G_{Alice}$ is achieved when $0 < x_2^* < x_4^* < 1$. The maximum is denoted by $G_{Alice}^4$, where the corresponding point is denoted by $p_4$.
Case 5. For $G < \alpha_1 \le H$ and $G_{Bob}^1 < G_{Bob}^2$, we have $\beta = 0$ and $\gamma = x_2^*$, and the maximum of $G_{Alice}$ is achieved when $0 < x_2^* < x_4^* < 1$. The maximum is denoted by $G_{Alice}^5$, where the corresponding point is denoted by $p_5$.
Case 6. For $x_2^* \le 0$ and $x_4^* > 1$, i.e., $\frac{2\sqrt{\alpha_1\omega_1} - \sqrt{3\alpha_2\omega_2}}{(2\sqrt{3}+1)\sqrt{\omega_1\omega_2}} > 1$ and $\frac{2\sqrt{\alpha_1\omega_1} - \sqrt{(\mu+1)\alpha_2\omega_2}}{(2\sqrt{\mu+1}+1)\sqrt{\omega_1\omega_2}} \le 0$, we get
$$H < \alpha_1 \le G$$
In this case, $\beta = 1$ and $\gamma = 0$. From Equation (A76), we get
$$G_{Alice} = \frac{4\left(\sqrt{\alpha_1\omega_1} + \sqrt{\alpha_2\omega_2}\right)^2}{\omega_1 + \omega_2} - (\mu+1)\alpha_2 - 2\alpha_1$$
The derivative of $G_{Alice}$ with respect to $\alpha_1$ is given by
$$\frac{dG_{Alice}}{d\alpha_1} = \frac{4\left(\omega_1 - \frac{\omega_2}{2} + \frac{\sqrt{\omega_1\omega_2}\,(1-2\alpha_1)}{\sqrt{2(\alpha_1-\alpha_1^2)}}\right)}{\omega_1+\omega_2} + \frac{\mu-3}{2}$$
The second derivative of $G_{Alice}$ with respect to $\alpha_1$ is given by
$$\frac{d^2G_{Alice}}{d\alpha_1^2} < 0$$
By setting $\frac{dG_{Alice}}{d\alpha_1} = 0$, we get
$$\frac{1-2\alpha_1}{\sqrt{\alpha_1-\alpha_1^2}} = \frac{(7-\mu)\omega_2 - (\mu+5)\omega_1}{4\sqrt{2\omega_1\omega_2}}$$
Denote $N = \frac{(7-\mu)\omega_2 - (\mu+5)\omega_1}{4\sqrt{2\omega_1\omega_2}}$. The solution of Equation (A97) is given by $\alpha = \frac{1}{2} - \frac{N}{2\sqrt{N^2+4}}$. $G_{Alice}$ increases with $\alpha_1 \in (0, \alpha)$, and decreases with $\alpha_1 \in (\alpha, 1)$. Thus, if $\alpha \le H$, $G_{Alice}$ attains its maximum at $\alpha_1 = H$. If $\alpha \ge G$, $G_{Alice}$ attains its maximum at $\alpha_1 = G$. If $H < \alpha < G$, $G_{Alice}$ attains its maximum at $\alpha_1 = \alpha$. The maximum of $G_{Alice}$ is achieved when $x_2^* < 0$ and $x_4^* > 1$, and is denoted by $G_{Alice}^6$, where the corresponding point is denoted by $p_6$.
Case 7. For $0 < x_2^* < 1 < x_4^*$, i.e., $\frac{2\sqrt{\alpha_1\omega_1} - \sqrt{3\alpha_2\omega_2}}{(2\sqrt{3}+1)\sqrt{\omega_1\omega_2}} > 1$ and $0 < \frac{2\sqrt{\alpha_1\omega_1} - \sqrt{(\mu+1)\alpha_2\omega_2}}{(2\sqrt{\mu+1}+1)\sqrt{\omega_1\omega_2}} < 1$, we get
$$R < \alpha_1 < Q$$
where $Q = \frac{\left(1 + \sqrt{\frac{\omega_1}{\omega_2}}\,(2\sqrt{\mu+1}+1)\right)^2}{\left(1 + \sqrt{\frac{\omega_1}{\omega_2}}\,(2\sqrt{\mu+1}+1)\right)^2 + \frac{4\omega_1}{\omega_2(\mu+1)}}$ and $R = \max(H, G)$. If $G_{Bob}^3 \ge G_{Bob}^4$, then $\beta = 1$ and $\gamma = 0$, and the maximum of $G_{Alice}$ is achieved when $0 < x_2^* < 1 < x_4^*$; it is denoted by $G_{Alice}^7$, where the corresponding point is denoted by $p_7$.
Case 8. For $R < \alpha_1 < Q$ and $G_{Bob}^3 < G_{Bob}^4$, we have $\beta = 0$ and $\gamma = x_2^*$, and the maximum of $G_{Alice}$ is achieved when $0 < x_2^* < 1 < x_4^*$; it is denoted by $G_{Alice}^8$, where the corresponding point is denoted by $p_8$.
Case 9. For $1 \le x_2^* < 2$, i.e., $1 \le \frac{2\sqrt{\alpha_1\omega_1} - \sqrt{(\mu+1)\alpha_2\omega_2}}{(2\sqrt{\mu+1}+1)\sqrt{\omega_1\omega_2}} < 2$, we get
$$Q \le \alpha_1 < E$$
In this case, $\beta = x_2^* - 1$ and $\gamma = 1$. From Equation (A76), we get
$$G_{Alice} = \frac{4\left(\sqrt{\alpha_1\omega_1} + \sqrt{x_2^*\alpha_2\omega_2}\right)^2}{\omega_1 + \omega_2 x_2^*} - 2\alpha_2 - 2\alpha_1 - \alpha_2 x_2^* = \left(2\sqrt{2}(\mu+1) + 3\sqrt{2(\mu+1)} + 8\right)\left(\sqrt{\frac{\omega_1\alpha_1\alpha_2}{\omega_2}} - \frac{\omega_1\alpha_2}{\omega_2}\right) - 2\alpha_1 - 2\alpha_2 + \frac{\omega_1\alpha_2}{\omega_2}$$
The derivative of $G_{Alice}$ with respect to $\alpha_1$ is given by
$$\frac{dG_{Alice}}{d\alpha_1} = \left(2\sqrt{2}(\mu+1) + 3\sqrt{2(\mu+1)} + 8\right)\left(\frac{\omega_1(1-2\alpha_1)}{8\omega_2\sqrt{\alpha_1\alpha_2}} + \frac{\omega_1}{2\omega_2}\right) - \frac{\omega_1}{2\omega_2} - 1$$
The second derivative of $G_{Alice}$ with respect to $\alpha_1$ is given by
$$\frac{d^2G_{Alice}}{d\alpha_1^2} = -\frac{2\sqrt{2}\,\omega_1(\mu+1) + 3\sqrt{2\omega_1(\mu+1)} + 8\omega_1}{32\,\omega_2\,(\alpha_1-\alpha_1^2)^{3/2}}$$
From Equation (A102), we have $\frac{d^2G_{Alice}}{d\alpha_1^2} < 0$, which implies that $\frac{dG_{Alice}}{d\alpha_1} = 0$ has at most one solution when $0 \le \alpha_1 \le 1$. From $\frac{dG_{Alice}}{d\alpha_1} = 0$, we get
$$\frac{1-2\alpha_1}{\sqrt{\alpha_1-\alpha_1^2}} = \frac{(4\omega_2 - 14\omega_1)\sqrt{\mu+1} - 2\omega_1(4\mu+10)}{\left(4\mu+10+8\sqrt{2(\mu+1)}\right)\sqrt{\omega_1\omega_2}}$$
Denote $T = \frac{(4\omega_2 - 14\omega_1)\sqrt{\mu+1} - 2\omega_1(4\mu+10)}{\left(4\mu+10+8\sqrt{2(\mu+1)}\right)\sqrt{\omega_1\omega_2}}$. The solution of Equation (A103) is given by $\alpha = \frac{1}{2} - \frac{T}{2\sqrt{T^2+4}}$. $G_{Alice}$ increases with $\alpha_1 \in (0, \alpha)$, and decreases with $\alpha_1 \in (\alpha, 1)$. Thus, if $\alpha \le Q$, $G_{Alice}$ attains its maximum at $\alpha_1 = Q$. If $\alpha \ge E$, $G_{Alice}$ attains its maximum at $\alpha_1 = E$. If $Q < \alpha < E$, $G_{Alice}$ attains its maximum at $\alpha_1 = \alpha$. The maximum value of $G_{Alice}$ is achieved when $1 \le x_2^* < 2$, and is denoted by $G_{Alice}^9$, where the corresponding point is denoted by $p_9$.
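Every case above ends with the same selection rule: $G_{Alice}$ is concave in $\alpha_1$, so on a feasible interval its maximum sits at the unconstrained stationary point clamped to the interval. A sketch of that rule (ours, with hypothetical names):

```python
# Concave G_Alice: the maximizer over [lo, hi] is the stationary point
# alpha clamped to the interval (lo when alpha <= lo, hi when alpha >= hi).
def constrained_argmax(alpha, lo, hi):
    return min(max(alpha, lo), hi)
```

For example, Case 9 reads `constrained_argmax(alpha, Q, E)`: the maximum is attained at $Q$, at $E$, or at $\alpha$ itself.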

References

1. Streltsov, A.; Adesso, G.; Plenio, M.B. Colloquium: Quantum coherence as a resource. Rev. Mod. Phys. 2017, 89, 041003.
2. Bell, J.S. On the Einstein Podolsky Rosen paradox. Physics 1964, 1, 195.
3. Clauser, J.F.; Horne, M.A.; Shimony, A.; Holt, R.A. Proposed experiment to test local hidden-variable theories. Phys. Rev. Lett. 1969, 23, 880–884.
4. Landsburg, S.E. Quantum Game Theory. Not. Am. Math. Soc. 2004, 51, 394–399.
5. Benjamin, S.C.; Hayden, P.M. Multi-player quantum games. Phys. Rev. A 2001, 64, 030301.
6. Flitney, A.P.; Schlosshauer, M.; Schmid, C.; Laskowski, W.; Hollenberg, L.C.L. Equivalence between Bell inequalities and quantum minority games. Phys. Lett. A 2008, 373, 521–524.
7. Murta, G.; Ramanathan, R.; Moller, N.; Cunha, M.T. Quantum bounds on multiplayer linear games and device-independent witness of genuine tripartite entanglement. Phys. Rev. A 2016, 93, 022305.
8. Luo, M.X. A nonlocal game for witnessing quantum networks. NPJ Quantum Inf. 2019, 5, 91.
9. Cleve, R.; Hoyer, P.; Toner, B.; Watrous, J. Consequences and limits of nonlocal strategies. In Proceedings of the 19th IEEE Conference on Computational Complexity (CCC 2004), Amherst, MA, USA, 21–24 June 2004; pp. 236–249.
10. Junge, M.; Palazuelos, C. Large violation of Bell inequalities with low entanglement. Commun. Math. Phys. 2011, 11, 1–52.
11. Briët, J.; Vidick, T. Explicit lower and upper bounds on the entangled value of multiplayer XOR games. Commun. Math. Phys. 2013, 321, 181–207.
12. Regev, O.; Vidick, T. Quantum XOR games. ACM Trans. Comput. Theory 2015, 7, 1–43.
13. Kempe, J.; Regev, O.; Toner, B. Unique games with entangled provers are easy. SIAM J. Comput. 2010, 39, 3207–3229.
14. Dinur, I.; Steurer, D.; Vidick, T. A parallel repetition theorem for entangled projection games. Comput. Complex. 2013, 24, 201–254.
15. Yuen, H. A Parallel Repetition Theorem for All Entangled Games. In Proceedings of the 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016), Rome, Italy, 11–15 July 2016; Volume 77.
16. Jain, R.; Pereszlenyi, A.; Yao, P. A parallel repetition theorem for entangled two-player one-round games under product distributions. In Proceedings of the 2014 IEEE 29th Conference on Computational Complexity (CCC 2014), Vancouver, BC, Canada, 11–13 June 2014; pp. 209–216.
17. Brakerski, Z.; Kalai, Y.T. A Parallel Repetition Theorem for Leakage Resilience. In Proceedings of the 9th International Conference on Theory of Cryptography (TCC 2012), Taormina, Sicily, Italy, 19–21 March 2012; Springer: Berlin, Germany, 2012; pp. 248–265.
18. Ekert, A.K. Quantum cryptography based on Bell's theorem. Phys. Rev. Lett. 1991, 67, 661.
19. Lim, C.C.W.; Portmann, C.; Tomamichel, M.; Renner, R.; Gisin, N. Device-Independent Quantum Key Distribution with Local Bell Test. Phys. Rev. X 2013, 3, 031006.
20. Vazirani, U.; Vidick, T. Fully device-independent quantum key distribution. Phys. Rev. Lett. 2014, 113, 140501.
21. Werner, R.F. Optimal cloning of pure states. Phys. Rev. A 1998, 58, 1827.
22. Marinatto, L.; Weber, T. A quantum approach to static games of complete information. Phys. Lett. A 2000, 272, 291–303.
23. Eisert, J.; Wilkens, M.; Lewenstein, M. Quantum games and quantum strategies. Phys. Rev. Lett. 1999, 83, 3077.
24. Sekiguchi, Y.; Sakahara, K.; Sato, T. Uniqueness of Nash equilibria in a quantum Cournot duopoly game. J. Phys. A Math. Theor. 2010, 43, 145303.
25. Brassard, G.; Broadbent, A.; Tapp, A. Recasting Mermin's multi-player game into the framework of pseudo-telepathy. Quantum Inf. Comput. 2005, 5, 538–550.
26. Meyer, D.A. Quantum strategies. Phys. Rev. Lett. 1999, 82, 1052.
27. Zhang, P.; Zhou, X.-Q.; Wang, Y.-L.; Shadbolt, P.J.; Zhang, Y.-S.; Gao, H.; Li, F.-L.; O'Brien, J.L. Quantum gambling based on Nash equilibrium. NPJ Quantum Inf. 2017, 3, 24.
28. Almeida, M.L.; Bancal, J.D.; Brunner, N.; Acin, A.; Gisin, N.; Pironio, S. Guess your neighbors input: A multipartite nonlocal game with no quantum advantage. Phys. Rev. Lett. 2010, 104, 230404.
29. Fudenberg, D.; Tirole, J. Game Theory; MIT Press: London, UK, 1995.
30. Daskalakis, C.; Goldberg, P.W.; Papadimitriou, C.H. The complexity of computing a Nash equilibrium. Commun. ACM 2009, 52, 89–97.
Figure 1. A schematic tripartite dynamic zero-sum classical game $\mathcal{G}$. Alice puts a ball into one of three boxes $A$, $B$ and $C$. She then sends $C$ to Rice, and sends $B$ to Bob. Rice can choose to open $C$ or let Alice open $A$. If Rice opens $C$ and there is no ball in it, it is Bob's turn to open $B$ or ask Alice to open $A$.
Figure 2. The classical game tree. A colored rectangle represents the black box with the ball, while an uncolored rectangle represents an entity, i.e., Alice, Bob or Rice. A diamond represents an operation. N means the ball is not found, and Y means the ball is found.
Figure 3. The average gains of the three parties in the first classical game $\mathcal{G}_1$. In this simulation, we assume $\eta \le 100$ and $\epsilon = 10^{-5}$. The gain of Alice is strictly larger than the gains of Bob and Rice for any $\eta \ge 1$.
Figure 4. The average gains of the three parties in the second classical game $\mathcal{G}_2$. Here, $\mu \le 100$. $G_{Alice}$, $G_{Bob}$ and $G_{Rice}$ have no common intersection. Moreover, when $\mu > \mu_1$, $G_{Bob}$ and $G_{Rice}$ coincide, but do not intersect $G_{Alice}$.
Figure 5. The schematic model of the tripartite dynamic zero-sum quantum game. Here, the three parties use a single photon to complete the game, while the classical game in Figure 1 uses a ball.
Figure 6. The quantum game tree. A colored rectangle represents the black box with the photon, while an uncolored rectangle represents the entity Alice. A diamond represents an operation. N means the photon is not found, and Y means the photon is found.
Figure 7. The average gains of the three parties depending on $\eta$ and $\omega_1$. (a) The average gain of Alice. (b) The average gain of Bob. (c) The average gain of Rice.
Figure 8. (a) The average gains of Alice, Bob and Rice depending on $\eta$. For each $\eta$, $\omega_1$ is set to the value that minimizes the deviation of the game from a fair game. (b) Degree of deviation. Here, we express the degree of deviation from the fair game as the sum of the squares of each player's average gain.
Figure 9. The average gains of the three parties depending on $\mu$ and $\omega_1$.
Figure 10. (a) For each $\mu$, the degree to which the game deviates from the fair game. Here, we express the degree of deviation as the sum of the squares of each player's average gain. (b) The relationship between the appropriate $\omega_1$ and $\mu$.
Figure A1. The values of $\frac{3}{2\eta+5} - p_3^*$ depending on $\eta$.
Table 1. The gain settings of the players in the first classical game $\mathcal{G}_1$. Here, $g_c$ ($g_b$, $g_a$) denotes the gain of Rice (Bob, Alice). $R_{succ}(C)$ means that Rice finds the ball after opening box $C$. $B_{succ}(B)$ means that Bob finds the ball after opening box $B$. $R/B_{fail}(\cdot)$ means that neither Rice nor Bob finds the ball. $R/B_{succ}(A)$ means that either Rice or Bob succeeds by opening box $A$. $\eta \ge 1$ in the first classical game.

| Cases∖Gains | $g_c$ | $g_b$ | $g_a$ |
| $R_{succ}(C)$ | 2 | −1 | −1 |
| $B_{succ}(B)$ | −1 | 2 | −1 |
| $R/B_{fail}(\cdot)$ | −1 | −1 | 2 |
| $R/B_{succ}(A)$ | $\eta$ | $1-\eta$ | −1 |
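Since the game is zero-sum, every outcome row of Table 1 must sum to zero. The quick check below is ours; the last row is read as $\eta$, $1-\eta$, $-1$.

```python
# Each outcome row (g_c, g_b, g_a) of Table 1 sums to zero.
def table1_rows(eta):
    return [(2, -1, -1),         # R_succ(C)
            (-1, 2, -1),         # B_succ(B)
            (-1, -1, 2),         # R/B_fail
            (eta, 1 - eta, -1)]  # R/B_succ(A)

assert all(sum(row) == 0 for row in table1_rows(3.0))
```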
Table 2. The values of $P_B$ and $P_C$. $P_{B(C)}$ depends on the different cases in the game, where $X_1 = \frac{\eta+8-\sqrt{\eta^2+4\eta+52}}{2\eta+2}$. Here, $\Delta_i$ denotes the intervals given by $\Delta_1 = [0, \frac{3}{2\eta+5})$, $\Delta_2 = [\frac{3}{2\eta+5}, X_1)$, $\Delta_3 = [X_1, \frac{3}{\eta+1}]$ and $\Delta_4 = (\frac{3}{\eta+1}, 1]$.

| | $1 \le \eta \le 2$ | | $\eta > 2$ | | |
| | $P_1 \in \Delta_1$ | $P_1 \in \Delta_2$ | $P_1 \in \Delta_2$ | $P_1 \in \Delta_3$ | $P_1 \in \Delta_4$ |
| $P_B$ | 1 | $X_0$ | $\frac{3P_2-(\eta+1)P_1P_2}{P_1(\eta+1)(1-P_2)}$ | $X_0$ | $[0,1]$ |
| $P_C$ | 1 | 0 | 1 | 0 | 0 |
Table 3. $G_{Alice}^m$ denotes the maximum of $G_{Alice}$, and $P_{\max}$ denotes the corresponding point of $P_1$.

| $P_1$ | $0 \le P_1 < \frac{3}{2\eta+5}$ | $\frac{3}{2\eta+5} \le P_1 \le 1$ |
| $P_{\max}$ | $\frac{3}{2\eta+5} - \epsilon$ | $\frac{3}{2\eta+5}$ |
| $G_{Alice}^m$ | $\frac{-\eta^2+4\eta+23}{(2\eta+5)^2} + O(\epsilon)$ | $\frac{\eta+1}{2\eta+5}$ |
Table 4. The average gains of the players in the first classical game $\mathcal{G}_1$.

| Gain∖$\eta$ | $1 \le \eta < 2$ | $\eta \ge 2$ |
| $G_{Alice}$ | $\frac{-\eta^2+4\eta+23}{(2\eta+5)^2} + O(\epsilon)$ | $\frac{1+\eta}{2\eta+5}$ |
| $G_{Bob}$ | $-\frac{\eta^2+5\eta+13}{(2\eta+5)^2} + O(\epsilon)$ | $\frac{1-2\eta}{2\eta+5}$ |
| $G_{Rice}$ | $\frac{(\eta-2)(2\eta+5)}{(2\eta+5)^2} + O(\epsilon)$ | $\frac{\eta-2}{2\eta+5}$ |
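As a zero-sum sanity check, the three entries in each column of Table 4 sum to zero once the $O(\epsilon)$ terms are dropped. The closed forms below are our reading of the table.

```python
# Average gains (G_Alice, G_Bob, G_Rice) in the first classical game,
# as read from Table 4 (O(eps) terms dropped).
def gains_g1(eta):
    if eta < 2:  # 1 <= eta < 2
        d = (2 * eta + 5) ** 2
        return ((-eta**2 + 4 * eta + 23) / d,
                -(eta**2 + 5 * eta + 13) / d,
                (eta - 2) * (2 * eta + 5) / d)
    d = 2 * eta + 5
    return ((1 + eta) / d, (1 - 2 * eta) / d, (eta - 2) / d)

for eta in (1.0, 1.5, 2.0, 10.0):
    assert abs(sum(gains_g1(eta))) < 1e-12
```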
Table 5. The gain settings of the second game $\mathcal{G}_2$. Here, $g_c$ ($g_b$, $g_a$) denotes the gain of Rice (Bob, Alice). We assume that $\mu \ge 2$ in this second game.

| Cases∖Gains | $g_c$ | $g_b$ | $g_a$ |
| $R_{succ}(C)$ | $\mu$ | −1 | $1-\mu$ |
| $B_{succ}(B)$ | −1 | 2 | −1 |
| $R/B_{fail}(\cdot)$ | −1 | −1 | 2 |
| $R/B_{succ}(A)$ | 1 | 1 | −2 |
Table 6. The average gains of the players in the second classical tripartite game $\mathcal{G}_2$.

| Gain∖$\mu$ | $2 \le \mu \le \frac{41+\sqrt{1345}}{14}$ | $\mu > \frac{41+\sqrt{1345}}{14}$ |
| $G_{Alice}$ | $\frac{54-14\mu}{49}$ | $\frac{6-2\mu}{\mu+5}$ |
| $G_{Bob}$ | $-\frac{19}{49}$ | $\frac{\mu-3}{\mu+5}$ |
| $G_{Rice}$ | $\frac{2\mu-5}{7}$ | $\frac{\mu-3}{\mu+5}$ |
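The threshold $\frac{41+\sqrt{1345}}{14}$ in Table 6 is exactly where Alice's two closed forms meet (the larger root of $7\mu^2 - 41\mu + 12 = 0$). A numerical check, using our reading of the table entries:

```python
import math

# mu1 = (41 + sqrt(1345))/14 solves (54 - 14*mu)/49 == (6 - 2*mu)/(mu + 5),
# i.e., Alice's gain is continuous at the threshold; each column sums to zero.
mu1 = (41 + math.sqrt(1345)) / 14
assert abs((54 - 14 * mu1) / 49 - (6 - 2 * mu1) / (mu1 + 5)) < 1e-12

for mu in (2.0, 4.0, mu1):  # first column: G_Alice + G_Bob + G_Rice = 0
    assert abs((54 - 14 * mu) / 49 - 19 / 49 + (2 * mu - 5) / 7) < 1e-12
for mu in (6.0, 20.0):      # second column: G_Bob = G_Rice = (mu-3)/(mu+5)
    assert abs((6 - 2 * mu) / (mu + 5) + 2 * (mu - 3) / (mu + 5)) < 1e-12
```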
Table 7. The values of $\beta^*$ and $\gamma^*$ depending on $x_i^*$ in the first quantum game such that $G_{Rice}$ and $G_{Bob}$ achieve their maximums, where $\beta^*$ and $\gamma^*$ are the probabilities of Bob opening box $B$ and Rice opening box $C$, respectively. Here, $x_1^*$ and $x_2^*$ are given in Equation (31), and $x_3^*$ and $x_4^*$ are given in Equation (34).

| Cases | $x_2^* \le 0$ | $0 < x_2^* \le 1 \le x_4^*$ | $1 < x_2^*$ | $1 < x_4^* < 2$ | $x_4^* \ge 2$ |
| $\beta^*$ | 0 | 0 | 0 | $x_4^*-1$ | 1 |
| $\gamma^*$ | 0 | $x_2^*$ | 1 | 1 | 1 |
Table 8. $G_{Alice}^m$ denotes the maximum of $G_{Alice}$, where the corresponding point is denoted by $p_{\max}$.

| Cases | $x_2^* \le 0$ | $0 < x_2^* \le 1 \le x_4^*$ | $1 < x_2^*$ | $1 < x_4^* < 2$ | $x_4^* \ge 2$ |
| $p_{\max}$ | $p_1$ | $p_2$ | $p_3$ | $p_4$ | $p_5$ |
| $G_{Alice}^m$ | $G_{Alice}^1$ | $G_{Alice}^2$ | $G_{Alice}^3$ | $G_{Alice}^4$ | $G_{Alice}^5$ |
Cheng, H.-M.; Luo, M.-X. Tripartite Dynamic Zero-Sum Quantum Games. Entropy 2021, 23, 154. https://doi.org/10.3390/e23020154
