Article

A Two-Player Resource-Sharing Game with Asymmetric Information

Department of Electrical and Computer Engineering, University of Southern California, Los Angeles, CA 90089-2565, USA
*
Author to whom correspondence should be addressed.
Games 2023, 14(5), 61; https://doi.org/10.3390/g14050061
Submission received: 11 August 2023 / Revised: 12 September 2023 / Accepted: 13 September 2023 / Published: 17 September 2023
(This article belongs to the Special Issue Applications of Game Theory with Mathematical Methods)

Abstract

This paper considers a two-player game where each player chooses a resource from a finite collection of options. Each resource brings a random reward. Both players have statistical information regarding the rewards of each resource. Additionally, there exists an information asymmetry where each player has knowledge of the reward realizations of different subsets of the resources. If both players choose the same resource, the reward is divided equally between them, whereas if they choose different resources, each player gains the full reward of the resource. We first implement the iterative best response algorithm to find an ϵ-approximate Nash equilibrium for this game. This method of finding a Nash equilibrium may not be desirable when players do not trust each other and place no assumptions on the incentives of the opponent. To handle this case, we solve the problem of maximizing the worst-case expected utility of the first player. The solution leads to counter-intuitive insights in certain special cases. To solve the general version of the problem, we develop an efficient algorithmic solution that combines online convex optimization and the drift-plus-penalty technique.

1. Introduction

We consider the following game with two players, A and B. There are n resources, each denoted by an integer between 1 and n. Each player selects a resource without knowledge of the other player's selection. The state of the game is described by the random vector $\mathbf{W} = (W_1, W_2, \ldots, W_n)$, where $W_k$ is the reward random variable of resource k. We assume the $W_k$, $1 \le k \le n$, to be independent random variables taking non-negative real values. If both players choose the same resource k, each gets a utility of $W_k/2$. If they choose different resources k and l, they receive utilities of $W_k$ and $W_l$, respectively. It is assumed that the mean and the variance of $W_k$ exist and are finite for each $1 \le k \le n$. Both players know the distribution of $\mathbf{W}$. Our formulation allows for an information asymmetry between the players. In particular, $\{1, 2, \ldots, n\}$ can be partitioned into four sets $\{\mathcal{A}, \mathcal{B}, \mathcal{C}, \mathcal{AB}\}$, where only player A observes the realizations of $W_k$ for $k \in \mathcal{A}$, only player B observes the realizations of $W_k$ for $k \in \mathcal{B}$, neither player observes the realizations of $W_k$ for $k \in \mathcal{C}$, and both players observe the realizations of $W_k$ for $k \in \mathcal{AB}$.
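As a concrete illustration of this sharing rule, the utilities for one realization of the rewards can be computed as follows (a minimal sketch; the function name and list-based layout are our own):

```python
def round_utilities(w, choice_a, choice_b):
    """Utilities of players A and B for one realization w of the rewards.

    If both players pick the same resource, they split its reward equally;
    otherwise each receives the full reward of its own choice.
    """
    if choice_a == choice_b:
        return w[choice_a] / 2, w[choice_b] / 2
    return w[choice_a], w[choice_b]

# Three resources with realized rewards 4, 2, 6.
w = [4.0, 2.0, 6.0]
print(round_utilities(w, 2, 2))  # both pick resource 2 -> (3.0, 3.0)
print(round_utilities(w, 0, 2))  # different picks -> (4.0, 6.0)
```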
This game can be used to model different real-world scenarios where the agents have asymmetric information regarding the involved information structure. One classic example is the problem of Multiple-Access Control (MAC) in communication systems. Here, communication channels are accessed by multiple users, and the data rate of a channel is shared amongst the users who select it [1]. A channel can be shared using Time Division Multiple Access (TDMA) or Frequency Division Multiple Access (FDMA), where in TDMA, the channel is time-shared among the users [2,3], whereas in FDMA, the channel is frequency-shared among the users [4]. In both cases, the total data rate supported by the channel can be considered the utility of the channel. The problem of information asymmetry arises since a user might have precise information regarding the total data rate offered by some channels but not others, and the known channels can be different for different users. On the other hand, the users in such a system cannot be trusted since the system may have malicious users (for instance, jammers) who focus on reducing the data rate available to genuine users.
Modified versions of this game apply to problems in economics. For instance, consider a firm that chooses a market to enter from a pool of market options. The chosen market may also be chosen by another firm. The reward of a market is the revenue it brings. Assume a simplified model where there exists a total revenue for each market, and the total revenue is divided equally among the firms entering the market. A reward known to all firms can be considered public information, while a reward known only to one firm is private information of that firm.
The game defined above can be viewed as a stochastic version of the class of games defined in [5], which are resource-sharing games, also known as congestion games. In resource-sharing games, players compete for a finite collection of resources. In a single turn of the game, each player is allowed to select a subset of the collection of resources, where the allowed subsets make up the action space of the player. Each resource offers a reward to each player who selected the particular resource, where the reward offered depends on the number of players who selected it. The relationship between the reward offered to a player by a resource and the number of users selecting it is captured by the reward function of the resource. A player’s utility is equal to the sum of the rewards offered by the resources in the subset selected by the player. In [5], it is established that the above game has a pure-strategy (deterministic) Nash equilibrium.
Although in the classical setting, these games ignore the stochastic nature of the rewards offered by the resources, the idea of resource-sharing games has been extended to different stochastic versions [6,7]. Versions of the game with information asymmetry have been considered through the work of [8] in the context of Bayesian games, which considers the information design problem for resource-sharing with uncertainty. Similar Bayesian games have also been considered in [9,10]. It should be noted that in general resource-sharing games, no conditions are placed on the reward functions of the resources. The special case where the reward functions are non-decreasing in the number of players selecting the resource is called a cost-sharing game [11]. These games are typically treated as games where a cost is minimized rather than a utility being maximized. In fair cost-sharing games, the cost of a resource is divided equally among the players selecting the resource. We consider a fair reward allocation model, where the reward of a resource is equally shared among the players selecting the resource. It should be noted that in this model, the players have opposite incentives compared to a fair-cost sharing model.
The work on resource-sharing games assumes that the players either cooperate or have the incentive to maximize a private or a social utility. It is interesting to consider a stochastic version of the game with asymmetric information between players who do not necessarily trust each other and who place no assumptions on the incentives of the opponents. In this context, the players have no signaling or external feedback and take actions based only on their personal knowledge of the reward realizations for a subset of the resource options. In this paper, we consider the above problem and limit our attention to the two-player singleton case, where each player can choose only one resource.
In the first part of the paper, we provide an iterative best response algorithm to find an ϵ-approximate Nash equilibrium of the system. In the second part, we solve the problem of maximizing the worst-case expected utility of the first player. We solve the problem in two cases. The first case is when neither player knows the realizations of the reward random variables of any of the resources, in which case an explicit solution can be constructed. This case yields a counter-intuitive solution that provides insight into the problem. One such insight is that, while it is always optimal to choose from a subset of resources with the highest average rewards, within that subset, one chooses the higher-valued rewards with lower probability. For the second case, we solve the general version of the problem by developing an algorithm that leverages online optimization techniques [12,13] and the drift-plus-penalty method [14]. This algorithm generates a mixture of $O(1/\varepsilon^2)$ pure strategies which, when used as an equiprobable mixture, provides a utility within ε of optimality on average. Below, we summarize our major contributions.
  • We consider the problem of a two-player singleton stochastic resource-sharing game with asymmetric information. We first provide an iterative best response algorithm to find an ϵ-approximate Nash equilibrium of the system. This equilibrium analysis uses potential game concepts.
  • When the players do not trust each other and place no assumptions on the incentives of the opponent, we solve the problem of maximizing the worst-case expected utility of the first player using a novel algorithm that leverages techniques from online optimization and the drift-plus-penalty method. The algorithm developed can also be used to solve the general unconstrained problem of finding the randomized decision $\alpha \in \{1, 2, \ldots, n\}$ that maximizes $E\{h(x; \Theta)\}$, where $x \in \mathbb{R}^n$ with $x_k = E\{\Gamma_k 1_{\{\alpha = k\}}\}$, $\Theta \in \mathbb{R}^m$ and $\Gamma \in \mathbb{R}^n$ are non-negative random vectors with finite second moments, and h is a concave function such that $\tilde{h}(x) = E\{h(x; \Theta)\}$ is Lipschitz continuous, entry-wise non-decreasing, and has bounded subgradients.
  • We show that our algorithm uses a mixture of only $O(1/\varepsilon^2)$ pure strategies via a detailed analysis of the sample path of the related virtual queues (our preliminary work on this algorithm used a mixture of $O(1/\varepsilon^3)$ pure strategies). Virtual queues are also used for constrained online convex optimization in [13], but our problem structure is different and requires a different and more involved treatment.

1.1. Background on Resource-Sharing Games

The classical resource-sharing game defined in [5] is a tuple $(M, N, T, r)$, where M is a set of m players, N is a set of n resources, $T = T_1 \times T_2 \times \cdots \times T_m$, where $T_j$ is the set of possible actions of player j (a subset of $2^N$), and $r = (r_1, r_2, \ldots, r_n)$, where $r_i : \mathbb{N}_0 \to \mathbb{R}$ is the reward function of resource i. Here, we use the notation $\mathbb{N}_0 = \mathbb{N} \cup \{0\}$. Each player has complete knowledge of the tuple $(M, N, T, r)$ but no knowledge of the actions chosen by the other players. For an action profile $a = (a_1, a_2, \ldots, a_m) \in T$, the count function $\#$ is a function from $N \times T$ to $\mathbb{N}_0$ with $\#(i, a) = \sum_{k=1}^{m} 1_{\{i \in a_k\}}$. In other words, $\#(i, a)$ is the number of players choosing resource i under action profile a. We call the quantity $r_i(\#(i, a))$ the per-player reward of resource i under action profile a. The utility $u_j$ of player j is a function from T to $\mathbb{R}$ with $u_j(a) = \sum_{i=1}^{n} 1_{\{i \in a_j\}} r_i(\#(i, a))$. In other words, $u_j(a)$ is the sum of the per-player rewards of the resources chosen by player j under action profile a. Resource-sharing games fall under the general category of potential games [15], the class of games for which the change in the reward of any player resulting from a change in their strategy is captured by the change in a global potential function.
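The count function and player utilities of this classical game can be sketched directly (a hypothetical rendering; the dict-of-functions interface is ours):

```python
def utilities(action_profile, reward_fns):
    """Player utilities in the classical resource-sharing game.

    action_profile: list of sets; action_profile[j] is the set of
      resources chosen by player j.
    reward_fns: dict mapping resource i to its reward function r_i,
      called with the number of players that chose i.
    """
    # count function #(i, a): how many players chose resource i
    counts = {}
    for a_j in action_profile:
        for i in a_j:
            counts[i] = counts.get(i, 0) + 1
    # u_j(a): sum of per-player rewards over player j's chosen resources
    return [sum(reward_fns[i](counts[i]) for i in a_j)
            for a_j in action_profile]

# Two players share resource 0 (reward 6, split as 6/k), and player 1
# also takes resource 1 alone (reward 4).
r = {0: lambda k: 6 / k, 1: lambda k: 4 / k}
print(utilities([{0}, {0, 1}], r))  # -> [3.0, 7.0]
```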
Many variations of the resource-sharing game have been studied [16]. Weighted resource-sharing games [17], games with player-dependent reward functions [18], and games with resources having preferences over players [19] are some of the extensions. Singleton games, where each player is allowed to choose only one resource, have also been explored explicitly in the literature [20,21]. Some of the extensions of the classical resource-sharing game possess a pure Nash equilibrium in the singleton case. Two examples are the games with player-specific reward functions for a resource [18] and the games with priorities where the resources have preferences over the players [19].
Resource-sharing games have been extended to several stochastic versions. For instance, ref.  [6] considers the selfish routing problem with risk-averse players in a network with stochastic delays. The work of [7] considers two scenarios where, in the first scenario, each player participates in the game with a certain probability, and in the second scenario, the reward functions are stochastic. The problem of information asymmetry in resource-sharing games has been addressed through the work of [8,9,10,22]. The work of [22] considers a network congestion game where the players have different information sets regarding the edges of the network. Further, ref. [8] considers a scenario with a single random state θ , which determines the reward functions. The realization of θ is known to a game manager who strategically provides recommendations (signaling) to the players to minimize the social cost. An information asymmetry arises among the players in this case due to the actions of the game manager during private signaling, where the game manager provides player-specific recommendations.
Resource-sharing games appear in a variety of applications such as service chain composition [23], congestion control [24], network design [25], load balancing networks [26,27], resource sharing in wireless networks [28], spectrum sharing [29], radio access selection [30], non-orthogonal multiple access [31,32], network selection [33,34], and migration of species [35].
Our formulation differs from the literature on resource-sharing games since we consider a scenario that is difficult to analyze using the standard equilibrium-based approaches. This is because the players do not trust each other and place no assumptions on the incentives of the opponents, and they act in the absence of a signaling mechanism or external feedback, using only their knowledge of the reward random variables. This motivates our formulation as a one-shot problem tackled via worst-case expected utility maximization.

1.2. Notation

We use calligraphic letters to denote sets. Vectors and matrices are denoted by boldface characters. For integers n and m, we denote by $[n:m]$ the inclusive set of integers between n and m. Given a vector $w \in \mathbb{R}^m$, $w_k$ denotes the k-th element of $w$; $w_{k:l}$ for $l \ge k$ represents the $(l-k+1)$-dimensional sub-vector $(w_k, w_{k+1}, \ldots, w_l)$ of $w$; for a subset $S$ of integers from 1 to n, $\{w_k; k \in S\}$ represents the sub-vector of $w$ with indices in $S$. For $z \in \mathbb{R}^m$, we use $\|z\|_2$ to denote the standard Euclidean (L2) norm of $z$. For a function $f : \mathbb{R}^m \to \mathbb{R}$ and $z \in \mathbb{R}^m$, we use $\nabla f(z) = (\nabla_1 f(z), \nabla_2 f(z), \ldots, \nabla_m f(z))$ to denote a subgradient of f at z.

2. Materials and Methods

The code used for the simulations is implemented in Python in the notebook at https://rb.gy/wvt33, accessed on 10 August 2023.

3. Formulation

Denote $X = \{W_k; k \in \mathcal{A}\}$, $Y = \{W_k; k \in \mathcal{B}\}$, $Z = \{W_k; k \in \mathcal{AB}\}$, and $V = \{W_k; k \in \mathcal{C}\}$. Recall that X is known only to player A, Y is known only to player B, Z is known to both players, and V is known to neither. Define $\mathcal{A}^c = [1:n] \setminus \mathcal{A}$ and $\mathcal{B}^c = [1:n] \setminus \mathcal{B}$. Let $|\mathcal{A}| = a$, $|\mathcal{B}| = b$, $|\mathcal{C}| = c$, and $|\mathcal{AB}| = d$, so that $a + b + c + d = n$. Without loss of generality, we assume $\mathcal{A} = [1:a]$, $\mathcal{B} = [a+1:a+b]$, $\mathcal{C} = [a+b+1:a+b+c]$, and $\mathcal{AB} = [a+b+c+1:n]$.
Let $R_C(g_A, g_B)$ be the random variable representing the utility of player $C \in \{A, B\}$, given that player A uses strategy $g_A$ and player B uses strategy $g_B$. General strategies for players A and B can be represented by the Borel-measurable functions
$$g_A : [0,1) \times \mathbb{R}_{\ge 0}^{a+d} \to [1:n],$$
$$g_B : [0,1) \times \mathbb{R}_{\ge 0}^{b+d} \to [1:n],$$
where
$$\alpha_A = g_A(U_A, X, Z), \qquad \alpha_B = g_B(U_B, Y, Z)$$
are the resources chosen by players A and B, respectively. Here, $U_A$ and $U_B$ are independent randomization variables uniformly distributed in $[0,1)$ and independent of $\mathbf{W}$. A pure strategy for player A is a function $g_A$ that does not depend on $U_A$, whereas a mixed strategy is a function $g_A$ that depends on $U_A$; hence, we drop the randomization variable when writing a pure strategy. Pure and mixed strategies for player B are defined similarly. Let $S_A$ and $S_B$ denote the sets of all possible strategies for players A and B, respectively.
It turns out that our analysis is simplified when Z is fixed. Fixing Z does not affect the symmetry between players A and B since Z is observed by both players A and B. Hereafter, we conduct the analysis by considering all quantities conditioned on Z .
Define
$$p_k^A = E\{1_{\{\alpha_A = k\}} \mid Z\} \text{ for } 1 \le k \le n, \qquad q_k^A = E\{W_k 1_{\{\alpha_A = k\}} \mid Z\} \text{ for } k \in \mathcal{A}, \tag{5}$$
and
$$p_k^B = E\{1_{\{\alpha_B = k\}} \mid Z\} \text{ for } 1 \le k \le n, \qquad q_k^B = E\{W_k 1_{\{\alpha_B = k\}} \mid Z\} \text{ for } k \in \mathcal{B}. \tag{6}$$
Note that $p_k^A$ and $p_k^B$ are the conditional probabilities of players A and B choosing k given Z. Define the vectors $p^A = \{p_k^A; 1 \le k \le n\}$, $q^A = \{q_k^A; k \in \mathcal{A}\}$, $p^B = \{p_k^B; 1 \le k \le n\}$, and $q^B = \{q_k^B; k \in \mathcal{B}\}$. For $1 \le k \le n$, define $E_k = E\{W_k \mid Z\}$. Hence, we have
$$E_k = \begin{cases} W_k & \text{if } k \in \mathcal{AB}, \\ E\{W_k\} & \text{otherwise}, \end{cases}$$
which uses the independence of $W_k$ and Z when $k \notin \mathcal{AB}$.
Note that the utility achieved by player A under the strategies $g_A$ and $g_B$ can be written as
$$R_A(g_A, g_B) = \sum_{k=1}^{n} W_k \left( 1_{\{\alpha_A = k\}} - \frac{1}{2}\, 1_{\{\alpha_A = k\}} 1_{\{\alpha_B = k\}} \right). \tag{8}$$
Given the strategies $g_A$ and $g_B$, we provide an expression for the expected utility of player A given Z, where the expectation is over the random variables X, Y, V and the possibly random actions $\alpha_A$ and $\alpha_B$. Taking expectations of (8) gives
$$\begin{aligned} E\{R_A(g_A, g_B) \mid Z\} &= \sum_{k=1}^{n} E\{W_k 1_{\{\alpha_A = k\}} \mid Z\} - \frac{1}{2} \sum_{k=1}^{n} E\{W_k 1_{\{\alpha_A = k\}} 1_{\{\alpha_B = k\}} \mid Z\} \\ &= \sum_{k \in \mathcal{A}} E\{W_k 1_{\{\alpha_A = k\}} \mid Z\} + \sum_{k \in \mathcal{A}^c} E\{W_k \mid Z\}\, E\{1_{\{\alpha_A = k\}} \mid Z\} - \frac{1}{2} \sum_{k=1}^{n} E\{W_k 1_{\{\alpha_A = k\}} 1_{\{\alpha_B = k\}} \mid Z\} \\ &= \sum_{k \in \mathcal{A}} q_k^A + \sum_{k \in \mathcal{A}^c} E_k p_k^A - \frac{1}{2} \sum_{k=1}^{n} E\{W_k 1_{\{\alpha_A = k\}} 1_{\{\alpha_B = k\}} \mid Z\}. \end{aligned} \tag{9}$$
Note that given Z, the random variables $\alpha_A$ and $\alpha_B$ are independent. Hence, we can split the last term of (9) as follows:
$$\begin{aligned} \sum_{k=1}^{n} E\{W_k 1_{\{\alpha_A = k\}} 1_{\{\alpha_B = k\}} \mid Z\} &= \sum_{k \in \mathcal{A}} E\{W_k 1_{\{\alpha_A = k\}} \mid Z\}\, E\{1_{\{\alpha_B = k\}} \mid Z\} + \sum_{k \in \mathcal{B}} E\{1_{\{\alpha_A = k\}} \mid Z\}\, E\{W_k 1_{\{\alpha_B = k\}} \mid Z\} \\ &\quad + \sum_{k \in \mathcal{C} \cup \mathcal{AB}} E_k\, E\{1_{\{\alpha_A = k\}} \mid Z\}\, E\{1_{\{\alpha_B = k\}} \mid Z\} \\ &= \sum_{k \in \mathcal{A}} q_k^A p_k^B + \sum_{k \in \mathcal{B}} p_k^A q_k^B + \sum_{k \in \mathcal{C} \cup \mathcal{AB}} E_k p_k^A p_k^B. \end{aligned} \tag{10}$$

4. Computing the ϵ-Approximate Nash Equilibrium

This section focuses on finding an ϵ-approximate Nash equilibrium of the game. Fix $\epsilon > 0$. A strategy pair $(g_A, g_B)$ is defined as an ϵ-approximate Nash equilibrium if neither player can improve its expected utility by more than ϵ by changing its strategy (while holding the strategy of the other player fixed).
Combining (10) with (9), we have
$$E\{R_A(g_A, g_B) \mid Z\} = \sum_{k \in \mathcal{A}} q_k^A + \sum_{k \in \mathcal{A}^c} E_k p_k^A - \frac{1}{2} \left( \sum_{k \in \mathcal{A}} q_k^A p_k^B + \sum_{k \in \mathcal{B}} p_k^A q_k^B + \sum_{k \in \mathcal{C} \cup \mathcal{AB}} E_k p_k^A p_k^B \right). \tag{11}$$
Similarly, for player B, we have
$$E\{R_B(g_A, g_B) \mid Z\} = \sum_{k \in \mathcal{B}} q_k^B + \sum_{k \in \mathcal{B}^c} E_k p_k^B - \frac{1}{2} \left( \sum_{k \in \mathcal{A}} q_k^A p_k^B + \sum_{k \in \mathcal{B}} p_k^A q_k^B + \sum_{k \in \mathcal{C} \cup \mathcal{AB}} E_k p_k^A p_k^B \right). \tag{12}$$
First, we focus on finding the best response for players A and B, given the other player’s strategy is fixed.
Lemma 1.
The best responses for players A and B are given by $\alpha_A = \arg\max_{1 \le k \le n} A_k$ and $\alpha_B = \arg\max_{1 \le k \le n} B_k$, where $A_k$ and $B_k$ are given by
$$A_k = \begin{cases} W_k \left(1 - \frac{p_k^B}{2}\right) & \text{if } k \in \mathcal{A}, \\[2pt] E_k - \frac{q_k^B}{2} & \text{if } k \in \mathcal{B}, \\[2pt] E_k \left(1 - \frac{p_k^B}{2}\right) & \text{if } k \in \mathcal{C} \cup \mathcal{AB}, \end{cases} \qquad B_k = \begin{cases} E_k - \frac{q_k^A}{2} & \text{if } k \in \mathcal{A}, \\[2pt] W_k \left(1 - \frac{p_k^A}{2}\right) & \text{if } k \in \mathcal{B}, \\[2pt] E_k \left(1 - \frac{p_k^A}{2}\right) & \text{if } k \in \mathcal{C} \cup \mathcal{AB}. \end{cases}$$
Proof of Lemma 1.
We derive the best response for A; the best response for B follows similarly. Notice that we can rearrange (11) as
$$\begin{aligned} E\{R_A(g_A, g_B) \mid Z\} &= \sum_{k \in \mathcal{A}} q_k^A \left(1 - \frac{p_k^B}{2}\right) + \sum_{k \in \mathcal{B}} p_k^A \left(E_k - \frac{q_k^B}{2}\right) + \sum_{k \in \mathcal{C} \cup \mathcal{AB}} p_k^A E_k \left(1 - \frac{p_k^B}{2}\right) \\ &= E\left\{ \sum_{k \in \mathcal{A}} W_k \left(1 - \frac{p_k^B}{2}\right) 1_{\{\alpha_A = k\}} + \sum_{k \in \mathcal{B}} \left(E_k - \frac{q_k^B}{2}\right) 1_{\{\alpha_A = k\}} + \sum_{k \in \mathcal{C} \cup \mathcal{AB}} E_k \left(1 - \frac{p_k^B}{2}\right) 1_{\{\alpha_A = k\}} \,\middle|\, Z \right\}. \end{aligned}$$
The above expectation is maximized when A chooses according to the given policy.    □
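As a numerical sanity check, the best-response rule of Lemma 1 for player A can be sketched as follows (the data layout, with `q_B` passed as a dict over the set B, is our own assumption):

```python
import numpy as np

def best_response_A(w, E, p_B, q_B, sets):
    """Resource maximizing A_k from Lemma 1 (lowest index on ties).

    w:    realized rewards (only entries with index in sets['A'] are used),
    E:    conditional means E_k given Z,
    p_B:  player B's choice probabilities given Z,
    q_B:  dict of q_k^B for k in sets['B'],
    sets: dict with index sets 'A' and 'B'; the rest is C union AB.
    """
    n = len(E)
    A_k = np.empty(n)
    for k in range(n):
        if k in sets['A']:
            A_k[k] = w[k] * (1 - p_B[k] / 2)
        elif k in sets['B']:
            A_k[k] = E[k] - q_B[k] / 2
        else:
            A_k[k] = E[k] * (1 - p_B[k] / 2)
    return int(np.argmax(A_k))

sets = {'A': {0}, 'B': {1}}
print(best_response_A([5.0, 0.0, 0.0], [4.0, 3.0, 2.0],
                      [0.5, 0.5, 0.0], {1: 1.0}, sets))  # -> 0
```

Here $A_0 = 5(1 - 0.25) = 3.75$ beats $A_1 = 3 - 0.5 = 2.5$ and $A_2 = 2$, so resource 0 is chosen.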
Next, we find a potential function for the game. A potential function is a function of the players' strategies such that the change in a player's utility when they change their strategy (while the strategies of the other players are held fixed) equals the change in the potential function [15].
Theorem 1.
The function $H(g_A, g_B)$ given by
$$\begin{aligned} H(g_A, g_B) = {} & \sum_{k \in \mathcal{A}} \left(q_k^A + E_k p_k^B\right) + \sum_{k \in \mathcal{B}} \left(q_k^B + E_k p_k^A\right) + \sum_{k \in \mathcal{C} \cup \mathcal{AB}} E_k \left(p_k^A + p_k^B\right) \\ & - \frac{1}{2} \left( \sum_{k \in \mathcal{A}} q_k^A p_k^B + \sum_{k \in \mathcal{B}} p_k^A q_k^B + \sum_{k \in \mathcal{C} \cup \mathcal{AB}} E_k p_k^A p_k^B \right) \end{aligned} \tag{15}$$
is a potential function for the game, where $p_k^A$, $p_k^B$ for $1 \le k \le n$, $q_k^A$ for $k \in \mathcal{A}$, and $q_k^B$ for $k \in \mathcal{B}$ are defined in (5) and (6). Moreover, for all $(g_A, g_B) \in S_A \times S_B$, we have $H(g_A, g_B) \le 2\sum_{k=1}^{n} E_k$.
Proof of Theorem 1.
The key to the proof is separating (15) (using (11) and (12)) as
$$H(g_A, g_B) = E\{R_A(g_A, g_B) \mid Z\} + \sum_{k \in \mathcal{B}^c} E_k p_k^B + \sum_{k \in \mathcal{B}} q_k^B \tag{16}$$
$$\phantom{H(g_A, g_B)} = E\{R_B(g_A, g_B) \mid Z\} + \sum_{k \in \mathcal{A}} q_k^A + \sum_{k \in \mathcal{A}^c} E_k p_k^A. \tag{17}$$
Consider updating the strategy of player A while holding the strategy of player B fixed. Since $\sum_{k \in \mathcal{B}^c} E_k p_k^B + \sum_{k \in \mathcal{B}} q_k^B$ is not affected in this process, from (16), the change in the expected utility of player A is equal to the change in the H function. Similarly, this holds when player B updates its strategy while holding player A's strategy fixed. Hence, this is indeed a potential function. The proof that $H(g_A, g_B) \le 2\sum_{k=1}^{n} E_k$ is omitted for brevity (see the technical report [36] for details).    □
Using Theorem 1 with standard potential game theory (see, for example, [37]), we have that the iterative best response algorithm, with the best response given in Lemma 1, converges to an ϵ-approximate Nash equilibrium in at most $(2\sum_{k=1}^{n} E_k)/\epsilon$ iterations.
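In the special case where neither player observes any realizations (a = b = d = 0), a pure strategy reduces to a fixed resource index, and the iterative best response dynamics can be sketched in a few lines (a simplified illustration, not the paper's general algorithm over strategy functions):

```python
def iterate_best_response(E, max_iters=100):
    """Iterative best response in the fully unobserved case (a=b=d=0).

    The expected utility of a player choosing resource k is E[k]/2 if the
    opponent also chooses k, and E[k] otherwise. Players alternate best
    responses until neither changes its choice.
    """
    a, b = 0, 0  # initial pure strategies: both pick resource 0
    for _ in range(max_iters):
        def payoff(k, other):
            return E[k] / 2 if k == other else E[k]
        best_a = max(range(len(E)), key=lambda k: payoff(k, b))
        best_b = max(range(len(E)), key=lambda k: payoff(k, best_a))
        if (best_a, best_b) == (a, b):
            break
        a, b = best_a, best_b
    return a, b

# A avoids sharing resource 0 (2/2 = 1 < 1.5) and settles on resource 1.
print(iterate_best_response([2.0, 1.5, 0.1]))  # -> (1, 0)
```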

5. Worst-Case Expected Utility

Finding a Nash equilibrium using the above algorithm may not be desirable when the players do not trust each other and place no assumptions on the incentives of the opponent. To mitigate this issue, we consider maximizing the worst-case expected utility of player A. Similar to the case of finding the Nash equilibrium, the analysis is simplified when Z is fixed.
Notice that we can simplify (10) to yield
$$\sum_{k=1}^{n} E\{W_k 1_{\{\alpha_A = k\}} 1_{\{\alpha_B = k\}} \mid Z\} = \sum_{k \in \mathcal{A}} q_k^A\, E\{1_{\{\alpha_B = k\}} \mid Z\} + \sum_{k \in \mathcal{B}} p_k^A\, E\{W_k 1_{\{\alpha_B = k\}} \mid Z\} + \sum_{k \in \mathcal{C} \cup \mathcal{AB}} E_k p_k^A\, E\{1_{\{\alpha_B = k\}} \mid Z\}$$
$$= \sum_{k \in \mathcal{A}} E\{\Omega_k q_k^A 1_{\{\alpha_B = k\}} \mid Z\} + \sum_{k \in \mathcal{A}^c} E\{\Omega_k p_k^A 1_{\{\alpha_B = k\}} \mid Z\},$$
where
$$\Omega_k = \begin{cases} 1 & \text{if } k \in \mathcal{A}, \\ W_k & \text{if } k \in \mathcal{B}, \\ E_k & \text{if } k \in \mathcal{C} \cup \mathcal{AB}. \end{cases} \tag{20}$$
Plugging the above into (9), we find that
$$E\{R_A(g_A, g_B) \mid Z\} = \sum_{k \in \mathcal{A}} q_k^A + \sum_{k \in \mathcal{A}^c} E_k p_k^A - \frac{1}{2} E\left\{ \sum_{k \in \mathcal{A}} \Omega_k q_k^A 1_{\{\alpha_B = k\}} + \sum_{k \in \mathcal{A}^c} \Omega_k p_k^A 1_{\{\alpha_B = k\}} \,\middle|\, Z \right\}. \tag{21}$$
The difficulty in dealing with $E\{R_A(g_A, g_B) \mid Z\}$ is that it depends on the strategy $g_B$ of player B, which is not known to player A. Hence, given a strategy $g_A$ of player A, we first obtain the worst-case strategy $\hat{g}_A$ of player B. Then, we find the strategy $g_A$ of player A that maximizes $E\{R_A(g_A, \hat{g}_A) \mid Z\}$. This way, we can guarantee a minimum expected utility for player A irrespective of player B's strategy.
Lemma 2.
For a given $g_A \in S_A$, the strategy $g_B \in S_B$ that minimizes $E\{R_A(g_A, g_B) \mid Z\}$ chooses $\alpha_B = \arg\max_{1 \le k \le n} \Lambda_k$, where
$$\Lambda_k = \begin{cases} \Omega_k q_k^A & \text{if } k \in \mathcal{A}, \\ \Omega_k p_k^A & \text{if } k \in \mathcal{A}^c, \end{cases} \tag{22}$$
and $\Omega_k$ is defined in (20).
Proof of Lemma 2.
Notice that the only term of $E\{R_A(g_A, g_B) \mid Z\}$ in (21) that depends on the strategy of player B is the last expectation. Since this term is subtracted, $E\{R_A(g_A, g_B) \mid Z\}$ is minimized when player B chooses the k for which $\Lambda_k$ is maximized.    □
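Concretely, the adversarial choice of Lemma 2 can be sketched as follows (with `q_A` passed as a dict over the set A; the interface is our own):

```python
import numpy as np

def worst_case_response_B(E, p_A, q_A, set_A, set_B, w):
    """Resource chosen by an adversarial player B per Lemma 2: the argmax
    of Lambda_k, with Omega_k = 1 on A, W_k on B, and E_k elsewhere."""
    n = len(E)
    Lam = np.empty(n)
    for k in range(n):
        if k in set_A:
            Lam[k] = q_A[k]          # Omega_k = 1
        elif k in set_B:
            Lam[k] = w[k] * p_A[k]   # Omega_k = W_k, observed by B
        else:
            Lam[k] = E[k] * p_A[k]   # Omega_k = E_k
    return int(np.argmax(Lam))

# Lambda = [0.8, 3.0, 0.6]: B collides with A on resource 1.
print(worst_case_response_B([4.0, 3.0, 2.0], [0.2, 0.5, 0.3],
                            {0: 0.8}, {0}, {1}, [0.0, 6.0, 0.0]))  # -> 1
```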
Hence, we have
$$E\{R_A(g_A, \hat{g}_A) \mid Z\} = \sum_{k \in \mathcal{A}} q_k^A + \sum_{k \in \mathcal{A}^c} E_k p_k^A - \frac{1}{2} E\{\max\{\Lambda_k; 1 \le k \le n\} \mid Z\},$$
where $\Lambda_k$ is defined in (22). We formulate a strategy for player A using the following optimization problem:
$$\begin{aligned} (P1):\ \underset{g \in S_A}{\text{maximize}} \quad & f(q, p_{a+1:n}) \\ \text{subject to} \quad & q \in \mathbb{R}^a,\ p \in \mathbb{R}^n, \\ & q_k = E_{W, U_A}\{W_k 1_{\{g(U_A, X, Z) = k\}} \mid Z\}, \quad 1 \le k \le a, \\ & p_l = E_{W, U_A}\{1_{\{g(U_A, X, Z) = l\}} \mid Z\}, \quad 1 \le l \le n, \end{aligned}$$
where $f : \mathbb{R}^n \to \mathbb{R}$ is defined by
$$f(x) = \sum_{k \in \mathcal{A}} x_k + \sum_{k \in \mathcal{A}^c} E_k x_k - \frac{1}{2} E\{\max\{\Omega_j x_j; 1 \le j \le n\} \mid Z\}.$$
Although not used immediately, the following theorem derives certain properties of f that are useful later.
Theorem 2.
The function f
1. is concave;
2. is entry-wise non-decreasing;
3. satisfies
$$|f(x) - f(y)| \le \frac{3}{2} \sum_{j \in \mathcal{A}} |x_j - y_j| + \frac{3}{2} \sum_{j \in \mathcal{A}^c} E_j |x_j - y_j|$$
for any $x, y \in \mathbb{R}^n$.
Proof of Theorem 2.
See Appendix A.    □
It turns out that when $a = b = d = 0$, an explicit solution to (P1) can be obtained, which we describe in Section 5.1. In Section 5.2, we describe the solution to the general case. In the technical report [36], we provide simpler alternative solutions to the special cases $a = 0$ (with no restriction on b) and $a = 1$ (with the additional assumption that $W_1$ has a continuous CDF).

5.1. Explicit Solution for a = b = d = 0

When neither player knows any of the reward realizations, we have $a = b = d = 0$, and the problem reduces to the following:
$$\begin{aligned} (P2):\ \text{maximize} \quad & \sum_{k=1}^{n} p_k E_k - \frac{1}{2} \max\{p_k E_k; 1 \le k \le n\} \\ \text{subject to} \quad & p \in I, \end{aligned}$$
where
$$I = \left\{ p \in \mathbb{R}^n : \sum_{i=1}^{n} p_i = 1,\ p_i \ge 0\ \forall i \right\}$$
is the n-dimensional probability simplex. For this section, we assume without loss of generality that $E_k > 0$ for all k; if some $E_k$ is zero, (P2) can be transformed into a lower-dimensional problem with non-zero $E_k$'s. The following lemma constructs an explicit solution for $a = b = d = 0$.
Lemma 3.
Assume without loss of generality that $E_k \ge E_{k+1}$ for $1 \le k \le n-1$. Further, let
$$r = \arg\max_{1 \le k \le n} \frac{k - \frac{1}{2}}{\sum_{j=1}^{k} \frac{1}{E_j}},$$
where the lowest index is chosen in the case of ties. The optimal solution for (P2) is given by p, where
$$p_k = \begin{cases} \dfrac{1}{E_k \sum_{j=1}^{r} \frac{1}{E_j}} & \text{if } k \le r, \\[6pt] 0 & \text{otherwise}. \end{cases}$$
Proof of Lemma 3.
See Appendix B.    □
It should be noted that this solution is not unique. For instance, consider the case $n = 2$, $E_1 = 2$, and $E_2 = 1$. In this case, the lemma finds the solution $(p_1, p_2) = (1, 0)$, but $(p_1, p_2) = (1/3, 2/3)$ is also a solution. It is also interesting that the solution assigns positive probabilities to the r resources with the highest average rewards, although within these r resources, higher probabilities are assigned to the resources with lower average rewards.
It should also be noted that the worst-case strategy can be arbitrarily worse than the Nash equilibrium strategy. For instance, consider the simple scenario with two resources such that $E_1 = E_2$, where neither player observes any of the reward realizations. In this case, one Nash equilibrium has player A always choosing resource 1 and player B always choosing resource 2; another has the roles reversed. In either case, player A's expected utility is $E_1$. However, from Lemma 3, the maximum worst-case expected utility of player A is $3E_1E_2/(2E_1 + 2E_2) = 3E_1/4$. Hence, $E_1$ can be scaled to obtain an arbitrarily large gap between the worst-case and Nash equilibrium solutions.
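The construction of Lemma 3 is easy to verify numerically (a sketch assuming the E's are already sorted in non-increasing order):

```python
def worst_case_mix(E):
    """Optimal distribution from Lemma 3 for the a = b = d = 0 case.
    Assumes E is sorted in non-increasing order with E[k] > 0."""
    n = len(E)
    def value(k):
        # worst-case expected utility when mixing over the first k resources
        return (k - 0.5) / sum(1 / E[j] for j in range(k))
    r = max(range(1, n + 1), key=value)  # lowest index wins ties
    denom = sum(1 / E[j] for j in range(r))
    return [1 / (E[k] * denom) if k < r else 0.0 for k in range(n)]

print(worst_case_mix([2.0, 1.0]))  # -> [1.0, 0.0]
print(worst_case_mix([1.0, 1.0]))  # -> [0.5, 0.5]; value 3*E_1/4 = 0.75
```

The second call reproduces the $3E_1/4$ worst-case value discussed above for $E_1 = E_2$.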

5.2. Solving the General Case

In this section, we focus on solving the most general version of (P1) (with no restrictions on the sets $\mathcal{A}$, $\mathcal{B}$, $\mathcal{AB}$, $\mathcal{C}$). In particular, we focus on finding a mixed strategy that optimizes the worst-case expected utility of player A. It turns out that our optimal solution chooses from a mixture of pure strategies, parameterized by $Q \in \mathbb{R}^n$, of the following form:
$$g_Q^A(X) = \arg\max_{1 \le j \le n} \left( \{Q_j W_j;\ j \in \mathcal{A}\} \cup \{Q_j;\ j \in \mathcal{A}^c\} \right). \tag{31}$$
We call this special class of pure strategies threshold strategies. We develop a novel algorithm to solve this problem, leveraging techniques from drift-plus-penalty theory [14] and online convex optimization [12,13]. It should be noted that our algorithm runs offline and is used to construct an appropriate strategy for player A that approximately solves (P1) conditioned on the observed realization of Z. We show that we can obtain values arbitrarily close to the optimal value of (P1) by using a finite equiprobable mixture of pure strategies of the above form. The algorithm developed in this section can also be used to solve the general unconstrained problem of finding the randomized decision $\alpha \in \{1, 2, \ldots, n\}$ that maximizes $E\{h(x; \Theta)\}$, where $x \in \mathbb{R}^n$ with $x_k = E\{\Gamma_k 1_{\{\alpha = k\}}\}$, $\Theta \in \mathbb{R}^m$ and $\Gamma \in \mathbb{R}^n$ are non-negative random vectors with finite second moments, and h is a concave function such that $\tilde{h}(x) = E\{h(x; \Theta)\}$ is Lipschitz continuous, entry-wise non-decreasing, and has bounded subgradients.
We first provide an algorithm that generates a mixture of T pure strategies, and then we establish the near-optimality of the mixture. We generate the mixture $\{g_{Q(t)}^A\}_{t=1}^{T}$ by iteratively updating the vector Q for T iterations, where $Q(t)$ and $g_{Q(t)}^A(X)$ denote the state of Q and the pure strategy generated in the t-th iteration, respectively. In addition to $Q(t)$, we maintain another state vector $\gamma(t) \in \mathbb{R}^n$, also updated in each iteration, and a parameter V, which controls the convergence properties of the algorithm; specific guidance on setting V is given later in our analysis. We begin with $Q(1) = \gamma(0) = 0$. In the t-th iteration $(t \ge 1)$, we independently sample $X(t)$ and $\Omega(t)$ from the distributions of X and $\Omega$, respectively, where $\Omega$ is defined in (20), while keeping Z fixed at its observed value. Then, we update $\gamma(t)$ and $Q(t+1)$ as follows. First, we solve
$$\begin{aligned} (P3):\ \underset{\gamma(t)}{\text{minimize}} \quad & -V \nabla f_t(\gamma(t-1))^{\top} \gamma(t) + \alpha \|\gamma(t) - \gamma(t-1)\|_2^2 + \sum_{j=1}^{n} Q_j(t) \gamma_j(t) & \text{(32a)} \\ \text{subject to} \quad & \gamma(t) \in K, & \text{(32b)} \end{aligned}$$
to find $\gamma(t)$, where
$$f_t(x) = \sum_{k \in \mathcal{A}} x_k + \sum_{k \in \mathcal{A}^c} x_k E_k - \frac{1}{2} \max\{x_k \Omega_k(t); 1 \le k \le n\},$$
$\alpha > 0$, and $K = \prod_{j \in \mathcal{A}} [0, E_j] \times [0,1]^{n-a}$. Notice that $\nabla f_t(x)$ is given by
$$\nabla f_{t,j}(x) = \begin{cases} 1 - \frac{1}{2}\, 1_{\{\arg\max_{1 \le k \le n}\{x_k \Omega_k(t)\} = j\}} & \text{if } j \in \mathcal{A}, \\[2pt] E_j - \frac{1}{2}\, 1_{\{\arg\max_{1 \le k \le n}\{x_k \Omega_k(t)\} = j\}}\, \Omega_j(t) & \text{if } j \in \mathcal{A}^c, \end{cases}$$
where $\arg\max$ returns the lowest index in the case of ties. Notice that $f_t$ is a concave function, which can be established by repeating the argument used for the concavity of f in Theorem 2. We then choose the action for the t-th iteration as $\alpha_A(t) = g_{Q(t)}^A(X(t))$ (see (31)). Finally, to obtain $Q(t+1)$, we use
$$\begin{aligned} Q_j(t+1) &= \max\left( Q_j(t) + \gamma_j(t) - X_j(t)\, 1_{\{\alpha_A(t) = j\}},\ 0 \right), \quad j \in \mathcal{A}, \\ Q_j(t+1) &= \max\left( Q_j(t) + \gamma_j(t) - 1_{\{\alpha_A(t) = j\}},\ 0 \right), \quad j \in \mathcal{A}^c. \end{aligned}$$
The algorithm is summarized as Algorithm 1 for clarity.
Algorithm 1: Algorithm for the generation of the optimal mixture of T pure strategies.
[The pseudocode of Algorithm 1 appears as an image in the published version.]
After generating the mixture $\{g_{Q(t)}^A\}_{t=1}^{T}$ of pure strategies, we select one of them uniformly at random (each with probability 1/T) to make the decision. In the following two subsections, we focus on solving (P3) and evaluating the performance of Algorithm 1.
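Since the published pseudocode of Algorithm 1 is an image, the loop it describes can be sketched from the surrounding equations as follows (our rendering; the sampler interface is an assumption, and `sample_Omega()` must return 1 on indices in A, as in (20)):

```python
import numpy as np

def algorithm1(sample_X, sample_Omega, E, set_A, T, V, alpha):
    """Generate T threshold-strategy parameter vectors Q(1..T); the mixed
    strategy then picks one of them uniformly at random.

    sample_X():     draws a realization of the rewards with index in A
                    (zeros elsewhere, for convenience).
    sample_Omega(): draws a realization of the vector Omega of (20).
    """
    n = len(E)
    A = np.zeros(n, dtype=bool)
    A[list(set_A)] = True
    u = np.where(A, E, 1.0)  # box K = prod_j [0, u_j]
    Q, gamma, Qs = np.zeros(n), np.zeros(n), []
    for _ in range(T):
        X, Om = sample_X(), sample_Omega()
        # subgradient of f_t at the previous gamma
        j_star = int(np.argmax(gamma * Om))
        g = np.where(A, 1.0, E)
        g[j_star] -= 0.5 * Om[j_star]  # Omega_j = 1 on A, covering both cases
        # gamma update: projected solution of the separable problem (P3)
        gamma = np.clip(gamma + (V * g - Q) / (2 * alpha), 0.0, u)
        Qs.append(Q.copy())
        # threshold strategy (31): argmax of Q_j W_j on A and Q_j off A
        choice = int(np.argmax(np.where(A, Q * X, Q)))
        # virtual queue update
        served = np.zeros(n)
        served[choice] = X[choice] if A[choice] else 1.0
        Q = np.maximum(Q + gamma - served, 0.0)
    return Qs
```

A usage example with degenerate (deterministic) samplers: `algorithm1(lambda: np.array([1.0, 0.0]), lambda: np.array([1.0, 0.5]), [2.0, 1.0], {0}, T=100, V=1.0, alpha=1.0)` returns 100 parameter vectors, each defining one pure threshold strategy of the mixture.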

5.2.1. Solving (P3)

Notice that the objective of (P3) can be written as
$$-V \nabla f_t(\gamma(t-1))^{\top} \gamma(t) + \alpha \|\gamma(t) - \gamma(t-1)\|_2^2 + \sum_{j=1}^{n} Q_j(t) \gamma_j(t) = \sum_{j=1}^{n} \left( -V \nabla f_{t,j}(\gamma(t-1))\, \gamma_j(t) + \alpha \left(\gamma_j(t) - \gamma_j(t-1)\right)^2 + Q_j(t) \gamma_j(t) \right).$$
Hence, (P3) seeks to minimize a separable convex function over the box constraint $\gamma(t)\in\mathcal{K}$. The solution vector $\gamma(t)$ is found by separately minimizing each component $\gamma_j(t)$ over $[0,u_j]$, where
$$u_j=\begin{cases}E_j&\text{if }j\in A,\\1&\text{if }j\in A^c.\end{cases}$$
The resulting solution is,
$$\gamma_j(t)=\Pi_{[0,u_j]}\!\left(\gamma_j(t-1)+\frac{V\nabla f_{t,j}(\gamma(t-1))-Q_j(t)}{2\alpha}\right),\tag{38}$$
where Π [ 0 , u j ] denotes the projection onto [ 0 , u j ] . Notice that the above solution is obtained by projecting the global minimizer of the function to be minimized onto [ 0 , u j ] .
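Because the objective is a strongly convex quadratic in each component, the projected closed form can be checked numerically against a grid search. The following is a hypothetical helper illustrating the projection formula (38); the argument names are ours:

```python
import numpy as np

def solve_P3_component(g_prev, grad_j, Q_j, V, alpha, u_j):
    """Minimize  -V*grad_j*x + alpha*(x - g_prev)**2 + Q_j*x  over [0, u_j]
    by projecting the unconstrained minimizer onto the interval, as in (38)."""
    return float(np.clip(g_prev + (V * grad_j - Q_j) / (2.0 * alpha), 0.0, u_j))
```

For example, with `g_prev=0.3`, `grad_j=0.8`, `Q_j=2.0`, `V=5`, `alpha=25`, `u_j=1`, the minimizer is `0.3 + (4 - 2)/50 = 0.34`, which matches a brute-force grid search of the quadratic.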

5.2.2. How Good Is the Mixed Strategy Generated by Algorithm 1?

Without loss of generality, we assume that E k > 0 for all 1 k n . The following theorem establishes the closeness of the expected utility generated by Algorithm 1 to the optimal value f opt of (P1).
Theorem 3.
Assume α is set such that $\alpha\ge V^2$, and that we use the mixed strategy $g^A$ generated by Algorithm 1 to make the decision. Then,
$$\mathbb{E}\{R_A(g^A,g^{\hat A})\,|\,\mathbf{Z}\}\ \ge\ f_{\mathrm{opt}}-\frac{D_1}{V}-\frac{VD_2}{16\alpha}-\frac{\alpha D_3}{VT}-\frac{3}{2T}\sum_{k\in A}\left((1+2\sqrt2\,E_k)\sqrt\alpha+E_k\right)-\frac{3}{2T}\sum_{k\in A^c}E_k\left((E_k+2\sqrt2)\sqrt\alpha+1\right),$$
where
$$D_1=n-a+\frac12\sum_{j\in A}\left(E_j^2+\mathbb{E}\{W_j^2\}\right),\qquad D_2=4a+\mathbb{E}\{\|\Omega\|_2^2\,|\,\mathbf{Z}\}+\sum_{j\in A^c}4E_j^2,\qquad D_3=n-a+\sum_{j\in A}E_j^2,\tag{40}$$
$\Omega$ is defined in (20), and $f_{\mathrm{opt}}$ is the optimal value of (P1). Hence, fixing $\varepsilon>0$ and using $V=1/\varepsilon$, $\alpha=1/\varepsilon^2$, and $T\ge 1/\varepsilon^2$, the average error is $O(\varepsilon)$.
Proof of Theorem 3.
The key to the proof is noticing that $Q(t)$ can be treated as n queues. Before proceeding with the proof, we define some quantities. Define the history up to time t by $\mathcal{H}(t)=\{X(\tau);\,1\le\tau<t\}\cup\{\Omega(\tau);\,1\le\tau\le t\}$. Notice that we include $\Omega(t)$ in $\mathcal{H}(t)$ since this allows us to treat $\gamma(t)$ and $Q(t)$ as deterministic functions of $\mathcal{H}(t)$ and $\mathbf{Z}$. Let us define the Lyapunov function $L(t)=\frac12\|Q(t)\|_2^2=\frac12\sum_{j=1}^nQ_j(t)^2$, and the drift $\Delta(t)=\mathbb{E}\{L(t+1)-L(t)\,|\,\mathcal{H}(t),\mathbf{Z}\}$. Now, notice that
E { R A ( g A , g A ^ ) | Z } = f 1 T t = 1 T E { x ( t ) | Z } ,
where
x k ( t ) = X k ( t ) 1 { g Q ( t ) A ( X ( t ) ) = k } if k A , 1 { g Q ( t ) A ( X ( t ) ) = k } if k A c .
We begin with the following two lemmas, which will be useful in the proof.
Lemma 4.
The drift is bounded above as
$$\Delta(t)\ \le\ D_1+\sum_{j=1}^nQ_j(t)\left(\gamma_j(t)-\mathbb{E}\{x_j(t)\,|\,\mathcal{H}(t),\mathbf{Z}\}\right),$$
where D 1 is defined in (40).
Proof of Lemma 4.
See Appendix C. □
The following is a well-known result regarding the minimization of strongly convex functions (see, for example, a more general pushback result in [38]).
Lemma 5.
For a convex function h : R n R , a convex subset C of R n , y R n and α > 0 , let,
$$x^*\in\arg\min_{x\in C}\left\{h(x)+\alpha\|x-y\|_2^2\right\}.$$
Then,
$$h(x^*)+\alpha\|x^*-y\|_2^2\ \le\ h(z)+\alpha\|z-y\|_2^2-\alpha\|z-x^*\|_2^2,$$
for all z C .
Now, we move on to the main proof. Notice that the objective of (P3) can be written as
$$-\nabla g_t(\gamma(t-1))^\top\gamma(t)+\alpha\|\gamma(t)-\gamma(t-1)\|_2^2,$$
where
$$g_t(x)=Vf_t(x)-\sum_{j=1}^nQ_j(t)\,x_j.$$
Let $g^{A,*}$ be the strategy that is optimal for (P1). Let us define $x^*(t)\in\mathbb{R}^n$, where
$$x^*_k(t)=\begin{cases}X_k(t)\,\mathbf{1}\{g^{A,*}(U^A(t),X(t),\mathbf{Z})=k\}&\text{if }k\in A,\qquad(48\mathrm{a})\\\mathbf{1}\{g^{A,*}(U^A(t),X(t),\mathbf{Z})=k\}&\text{if }k\in A^c,\qquad(48\mathrm{b})\end{cases}$$
where $U^A(t)$ for $1\le t\le T$ is a collection of independent and identically distributed uniform $[0,1)$ random variables. Notice that $y=\mathbb{E}\{x^*(t)\,|\,\mathbf{Z}\}$ is independent of t and belongs to $\mathcal{K}$. Hence, y is feasible for (P3). Notice that
$$-V\nabla f_t(\gamma(t-1))^\top\gamma(t)+\sum_{j=1}^nQ_j(t)\gamma_j(t)+\alpha\|\gamma(t)-\gamma(t-1)\|_2^2=-\nabla g_t(\gamma(t-1))^\top\gamma(t)+\alpha\|\gamma(t)-\gamma(t-1)\|_2^2\overset{(a)}{\le}-\nabla g_t(\gamma(t-1))^\top y+\alpha\|y-\gamma(t-1)\|_2^2-\alpha\|y-\gamma(t)\|_2^2=-V\nabla f_t(\gamma(t-1))^\top y+\sum_{j=1}^nQ_j(t)y_j+\alpha\|y-\gamma(t-1)\|_2^2-\alpha\|y-\gamma(t)\|_2^2,\tag{49}$$
where (a) follows from Lemma 5 applied to the convex function h given by $h(x)=-\nabla g_t(\gamma(t-1))^\top x$ with $C=\mathcal{K}$, since $\gamma(t)$ is the solution to (P3) and y is feasible for (P3). Further, step 5 of each iteration of Algorithm 1 (finding the action) can be represented as the maximization of
j A Q j ( t ) E { X j ( t ) 1 { α A = j } | H ( t ) , Z } + j A c Q j ( t ) E { 1 { α A = j } | H ( t ) , Z }
over all possible actions α A { 1 , 2 , , n } at time-slot t. Hence, comparing the scenario where g Q ( t ) A is used in the t-th iteration with the scenario where g A , is used with the randomization variable U A ( t ) in the t-th iteration, we have the inequality,
$$\sum_{j=1}^nQ_j(t)\,\mathbb{E}\{x_j(t)\,|\,\mathcal{H}(t),\mathbf{Z}\}\ \ge\ \sum_{j=1}^nQ_j(t)\,\mathbb{E}\{x^*_j(t)\,|\,\mathcal{H}(t),\mathbf{Z}\}=\sum_{j=1}^nQ_j(t)\,y_j,\tag{51}$$
where the last equality follows since $x^*(t)$ is independent of $\mathcal{H}(t)$. Summing (49) and (51),
$$-V\nabla f_t(\gamma(t-1))^\top\gamma(t)+\alpha\|\gamma(t)-\gamma(t-1)\|_2^2+\sum_{j=1}^nQ_j(t)\left(\gamma_j(t)-\mathbb{E}\{x_j(t)\,|\,\mathcal{H}(t),\mathbf{Z}\}\right)\ \le\ -V\nabla f_t(\gamma(t-1))^\top y+\alpha\|y-\gamma(t-1)\|_2^2-\alpha\|y-\gamma(t)\|_2^2.$$
Adding $D_1+V\nabla f_t(\gamma(t-1))^\top\gamma(t-1)$ to both sides and using Lemma 4 yields,
$$\Delta(t)-V\nabla f_t(\gamma(t-1))^\top(\gamma(t)-\gamma(t-1))+\alpha\|\gamma(t)-\gamma(t-1)\|_2^2\ \le\ D_1-V\nabla f_t(\gamma(t-1))^\top(y-\gamma(t-1))+\alpha\|y-\gamma(t-1)\|_2^2-\alpha\|y-\gamma(t)\|_2^2\ \le\ D_1-V\{f_t(y)-f_t(\gamma(t-1))\}+\alpha\|y-\gamma(t-1)\|_2^2-\alpha\|y-\gamma(t)\|_2^2,\tag{53}$$
where the last inequality follows from the sub-gradient inequality for the concave function f t . Now, we introduce the following lemma.
Lemma 6.
We have
$$-V\nabla f_t(\gamma(t-1))^\top(\gamma(t)-\gamma(t-1))+\alpha\|\gamma(t)-\gamma(t-1)\|_2^2\ \ge\ -\frac{V^2}{4\alpha}\left(a+\sum_{j\in A^c}E_j^2\right)-\frac{V^2}{16\alpha}\|\Omega(t)\|_2^2.$$
Proof of Lemma 6.
See Appendix D. □
Substituting the bound from Lemma 6 in (53) we have that
$$\Delta(t)\ \le\ \frac{V^2}{4\alpha}\left(a+\sum_{j\in A^c}E_j^2\right)+\frac{V^2}{16\alpha}\|\Omega(t)\|_2^2+D_1-V\{f_t(y)-f_t(\gamma(t-1))\}+\alpha\|y-\gamma(t-1)\|_2^2-\alpha\|y-\gamma(t)\|_2^2.$$
The above holds for each t { 1 , 2 , , T } . Hence, we first take the expectation conditioned on Z of both sides of the above expression, after which we sum from 1 to T, which results in,
$$\mathbb{E}\{L(T+1)|\mathbf{Z}\}-\mathbb{E}\{L(1)|\mathbf{Z}\}\ \le\ \frac{TV^2}{4\alpha}\left(a+\sum_{j\in A^c}E_j^2\right)+\frac{TV^2}{16\alpha}\mathbb{E}\{\|\Omega\|_2^2|\mathbf{Z}\}+D_1T-V\sum_{t=1}^T\mathbb{E}\{f_t(y)|\mathbf{Z}\}+V\sum_{t=1}^T\mathbb{E}\{f_t(\gamma(t-1))|\mathbf{Z}\}+\alpha\,\mathbb{E}\{\|y-\gamma(0)\|_2^2|\mathbf{Z}\}-\alpha\,\mathbb{E}\{\|y-\gamma(T)\|_2^2|\mathbf{Z}\}.\tag{56}$$
Notice that
E { f t ( y ) | Z } = f ( y ) = f opt ,
where functions f and f t are defined in (25) and (33), respectively. Further, we have that
E { f t ( γ ( t 1 ) ) | Z } = E { E Ω ( t ) { f t ( γ ( t 1 ) ) | H ( t 1 ) , Z } | Z } = ( a ) E { f ( γ ( t 1 ) ) | Z } ,
where (a) follows from the definition of f t in (33), since γ ( t 1 ) is a function of H ( t 1 ) and Ω ( t ) is independent of H ( t 1 ) . Substituting (57) and (58) into (56), we have that
$$\mathbb{E}\{L(T+1)|\mathbf{Z}\}-\mathbb{E}\{L(1)|\mathbf{Z}\}-\frac{TV^2}{4\alpha}\left(a+\sum_{j\in A^c}E_j^2\right)-\frac{TV^2}{16\alpha}\mathbb{E}\{\|\Omega\|_2^2|\mathbf{Z}\}\ \le\ D_1T-VTf_{\mathrm{opt}}+V\sum_{t=1}^T\mathbb{E}\{f(\gamma(t-1))|\mathbf{Z}\}+\alpha\,\mathbb{E}\{\|y-\gamma(0)\|_2^2|\mathbf{Z}\}-\alpha\,\mathbb{E}\{\|y-\gamma(T)\|_2^2|\mathbf{Z}\}\ \overset{(a)}{\le}\ D_1T-VTf_{\mathrm{opt}}+V\sum_{t=1}^T\mathbb{E}\{f(\gamma(t-1))|\mathbf{Z}\}+\alpha\left(n-a+\sum_{k\in A}E_k^2\right)\ \le\ D_1T-VTf_{\mathrm{opt}}+VT\,f\!\left(\frac1T\sum_{t=1}^T\mathbb{E}\{\gamma(t-1)|\mathbf{Z}\}\right)+\alpha D_3,$$
where (a) follows since $y,\gamma(T),\gamma(0)\in\mathcal{K}$, and the last inequality follows from Jensen's inequality applied to the concave function f (see the definitions of $D_2$ and $D_3$ in (40)). Since $Q(1)=0$ and $\mathbb{E}\{L(T+1)|\mathbf{Z}\}\ge 0$, after some rearrangement the above translates to,
$$f_{\mathrm{opt}}-\frac{D_1}{V}-\frac{VD_2}{16\alpha}-\frac{\alpha D_3}{VT}\ \le\ f\!\left(\frac1T\sum_{t=0}^{T-1}\mathbb{E}\{\gamma(t)|\mathbf{Z}\}\right),\tag{60}$$
where D 2 is defined in (40). Now, we prove the following lemma.
Lemma 7.
We have
$$f\!\left(\frac1T\sum_{t=0}^{T-1}\mathbb{E}\{\gamma(t)|\mathbf{Z}\}\right)\ \le\ f\!\left(\frac1T\sum_{t=1}^{T}\mathbb{E}\{x(t)|\mathbf{Z}\}\right)+\frac{3}{2T}\sum_{k\in A}\left((1+2\sqrt2\,E_k)\sqrt\alpha+E_k\right)+\frac{3}{2T}\sum_{k\in A^c}E_k\left((E_k+2\sqrt2)\sqrt\alpha+1\right).$$
Proof of Lemma 7.
We first introduce the following two lemmas.
Lemma 8.
The queues Q j ( t ) for 1 j n updated according to Algorithm 1 satisfy,
$$\max\left\{\frac1T\sum_{t=1}^{T-1}\mathbb{E}\{\gamma_j(t)-x_j(t)|\mathbf{Z}\},\,0\right\}\ \le\ \frac{\mathbb{E}\{Q_j(T)|\mathbf{Z}\}}{T}.$$
Proof of Lemma 8.
See Appendix E. □
The following lemma is vital in constructing the $O(\sqrt\alpha)$ bound on the queue sizes, which leads to the $O(1/\varepsilon^2)$ solution. It should be noted that an easier bound can be obtained on the queue sizes, but it leads to an $O(1/\varepsilon^3)$ solution.
Lemma 9.
Given that $\alpha\ge V^2$, $Q_j(t)$ satisfies the bound
$$Q_j(t)\ \le\ \begin{cases}(1+2\sqrt2\,E_j)\sqrt\alpha+E_j&\text{if }j\in A,\\(E_j+2\sqrt2)\sqrt\alpha+1&\text{if }j\in A^c,\end{cases}$$
for each $t\in[1:T]$.
Proof of Lemma 9.
See Appendix F. □
Now, we move on to the main proof. Notice that
$$f\!\left(\frac1T\sum_{t=0}^{T-1}\mathbb{E}\{\gamma(t)|\mathbf{Z}\}\right)=f\!\left(\frac{\gamma(0)}{T}+\frac1T\sum_{t=1}^{T-1}\mathbb{E}\{\gamma(t)|\mathbf{Z}\}\right)\ \overset{(a)}{\le}\ f\!\left(\frac1T\sum_{t=1}^{T}\mathbb{E}\{x(t)|\mathbf{Z}\}+\max\left\{\frac1T\sum_{t=1}^{T-1}\mathbb{E}\{\gamma(t)|\mathbf{Z}\}-\frac1T\sum_{t=1}^{T-1}\mathbb{E}\{x(t)|\mathbf{Z}\}-\frac{\mathbb{E}\{x(T)|\mathbf{Z}\}}{T},\,0\right\}\right)\ \overset{(b)}{\le}\ f\!\left(\frac1T\sum_{t=1}^{T}\mathbb{E}\{x(t)|\mathbf{Z}\}\right)+\frac32\sum_{k\in A}\max\left\{\frac1T\sum_{t=1}^{T-1}\mathbb{E}\{\gamma_k(t)-x_k(t)|\mathbf{Z}\},0\right\}+\frac32\sum_{k\in A^c}E_k\max\left\{\frac1T\sum_{t=1}^{T-1}\mathbb{E}\{\gamma_k(t)-x_k(t)|\mathbf{Z}\},0\right\},\tag{64}$$
where (a) follows from the entry-wise non-decreasing property of f (Theorem 2-2) and (b) follows from Theorem 2-3. Combining (64) and Lemma 8 with the bound on Q ( T ) given by Lemma 9, we are finished with the proof of the lemma. □
Combining Lemma 7 with (60), we are finished with the proof of the theorem.
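The parameter scaling in Theorem 3 can be packaged as a small helper. This is an illustrative sketch (the function name is ours); it sets α = V² exactly, so that the condition α ≥ V² of Theorem 3 holds even in floating-point arithmetic, and T = ⌈1/ε²⌉:

```python
import math

def dpp_parameters(eps):
    """Parameter choice from Theorem 3: V = 1/eps, alpha = 1/eps**2,
    T = ceil(1/eps**2); then alpha >= V**2 and the error terms
    D1/V, V*D2/(16*alpha), alpha*D3/(V*T) are all O(eps)."""
    V = 1.0 / eps
    alpha = V * V          # alpha = 1/eps^2, satisfies alpha >= V^2 exactly
    T = math.ceil(V * V)   # T >= 1/eps^2 iterations
    assert alpha >= V ** 2
    return V, alpha, T
```

For example, `dpp_parameters(0.1)` gives V = 10, α = 100, and T = 100, so each of the three leading error terms scales like ε up to the constants D₁, D₂, D₃.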

6. Simulations

For the simulations, we model the $W_j$ as exponential random variables. Notice that since we condition on $\mathbf{Z}$ to solve the problem, the objective of (P1) defined in (25) has the same structure for the two scenarios $(a,b,c,d)$ and $(a,b,c+d,0)$. Hence, we use $d=0$ for all the simulations. Notice that the sets A and B denote the private information of players A and B, respectively. We consider the three scenarios given below.
  • a = 0 , b = 0 , c = 3 , d = 0 : Neither player has private information.
  • a = 0 , b = 1 , c = 2 , d = 0 : Only player B has private information.
  • a = 1 , b = 1 , c = 1 , d = 0 : Both players have private information.
Figure 1, Figure 2 and Figure 3 show pictorial representations of these cases.
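The three information structures above can be encoded compactly; the following is an illustrative sketch (the dictionary keys and helper name are ours), with exponential rewards of mean $E_j$ as in the simulations:

```python
import numpy as np

# The three simulated information structures, written as (a, b, c, d):
# a resources observed only by A, b only by B, c by neither, d by both
# (d = 0 throughout, as argued above).
scenarios = {
    "no private info": dict(a=0, b=0, c=3, d=0),
    "B private only":  dict(a=0, b=1, c=2, d=0),
    "both private":    dict(a=1, b=1, c=1, d=0),
}

def draw_rewards(E, rng):
    """Exponential reward realizations W_j with means E_j."""
    return rng.exponential(E)
```

Each scenario partitions the same n = 3 resources, so the four counts always sum to 3.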
We first consider scenario 1. For Figure 4 (top-left), we fix $E_2=E_3=1$ and plot the expected utilities of players A and B at the ϵ-approximate Nash equilibrium as functions of $E_1$, where $\epsilon=10^{-3}$ is used. For Figure 4 (top-middle and top-right), we use the same configuration and plot a solution for the probabilities of choosing different resources as a function of $E_1$ at the ϵ-approximate Nash equilibrium for players A and B, respectively. Figure 4 (middle) and Figure 4 (bottom) give the corresponding plots for scenarios 2 and 3.
We consider the same three scenarios for the simulations on maximizing the worst-case expected utility. In each scenario, for the top figure, we fix E 2 = E 3 = 1 and plot the maximum expected worst-case utility of player A as a function of E 1 . For the bottom figure, we use the same configuration and plot a solution for the probabilities of choosing different resources for player A as a function of E 1 . Notice that the solutions may not be unique, as discussed in Section 5.1. Additionally, for Figure 5 (top-middle and top-right), we also indicate the maximum possible error of the solution calculated using the error bound derived in Theorem 3. For scenarios 2 and 3, we have obtained the solutions by averaging over 10 2 independent simulations. Further, we have used T = 10 5 , α = 4 × 10 4 , and V = 2 × 10 2 .
Notice that it is difficult to compare the worst-case strategy and the ϵ-approximate Nash equilibrium strategy in general, since the first can be computed without any cooperation between the players, whereas computing the second requires cooperation among players. Further, as described in Section 5.1, the worst-case strategy can be arbitrarily worse than the Nash equilibrium strategy. Nevertheless, comparing Figure 4 (left) and Figure 5 (top), it can be seen that the worst-case strategy and the strategy at the ϵ-approximate Nash equilibrium yield comparable expected utilities for player A when $E_1\ge2$. For instance, in scenario 1, for $E_1\ge2$, the approximate Nash equilibrium strategy coincides with the worst-case strategy of choosing resource 1 with probability 1. However, it should be noted that our algorithm for finding the ϵ-approximate Nash equilibrium does not necessarily converge to a socially optimal solution. For instance, in scenario 1 with $E_1=2$, the profile in which player A chooses resource 1 with probability 1 and player B chooses resource 2 with probability 1 gives a higher utility to player A without changing the utility of player B.
In Figure 5, it is interesting to notice the variation in choice probabilities of different resources with E 1 . Notice that in scenario 1, the choice probability of resource 1 is non-decreasing for E 1 [ 0.1 , 0.8 ] , non-increasing for E 1 [ 0.8 , 1.9 ] , and non-decreasing for E 1 1.9 . Similar behavior can also be observed for scenario 3. This is surprising since intuition suggests that the probability of choosing a resource should increase with the increasing mean of the reward random variable. However, notice that in scenarios 1 and 3, player B does not observe the reward realization of resource 1. This might force player A, playing for the worst case, to believe that player B increases the probability of choosing resource 1 with increasing E 1 , as a result of which player A chooses resource 1 with a lower probability. Notice that the probability of choosing resource 1 in scenario 3 does not grow as fast as the other two. This is because player A observes W 1 and hence can refrain from choosing it when W 1 takes low values.

7. Conclusions

We have implemented the iterative best response algorithm to find the ϵ -approximate Nash equilibrium of a two-player stochastic resource-sharing game with asymmetric information. To handle situations where the players do not trust each other and place no assumptions on the incentives of the opponent, we solved the problem of maximizing the worst-case expected utility of the first player using a novel algorithm that combines drift-plus penalty theory and online optimization techniques. An explicit solution can be constructed when both players do not observe the realizations of any of the reward random variables. This special case leads to counter-intuitive insights.
In our approach, we have assumed that the reward random variables of different resources are independent. It should be noted that this assumption can be relaxed without affecting the analysis for the special case when both players do not observe the realizations of any of the reward random variables. An interesting question would be what happens in the general case when the reward random variables are not independent. While it is still possible to implement our algorithm in this setting, it is not guaranteed that the algorithm will converge to the optimal solution. Hence, finding an algorithm for this case that exploits the correlations between the reward random variables could be potential future work.
Several other extensions can be considered as well. One would be considering a scenario with multiple players. The general multiplayer case yields a complex information structure, since the set of resources has to be split into $2^m$ subsets, where m is the number of players. Additionally, the idea of conditioning on the common information is difficult to adapt to this case. Nevertheless, various simplified schemes could be considered. One example would be a case with no common information. In this case, the set of resources is split into $m+1$ disjoint subsets, where the i-th ($1\le i\le m$) subset is the subset of resources whose rewards the i-th player observes, and the $(m+1)$-th subset is the subset of resources whose rewards are observed by none of the players. Another interesting scenario is when no player observes any of the reward realizations. In both of these cases, the expected utility can be calculated following a procedure similar to the two-player case, but finding the worst-case expected utility is difficult. Hence, we believe both cases could be potential future work. Another extension would be implementing the algorithm within a repeated game structure and in an online scenario.

Author Contributions

Conceptualization, M.W. and M.J.N.; methodology, M.W.; software, M.W.; validation, M.W.; writing—original draft preparation, M.W.; writing—review and editing, M.J.N.; visualization, M.W.; supervision, M.J.N.; project administration, M.J.N. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by one or more of: NSF CCF-1718477, NSF SpecEES 1824418.

Data Availability Statement

This paper does not use any data from external sources.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Proof of Theorem 2

Notice that the term $\mathbb{E}\{\max\{\Omega_jx_j;\,1\le j\le n\}\,|\,\mathbf{Z}\}$ of f is convex, since the max function is convex and expectation preserves convexity. As f is a linear function of x minus half this convex term, f is concave.
For 2 and 3, we use the two inequalities,
$$f(x)-f(y)\ \ge\ \sum_{j\in A}(x_j-y_j)+\sum_{j\in A^c}E_j(x_j-y_j)-\frac12\,\mathbb{E}\{\max\{\Omega_j(x_j-y_j);\,j\in[1:n]\}\,|\,\mathbf{Z}\},\tag{A1}$$
and
$$f(x)-f(y)\ \le\ \sum_{j\in A}(x_j-y_j)+\sum_{j\in A^c}E_j(x_j-y_j)+\frac12\,\mathbb{E}\{\max\{\Omega_j(y_j-x_j);\,j\in[1:n]\}\,|\,\mathbf{Z}\},\tag{A2}$$
both of which follow from the fact that for real numbers γ 1 , γ 2 , γ 3 , γ 4 , max { γ 1 + γ 2 , γ 3 + γ 4 } max { γ 1 , γ 3 } + max { γ 2 , γ 4 } .
For 2, we consider $x\ge y$, where the inequality is entry-wise. Notice that
$$f(x)-f(y)\ \ge\ \sum_{j\in A}(x_j-y_j)+\sum_{j\in A^c}E_j(x_j-y_j)-\frac12\,\mathbb{E}\Big\{\sum_{j=1}^n\Omega_j(x_j-y_j)\,\Big|\,\mathbf{Z}\Big\}=\sum_{j\in A}\frac12(x_j-y_j)+\sum_{j\in A^c}\frac{E_j}2(x_j-y_j),$$
where the inequality follows from (A1) and the fact that for $\gamma_1,\gamma_2\ge0$, $\max\{\gamma_1,\gamma_2\}\le\gamma_1+\gamma_2$.
For 3, note that
$$f(x)-f(y)\ \overset{(a)}{\le}\ \sum_{j\in A}|x_j-y_j|+\sum_{j\in A^c}E_j|x_j-y_j|+\frac12\,\mathbb{E}\Big\{\sum_{j=1}^n\Omega_j|x_j-y_j|\,\Big|\,\mathbf{Z}\Big\}=\frac32\sum_{j\in A}|x_j-y_j|+\frac32\sum_{j\in A^c}E_j|x_j-y_j|,$$
where (a) follows from (A2) and the fact that for γ 1 , γ 2 0 , max { γ 1 , γ 2 } γ 1 + γ 2 .

Appendix B. Proof of Lemma 3

We begin with several results which are used in the proof.
Lemma A1.
If $(p^*,\gamma^*)$ solves the problem,
$$(\mathrm{P2\text{-}1}):\ \underset{p,\gamma}{\text{maximize}}\ \ \sum_{k=1}^np_kE_k-\frac12\gamma\qquad\text{subject to}\ \ p\in\mathcal{I},\quad\gamma\ge p_kE_k\ \ \forall\,1\le k\le n,$$
where $\mathcal{I}$ is the n-dimensional probability simplex defined in (28), then $p^*$ solves (P2).
Proof of Lemma A1.
Define,
$$f_1(p,\gamma)=\sum_{k=1}^np_kE_k-\frac12\gamma.\tag{A5}$$
Notice that $f(p)=f_1(p,\max\{p_kE_k;\,1\le k\le n\})$. Let $(p^*,\gamma^*)$ be a solution of (P2-1). Notice that for $(p^*,\gamma^*)$ to be feasible for (P2-1), we should have $\gamma^*\ge\max\{p^*_kE_k;\,1\le k\le n\}$. However, if $\gamma^*>\max\{p^*_kE_k;\,1\le k\le n\}$, we have that $f_1(p^*,\max\{p^*_kE_k;\,1\le k\le n\})>f_1(p^*,\gamma^*)$, which contradicts the optimality of $(p^*,\gamma^*)$ for (P2-1). Hence, $\gamma^*=\max\{p^*_kE_k;\,1\le k\le n\}$, and therefore $f(p^*)=f_1(p^*,\gamma^*)$.
Now, consider $\tilde p\in\mathcal{I}$. Define $\tilde\gamma=\max\{\tilde p_kE_k;\,1\le k\le n\}$. Since $(\tilde p,\tilde\gamma)$ is also feasible for (P2-1), we should have $f_1(\tilde p,\tilde\gamma)\le f_1(p^*,\gamma^*)$. This implies $f(\tilde p)\le f(p^*)$. Hence, $p^*$ is an optimal solution of (P2). □
Lemma A2.
Consider a fixed $\mu\in\mathbb{R}^n$ such that $\mu_k\ge0$ for all $1\le k\le n$. Now, consider the problem,
$$(\mathrm{P2\text{-}2}):\ \underset{p,\gamma}{\text{maximize}}\ \ f_2(p,\gamma)=\sum_{k=1}^np_kE_k-\frac12\gamma+\sum_{k=1}^n\mu_k(\gamma-p_kE_k)\qquad\text{subject to}\ \ p\in\mathcal{I},\ \gamma\in\mathbb{R}.$$
Assume $(p^*,\gamma^*)$ is a solution of (P2-2). Additionally, assume that
$$E_kp^*_k\le\gamma^*\ \text{ for all }1\le k\le n,\qquad E_kp^*_k=\gamma^*\ \text{ whenever }\mu_k>0.\tag{A7}$$
Then, $(p^*,\gamma^*)$ is a solution of (P2-1).
Proof of Lemma A2.
First, notice that $(p^*,\gamma^*)$ satisfies the constraints of (P2-1). To show that it maximizes the objective in (P2-1), consider any $(p,\gamma)$ that is feasible for (P2-1). Notice that
$$f_1(p^*,\gamma^*)=f_2(p^*,\gamma^*)-\sum_{\mu_k>0}\mu_k(\gamma^*-p^*_kE_k)\ \overset{(a)}{\ge}\ f_2(p,\gamma)-\sum_{\mu_k>0}\mu_k(\gamma^*-p^*_kE_k)=f_1(p,\gamma)+\sum_{\mu_k>0}\mu_k(\gamma-p_kE_k-\gamma^*+p^*_kE_k)\ \overset{(b)}{=}\ f_1(p,\gamma)+\sum_{\mu_k>0}\mu_k(\gamma-p_kE_k)\ \overset{(c)}{\ge}\ f_1(p,\gamma),$$
where $f_1$ is the objective of (P2-1) defined in (A5), $f_2$ is the objective of (P2-2), (a) follows from the optimality of $(p^*,\gamma^*)$ for (P2-2), (b) follows due to (A7), and (c) follows since $\mu_k\ge0$ and $(p,\gamma)$ is feasible for (P2-1). Hence, we have the result. □
Define,
$$S_k=\sum_{j=1}^k\frac1{E_j},$$
for 1 k n . We also establish the following lemma, which is useful in our solution.
Lemma A3.
Let
$$r=\arg\max_{1\le k\le n}\ \frac{k-\frac12}{S_k},\tag{A10}$$
where arg max returns the lowest index in the case of ties. Let us also define μ R n as
$$\mu_k=\begin{cases}1-\dfrac1{E_k}\cdot\dfrac{r-\frac12}{S_r}&\text{if }1\le k\le r,\\0&\text{otherwise}.\end{cases}\tag{A11}$$
Then, we have:
1. $\mu_k\ge0$ for all k such that $1\le k\le n$.
2. $\sum_{k=1}^n\mu_k=\frac12$.
3. $E_k(1-\mu_k)=\dfrac{r-\frac12}{S_r}$ for $1\le k\le r$.
4. $E_k(1-\mu_k)\le\dfrac{r-\frac12}{S_r}$ for $r+1\le k\le n$.
Proof of Lemma A3.
  • Notice that by the definition of $\mu_k$, it is enough to prove the result for $1\le k\le r$; that is, we are required to prove that
    $$\frac1{E_k}\cdot\frac{r-\frac12}{S_r}\ \le\ 1,$$
    for all $1\le k\le r$. Since $E_k\ge E_{k+1}$ for $1\le k\le n-1$, it suffices to prove that
    $$\frac1{E_r}\cdot\frac{r-\frac12}{S_r}\ \le\ 1.$$
    We consider two cases.
    Case 1:  $r=1$. This case reduces to,
    $$\frac{E_1}2\ \le\ E_1,$$
    which is trivial.
    Case 2:  $r>1$. Note that from the definition of r in (A10), we have
    $$\frac{r-\frac12}{S_r}\ \ge\ \frac{r-\frac32}{S_{r-1}}.$$
    After substituting $S_{r-1}=S_r-\frac1{E_r}$ and rearranging, we have the desired result.
  • Notice that
    $$\sum_{k=1}^n\mu_k=\sum_{k=1}^r\mu_k=\sum_{k=1}^r\left(1-\frac1{E_k}\cdot\frac{r-\frac12}{S_r}\right)=r-\frac{r-\frac12}{S_r}\sum_{k=1}^r\frac1{E_k}=r-\frac{r-\frac12}{S_r}\,S_r=\frac12.$$
  • This follows from the definition of μ k for 1 k r .
  • There is nothing to prove if $r=n$. Hence, we can assume $r<n$. Since $\mu_k=0$ for $k\ge r+1$, it suffices to prove that $E_k\le\frac{r-\frac12}{S_r}$. Notice that if we can prove the result for $k=r+1$, we are finished, since $E_k\ge E_{k+1}$ for $1\le k\le n-1$. Note that from the definition of r in (A10), we have
    $$\frac{r-\frac12}{S_r}\ \ge\ \frac{r+\frac12}{S_{r+1}}.$$
    After substituting $S_{r+1}=S_r+\frac1{E_{r+1}}$ and rearranging, we have the desired result.
Now, we solve the problem using the above lemmas. Consider the problem defined in Lemma A2 with μ defined in Lemma A3. Specifically, consider the problem,
$$(\mathrm{P2\text{-}3}):\ \underset{p,\gamma}{\text{maximize}}\ \ f_2(p,\gamma)=\sum_{k=1}^np_kE_k-\frac12\gamma+\sum_{k=1}^n\mu_k(\gamma-p_kE_k)\qquad\text{subject to}\ \ p\in\mathcal{I},\ \gamma\in\mathbb{R},$$
where μ and r are defined in (A11) and (A10), respectively. For this choice of $\mu_k$, we have
$$f_2(p,\gamma)=\sum_{k=1}^np_kE_k(1-\mu_k)+\gamma\left(\sum_{k=1}^n\mu_k-\frac12\right)=\sum_{k=1}^np_kE_k(1-\mu_k),$$
where the last equality follows from Lemma A3-2. Now, due to Lemma A3-3 and Lemma A3-4, an optimal solution of (P2-3) is any $(p^*,\gamma^*)$ such that $\gamma^*\in\mathbb{R}$ and $p^*\in\mathcal{I}$ with $p^*_k=0$ for $k>r$. In particular, consider the solution $(p^*,\gamma^*)$ given by,
$$p^*_k=\begin{cases}\dfrac1{E_kS_r}&\text{if }k\le r,\\0&\text{otherwise},\end{cases}$$
and $\gamma^*=\frac1{S_r}$. Notice that for $1\le k\le r$, we have $p^*_kE_k=\gamma^*$, and $p^*_kE_k=0\le\gamma^*$ for $r+1\le k\le n$. Hence, from Lemma A2, $(p^*,\gamma^*)$ is a solution of (P2-1). Hence, from Lemma A1, $p^*$ is a solution of (P2), as desired.
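The closed-form solution above is straightforward to implement. The following sketch (the function name is ours) assumes the means are sorted in non-increasing order, $E_1\ge E_2\ge\dots\ge E_n>0$, as in the text:

```python
import numpy as np

def solve_P2(E):
    """Closed-form solution of (P2): with S_k = sum_{j<=k} 1/E_j and
    r = argmax_k (k - 1/2)/S_k (lowest index on ties), the optimal
    distribution is p_k = 1/(E_k * S_r) for k <= r, and gamma = 1/S_r."""
    E = np.asarray(E, dtype=float)
    S = np.cumsum(1.0 / E)                 # S_k
    k = np.arange(1, len(E) + 1)
    r = int(np.argmax((k - 0.5) / S)) + 1  # argmax returns first max
    p = np.zeros_like(E)
    p[:r] = 1.0 / (E[:r] * S[r - 1])
    gamma = 1.0 / S[r - 1]
    return p, gamma
```

For instance, with equal means E = (1, 1, 1), the solution is the uniform distribution with γ = 1/3; with E = (2, 1), the tie in (k − 1/2)/S_k resolves to r = 1, giving p = (1, 0) and γ = 2.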

Appendix C. Proof of Lemma 4

Notice that
$$\begin{aligned}\Delta(t)&=\mathbb{E}\{L(t+1)-L(t)\,|\,\mathcal{H}(t),\mathbf{Z}\}=\frac12\,\mathbb{E}\Big\{\sum_{j=1}^nQ_j(t+1)^2-Q_j(t)^2\,\Big|\,\mathcal{H}(t),\mathbf{Z}\Big\}\\&=\frac12\sum_{j=1}^n\mathbb{E}\{Q_j(t+1)^2\,|\,\mathcal{H}(t),\mathbf{Z}\}-\frac12\sum_{j=1}^nQ_j(t)^2\\&=\frac12\sum_{j=1}^n\mathbb{E}\{\max\{Q_j(t)+\gamma_j(t)-x_j(t),0\}^2\,|\,\mathcal{H}(t),\mathbf{Z}\}-\frac12\sum_{j=1}^nQ_j(t)^2\\&\le\frac12\sum_{j=1}^n\mathbb{E}\{(Q_j(t)+\gamma_j(t)-x_j(t))^2\,|\,\mathcal{H}(t),\mathbf{Z}\}-\frac12\sum_{j=1}^nQ_j(t)^2\\&\le\sum_{j=1}^nQ_j(t)\left(\gamma_j(t)-\mathbb{E}\{x_j(t)|\mathcal{H}(t),\mathbf{Z}\}\right)+\frac12\sum_{j=1}^n\gamma_j(t)^2+\frac12\sum_{j=1}^n\mathbb{E}\{x_j(t)^2|\mathcal{H}(t),\mathbf{Z}\}\\&\overset{(a)}{\le}\sum_{j=1}^nQ_j(t)\left(\gamma_j(t)-\mathbb{E}\{x_j(t)|\mathcal{H}(t),\mathbf{Z}\}\right)+\frac12\sum_{j\in A}E_j^2+\frac{n-a}2+\frac12\sum_{j\in A}\mathbb{E}\{(X_j(t)\mathbf{1}\{\alpha^A(t)=j\})^2|\mathcal{H}(t),\mathbf{Z}\}+\frac12\sum_{j\in A^c}\mathbb{E}\{(\mathbf{1}\{\alpha^A(t)=j\})^2|\mathcal{H}(t),\mathbf{Z}\}\\&\le\sum_{j=1}^nQ_j(t)\left(\gamma_j(t)-\mathbb{E}\{x_j(t)|\mathcal{H}(t),\mathbf{Z}\}\right)+\frac12\sum_{j\in A}E_j^2+\frac{n-a}2+\frac12\sum_{j\in A}\mathbb{E}\{X_j(t)^2|\mathcal{H}(t),\mathbf{Z}\}+\frac12\sum_{j\in A^c}\mathbb{E}\{1|\mathcal{H}(t),\mathbf{Z}\}\\&\overset{(b)}{=}\sum_{j=1}^nQ_j(t)\left(\gamma_j(t)-\mathbb{E}\{x_j(t)|\mathcal{H}(t),\mathbf{Z}\}\right)+\frac12\sum_{j\in A}E_j^2+\frac{n-a}2+\frac12\sum_{j\in A}\mathbb{E}\{W_j^2\}+\frac{n-a}2\\&=\sum_{j=1}^nQ_j(t)\left(\gamma_j(t)-\mathbb{E}\{x_j(t)|\mathcal{H}(t),\mathbf{Z}\}\right)+D_1,\end{aligned}$$
where inequality (a) follows since $\gamma(t)\in\mathcal{K}$, and equality (b) follows from the fact that $X(t)$ is independent of $\mathcal{H}(t)$ and $\mathbf{Z}$.

Appendix D. Proof of Lemma 6

Notice that
$$\nabla f_t(\gamma(t-1))=v-\frac12\tilde\Omega(t),$$
where v is defined by,
$$v_j=\begin{cases}1&\text{if }j\in A,\\E_j&\text{if }j\in A^c,\end{cases}$$
and $\tilde\Omega(t)$ is given by $\tilde\Omega_k(t)=\Omega_k(t)\,\mathbf{1}\{\arg\max_{1\le j\le n}\{\gamma_j(t-1)\Omega_j(t)\}=k\}$, where arg max returns the least index in the case of ties. Notice that
$$\begin{aligned}&-V\nabla f_t(\gamma(t-1))^\top(\gamma(t)-\gamma(t-1))+\alpha\|\gamma(t)-\gamma(t-1)\|_2^2\\&\quad\overset{(a)}{\ge}-V\|\nabla f_t(\gamma(t-1))\|_2\,\|\gamma(t)-\gamma(t-1)\|_2+\alpha\|\gamma(t)-\gamma(t-1)\|_2^2\\&\quad=\alpha\left(\|\gamma(t)-\gamma(t-1)\|_2-\frac V{2\alpha}\|\nabla f_t(\gamma(t-1))\|_2\right)^2-\frac{V^2}{4\alpha}\|\nabla f_t(\gamma(t-1))\|_2^2\\&\quad\ge-\frac{V^2}{4\alpha}\|\nabla f_t(\gamma(t-1))\|_2^2=-\frac{V^2}{4\alpha}\left\|v-\frac12\tilde\Omega(t)\right\|_2^2\\&\quad\overset{(b)}{\ge}-\frac{V^2}{4\alpha}\|v\|_2^2-\frac{V^2}{16\alpha}\|\tilde\Omega(t)\|_2^2\ \ge\ -\frac{V^2}{4\alpha}\left(a+\sum_{j\in A^c}E_j^2\right)-\frac{V^2}{16\alpha}\|\Omega(t)\|_2^2,\end{aligned}$$
where (a) follows from the Cauchy–Schwarz inequality, and (b) follows since $v_k\ge0$ and $\tilde\Omega_k(t)\ge0$ for all $1\le k\le n$.

Appendix E. Proof of Lemma 8

Notice that from the definition of Q j ( t + 1 ) in (35) and the definition of x j ( t ) in (42) we have that
$$Q_j(t+1)\ \ge\ Q_j(t)+\gamma_j(t)-x_j(t),$$
for all $1\le j\le n$ and $1\le t\le T-1$. Summing the above from $t=1$ to $T-1$, we have that
$$Q_j(T)-Q_j(1)\ \ge\ \sum_{t=1}^{T-1}\left(\gamma_j(t)-x_j(t)\right).$$
After using $Q_j(1)=0$, taking expectations conditioned on $\mathbf{Z}$, and some algebraic manipulation, we have
$$\frac{\mathbb{E}\{Q_j(T)|\mathbf{Z}\}}T\ \ge\ \frac1T\sum_{t=1}^{T-1}\mathbb{E}\{\gamma_j(t)-x_j(t)|\mathbf{Z}\}.$$
We have the desired inequality from the above since Q j ( T ) is non-negative.

Appendix F. Proof of Lemma 9

Define v , u as follows.
$$v_k=\begin{cases}1&\text{if }k\in A,\\E_k&\text{if }k\in A^c,\end{cases}\qquad u_k=\begin{cases}E_k&\text{if }k\in A,\\1&\text{if }k\in A^c.\end{cases}$$
Hence, we are required to prove that $Q_j(t)\le(v_j+2\sqrt2\,u_j)\sqrt\alpha+u_j$ for all $t\in[1:T]$.
We begin with several important results.
Lemma A4.
We have the following results regarding Q j ( t ) .
1.
$Q_j(t+1)\le Q_j(t)+u_j$ for all $t\ge1$.
2.
Assume $Q_j(t)\ge(v_j+\sqrt2\,u_j)\sqrt\alpha$ for some $t\ge1$. Then, we have either $\gamma_j(t)=0$ or
$$\gamma_j(t)\ \le\ \gamma_j(t-1)-\frac{u_j}{\sqrt{2\alpha}}.$$
3.
Assume $Q_j(\tau)\ge(v_j+\sqrt2\,u_j)\sqrt\alpha$ for all $\tau\in[t:t+t_0]$, where $t\ge1$ and $t_0\ge0$. Additionally, assume $\gamma_j(t-1)=0$. Then, $\gamma_j(\tau)=0$ for all $\tau\in[t-1:t+t_0]$.
Proof of Lemma A4.
  • Notice that from the definition of Q j ( t + 1 ) in (35), for j A we have
    $$Q_j(t+1)=\max\left\{Q_j(t)+\gamma_j(t)-X_j(t)\mathbf{1}\{\alpha^A(t)=j\},\,0\right\}\ \le\ \max\{Q_j(t)+u_j,\,0\}=Q_j(t)+u_j,$$
    where the inequality follows from the definition of u j in (A28). The same argument can be repeated for j A c .
  • Notice that if $\gamma_j(t)\ne0$, then we have
    $$\gamma_j(t)\ \le\ \gamma_j(t-1)+\frac{V\nabla f_{t,j}(\gamma(t-1))-Q_j(t)}{2\alpha},$$
    which follows since $\gamma_j(t)$ is the (nonzero) projection of $\gamma_j(t-1)+\frac{V\nabla f_{t,j}(\gamma(t-1))-Q_j(t)}{2\alpha}$ onto $[0,u_j]$ (see (38)). Hence, we have that
    $$\gamma_j(t)\ \le\ \gamma_j(t-1)+\frac{V\nabla f_{t,j}(\gamma(t-1))-Q_j(t)}{2\alpha}\ \overset{(a)}{\le}\ \gamma_j(t-1)+\frac{Vv_j-(v_j+\sqrt2\,u_j)\sqrt\alpha}{2\alpha}\ \overset{(b)}{\le}\ \gamma_j(t-1)-\frac{u_j}{\sqrt{2\alpha}},$$
    where (a) follows from the subgradients of $f_t$ found in (34) together with the assumed lower bound on $Q_j(t)$, and (b) follows from $\alpha\ge V^2$.
  • Notice that if we prove $\gamma_j(t)=0$, we can use the same argument inductively to establish the result. Assume, to the contrary, that $\gamma_j(t)\ne0$. Then, from part 2, we should have
    $$\gamma_j(t)\ \le\ \gamma_j(t-1)-\frac{u_j}{\sqrt{2\alpha}}=-\frac{u_j}{\sqrt{2\alpha}},$$
    which is a contradiction since $\gamma_j(t)\ge0$. Hence, we have the result.
Now, we use an inductive argument to prove the main result. Notice that the result is true for $t=1$, since $Q_j(1)=0\le(v_j+2\sqrt2\,u_j)\sqrt\alpha+u_j$. Now, we prove that $Q_j(t+1)\le(v_j+2\sqrt2\,u_j)\sqrt\alpha+u_j$ for $t\ge1$, under the induction hypothesis $Q_j(t)\le(v_j+2\sqrt2\,u_j)\sqrt\alpha+u_j$.
We consider three cases.
Case 1:  $Q_j(t)\le(v_j+2\sqrt2\,u_j)\sqrt\alpha$. This case follows from Lemma A4-1.
Case 2:  $t\le\sqrt{2\alpha}+1$. Notice that
$$Q_j(t+1)\ \le\ Q_j(1)+u_j\,t\ \le\ \left(\sqrt{2\alpha}+1\right)u_j\ \le\ (v_j+2\sqrt2\,u_j)\sqrt\alpha+u_j,$$
where the first inequality follows from Lemma A4-1.
Case 3:  $t>\sqrt{2\alpha}+1$ and $Q_j(t)>(v_j+2\sqrt2\,u_j)\sqrt\alpha$. For this case, we prove that $\gamma_j(t)=0$, which establishes the claim from the definition of $Q_j(t+1)$ in (35) and the induction hypothesis.
Notice that for all u [ 1 : t ] we have
$$Q_j(u)\ \overset{(a)}{\ge}\ Q_j(t)-(t-u)u_j\ >\ (v_j+2\sqrt2\,u_j)\sqrt\alpha-(t-u)u_j\ =\ (v_j+\sqrt2\,u_j)\sqrt\alpha+\sqrt{2\alpha}\,u_j-(t-u)u_j,$$
where (a) follows from Lemma A4-1.
Hence, for all $u\in\mathbb{Z}$ such that $t-\sqrt{2\alpha}\le u\le t$, we have that
$$Q_j(u)\ \ge\ (v_j+\sqrt2\,u_j)\sqrt\alpha.$$
Now, we prove that there exists $u\in\mathbb{Z}$ such that $t-\sqrt{2\alpha}\le u\le t$ and $\gamma_j(u)=0$, which will establish that $\gamma_j(t)=0$ by Lemma A4-3. For the proof, assume the contrary: $\gamma_j(u)>0$ for all $u\in\mathbb{Z}$ such that $t-\sqrt{2\alpha}\le u\le t$ (or $u\in[\lfloor t-\sqrt{2\alpha}\rfloor:t]$, where $\lfloor x\rfloor$ denotes the largest integer smaller than or equal to x). From Lemma A4-2, we have that
$$\gamma_j(t)\ \le\ \gamma_j\!\left(\left\lfloor t-\sqrt{2\alpha}\right\rfloor-1\right)-\left(\left\lfloor\sqrt{2\alpha}\right\rfloor+1\right)\frac{u_j}{\sqrt{2\alpha}}\ \le\ 0,$$
where the last inequality follows since $\lfloor x\rfloor+1\ge x$ and $\gamma_j(\lfloor t-\sqrt{2\alpha}\rfloor-1)\le u_j$ (since $\gamma_j(\tau)\in[0,u_j]$ for all $\tau\in[1:T]$ by the projection definition of $\gamma_j(\tau)$ in (38), and $\gamma_j(0)=0$). Hence, we should have $\gamma_j(t)=0$, which contradicts our initial assumption. Hence, we are finished.

Notes

1
In practice, player B may not have information about $q_j^A$ and $p_j^A$, and hence may not be able to use this exact strategy. Nevertheless, obtaining a better bound is impossible, since we place no assumptions on player B's incentives or strategy. For instance, suppose player B assumes that player A uses a particular strategy, and this assumption turns out to be correct; then, since player B knows the distributions of all $W_j$ for $1\le j\le n$, player B's estimates of $q_j^A$ and $p_j^A$ are exact.
2
The same problem structure arises in the case with symmetric information between the players (case a = b = 0 with d arbitrary). Hence, we can use the solution obtained in this section for the above case as well.

References

  1. Akkarajitsakul, K.; Hossain, E.; Niyato, D.; Kim, D.I. Game Theoretic Approaches for Multiple Access in Wireless Networks: A Survey. IEEE Commun. Surv. Tutor. 2011, 13, 372–395. [Google Scholar] [CrossRef]
  2. Aryafar, E.; Keshavarz-Haddad, A.; Wang, M.; Chiang, M. RAT selection games in HetNets. In Proceedings of the 2013 Proceedings IEEE INFOCOM, Turin, Italy, 14–19 April 2013; pp. 998–1006. [Google Scholar] [CrossRef]
  3. Felegyhazi, M.; Cagalj, M.; Bidokhti, S.S.; Hubaux, J.P. Non-Cooperative Multi-Radio Channel Allocation in Wireless Networks. In Proceedings of the IEEE INFOCOM 2007—26th IEEE International Conference on Computer Communications, Anchorage, AK, USA, 6–12 March 2007; pp. 1442–1450. [Google Scholar] [CrossRef]
  4. Li, B.; Qu, Q.; Yan, Z.; Yang, M. Survey on OFDMA based MAC protocols for the next generation WLAN. In Proceedings of the 2015 IEEE Wireless Communications and Networking Conference Workshops (WCNCW), New Orleans, LA, USA, 9–12 March 2015; pp. 131–135. [Google Scholar] [CrossRef]
  5. Rosenthal, R.W. A class of games possessing pure-strategy Nash equilibria. Int. J. Game Theory 1973, 2, 65–67. [Google Scholar] [CrossRef]
  6. Nikolova, E.; Stier-Moses, N.E. Stochastic Selfish Routing. In Proceedings of the Algorithmic Game Theory, Amalfi, Italy, 17–19 October 2011; pp. 314–325. [Google Scholar]
  7. Angelidakis, H.; Fotakis, D.; Lianeas, T. Stochastic Congestion Games with Risk-Averse Players. In Proceedings of the SAGT 2013, Lecture Notes in Computer Science, Aachen, Germany, 21–23 October 2013. [Google Scholar] [CrossRef]
  8. Zhou, C.; Nguyen, T.H.; Xu, H. Algorithmic Information Design in Multi-Player Games: Possibilities and Limits in Singleton Congestion. In Proceedings of the 23rd ACM Conference on Economics and Computation, EC’22, Boulder, CO, USA, 11–15 July 2022; Association for Computing Machinery: New York, NY, USA, 2022; p. 869. [Google Scholar] [CrossRef]
  9. Castiglioni, M.; Celli, A.; Marchesi, A.; Gatti, N. Signaling in Bayesian Network Congestion Games: The Subtle Power of Symmetry. Proc. Aaai Conf. Artif. Intell. 2021, 35, 5252–5259. [Google Scholar] [CrossRef]
  10. Wu, M.; Liu, J.; Amin, S. Informational aspects in a class of Bayesian congestion games. In Proceedings of the 2017 American Control Conference (ACC), Seattle, WA, USA, 24–26 May 2017; pp. 3650–3657. [Google Scholar] [CrossRef]
  11. Syrgkanis, V. The complexity of equilibria in cost sharing games. In Proceedings of the Internet and Network Economics: 6th International Workshop, WINE 2010, Stanford, CA, USA, 13–17 December 2010; Proceedings 6. Springer: Berlin/Heidelberg, Germany, 2010; pp. 366–377. [Google Scholar]
  12. Zinkevich, M. Online Convex Programming and Generalized Infinitesimal Gradient Ascent. In Proceedings of the Twentieth International Conference on International Conference on Machine Learning, Washington, DC, USA, 21–24 August 2003; pp. 928–935. [Google Scholar]
  13. Yu, H.; Neely, M.; Wei, X. Online Convex Optimization with Stochastic Constraints. In Proceedings of the Advances in Neural Information Processing Systems, Long Beach, CA, USA, 4–9 December 2017; Curran Associates, Inc.: Brooklyn, NY, USA, 2017; Volume 30. [Google Scholar]
  14. Neely, M.J. Stochastic Network Optimization with Application to Communication and Queueing Systems; Morgan & Claypool: Kentfield, CA, USA, 2010. [Google Scholar]
  15. Monderer, D.; Shapley, L.S. Potential Games. Games Econ. Behav. 1996, 14, 124–143. [Google Scholar] [CrossRef]
  16. Chien, S.; Sinclair, A. Convergence to approximate Nash equilibria in congestion games. Games Econ. Behav. 2011, 71, 315–327. [Google Scholar] [CrossRef]
  17. Bhawalkar, K.; Gairing, M.; Roughgarden, T. Weighted Congestion Games: Price of Anarchy, Universal Worst-Case Examples, and Tightness. In Proceedings of the Algorithms—ESA 2010, Lecture Notes in Computer Science, Liverpool, UK, 6–8 September 2010; pp. 17–28. [Google Scholar] [CrossRef]
  18. Milchtaich, I. Congestion Games with Player-Specific Payoff Functions. Games Econ. Behav. 1996, 13, 111–124. [Google Scholar] [CrossRef]
  19. Ackermann, H.; Goldberg, P.W.; Mirrokni, V.S.; Röglin, H.; Vöcking, B. A Unified Approach to Congestion Games and Two-Sided Markets. Internet Math. 2008, 5, 439–458. [Google Scholar] [CrossRef]
  20. Fotakis, D.; Kontogiannis, S.; Koutsoupias, E.; Mavronicolas, M.; Spirakis, P. The structure and complexity of Nash equilibria for a selfish routing game. Theor. Comput. Sci. 2009, 410, 3305–3326. [Google Scholar] [CrossRef]
  21. Gairing, M.; Lücking, T.; Mavronicolas, M.; Monien, B. Computing Nash Equilibria for Scheduling on Restricted Parallel Links. In Proceedings of the Thirty-Sixth Annual ACM Symposium on Theory of Computing, STOC’04, Chicago, IL, USA, 13–15 June 2004; Association for Computing Machinery: New York, NY, USA, 2004; pp. 613–622. [Google Scholar] [CrossRef]
  22. Acemoglu, D.; Makhdoumi, A.; Malekian, A.; Ozdaglar, A. Informational Braess’ Paradox: The Effect of Information on Traffic Congestion. Oper. Res. 2018, 66, 893–917. [Google Scholar] [CrossRef]
  23. Le, S.; Wu, Y.; Toyoda, M. A Congestion Game Framework for Service Chain Composition in NFV with Function Benefit. Inf. Sci. 2020, 514, 512–522. [Google Scholar] [CrossRef]
  24. Zhang, L.; Gong, K.; Xu, M. Congestion Control in Charging Stations Allocation with Q-Learning. Sustainability 2019, 11, 3900. [Google Scholar] [CrossRef]
  25. Anshelevich, E.; Dasgupta, A.; Kleinberg, J.; Tardos, E.; Wexler, T.; Roughgarden, T. The price of stability for network design with fair cost allocation. In Proceedings of the 45th Annual IEEE Symposium on Foundations of Computer Science, Rome, Italy, 17–19 October 2004; pp. 295–304. [Google Scholar] [CrossRef]
  26. Caragiannis, I.; Flammini, M.; Kaklamanis, C.; Kanellopoulos, P.; Moscardelli, L. Tight Bounds for Selfish and Greedy Load Balancing. Algorithmica 2006, 58, 311–322. [Google Scholar] [CrossRef]
  27. Zhang, F.; Wang, M.M. Stochastic Congestion Game for Load Balancing in Mobile-Edge Computing. IEEE Internet Things J. 2021, 8, 778–790. [Google Scholar] [CrossRef]
  28. Liu, M.; Ahmad, S.H.A.; Wu, Y. Congestion games with resource reuse and applications in spectrum sharing. In Proceedings of the 2009 International Conference on Game Theory for Networks, Istanbul, Turkey, 13–15 May 2009; pp. 171–179. [Google Scholar] [CrossRef]
29. Liu, M.; Wu, Y. Spectrum sharing as congestion games. In Proceedings of the 2008 46th Annual Allerton Conference on Communication, Control, and Computing, Monticello, IL, USA, 23–26 September 2008; pp. 1146–1153. [Google Scholar] [CrossRef]
  30. Ibrahim, M.; Khawam, K.; Tohme, S. Congestion Games for Distributed Radio Access Selection in Broadband Networks. In Proceedings of the 2010 IEEE Global Telecommunications Conference GLOBECOM 2010, Miami, FL, USA, 6–10 December 2010; pp. 1–5. [Google Scholar] [CrossRef]
  31. Seo, J.B.; Jin, H. Two-User NOMA Uplink Random Access Games. IEEE Commun. Lett. 2018, 22, 2246–2249. [Google Scholar] [CrossRef]
  32. Seo, J.B.; Jin, H. Revisiting Two-User S-ALOHA Games. IEEE Commun. Lett. 2018, 22, 1172–1175. [Google Scholar] [CrossRef]
  33. Malanchini, I.; Cesana, M.; Gatti, N. Network Selection and Resource Allocation Games for Wireless Access Networks. IEEE Trans. Mob. Comput. 2013, 12, 2427–2440. [Google Scholar] [CrossRef]
  34. Trestian, R.; Ormond, O.; Muntean, G.M. Game Theory-Based Network Selection: Solutions and Challenges. IEEE Commun. Surv. Tutorials 2012, 14, 1212–1231. [Google Scholar] [CrossRef]
35. Quint, T.; Shubik, M. A Model of Migration; Cowles Foundation Discussion Paper 1331; 1994. Available online: https://elischolar.library.yale.edu/cowles-discussion-paper-series/1331 (accessed on 1 May 2023).
  36. Wijewardena, M.; Neely, M.J. A Two-Player Resource-Sharing Game with Asymmetric Information. arXiv 2023, arXiv:2306.08791. [Google Scholar] [CrossRef]
  37. Nisan, N.; Roughgarden, T.; Tardos, E.; Vazirani, V.V. Algorithmic Game Theory; Cambridge University Press: Cambridge, UK, 2007. [Google Scholar] [CrossRef]
38. Wei, X.; Yu, H.; Neely, M.J. Online Primal-Dual Mirror Descent under Stochastic Constraints. Proc. ACM Meas. Anal. Comput. Syst. 2020, 4, 1–36. [Google Scholar] [CrossRef]
Figure 1. (a, b, c, d) = (0, 0, 3, 0).
Figure 2. (a, b, c, d) = (0, 1, 2, 0).
Figure 3. (a, b, c, d) = (1, 1, 1, 0).
Figure 4. Top: Case a = 0, b = 0, c = 3, d = 0. Middle: Case a = 0, b = 1, c = 2, d = 0. Bottom: Case a = b = c = 1, d = 0. Left: The expected utility of the players at the ϵ-approximate Nash equilibrium vs. E_1. Middle: One possible solution for the probabilities of choosing different resources at the ϵ-approximate Nash equilibrium for player A vs. E_1. Right: One possible solution for the probabilities of choosing different resources at the ϵ-approximate Nash equilibrium for player B vs. E_1.
Figure 5. Left: Case a = 0, b = 0, c = 3, d = 0. Middle: Case a = 0, b = 1, c = 2, d = 0. Right: Case a = b = c = 1, d = 0. Top: The maximum expected worst-case utility of player A and the error margin (shaded in blue) vs. E_1. Bottom: One possible solution for the probabilities of choosing different resources for player A vs. E_1.