Article

Invasion of Optimal Social Contracts

Alessandra F. Lütz, Marco Antonio Amaral, Ian Braga and Lucas Wardil
1 Departamento de Física, Universidade Federal de Minas Gerais, Belo Horizonte 31270-901, Brazil
2 Instituto de Humanidades, Artes e Ciências, Universidade Federal do Sul da Bahia, Teixeira de Freitas 45988-058, Brazil
* Author to whom correspondence should be addressed.
Games 2023, 14(3), 42; https://doi.org/10.3390/g14030042
Submission received: 28 March 2023 / Revised: 5 May 2023 / Accepted: 9 May 2023 / Published: 15 May 2023
(This article belongs to the Special Issue In Pursuit of the Unification of Evolutionary Dynamics)

Abstract

The stag-hunt game is a prototype for social contracts. Adopting a new and better social contract is usually challenging because the current one is already well established and stable due to sanctions imposed on non-conforming members. Thus, how does a population shift from the current social contract to a better one? In other words, how can a social system leave a locally optimum configuration to achieve a globally optimum state? Here, we investigate the effect of promoting diversity on the evolution of social contracts. We consider group-structured populations where individuals play the stag-hunt game in all groups. We model the diversity incentive as a snowdrift game played in a single focus group where the individual is more prone to adopting a deviant norm. We show that a moderate diversity incentive is sufficient to change the system dynamics, driving the population over the stag-hunt invasion barrier that prevents the global optimum being reached. Thus, an initial fraction of adopters of the new, better norm can drive the system toward the optimum social contract. If the diversity incentive is not too large, the better social contract is the new equilibrium and remains stable even if the incentive is turned off. However, if the incentive is large, the population is trapped in a mixed equilibrium and the better social norm can only be reached if the incentive is turned off after the equilibrium is reached. The results are obtained using Monte Carlo simulations and analytical approximation methods.

1. Introduction

Social contracts can be defined as systems of commonly understood conventions that coordinate behavior [1,2]. For example, social contracts contain informal rules that regulate the conduct of the citizens of a society, rules codified as laws, moral codes, and even fashion. The members of a society are always envisioning better social contracts. However, it is not easy to change the social contract. Not following the rules can lead to unwanted costs, and trying to change society’s practices is subject to the risk of coordination failure [3,4]. Thus, how do better social contracts emerge?
The social contract theory has a long tradition that can be divided into two distinct phases [5]. The first phase dates back to Rousseau’s influential book “The Social Contract” [6]. The primary focus was to formulate a theory that would employ the notion of a contract to legitimize or restrict political power. The second phase, which emerged later, coincided with the works of John Rawls and other contemporary scholars. This phase incorporated aspects of modern rational choice theory and Rawls’ interest in social justice [7]. Generally, the social contract theory assumes that the contract is voluntary and that consensus can be achieved because individuals are rational agents.
The individualistic approach adopted in social contract theory is a natural framework for the ideas of evolutionary game theory. In the 1990s, two works stepped in that direction: Brian Skyrms’s Evolution of the Social Contract [8] and Ken Binmore’s Just Playing [2]. These works were praised for their authors’ extensive knowledge of game theory and moral philosophy, ability to extract essential issues from complex literature, and creativity [9]. The evolutionary game approach aims to understand how social contracts evolve. Rather than advocating for a specific social contract, the theory represents social contracts as the equilibria of games and explains how they can arise and persist.
The crucial link between social contract and evolutionary game theory is the map between social contracts and the equilibria of a game. The social contract can be modeled as a stag-hunt game [10]. As proposed by Rousseau in A Discourse on Inequality, “If it was a matter of hunting a deer, everyone well realized that he must remain faithful to their post; but if a hare happened to pass within reach of one of them, we cannot doubt that he would have gone off in pursuit of it without scruple” [11]. The situation where everyone hunts hare represents the current social contract, and everyone hunting stag represents a better social contract that has not yet been adopted. The social contract solution, no matter which one, is an equilibrium solution: no one has a unilateral incentive to deviate (Nash equilibrium) [12,13]. Although the individuals may want to move to a better equilibrium, such movement requires coordination (risky Pareto improvement). Thus, the problem of how the optimal social contract emerges is translated to how the population shifts from one equilibrium to the other.
Some authors have modeled the social contract as a prisoner’s dilemma game [13,14,15,16,17]. The state where all individuals adopt defection is interpreted as the “state of nature”, as in Hobbes’ view of “the war of all against all”. Cooperation is interpreted as the social contract bringing order to society [18,19,20,21]. However, the prisoner’s dilemma fails to explain how individuals adhere to the rules of the social contract because the scenario in which all parties defect is the only state of equilibrium. The social contract has also been modeled as a hawk-dove game [1]. The state of nature is represented by the mixed equilibrium where the two strategies are randomly played, with the randomness mimicking the chaos of the state of nature. The social contract is interpreted as one of the two pure equilibria where each individual has their role in society: one is the dove and the other is the hawk. In these equilibria, everyone is better off if they adopt the opposite strategy of their partner. Interestingly, behavioral diversity is sustained because the hawk-dove game operates as a non-polarizing game. Although different games can be used to model social contracts, it has been argued that the stag hunt is more appropriate [1,10].
The payoff structure of the stag-hunt game does not answer how a society can jump from the local equilibrium with a low payoff to a better one. The analysis of the replicator dynamics shows that the two equilibria are local attractors, and the population cannot easily escape from them [12]. Additional mechanisms must be at work to allow the jump. For example, the optimal equilibrium is facilitated if the agents exchange costly signals [22,23]. More specifically, if the game is the stag hunt, with strategies A and B, and the agents can send signals (arbitrary ones, denoted as 1 and 2), the individuals can take action according to a rule (if the signal is 1, they adopt A; otherwise, they adopt B). Although no individual has prior information about the signals’ meaning, the communication introduces some correlations that facilitate the emergence of a better but riskier equilibrium [22]. The social structure can also modify the nature of the game. If the agents control their interactions, individuals with the same strategy can interact preferentially and promote the new norm [24].
To comprehend how equilibrium solutions are sustained, we must include one more element: social norms are the set of mutual expectations that uphold the equilibria of the social contract [2]. Social norms have been used to explain human cooperation where monetary-based social preferences cannot explain empirically observed behavior [25,26]. In the social preference framework, cooperative behavior is explained by assuming that individuals seek to maximize a utility function that takes into account the payoffs of everyone involved. However, social preferences cannot explain some experiments. For example, in the ultimatum game, responders reject the same proposal at different rates depending on the options available to the proposer [27]. This result suggests that responders follow their personal norms instead of exclusively looking at the monetary reward [28]. The reader can find a comprehensive review of moral preferences in [25]. In these works, the focus is on using social norms to explain unselfishness in economic games. The evolutionary aspects of social norms can be found in [29,30,31,32,33].
Adopting a behavior that deviates from the prevailing norm can be challenging since individuals belong to social groups that exert pressure on them [34]. However, some groups may give individuals more freedom to express novel behaviors and conform to their beliefs. Promoting diversity has been demonstrated to offer an exit from the trap of a negative equilibrium [35,36,37,38,39,40,41,42,43,44]. The encouragement of diversity can be modeled as a snowdrift game, where deviating from the partner’s strategy is the equilibrium solution. The game derives its name from a metaphorical scenario where two drivers are stuck in a snowstorm, and if neither shovels the snow, they cannot get home, but if both shovel the snow, they can go home quickly. However, if one shovels, the other can stay in the car, waiting for the job to be done. In the Nash equilibrium solution, one person shovels and the other stays in the car. To avoid any implication of antagonistic interaction, we will use the snowdrift game instead of the hawk and dove metaphor (both metaphors represent the same game). The snowdrift game introduces incentives that allow the coexistence of strategies in well-mixed populations and in structured populations [45,46,47]. Thus, suppose that individuals play the stag-hunt game, which represents the social contract, and the snowdrift game, which represents incentives for diversity. In that case, we can ask to what extent promoting diversity (the snowdrift incentives) can shift the population towards an optimal social contract. Adopting these incentives may help individuals feel more comfortable in displaying new behaviors and conforming to their beliefs, breaking away from the pressure exerted by their social groups.
Here, we investigate how the dynamics of social contracts are influenced by incentives promoting diversity. Specifically, we use a stag-hunt game to model the social contract and a snowdrift game to model the diversity incentive. The population is partitioned into groups, and each individual belongs to a subset of groups where the stag-hunt game is played. We assume that there is one group where individuals are encouraged to adopt a strategy that deviates from the majority. In this group, the individuals play a snowdrift game. The strategies evolve through imitation dynamics, where those yielding higher payoffs spread at higher rates. Our results show that moderate snowdrift incentives are sufficient to shift the equilibrium towards the optimal social norm, which becomes stable even after the diversity incentives are turned off. However, if individuals interact with many others in different groups, more significant snowdrift incentives may be necessary due to overall social pressure. We also demonstrate that similar outcomes can be observed in populations structured in square lattices, where the stag-hunt and snowdrift games are played within a range of neighbors. Our analysis is supported by computer simulations and analytical approximations that allow us to explain the results using simple game theory concepts.

2. Model

The population is structured in groups, and the individuals can have multiple group affiliations. More specifically, there are n groups in a population of size N. Each individual belongs to one group, which we call the focus group, and, on average, to a fraction q of the other, non-focal groups. An illustration of such a structure is shown in Figure 1. The focus groups are defined by initially assigning N/n individuals to each group.
The social norms are modeled as strategies in a stag-hunt game, which is played in all groups. The norm that yields the social optimum is represented by A and the other one by B. Interactions of type A-A yield a payoff equal to 1, while B-B yields 0. If individuals with different norms interact, A receives $-\delta$ ($0 < \delta < 1$) and B receives $1-r$ ($0 < r < 1$). The game is specified by the parameter set $(r, \delta)$ and is represented by the following payoff matrix:
$$\text{Stag-hunt} = \begin{pmatrix} 1 & -\delta \\ 1-r & 0 \end{pmatrix}.$$
The best solution is to adopt the same strategy as the partner. Because $r > 0$, the norm A is the social optimum. However, if everyone is adopting B and a single individual moves to the social optimum A, this individual faces the risk of receiving $-\delta$, which is worse than 0.
The diversity incentive is modeled as a snowdrift game, which is played only in the focus group of each individual. The payoffs from the snowdrift game are parameterized by $(r', \delta')$, with $0 < r' < 1$ and $0 < \delta' < 1$, and the snowdrift game is represented by the payoff matrix:
$$\text{Snowdrift} = \begin{pmatrix} 1 & \delta' \\ 1+r' & 0 \end{pmatrix}.$$
The best solution is to adopt the opposite strategy to your partner. Notice that if strategy A interacts with strategy B, the individual adopting B receives $1+r'$, which is larger than the payoff that A receives, which is $\delta'$. That is, the diversity incentive does not favor the social contract A. As a last remark, the payoff obtained when both players adopt A is larger than that obtained when both players adopt B. However, this is not an issue because the diversity incentive matters most when strategy A is the minority.
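To make the two payoff structures concrete, here is a minimal Python sketch (our own illustration, not code from the paper) that writes out both matrices with the parameter values used later in the figures ($r = 0.5$, $\delta = 0.1$, $r' = 1$, $\delta' = 0.5$); the variable names are ours.

```python
import numpy as np

# Strategies are indexed as 0 = A (better norm) and 1 = B (current norm).
r, delta = 0.5, 0.1      # stag-hunt parameters (r, delta)
rp, deltap = 1.0, 0.5    # snowdrift parameters (r', delta')

# Stag hunt: coordinating on A pays 1, coordinating on B pays 0,
# and a lone A player facing B receives -delta.
stag_hunt = np.array([[1.0,     -delta],
                      [1.0 - r,  0.0]])

# Snowdrift: the best reply is the opposite of the partner's strategy;
# B against A earns 1 + r', more than the delta' earned by A.
snowdrift = np.array([[1.0,      deltap],
                      [1.0 + rp, 0.0]])
```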
The cumulative payoff of player f is equal to the gains obtained in all stag-hunt interactions, $\pi_f^{Sh}$, plus the gains from all snowdrift interactions, $\pi_f^{Sd}$, with the latter weighted by the incentive strength $\alpha$:
$$\pi_f = \pi_f^{Sh} + \alpha\, \pi_f^{Sd}.$$
Notice that the parameter q determines the extent to which the stag-hunt payoff affects the cumulative payoff: the stag-hunt contribution grows with the number of groups that an individual is connected to.
The strategies’ evolution is determined by imitation dynamics. A random player f is selected to compare their payoff with a random player z connected to them. If they have different strategies, the first player imitates the strategy of the second one with a probability given by the Fermi rule:
$$P_{f \to z} = \frac{1}{1 + \exp\left[-\left(\pi_z - \pi_f\right)/K\right]},$$
where K is the selection intensity, representing the population’s level of irrationality. We set $K = 0.1$ to allow some level of irrationality while still maintaining the effective selection of the most successful strategies. Our results are robust to variations of K. However, if K is too large, the payoff difference has little impact on the outcome and the evolution becomes close to neutral. Each generation consists of N repetitions of the imitation step.
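For concreteness, the Python sketch below illustrates one possible reading of this update scheme. It is our illustration, not the authors’ simulation code: it assumes well-mixed interactions inside each group and per-group average payoffs, and the helper names (`member`, `payoff`) and specific parameter values are ours.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- parameters (illustrative; the game parameters are the figure values) ---
N, n, q = 500, 10, 0.3                   # population size, groups, connection probability
r, delta, rp, deltap = 0.5, 0.1, 1.0, 0.5
alpha, K, x0, generations = 1.0, 0.1, 0.05, 200

SH = np.array([[1.0, -delta], [1.0 - r, 0.0]])   # strategy 0 = A, 1 = B
SD = np.array([[1.0, deltap], [1.0 + rp, 0.0]])

# Group structure: one focal group per agent, plus each other group with probability q.
focal = np.repeat(np.arange(n), N // n)
member = np.zeros((N, n), dtype=bool)
member[np.arange(N), focal] = True
member |= rng.random((N, n)) < q

strategy = (rng.random(N) > x0).astype(int)      # initial fraction x0 of A players

def payoff(i):
    """Average stag-hunt payoff in every group of agent i, plus alpha times
    the average snowdrift payoff in i's focal group."""
    pi = 0.0
    for g in np.flatnonzero(member[i]):
        others = np.flatnonzero(member[:, g])
        others = others[others != i]
        if others.size == 0:
            continue
        opp = strategy[others]
        pi += SH[strategy[i], opp].mean()
        if g == focal[i]:
            pi += alpha * SD[strategy[i], opp].mean()
    return pi

for _ in range(generations):
    for _ in range(N):                            # one generation = N imitation steps
        f = rng.integers(N)
        mates = np.flatnonzero(member[:, member[f]].any(axis=1))
        z = rng.choice(mates[mates != f])         # random partner sharing a group with f
        if strategy[f] != strategy[z]:
            p = 1.0 / (1.0 + np.exp(-(payoff(z) - payoff(f)) / K))   # Fermi rule
            if rng.random() < p:
                strategy[f] = strategy[z]

# With these values, q~ = 1 + 9q = 3.7 lies inside the window of Eq. (11) below,
# so the better norm A is expected to spread from a small initial fraction.
print("final fraction of A:", np.mean(strategy == 0))
```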

3. Results

The imitation dynamics can be analyzed using a mean-field approximation. For the simple case of one group ($n = 1$), the mean-field equation becomes
$$\frac{dx}{dt} = (1-x)\,x\,\sinh\!\left[\pi_A(x) - \pi_B(x)\right],$$
where x is the fraction of individuals adopting A, while $\pi_A$ and $\pi_B$ are the average payoffs of individuals adopting A and B, respectively (see Appendix A). For $n = 1$, there is a single well-mixed group where all individuals play the two games. Thus, the cumulative payoffs are effectively determined by the sum of the matrices of the two games:
$$M = \begin{pmatrix} 1+\alpha & -\delta + \alpha\delta' \\ 1-r+\alpha(1+r') & 0 \end{pmatrix}.$$
The analysis of Equation (5) shows that there is an unstable equilibrium $x^*$, which is the solution of $\pi_A(x) - \pi_B(x) = 0$ and is given by
$$x^* = \frac{\delta - \alpha\delta'}{\delta + r - \alpha(\delta' + r')}.$$
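A one-line helper (ours, for illustration) evaluates Equation (7); with the figure parameters and no incentive it returns the bare stag-hunt barrier $\delta/(\delta+r) \approx 0.17$.

```python
def equilibrium_x(r, delta, rp, deltap, alpha):
    """Equilibrium fraction x* of Eq. (7) for the n = 1 averaged game."""
    return (delta - alpha * deltap) / (delta + r - alpha * (deltap + rp))

print(equilibrium_x(0.5, 0.1, 1.0, 0.5, alpha=0.0))   # ~0.167: the stag-hunt invasion barrier
```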
Let us first recall the main results for the stag-hunt dynamics by setting $\alpha = 0$. If the fraction of A at time t is such that $x < x^*$, then, because $\pi_A(x) - \pi_B(x) < 0$ for x in that range, the population goes to the state $x = 0$ (all B). If $x > x^*$, the population goes to the state $x = 1$ (all A). Thus, if a small amount of A (a fraction $\epsilon$) invades a population initially at $x = 0$, as long as $\epsilon < x^*$, the invader has no chance. The invasion must be large enough to overcome the invasion barrier determined by the unstable equilibrium $x^*$.
In the dynamics of a snowdrift game, by contrast, new strategies can always invade. The analysis of Equation (5) for a very large $\alpha$, when the game is a pure snowdrift, shows that there is a stable equilibrium state, where A and B coexist, which is also given by Equation (7). In this case, a small number of A invaders have an incentive to maintain their strategy and will spread in the population until the coexistence equilibrium is reached.
When the stag hunt and the snowdrift are combined, the effect of increasing the incentive provided by the snowdrift game, which is parameterized by $\alpha$, is to change the dynamics from that of the stag hunt to that of the snowdrift. If $\alpha$ is low, the weight of the stag-hunt payoff is larger, and the optimal strategy A cannot invade. If $\alpha$ is too large, the dynamics change completely, being determined by the snowdrift payoff, where both strategies coexist. However, if $\alpha$ is moderate, more specifically, if
$$\frac{\delta}{\delta'} < \alpha < \frac{r}{r'},$$
then, not only can A invade, but it will certainly dominate the population. Equation (8) is obtained by a simple Nash equilibrium analysis of the payoff matrix in Equation (6), which coincides with the fixed-point analysis of the replicator equation [48]. For $\alpha$ in the interval given by Equation (8), the payoff structure becomes equivalent to that of a harmony game, where A is a global attractor of the dynamics. If the goal of the snowdrift incentive is to shift the population to the social optimum without ending at a coexistence equilibrium, then the moderate-$\alpha$ solution is the best. Thus, if the incentive provided by the snowdrift is moderate, the optimal social contract can invade the population and persists even if the snowdrift incentive is turned off.
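The interval in Equation (8) can be checked directly from the averaged matrix M. The helper below is our sketch; with the figure parameters the window is $0.2 < \alpha < 0.5$.

```python
def classify_effective_game(r, delta, rp, deltap, alpha):
    """Nash-equilibrium classification of the averaged matrix M in Eq. (6)."""
    lo, hi = delta / deltap, r / rp          # the window of Eq. (8)
    if alpha < lo:
        return "stag hunt: bistable, A cannot invade from rarity"
    if alpha > hi:
        return "snowdrift-like: stable coexistence of A and B"
    return "harmony game: A invades and dominates"

print(classify_effective_game(0.5, 0.1, 1.0, 0.5, alpha=0.3))   # inside the 0.2 < alpha < 0.5 window
```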
Still in the $n = 1$ case, the unstable equilibrium of the stag hunt determines the invasion barrier for the norm A: the higher the value of $x^*$, the harder it is for A to invade. Figure 2 shows how the payoff difference $\pi_A(x) - \pi_B(x)$ changes as the fraction x of A and the incentive $\alpha$ vary. If there is no snowdrift incentive ($\alpha = 0$), only a massive initial conversion to A drives the population to the norm A. However, if a moderate incentive is provided, any initial fraction of A will convert the population. As expected, if $\alpha$ is excessively large, the dynamics change and coexistence will be the final state, independently of the initial conditions.
In the general case, where the population is split into groups, a close look at the general mean-field approximation (discussed in Appendix A) shows that the fractions of A in the different groups tend to stay close to each other. Thus, because all groups have roughly the same fraction of A at any time, the analysis can be simplified to that of a single group, the $n = 1$ case, with the average payoff matrix given by
$$M = \begin{pmatrix} \tilde{q}+\alpha & -\tilde{q}\delta + \alpha\delta' \\ \tilde{q}(1-r)+\alpha(1+r') & 0 \end{pmatrix},$$
where $\tilde{q} = 1 + q(n-1)$ is the number of groups where the stag hunt is played. The dynamics are now determined by the impact of the stag-hunt payoff relative to the snowdrift payoff, which is controlled by the parameters $\alpha$ and q.
First, if the group structure does not change, that is, if q is fixed, we would like to find the moderate values of the incentive $\alpha$ that turn the stag-hunt payoff into a harmony-game payoff. The condition is given by
$$\tilde{q}\,\frac{\delta}{\delta'} < \alpha < \tilde{q}\,\frac{r}{r'}.$$
On the other hand, if $\alpha$ is fixed, then the effect of the snowdrift incentive depends on the group structure. If the individuals play the stag hunt in many groups (high q), then the social pressure overcomes the snowdrift incentive and it is nearly impossible for A to invade. More specifically, let us fix $\alpha$, with $\alpha > r/r'$, so that if $\tilde{q} = 1$, the snowdrift is dominant. In this case, increasing $\tilde{q}$ means that more stag-hunt games are played. Thus, if $\tilde{q}$ is too large, then the stag-hunt dynamics are dominant. However, if
$$\alpha\,\frac{r'}{r} < \tilde{q} < \alpha\,\frac{\delta'}{\delta},$$
then the effective game is the harmony game and the strategy A can invade and dominate. In other words, only if the number of groups is moderate can the new social contract invade and dominate.
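Equivalently, for a fixed incentive, Equation (11) can be read as a window of group numbers; the small helper below (ours), with the figure parameters and $\alpha = 1$, gives $2 < \tilde{q} < 5$.

```python
def q_tilde_window(r, delta, rp, deltap, alpha):
    """Range of q~ (Eq. 11) for which the effective game of Eq. (9) is a harmony game."""
    return alpha * rp / r, alpha * deltap / delta

print(q_tilde_window(0.5, 0.1, 1.0, 0.5, alpha=1.0))   # (2.0, 5.0)
```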
The simulations corroborate the mean-field analysis, as shown in Figure 3a,b. Figure 3a shows the equilibrium fraction of A (the globally optimal social norm) as a function of the number of groups connected to each agent, $\tilde{q} = 1 + (n-1)q$, while in Figure 3b, the fraction of A is shown as a function of the snowdrift incentive $\alpha$. The beneficial influence of the incentive $\alpha$ on the success of A is strongest for a moderate number of groups, since x is at its maximum value for intermediate $\tilde{q}$ (see Figure 3a). If the system is highly connected, then the stag hunt dominates, and the social norm A disappears when $x(0)$ is low. Notice that the incentive $\alpha$ has to be low enough to keep the snowdrift from dominating the dynamics, as shown in Figure 3b.
The effect of the initial fraction of A, $x(0)$, is shown in Figure 4a,b. One can see that the initial condition $x(0)$ affects the equilibrium fraction of A when the dynamics are dominated by the stag hunt, which happens if the number of groups, $\tilde{q}$, is high enough (Figure 4a) and if the snowdrift coefficient $\alpha$ is sufficiently low (Figure 4b). Additionally, the minimum value of $x(0)$ necessary to prevent A from going extinct increases with the number of connections $\tilde{q}$, especially when the system goes from being poorly connected ($\tilde{q} \lesssim 3$) to being moderately connected ($3 < \tilde{q} < 5$). This effect comes from the majority’s strategy strongly influencing agents’ strategies in the stag hunt. Thus, when an agent plays the stag hunt with most of its connections, it is more susceptible to adopting the majority’s strategy. However, the influence of $x(0)$ on the persistence of A weakens as the snowdrift incentive $\alpha$ increases, i.e., when the local interactions that encourage different opinions become more important, as can be seen in Figure 4b.
The mean-field approximation is valid only if the size of the groups is large. If the groups are small, fluctuations play a major role and the mean-field approach is not precise. Even so, we see that strategy A is facilitated for moderate values of q for all population sizes, as shown in Figure 5. Notice that, for small population sizes, the average values in Figure 5 represent fixation probabilities and not stationary values. The reason is that, for small sizes, the states where all individuals are either A or B are reached with probability one, independently of the game being played: the coexistence stationary solution is meta-stable even if only the snowdrift game is played. The mean-field behavior is recovered for large groups.
To further investigate the robustness of our results, we also consider a square-lattice version of our model. Each agent occupies a site in a square lattice and plays the stag hunt with all the sites within a range $R_{SH}$ and the snowdrift with all the sites within a range $R_{SD}$, with $R_{SH} \geq R_{SD}$. The distance between two neighboring sites is set to 1. The interaction ranges delimit the groups and play a role similar to that of the parameter q in the previous version of the model.
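For the lattice interactions, the two groups of an agent can be enumerated as the set of lattice offsets inside each radius. The sketch below is our illustration; the paper does not state the distance metric or the boundary conditions, so Euclidean distance is assumed here and boundary handling (e.g., periodic wrapping) is left to the caller.

```python
def offsets_within(radius):
    """Offsets (dx, dy) of all sites within Euclidean distance `radius`
    of a focal site, excluding the focal site itself."""
    rmax = int(radius)
    return [(dx, dy)
            for dx in range(-rmax, rmax + 1)
            for dy in range(-rmax, rmax + 1)
            if 0 < dx * dx + dy * dy <= radius * radius]

sh_neighbors = offsets_within(20)   # stag-hunt range R_SH = 20 (as in Figure 6b)
sd_neighbors = offsets_within(4)    # snowdrift range R_SD = 4
print(len(sh_neighbors), len(sd_neighbors))
```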
In the square-lattice version, the agents have weaker connections, since the clustering coefficient is always lower for this network than for the group-structured population. This difference could impact the results, since the network clustering affects the spreading ability of the social norms. Despite such differences, the results for both settings are very similar. Figure 6a,b show, respectively, the fraction of A as a function of the stag-hunt influence radius $R_{SH}$ and the fraction of A as a function of the snowdrift incentive $\alpha$. Similarly to the results for the group simulation (see Figure 3), the incentive $\alpha$ most benefits the norm A when the stag-hunt influence is limited but relevant. If $R_{SH}$ is much larger than the snowdrift influence radius $R_{SD}$, then the stag hunt dominates, and A disappears for a low $x(0)$. Additionally, the incentive $\alpha$ most benefits A when it is not too large and the system is not dominated by the snowdrift.
In the analysis so far, the incentive $\alpha$ is kept constant throughout the evolutionary process. After the system reaches the new equilibrium, we may ask what happens if the incentive is turned off. Let us analyze a group-structured population with $n = 1$. The dynamics of Equation (5) for $\alpha = 0$ are determined by the payoff from the stag-hunt game only, which has an unstable fixed point given by Equation (7), namely $x^*_{SH} = \delta/(\delta + r)$. Let $x_{eq}$ be the fraction of A at equilibrium for a non-zero $\alpha$. If $\alpha$ is set to zero after the system reaches equilibrium, the population moves to $x = 1$ if $x_{eq} > x^*_{SH}$ because, in this case, the population is in the basin of attraction of A. In the limit of infinite $\alpha$, the condition $x_{eq} > x^*_{SH}$ for $n = 1$ is given by
$$\frac{\delta}{\delta'} < \frac{\delta + r}{\delta' + r'}.$$
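Condition (12) is easy to check numerically. With the figure parameters, the large-$\alpha$ coexistence level $\delta'/(\delta' + r') = 1/3$ sits above the bare barrier $\delta/(\delta + r) \approx 0.17$, so switching the incentive off leaves the population in the basin of attraction of A; the helper below is our sketch.

```python
def survives_turnoff(r, delta, rp, deltap):
    """Eq. (12): in the large-alpha limit, x_eq = delta'/(delta' + r') exceeds the bare
    stag-hunt barrier x*_SH = delta/(delta + r) iff delta/delta' < (delta + r)/(delta' + r')."""
    return delta / deltap < (delta + r) / (deltap + rp)

print(survives_turnoff(0.5, 0.1, 1.0, 0.5))   # True for the figure parameters
```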
A similar analysis yields the same conclusions for the n > 1 case and for the square lattice version of the model. Figure 7 illustrates a typical result for the group-structured population.

4. Discussion and Conclusions

The persistence of social contracts can be attributed to the fact that it is typically disadvantageous to deviate from established norms, thus making it difficult for a new state of equilibrium to emerge, even if it offers greater payoffs. However, our findings indicate that introducing an incentive for diversity facilitates establishing a new equilibrium.
We use the stag-hunt game to model the social contract and the snowdrift game to model the incentives. Our model assumes that individuals have a close group of friends and trusted contacts, where they are more likely to adopt new norms. Our findings demonstrate that if the snowdrift incentive is moderate, then the population is driven to the optimum social contract, which is an equilibrium even without the incentive. However, if the incentive is large, the population is trapped in a mixed equilibrium where the old and new social contracts coexist. In this case, if the population has successfully overcome the stag-hunt invasion barrier, the incentive can be turned off, and the stag-hunt dynamics drive the population towards the best social contract.
The coexistence equilibrium that is reached for a large α can be viewed as a new social contract where diversity is more valuable than the homogeneous situation where everyone adopts the optimum social norm. This is a fascinating aspect of how social contracts evolve. Equilibria are reached by the voluntary actions of individuals seeking the best possible payoff, and unexpected equilibria may be found in the path towards better social contracts. In Binmore’s theory, the author suggests the existence of a “Game of Life”, which is anticipated to have numerous equilibria [2], making the path to socially optimum social contracts even more complex.
Our model does not account for social contracts that are group-specific. Instead, we assume that all individuals, regardless of their group affiliation, have equal options and obtain payoffs from the same stag-hunt game. Furthermore, as many societies regard diversity and inclusion as important moral values, we should stress that our model does not advocate against these values when we turn off the snowdrift incentive after achieving the new social optimum. We assume that moving to the new equilibrium is advantageous for all individuals involved. Consequently, the lesson is that policies promoting diversity can aid in coordinating actions toward new equilibria.
The method of evolutionary game theory is very abstract, as it reduces the complexity of social phenomena into a payoff matrix that governs the strategies’ reproductive rates. Further work is needed to extend the theory connecting evolutionary games and social contracts. Vanderschraaf [9] provides a critical review of the connection between the evolutionary game theory approach to social contracts and other specialized domains such as moral and political philosophy. Our study contributes to this area by examining diversity as a mechanism to shift the population to better equilibria. In the context of evolutionary game theory, we examined the circumstances under which the payoffs from snowdrift games played in a single group alter the fixed points of the population dynamics determined by the stag-hunt game.
Our work is based only on theoretical analysis. We do not conduct economic experiments with humans. The hypothesis that social contracts are equilibria of games can be investigated using economic experiments reproducing social contract contexts. However, it is well known that human behavior in experiments does not always conform to the predictions of rational choice theory. For instance, in public goods games, the predicted Nash equilibrium outcome is that participants would contribute nothing [49]. Nonetheless, numerous experiments indicate that participants tend to initially contribute about half of their endowment, decreasing their contribution over time before ultimately settling on a non-zero amount in the final rounds [50,51]. Contributions may increase in the presence of mechanisms such as communication [52], punishment [53], and reward [54]. Therefore, the claim that social contracts are equilibria is based on theoretical expectations that may not reflect actual human behavior. Moreover, the issue of external validity in experiments raises questions about the extent to which behavior observed in experiments reflects the actual behavior observed in natural conditions [50]. As a result, additional research examining the empirical foundations of the evolutionary game theory approach to social contracts is still necessary.
Our findings have important implications. Scholars have suggested that contemporary society is transitioning from a social world marked by stable, long-term relationships to one defined by brief, temporary connections that alter the social contract [55]. There has been a shift in the collective social understanding concerning family values and job security, among other things. This shift may indicate that the social contract is eroding and that individuals are stuck in unwanted equilibria. However, our research implies that, if people are encouraged to play differently, at least in one group, society could progress towards better social contracts.
Norms can be maintained through internalization, meaning that individuals follow the norms without even being aware of them [56]. For example, feelings of shame and guilt can guide individuals to align their actions with social norms, even if they are not consciously aware of those norms. Suppose that there is one group in which individuals feel comfortable being different. In that case, they may be less likely to feel ashamed for not conforming, and diversity may be more likely to emerge. Norms can also be enforced by authority figures such as politicians or religious leaders recognized by group members [56]. If these figures have an agenda to shift the social contract, they can incentivize diversity within their influence group. In both examples, our model shows that, if individuals have incentives not to conform, at least in one group, it can shift society towards a better social contract.
Changing social norms is a complicated matter, and we acknowledge that numerous factors likely influence the evolution of social contracts. Our research aims to illuminate one fundamental incentive structure: social norms do not encourage deviant behavior, and policies that promote diversity and inclusion facilitate the emergence of different norms. When a block rests on an inclined plane held in place by static friction, gravity acts on it even if it remains stationary. Likewise, although there may be several other social forces, incentives for diversity and inclusion are forces pushing the population towards the desired social optimum.

Author Contributions

Conceptualization, A.F.L., M.A.A. and L.W.; methodology, A.F.L., I.B. and L.W.; software, A.F.L.; validation, A.F.L., M.A.A., I.B. and L.W.; formal analysis, I.B. and L.W.; investigation, A.F.L., M.A.A., I.B. and L.W.; writing—original draft preparation, A.F.L., M.A.A. and L.W.; writing—review and editing, A.F.L., M.A.A., I.B. and L.W.; visualization, A.F.L., M.A.A. and L.W.; supervision, L.W.; project administration, L.W.; funding acquisition, L.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the Brazilian Research Agency CNPq (proc. 428653/2018-9), the Brazilian Research Agency CAPES (proc. 88887.463878/2019-00 and proc. 88887.519069/2020-00), and the Minas Gerais State Agency for Research and Development FAPEMIG (proc. APQ-00694-18).

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

The justification for assuming that the fraction of A in all groups is approximately the same is derived as follows. Let $N_A^i$ be the number of individuals of norm A that belong to group i. If n is the number of groups, each group has $N/n$ individuals, and $N_A^i$ ranges from 0 to $N/n$. The following analysis uses the fact that the payoff matrix in the focal group is the sum of the payoffs of the two games. Let $(a, b, c, d) = (1, -\delta, 1-r, 0)$ be the stag-hunt payoff matrix and $(a', b', c', d') = (1+\alpha, -\delta+\alpha\delta', 1-r+\alpha(1+r'), 0)$ be the sum of the payoffs of the two games.
In our approach, we consider that each individual in the focus group i has a probability q of being connected to each of the remaining $n-1$ groups. Let $x_i = N_A^i/N$, with $0 \leq x_i \leq 1/n$, be the fraction of $A_i$ individuals in the total population. Because the groups have well-mixed interactions, the payoffs gained by A and B individuals belonging to the same focus group i are given by
$$\pi_A^i = a'\, n x_i + b'(1 - n x_i) + q \sum_{j \neq i} \left[ a\, n x_j + b(1 - n x_j) \right],$$
$$\pi_B^i = c'\, n x_i + d'(1 - n x_i) + q \sum_{j \neq i} \left[ c\, n x_j + d(1 - n x_j) \right].$$
The next step is to approximate the transition rates. Recall that all individuals have an equal probability of being chosen to change their strategy. Thus, an $A_i$ individual has a probability $x_i$ of being chosen, and a $B_i$ individual has a probability $(1/n - x_i)$. Furthermore, since each individual is certainly linked with the members of the same focus group, the probability that they compare their payoff with one of the $A_i$ is proportional to $x_i$, and with one of the $B_i$, it is proportional to $(1/n - x_i)$. For the non-focal groups, the probability that an individual from group i compares its strategy with an $A_j$, or a $B_j$, is proportional to $q x_j$, or $q(1/n - x_j)$, respectively. Therefore, the transition rates can be written as
$$T_i^+ = \frac{1}{Z}\left(\frac{1}{n} - x_i\right)\sum_j q_{ij}\, x_j\, \frac{1}{1 + e^{\pi_B^i - \pi_A^j}}, \qquad T_i^- = \frac{1}{Z}\, x_i \sum_j q_{ij}\left(\frac{1}{n} - x_j\right) \frac{1}{1 + e^{\pi_A^i - \pi_B^j}},$$
where $q_{ii} = 1$, $q_{ij} = q$ for $i \neq j$, and Z is a normalization factor. For large populations, the system is mainly driven by the drift term, and it can be described by the following deterministic set of rate equations:
$$\frac{dx_i}{d\tau} = \sum_j q_{ij}\left[ \frac{1}{n}(x_j - x_i) + \frac{\left(\frac{1}{n} - x_i\right) x_j\, e^{\pi_A^i - \pi_B^j}}{\left(1 + e^{\pi_A^i - \pi_B^j}\right)\left(1 + e^{\pi_B^i - \pi_A^j}\right)} - \frac{\left(\frac{1}{n} - x_j\right) x_i\, e^{\pi_B^i - \pi_A^j}}{\left(1 + e^{\pi_A^i - \pi_B^j}\right)\left(1 + e^{\pi_B^i - \pi_A^j}\right)} \right],$$
where we have rescaled the time. Notice that, for n = 1 , Equation (5) in the main text is recovered.
Finally, we see that, if $\pi_A^i \approx \pi_B^j$ for all $i, j$, which is the case in the regime of weak selection, then $e^{\pi_B^i - \pi_A^j} \approx 1$ for all $i, j$, and the set of rate equations drives the variables $x_i$ toward each other at first order. The reason is that, if $x_i < x_j$ for some j, then the first-order term $(x_j - x_i)$ contributes a positive term to the equation, with the opposite happening if $x_i > x_j$. Thus, the overall effect is that the variables $x_i$ evolve in time close to each other for all i. This is interesting because we can describe the system only in terms of the total fraction of A types, $x = \sum_i x_i$, since, in this approximation, we have $x_i \approx x/n$ after a short relaxation time.
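As an illustration of this argument, the rate equations above can be integrated numerically. The sketch below is ours (using the figure parameters and deliberately unequal initial fractions) and simply shows the per-group fractions $x_i$ staying close to one another as they evolve.

```python
import numpy as np
from scipy.integrate import solve_ivp

n, q = 10, 0.3
r, delta, rp, deltap, alpha = 0.5, 0.1, 1.0, 0.5, 1.0

a, b, c, d = 1.0, -delta, 1.0 - r, 0.0               # stag-hunt entries (a, b, c, d)
ap, bp = 1.0 + alpha, -delta + alpha * deltap        # focal group: sum of the two games
cp, dp = 1.0 - r + alpha * (1.0 + rp), 0.0

Q = np.full((n, n), q)
np.fill_diagonal(Q, 1.0)                             # q_ij

def rhs(t, x):
    # per-group payoffs, Eq. (A1)
    sh_A, sh_B = a * n * x + b * (1 - n * x), c * n * x + d * (1 - n * x)
    piA = ap * n * x + bp * (1 - n * x) + q * (sh_A.sum() - sh_A)
    piB = cp * n * x + dp * (1 - n * x) + q * (sh_B.sum() - sh_B)
    dx = np.zeros(n)
    for i in range(n):
        for j in range(n):
            eAB, eBA = np.exp(piA[i] - piB[j]), np.exp(piB[i] - piA[j])
            den = (1 + eAB) * (1 + eBA)
            dx[i] += Q[i, j] * ((x[j] - x[i]) / n
                                + (1 / n - x[i]) * x[j] * eAB / den
                                - (1 / n - x[j]) * x[i] * eBA / den)
    return dx

x0 = np.linspace(0.002, 0.02, n)          # unequal initial fractions per group
sol = solve_ivp(rhs, (0.0, 50.0), x0)
print(sol.y[:, -1])                       # the x_i stay bunched together as they grow
```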

References

  1. Binmore, K. Game Theory and the Social Contract: Playing Fair; The MIT Press Cambridge: London, UK, 1994. [Google Scholar]
  2. Binmore, K. Game Theory and the Social Contract: Just Playing; The MIT Press Cambridge: London, UK, 1994. [Google Scholar]
  3. Straub, P.G. Risk dominance and coordination failures in static games. Q. Rev. Econ. Financ. 1995, 35, 339–363. [Google Scholar] [CrossRef]
  4. Cislaghi, B.; Denny, E.K.; Cissé, M.; Gueye, P.; Shrestha, B.; Shrestha, P.N.; Ferguson, G.; Hughes, C.; Clark, C.J. Changing social norms: The importance of “organized diffusion” for scaling up community health promotion and women empowerment interventions. Prev. Sci. 2019, 20, 936–946. [Google Scholar] [CrossRef]
  5. Lessnoff, M. (Ed.) Social Contract; Basil Blackwell: Oxford, UK, 1990. [Google Scholar]
  6. Rousseau, J.-J. The Social Contract; Penguin Books: London, UK, 2004. [Google Scholar]
  7. Rawls, J. A Theory of Justice; The Belknap Press of Harvard University Press: Cambridge, MA, USA, 1971. [Google Scholar]
  8. Skyrms, B. Evolution of the Social Contract; Cambridge University Press: Cambridge, UK, 2014. [Google Scholar]
  9. Vanderschraaf, P. Game Theory, Evolution, and Justice. Philos. Public Aff. 1999, 28, 325–358. [Google Scholar] [CrossRef]
  10. Skyrms, B. The Stag-Hunt Game and the Evolution of Social Structure; Cambridge University Press: Cambridge, UK, 2003. [Google Scholar]
  11. Rousseau, J.-J. Discourse on the Origin of Inequality; Dover Publications: Mineola, NY, USA, 2004. [Google Scholar]
  12. Nowak, M.A. Evolutionary Dynamics Exploring the Equations of Life; Harvard Univ. Press: Cambridge, MA, USA, 2006. [Google Scholar]
  13. Szabó, G.; Fáth, G. Evolutionary games on graphs. Phys. Rep. 2007, 446, 97–216. [Google Scholar] [CrossRef]
  14. Grujić, J.; Lenaerts, T. Do people imitate when making decisions? Evidence from a spatial Prisoner’s Dilemma experiment. R. Soc. Open Sci. 2020, 7, 200618. [Google Scholar] [CrossRef]
  15. Szabó, G.; Toke, C. Evolutionary prisoner’s dilemma game on a square lattice. Phys. Rev. E 1998, 58, 69–73. [Google Scholar] [CrossRef]
  16. Grujić, J.; Fosco, C.; Araujo, L.; Cuesta, J.A.; Sánchez, A. Social Experiments in the Mesoscale: Humans Playing a Spatial Prisoner’s Dilemma. PLoS ONE 2010, 5, e13749. [Google Scholar] [CrossRef]
  17. Santos, F.C.; Pacheco, J.M. A new route to the evolution of cooperation. J. Evol. Biol. 2006, 19, 726–733. [Google Scholar] [CrossRef]
  18. Gibbard, A. A pareto-consistent libertarian claim. J. Econ. Theory 1974, 7, 388–410. [Google Scholar] [CrossRef]
  19. Hampton, J. Hobbes and the Social Contract Tradition; Cambridge University Press: Cambridge, UK, 1986. [Google Scholar]
  20. Kavka, G.S. Hobbes’s war of all against all. Ethics 1983, 93, 291–310. [Google Scholar] [CrossRef]
  21. Kavka, G. Hobbesian Moral and Political Theory; Princeton University Press: Princeton, NJ, USA, 1986. [Google Scholar]
  22. Skyrms, B. Signals, evolution and the explanatory power of transient information. Philos. Sci. 2002, 69, 407–428. [Google Scholar] [CrossRef]
  23. Santos, F.C.; Pacheco, J.; Skyrms, B. Co-evolution of pre-play signaling and cooperation. J Theor. Biol. 2011, 274, 30–35. [Google Scholar] [CrossRef]
  24. Watts, A. A dynamic model of network formation. Game Econ. Behav. 2001, 34, 331–341. [Google Scholar] [CrossRef]
  25. Capraro, V.; Perc, M. Mathematical foundations of moral preferences. J. R. Soc. Interface 2021, 18, 20200880. [Google Scholar] [CrossRef]
  26. Perc, M.; Jordan, J.J.; Rand, D.G.; Wang, Z.; Boccaletti, S.; Szolnoki, A. Statistical physics of human cooperation. Phys. Rep. 2017, 687, 1–51. [Google Scholar] [CrossRef]
  27. Falk, A.; Fehr, E.; Fischbacher, U. On the nature of fair behavior. Econ. Inq. 2003, 41, 20–26. [Google Scholar] [CrossRef]
  28. Bicchieri, C.; Chavez, A. Behaving as expected: Public information and fairness norms. J. Behav. Decis. Mak. 2010, 23, 161–178. [Google Scholar] [CrossRef]
  29. Santos, F.P.; Santos, F.C.; Pacheco, J.M. Social norm complexity and past reputations in the evolution of cooperation. Nature 2018, 555, 242–245. [Google Scholar] [CrossRef]
  30. Morsky, B.; Akçay, E. Evolution of social norms and correlated equilibria. Proc. Natl. Acad. Sci. USA 2019, 116, 8834–8839. [Google Scholar] [CrossRef]
  31. Chalub, F.A.C.C.; Santos, F.C.; Pacheco, J.M. The evolution of norms. J. Theor. Biol. 2006, 241, 233–240. [Google Scholar] [CrossRef]
  32. Ohtsuki, H.; Iwasa, Y. The leading eight: Social norms that can maintain cooperation by indirect reciprocity. J. Theor. Biol. 2006, 239, 435–444. [Google Scholar] [CrossRef]
  33. Capraro, V.; Perc, M. Grand Challenges in Social Physics: In Pursuit of Moral Behavior. Front. Phys. 2018, 6, 1–6. [Google Scholar]
  34. Young, H.P. The evolution of social norms. Economics 2015, 7, 359–387. [Google Scholar] [CrossRef]
  35. Szolnoki, A.; Danku, Z. Dynamic-sensitive cooperation in the presence of multiple strategy updating rules. Physica A 2018, 511, 371–377. [Google Scholar] [CrossRef]
  36. Cheng, S.; Shi, Y.; Qin, Q. Promoting diversity in particle swarm optimization to solve multimodal problems. In International Conference on Neural Information Processing; Springer: Berlin/Heidelberg, Germany, 2011; pp. 228–237. [Google Scholar]
  37. Cheng, S.; Shi, W.; Qin, Q.; Zhang, Q.; Bai, R. Population diversity maintenance in brain storm optimization algorithm. J. Artif. Intell. Soft Comput. Res. 2014, 4, 83–97. [Google Scholar] [CrossRef]
  38. Squillero, G.; Tonda, A. Divergence of character and premature convergence: A survey of methodologies for promoting diversity in evolutionary optimization. Inf. Sci. 2016, 329, 782–799. [Google Scholar] [CrossRef]
  39. Qin, J.; Chen, Y.; Fu, W.; Kang, Y.; Perc, M. Neighborhood diversity promotes cooperation in social dilemmas. IEEE Access 2018, 6, 5003–5009. [Google Scholar] [CrossRef]
  40. Sendiña-Nadal, I.; Leyva, I.; Perc, M.; Papo, D.; Jusup, M.; Wang, Z. Diverse strategic identities induce dynamical states in evolutionary games. Phys. Rev. Res. 2020, 2, 043168. [Google Scholar] [CrossRef]
  41. Amaral, M.A.; Javarone, M.A. Heterogeneity in evolutionary games: An analysis of the risk perception. Proc. R. Soc. A Math. Phys. Eng. Sci. 2020, 476, 20200116. [Google Scholar] [CrossRef]
  42. Santos, F.C.; Santos, M.D.; Pacheco, J.M. Social diversity promotes the emergence of cooperation in public goods games. Nature 2008, 454, 213–216. [Google Scholar] [CrossRef]
  43. Santos, F.C.; Pinheiro, F.L.; Lenaerts, T.; Pacheco, J.M. The role of diversity in the evolution of cooperation. J. Theor. Biol. 2012, 299, 88–96. [Google Scholar] [CrossRef]
  44. Amaral, M.A.; Arenzon, J.J. Rumor propagation meets skepticism: A parallel with zombies. Europhys. Lett. 2018, 124, 18007. [Google Scholar] [CrossRef]
  45. Smith, J.M. Evolution and the Theory of Games; Cambridge Univ. Press: Cambridge, UK, 1982. [Google Scholar]
  46. Killingback, T.; Doebeli, M. Spatial evolutionary game theory: Hawks and Doves revisited. Proc. R. Soc. Lond. B 1996, 263, 1135. [Google Scholar]
  47. Sigmund, K.; Nowak, M.A. Evolutionary game theory. Curr. Biol. 1999, 9, R503. [Google Scholar] [CrossRef]
  48. Sigmund, K. The Calculus of Selfishness; Princeton Univ. Press: Princeton, NJ, USA, 2010. [Google Scholar]
  49. Tadelis, S. Game Theory: An Introduction; Princeton University Press: Princeton, NJ, USA, 2013. [Google Scholar]
  50. Sturm, B.; Weimann, J. Experiments in Environmental Economics and Some Close Relatives. J. Econ. Surv. 2006, 20, 419–457. [Google Scholar] [CrossRef]
  51. Ledyard, J.O. Public Goods: A Survey of Experimental Research. In The Handbook of Experimental Economics; Kagel, J.H., Roth, A.E., Eds.; Princeton University Press: Princeton, NJ, USA, 1995. [Google Scholar]
  52. Isaac, R.M.; Walker, J.M. Communication and free-riding behavior: The voluntary contribution mechanism. Econ. Inq. 1998, 26, 585–608. [Google Scholar] [CrossRef]
  53. Fehr, E.; Gächter, S. Cooperation and punishment in public goods experiments. Am. Econ. Rev. 2000, 90, 980–994. [Google Scholar] [CrossRef]
  54. Rand, D.G.; Dreber, A.; Ellingsen, T.; Fudenberg, D.; Nowak, M.A. Positive Interactions Promote Public Cooperation. Science 2009, 325, 1272–1275. [Google Scholar] [CrossRef]
  55. Rubin, B. Shifts in the Social Contract: Understanding Change in American Society; Pine Forge Press: Thousand Oaks, CA, USA, 1996. [Google Scholar]
  56. Gavac, S.; Murrar, S.; Brauer, M. Group perception and social norms. In Social Psychology: How Other People Influence Our Thoughts and Actions; Summers, R., Ed.; ABC-CLIO: Santa Barbara, CA, USA, 2017. [Google Scholar]
Figure 1. Group structure. The individuals are represented by circles and the groups by rounded squares. Each individual belongs to one focal group (continuous line) and is connected to each one of the other groups with the probability q (dashed line). The two social norms, A and B, are represented by the colors, blue and red, respectively. Here, N = 9 , n = 3 , and q = 1 / 3 .
Figure 2. Replicator dynamics of the average game. The first diagram, on the left, shows the payoff difference $\pi_A - \pi_B$ in the replicator equation as a function of the fraction of A and the snowdrift incentive $\alpha$. The arrows indicate the sign of $\dot{x}$ (positive if the arrow points up, negative otherwise). The second diagram, on the right, shows the three main regions of the first diagram using a simplex representation for the strategies’ fractions. The white circle represents an unstable equilibrium and the black circle a stable equilibrium. The white arrow represents the direction of change of x. For low values of $\alpha$, the stag hunt dominates and there is an unstable equilibrium state around $x^* \approx 0.1$. All agents end up adopting strategy A or B, depending on whether x is lower or higher than $x^*$. The dashed arrow represents the invasion barrier that A would have to overcome if most of the agents adopt strategy B. For intermediate $\alpha$, the system behaves as a harmony game, and everyone adopts strategy A. Finally, for high values of $\alpha$, the snowdrift dominates, and strategies A and B coexist. Here, $r = 0.5$, $\delta = 0.1$, $r' = 1$, and $\delta' = 0.5$.
Figure 3. Average fraction of individuals adopting the optimal social contract, x. Panel (a) shows the fraction x as a function of the number of groups connected to each agent, $\tilde{q} = 1 + (n-1)q$, while panel (b) shows that fraction as a function of the incentive $\alpha$. Here, $r = 0.5$, $\delta = 0.1$, $r' = 1$, and $\delta' = 0.5$. The initial fraction of A is $x(0) = 0.01$, the number of agents is N = 10,000, and the number of groups is n = 10. In panel (a), the snowdrift incentive is $\alpha = 1$, while in panel (b), the number of groups connected to each agent is $\tilde{q} = 10$. Notice that the dashed line in panel (a) ($2.9 < \tilde{q} < 3.6$) indicates a bistable region where the system falls into one of the absorbing states, A = 1 or A = 0.
Figure 4. Diagram for the fraction x of individuals adopting the optimal social contract. Panel (a) shows x as a function of its initial value $x(0)$ and the average number of groups that an individual is connected to, $\tilde{q} = 1 + (n-1)q$. In panel (b), the fraction x is shown as a function of $x(0)$ and the snowdrift incentive $\alpha$. Here, $r = 0.5$, $\delta = 0.1$, $r' = 1$, and $\delta' = 0.5$.
Figure 5. Average fraction x of individuals adopting the optimal social contract for different system sizes. Each panel shows x as a function of the number of connections per agent, $\tilde{q} = 1 + (n-1)q$, for a different system size. Here, the number of groups is n = 10, the initial fraction of A is $x(0) = 0.01$, and the stag-hunt and snowdrift parameters are $r = 0.5$, $\delta = 0.1$, $r' = 1$, and $\delta' = 0.5$. Notice that, despite the finite-size effects, strategy A is still facilitated for moderate values of $\tilde{q}$.
Figure 6. Average fraction x of individuals adopting the optimal social norm A in the square-lattice model. Panel (a) shows x as a function of the stag-hunt influence radius, $R_{SH}$, while in panel (b), the fraction x is shown as a function of the snowdrift incentive $\alpha$. Here, the influence radius for the snowdrift game in both panels is $R_{SD} = 4$, while that for the stag hunt in panel (b) is $R_{SH} = 20$. The initial fraction of A is $x(0) = 0.01$ and the stag-hunt and snowdrift parameters are $r = 0.5$, $\delta = 0.1$, $r' = 1$, and $\delta' = 0.5$. Notice that the dashed line in panel (a) ($8.5 < R_{SH} < 9.5$) indicates a bistable region where the system falls into one of the absorbing states, A = 0 or A = 1.
Figure 7. Temporal evolution of the average fraction x of individuals adopting the optimal social norm A. The purple line shows the results for the model with $\alpha = 6$. The green line is for the model where $\alpha = 6$ but the incentive is turned off after 200 time steps. Here, the initial value of A is $x(0) = 0.01$, the number of groups connected to each agent is $\tilde{q} = 10$, and the stag-hunt and snowdrift parameters are $r = 0.5$, $\delta = 0.1$, $r' = 1$, and $\delta' = 0.5$.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
