Article

Network Characteristic Control of Social Dilemmas in a Public Good Game: Numerical Simulation of Agent-Based Nonlinear Dynamics

1 International Institute for Applied Systems Analysis (IIASA), A-2361 Laxenburg, Austria
2 Okinawa Institute of Science and Technology (OIST), 1919-1, Okinawa 904-0495, Japan
3 Department of Physical Science, Seoul National University, Seoul 08826, Korea
Processes 2022, 10(7), 1348; https://doi.org/10.3390/pr10071348
Submission received: 18 March 2022 / Revised: 18 June 2022 / Accepted: 8 July 2022 / Published: 11 July 2022
(This article belongs to the Special Issue Numerical Simulation of Nonlinear Dynamical Systems)

Abstract

This paper proposes a possible mechanism for obtaining sizeable behavioral structures by simulating a network–agent dynamic on an evolutionary public good game with available social learning. The model considers a population with a fixed number of players. In each round, the chosen players may contribute part of their value to a common pool. Then, each player may imitate the strategy of another player based on relative payoffs (whoever has the lower payoff adopts the strategy of the other player) and change his or her strategy using different exploratory variables. Relative payoffs are subject to incentives, including participation costs, but may also be subject to mutation, whose rate is sensitized by the network characteristics (social ties). The process discussed in this report is interesting and relevant across a broad range of disciplines that use game theory, including cultural evolutionary dynamics.

1. Introduction

Many computational simulations in game theory have shown that a substantial fraction of players are willing to invest costs (i.e., contribute to a joint effort) to increase their fitness [1]. In general, the rate of incentive (reward) is sufficient to increase the average level of prosocial contributions. Indeed, the system itself is a public good, and the players are often seen as altruistic because others benefit from their costly efforts [2].
Conversely, there are always players who exploit the contributions of others toward the public good; these are called defectors (or free-riders). Defection spreads voluntarily among players and can ultimately cascade through the system. One way to protect the system against defection is to refrain from potential risks [3], because defectors' strategies are adopted by other players [4], leading to infinite regress. Moreover, even if everyone contributes to the public good, risk can propagate without the need for an incentive. The number of defectors may grow through simple cultural evolution, allowing the risk potential to propagate with impunity.
Although the abovementioned assumptions can help achieve prosocial behaviors (i.e., cooperation), detailed investigations of the mechanisms combining fundamental logics that provide realistic options are still necessary [5,6]. Therefore, a computerized model may serve as an essential tool that combines actual circumstances with simulations [7]. The results can be regarded as a sampled subset of the underlying social network [8]. For example, if a plausible model of the underlying network and its agent dynamics are found, it may be possible to infer which contacts are likely to create a propagation route [9]. Indeed, communication among nearby individuals is typically more frequent than long-range connections, providing efficient paths for bias spreading, as observed in real-world networks. Furthermore, to facilitate the identification of common grounds for integrating knowledge and strategies, the mechanisms and serial algorithms that underpin the understanding of the propagation of such a risk (i.e., defection) by networked agents must be evaluated [10]. It will then be possible to agree upon definitions and reconcile the approaches adopted in multiple fields for an interdisciplinary study of game theory.
This article shows how a simple step-by-step mechanism can overcome this objection by using one of the most common properties, called social learning (imitation and exploration) in a random network. In particular, by incorporating a detailed fundamental game (a zero-sum for two players is given) into more complex situations (with many players and a non-zero-sum game) with real-world network properties (eigenvector centrality in a random structure) [11], valuable information can be gained for network–agent simulation to estimate parameter ranges from an extended public good game (PGG) dynamic. The advantage of this combined approach is its flexibility. Hence, certain prototypical results of the PGG model are extended into more realistic simulations (e.g., network–agent dynamics) that fully consider these potential consequences assuming that most people follow reliable strategies.

2. Performance (Model and Results)

Historically, numerical simulation and model-driven approaches have been successful in information gain and systems risk analysis [12]. They exploit relatively powerful computing compared with reinforcement (i.e., statistical) methods. Thus, we applied this tool to incomplete-information game cases (the PGG) rather than to simple game dynamics cases (zero-sum games). The operating principle of this model is expected to gain momentum gradually in terms of generalizability to real-world cases, and the algorithm optimization process undertaken here is anticipated to be extended in terms of the definitions of the game, payoff, and so on [13].

2.1. Definitions of Game and Payoff Estimation

The term “game” describes a set of interactions among a set of players. The players act according to their behavioral phenotypes, called strategies. The game moves are decisions originating from interactions between two (or more) co-players with different strategies, which translate into payoffs. Most conceptually straightforward games offer only two strategies to each player (up or down and left or right) and four outcomes ( α , β , γ , and δ ) (as expressed in Equation (1)), and they generally involve two players.
( α  β )
( γ  δ )          (1)
However, a major challenge, even in a simple game, is to determine the ranking of payoff values. This model can achieve this objective by simply defining the dynamics of two strategies with four outcomes. We assume that players must choose between n options or strategies. In a simple case, players one and two simultaneously reveal a penny. If both pennies show heads or tails, player two must pay USD 1 to player one. On the other hand, if one penny shows heads and the other shows tails, player one must pay USD 1 to player two. The game, then, can be described as follows
                     Player 2
                     H          T
Player 1    H     (1, −1)    (−1, 1)
            T     (−1, 1)    (1, −1)
The matrix describes the pair ( a i j ,   b i j ) of payoff values in the ith row and jth column. It shows that if the outcome is (−1, 1), then player one (who chooses here the row of the payoff matrix) would have done better if he or she had chosen the other row. On the other hand, if the outcome had been (1, −1), then player two (the column player) would have done better if he or she had switched. Players one and two have diametrically opposed interests here.
This strategic situation is very common in games. During a game of soccer, in a penalty kick, if the striker and keeper are mismatched, the striker is happy; but if they are matched, the keeper is happy. This logic applies to many common preference situations, and an efficient means of representing these dynamics is to let a player decide to implement a strategy with a given probability.
x = (x_1, x_2, …, x_n),    x_i ≥ 0 and x_1 + ⋯ + x_n = 1,
where the probability distribution presents two or more pure strategies ( x i ) from which the players choose randomly; this feature is denoted as the set of all probability ( p r ) distributions of such mixed strategies, as follows
pr(x_1) + pr(x_2) + ⋯ + pr(x_n) = 1 = 100%
pr(x_i) ∈ [0, 1]
For all events, the player is expected to be happy for half the time and unhappy for the remaining time.
                        Player 2
                        H (0.5)     T (0.5)
Player 1    H (0.5)   (1, −1)     (−1, 1)
            T (0.5)   (−1, 1)     (1, −1)
It does not matter what the players do because they cannot change the outcome, so they are just as happy to flip a coin (in the coin game) or to choose one of two directions (in soccer). The only question is whether one player is able to anticipate the decision of the other player. If the striker knows that the keeper is playing heads (one direction of two), the striker will avoid heads and play tails (the other direction of the two). However, it is not usually possible to guess the coin flip if the payoff is changed to provide a different outcome. In fact, if the game expands to the following arbitrarily mixed payoff conditions { A > B > C } , then the game will be expressed as follows
                       Player 2
                       Left       Right
Player 1    Up       (B, A)     (C, C)
            Down     (C, C)     (A, B)
The expected utility of a player ( σ_u ) can be described as follows
σ_u = (B − C) / (A + B − 2C),    A + B − 2C > 0
σ_u = (B − C) / (A + B − 2C),    B − C > 0,    A + B − 2C > 0
σ_u = (B − C) / (A + B − 2C),    A + B − 2C > B − C,    A > C
Even with a slightly different payoff related to the arbitrarily given mixed conditions { A > B > C }, the probability σ_u will always lie in (0, 1) under the above conditions.
With this dynamic, if the probability that each outcome occurs for a percentage of instances is determined, then the payoffs of players one and two can be expressed as follows
                  Left (2/3)         Right (1/3)
Up (1/3)        (B = 1, A = 2)     (C = 0, C = 0)
Down (2/3)      (C = 0, C = 0)     (A = 2, B = 1)
The probability of each outcome must be multiplied by the payoff of that particular outcome as follows
                  Left (2/3)                           Right (1/3)
Up (1/3)        (B = 1, A = 2 → 1/3 · 2/3 = 2/9)     (C = 0, C = 0 → 1/3 · 1/3 = 1/9)
Down (2/3)      (C = 0, C = 0 → 2/3 · 2/3 = 4/9)     (A = 2, B = 1 → 2/3 · 1/3 = 2/9)
Then, all these numbers are summed, giving the payoff of player one as follows
2/3 = 6/9 = 2/9 + 0 + 0 + 4/9 = 1(2/9) + 0(1/9) + 0(4/9) + 2(2/9).
The earned mixed equilibrium of player one is 2/3. Then, the payoff of player two can be written as follows
2/3 = 6/9 = 4/9 + 0 + 0 + 2/9 = 2(2/9) + 0(1/9) + 0(4/9) + 1(2/9).
Thus, the earned mixed equilibrium of player two is also 2/3. After checking the underlying probability-distribution assumption pr(x_i) ∈ [0, 1] of the game, it becomes clear that all these outcome probabilities will always lie in [0, 1] because they must add up in the following manner
2/9 + 1/9 + 4/9 + 2/9 = 1.
Furthermore, they satisfy the following rule of probability distributions
pr(x_1) + pr(x_2) + ⋯ + pr(x_n) = 1 = 100%.
This relative welfare distribution, derived directly from 2 × 2 games, enables a fundamental assumption to be set for evolutionary dynamics.
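The bookkeeping above can be verified with a short script (a minimal sketch using exact fractions; the dictionary layout and variable names are our own, not part of the model):

```python
from fractions import Fraction as F

# Mixed strategies: player 1 plays Up with 1/3, player 2 plays Left with 2/3.
p_up, p_left = F(1, 3), F(2, 3)

# Payoff matrix (player 1, player 2) with A = 2, B = 1, C = 0:
#            Left      Right
#   Up     (B, A)    (C, C)
#   Down   (C, C)    (A, B)
A, B, C = 2, 1, 0
payoffs = {
    ("U", "L"): (B, A), ("U", "R"): (C, C),
    ("D", "L"): (C, C), ("D", "R"): (A, B),
}
probs = {
    ("U", "L"): p_up * p_left,        ("U", "R"): p_up * (1 - p_left),
    ("D", "L"): (1 - p_up) * p_left,  ("D", "R"): (1 - p_up) * (1 - p_left),
}

assert sum(probs.values()) == 1  # 2/9 + 1/9 + 4/9 + 2/9 = 1
payoff1 = sum(probs[o] * payoffs[o][0] for o in probs)
payoff2 = sum(probs[o] * payoffs[o][1] for o in probs)
print(payoff1, payoff2)  # → 2/3 2/3
```

Both players earn the mixed-equilibrium payoff of 2/3, as computed by hand above.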

2.2. Replicator Dynamics

The core implementation of game theory lies in games with imperfect information. As described above, the zero-sum game is a classic example and an appropriate benchmark for applications with imperfect information [14]. The latter is very important because real-world problems often fall into this category [15]. Thus, we applied these approaches because of their generalizability to real-world cases in public safety, wildlife conservation, public health, and other fields.
First, let the model consider a population of players, each with a given strategy. Occasionally, the players meet randomly and play the game according to a plan. If we suppose that each player is rational, individuals consider the different payoffs of each type, which can be expressed as follows
x i = Pr ( i ) π ( i ) ,
where each player has the payoff { π ( i ) }, which shows how well that type ( i ) is doing, and each type has a proportion {Pr(i)}. Then, the players choose certain strategies that they consider to be the best outcomes for the entire population.
Here, we consider a population of types, and those populations succeed at various levels. Some do well, and some do not. The dynamics of the model supposes a series of changes in distribution across types, such that there is a set of types {1, 2, …, N}, a payoff for each type ( π i ), and a proportion for each one ( P r i ). The strategy of each player in each round is given as a probability that is the ratio of this weight to those of all possible strategies; x ˙ i is the probability that an individual player will use a strategy times the payoff [ P r ( i ) π ( i ) ], divided by the sum of the weights of all strategies [ j = 1 N P r ( j ) π ( j ) ]
ẋ_i = Pr(i) π(i) / Σ_{j=1}^{N} Pr(j) π(j),
where P r ( i ) is the proportion of each type, and π ( j ) is the payoff for all types.
Thus, the probability that the individual player will act in a certain way in the next round is only the relative weight of that action. Specifically, we propose that there are different probabilities P r ( i ) of using different strategies ( x ,   y ,   and   z ), i.e., strategies x , y, and z have probabilities of 40%, 40%, and 20%, respectively. These probabilities could lead one to guess that strategies x and y are better than z . However, one can also look at the payoffs π ( i ) of the different strategies. For instance, payoffs of 5, 4, and 6 can be obtained when using strategies x, y, and z, respectively. This information prompts one to consider the strategy to use, and the answer depends on both the payoff and probability.
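The probability-times-payoff weighting described in this paragraph can be sketched in a few lines (the numbers are the illustrative ones from the text; the normalization step is the sum-of-weights denominator of the update rule):

```python
# Probability-times-payoff weighting: strategies x, y, z with
# proportions 40%, 40%, 20% and payoffs 5, 4, 6 (numbers from the text).
pr = {"x": 0.4, "y": 0.4, "z": 0.2}
pi = {"x": 5.0, "y": 4.0, "z": 6.0}

weights = {s: pr[s] * pi[s] for s in pr}   # Pr(i) * pi(i)
total = sum(weights.values())              # sum_j Pr(j) * pi(j)
next_probs = {s: w / total for s, w in weights.items()}

print(next_probs)  # x ≈ 0.417, y ≈ 0.333, z ≈ 0.25
```

Despite strategy z having the highest payoff, its low frequency keeps its weight smallest, illustrating that the answer depends on both payoff and probability.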
With this dynamic, the model presented herein describes how individuals could choose what to do or which strategies are best. Given that after a certain move, some will appear to be doing better than others, the ones doing the worst are likely to copy the ones doing better. Based on the cultural evolutionary assumptions for PGGs, we specified the respective frequencies of actions of cooperators ( P c ), defectors ( P d ), and loners ( P l ) (see Table 1). The experimenter assigns a value to each player; then, the players may contribute part (or all) of their value to the game (a common pool).
In each round, a sample ( S ) of individuals is chosen randomly from the entire population N. S (0 ≤ S ≤ N) individuals participate in the game, paying a cost ( g ). Each round requires at least two participants (a cooperator and a defector), and the others must be nonparticipants. The cooperators contribute a fixed amount of value c > 0 and share the outcome multiplied by the interest rate r (1 < r < N) equally among the other S − 1 participants; defectors take part in the round but do not contribute values. During the round, the payoffs for strategies P_c, P_d, and P_l (denoted by x, y, and z, respectively) are determined with a participation cost c and an interest rate multiplied by the fixed contribution cost, rc, based on the relative frequencies of the strategies ( n_c / N ). If n_c denotes their number among the public good players, the return of the public good (i.e., the payoff to the players in the group) depends on the number of cooperators, and the net payoff for each strategy is given by
P_c = −c + r c n_c / N,    P_d = r c n_c / N,    1 < r < N,
where P_c is the payoff of the cooperators, P_d is the payoff of the defectors, and rc is the interest rate ( r ) multiplied by the fixed contribution cost ( c ) for the common good. For the expected payoff values of cooperators ( P_C ) and defectors ( P_D ), a defector in a group with S − 1 co-players ( S = 2, …, n ) obtains a payoff of rcx/(1 − z) on average from the common good because the nonparticipants ( z ) have a payoff of 0 [11]
P_D = ( r c x / (1 − z) − g ) ( 1 − z^(n−1) ),
where z^(n−1) is the probability of finding no co-player, i.e., the probability that all n − 1 potential co-players are loners (nonparticipants). This term can also be understood through the power rule for derivatives: for any power n, the derivative of z^n is n z^(n−1) (i.e., (z²)′ = 2z, (z³)′ = 3z², …, (z^n)′ = n z^(n−1)). In addition, cooperators contribute effort c with probability 1 − z^(n−1). Hence
P_D − P_C = c ( 1 − z^(n−1) ).
The average payoff ( P ¯ ) in the population is then given by
P̄ = ( 1 − z^(n−1) ) [ (r − 1) c x − (1 − z) g ].
The replicator equation gives the evolution of the three strategies in the population
ẋ = x ( P_x − P̄ ),    ẏ = y ( P_y − P̄ ),    ż = z ( P_z − P̄ ).
The frequencies x i of the strategies i can simply be represented as
ẋ_i = x_i ( P_i − P̄ ),
where x i denotes the frequency of strategy i , P i is the payoff of strategy i , and P ¯ represents the average payoff in the population. Accordingly, a strategy will spread or dwindle depending on whether it does better or worse than average. This equation holds that populations can evolve, in the sense that the frequencies x i of strategies change with time.
In Figure 1, we let the state x(t) depend on time; its rate of change is denoted ẋ_i(t), where ẋ_i = dx_i/dt. Categorization by strategy in the voluntary PGG shows the prototype in which cooperators are dominated by defectors, defectors by loners, and loners by cooperators. The mechanism focuses in particular on the growth rates of the relative frequencies of strategies. In other words, the state of the population evolves according to the replicator equation, where the growth rate ( ẋ_i / x_i ) of the frequency of a strategy corresponds to the difference between its payoff ( P_i ) and the average payoff ( P̄ ) in the population.
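A minimal numerical sketch of these replicator dynamics for the voluntary PGG is given below (forward-Euler integration; the parameter values, step size, and starting point are illustrative assumptions, and the loner payoff is set to 0 as in the text):

```python
# Euler integration of the replicator dynamics for the voluntary PGG.
# Payoff expressions follow the text; parameter values are illustrative.
def payoffs(x, y, z, n=5, r=3.0, c=1.0, g=0.5):
    if z >= 1.0:                # only loners left: nobody plays
        return 0.0, 0.0, 0.0
    pd = (r * c * x / (1.0 - z) - g) * (1.0 - z ** (n - 1))
    pc = pd - c * (1.0 - z ** (n - 1))   # P_D - P_C = c(1 - z^(n-1))
    return pc, pd, 0.0                   # loner payoff is 0

def step(state, dt=0.01):
    x, y, z = state
    pc, pd, pl = payoffs(x, y, z)
    avg = x * pc + y * pd + z * pl       # average payoff P-bar
    x += dt * x * (pc - avg)
    y += dt * y * (pd - avg)
    z += dt * z * (pl - avg)
    s = x + y + z                        # renormalize against drift
    return x / s, y / s, z / s

state = (0.4, 0.3, 0.3)                  # (cooperators, defectors, loners)
for _ in range(5000):
    state = step(state)
print(state)  # frequencies remain on the simplex
```

With a large enough interest rate the three frequencies cycle, reproducing the rock-paper-scissors-like prototype described above.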

2.3. Imitation Dynamics with Updating Algorithm

In the cultural evolutionary context considered herein, strategies are unlikely to be inherited, but they can be transmitted through social learning [16]. If individuals are assumed to imitate each other, the replicator dynamics are recovered. Specifically, a randomly chosen individual from the population occasionally imitates a given model with a certain likelihood. Thus, the probability that an individual switches from one strategy to the other is given by
p = x_i ( P_j − P_i ) x_j.
This equation is simply the replicator equation, but it states that a player (with payoff P_i ) making a comparison with another player (with payoff P_j ) will adopt the strategy of the other player only if it promises a higher payoff. This switch is more likely the larger the payoff difference, which is a function of the frequencies of all strategies, based on pairwise interactions [17]. The focal individual compares his or her payoff ( P_i = π_f ) with the payoff of the role individual ( P_j = π_r ); then, the focal individual chooses whether to imitate the role individual according to the following
p = [ 1 + e^(−β(π_r − π_f)) ]^(−1),    β = selection intensity.
This mechanism (Table 2, Table 3 and Table 4) relies on the factorial to count payoff combinations, i.e., how many combinations of r objects can be taken from n objects, as follows
n! = ∏_{k=1}^{n} k,    nCr = n! / ( r! (n − r)! ),
with the Gillespie algorithm (a stochastic dynamic model; see the code in Table 5 for more detail) used for updating the system ( a_{r−1}/a_tot < z_1 < a_r/a_tot ).
The above procedures assume a well-mixed population with a finite number of strategies that are proportional to their relative abundances given that the fitness values are frequency dependent, coexisting at steady or fluctuating frequencies of the evolutionary game (Figure 2). The mechanism is a combination of the rational and copying processes; in other words, an individual chooses rationally from a nearby individual because it seems that his or her strategy would produce a successful outcome.
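The pairwise-comparison (Fermi) update used in this imitation step can be sketched as follows (a minimal illustration; the helper names and the stochastic `imitate` wrapper are our own):

```python
import math
import random

def fermi(pi_role, pi_focal, beta=1.0):
    """Probability that the focal player imitates the role player:
    p = 1 / (1 + exp(-beta * (pi_role - pi_focal)))."""
    return 1.0 / (1.0 + math.exp(-beta * (pi_role - pi_focal)))

# A higher role payoff makes imitation more likely than not.
assert fermi(6.0, 4.0) > 0.5
# Equal payoffs give a coin flip.
assert fermi(3.0, 3.0) == 0.5
# beta -> 0 removes selection pressure (p -> 1/2 regardless of payoffs).
assert abs(fermi(6.0, 4.0, beta=1e-9) - 0.5) < 1e-6

def imitate(strategy_focal, strategy_role, pi_focal, pi_role, beta=1.0):
    """One stochastic update: the focal player copies the role player
    with Fermi probability, otherwise keeps its own strategy."""
    if random.random() < fermi(pi_role, pi_focal, beta):
        return strategy_role
    return strategy_focal
```

The selection intensity β interpolates between random copying (β → 0) and deterministic imitation of the better payoff (β → ∞).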
The simulation results in Figure 3 indicate that the system behaves differently at intermediate interest rates with participation costs. An increase in the interest rate ( r ) prompts the population to undergo stable oscillations relative to a global attractor, where the players participate by contributing to the public good. However, if the contribution is too expensive, i.e., if the participation cost is g > (r − 1)c + l for rewarding or g > (r − 1)c for punishing, the players opt out of participation (Figure 4). In this scenario, nonparticipation becomes the global attractor (bottom right plot of Figure 3).

2.4. Replicator–Mutator Dynamics

Not all learning occurs from others; individuals can also learn from personal experience. The dynamics of the replicator equation describes only selection, not drift or mutation. An intelligent player may adopt a strategy, even if no one else in the population is using it, if the strategy offers the highest payoff. The dynamics can also be modified by adding a small, steady rate of miscopying, a small linear contribution beyond pure selection. Consequently, the stability of the system changes, making the system structurally unstable. This feature can be interpreted as the exploration rate, and it corresponds to the mutation term in genetics [18]. Thus, by adding a mutation rate ( μ ) with frequency-dependent selection, the impact of mutations can be studied in a more general approach to evolutionary games, without explicit modelling of their origin [19].
dx_i/dt = x_i ( P_i − P̄ ) + μ(1 − x_i) − 2μx_i
In the context of the model, both of these types of dynamics occur, i.e., individuals copy both more prominent strategies and strategies that are doing better than others. The fate of an additional strategy can be examined by considering the replicator dynamics in the augmented space (mutation) and computing the growth rate of the fitness that such types obtain in the case of evolution (shown in Figure 5). The mechanism holds for an ordinary differential equation, i.e., a differential equation containing one or more functions of an independent variable and their derivatives for updating the system, i.e., dy/dx, d²y/dx², …, dⁿy/dxⁿ; here, x is the independent variable, and y = f(x) is an unknown function of x.
Figure 5. Cultural evolutionary dynamics with replicator–mutator dynamics. The parameters of the upper plots are N = 5, c = 1, g = 0.5, r = 3, and μ = 1 × 10−10 (left), and μ = 1 × 10−1 (right). The bottom plots have N = 5, c = 1, g = 0.5, r = 1.8, and μ = 1 × 10−10 (left), and μ = 1 × 10−1 (right) (see Table 6 and Table 7 for more details regarding the added parameters). The colors indicate prototypes as proportions of the implementation by cooperators (blue), defectors (red), and loners (yellow to green).
Figure 5 demonstrates that mutation has a significant effect on the transition of strategies. The system settles into the different effects of the intermediate mutation rate. As this rate decreases, red individuals appear (plot on the left side), which prompts the players to participate by contributing to the public good. Conversely, as long as the mutation rate is sufficiently high, nonparticipation becomes a global attractor; selfish players continually defect by refraining from contributing (plot on the right side of Figure 5).
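The mutation terms can be added to the selection term in a few lines (a sketch under the assumption of three strategies with uniform mutation; the frozen equal payoffs are an illustrative choice that isolates the effect of μ):

```python
# Replicator-mutator step: mu*(1 - x_i) - 2*mu*x_i added to selection.
# With three strategies the mutation terms sum to zero, so the
# frequencies remain on the simplex. Payoffs are frozen constants here
# purely to isolate the effect of mu (an illustrative assumption).
def mutator_step(freqs, payoffs, mu, dt=0.01):
    avg = sum(f * p for f, p in zip(freqs, payoffs))
    out = []
    for f, p in zip(freqs, payoffs):
        selection = f * (p - avg)
        mutation = mu * (1.0 - f) - 2.0 * mu * f
        out.append(f + dt * (selection + mutation))
    return out

freqs = [0.9, 0.05, 0.05]
for _ in range(2000):
    freqs = mutator_step(freqs, payoffs=[1.0, 1.0, 1.0], mu=0.1)
print(freqs)  # equal payoffs + strong mutation pull toward (1/3, 1/3, 1/3)
```

With equal payoffs the selection term vanishes and each frequency relaxes toward 1/3 at rate 3μ, showing how a high exploration rate alone homogenizes the strategy mix.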

2.5. Replicator–Mutator including Network Dynamics

Currently, the proposed models cannot explain cooperation in communities with different average numbers of social ties. To impose the number of social ties as a parameter, the primary feature of a random graph [20] was used for the network characteristics, as follows. Firstly, individuals in the model are considered as vertices (fundamental elements drawn as nodes), and sets of two elements are drawn as lines connecting two vertices (these lines are called edges) (left side of Figure 6). Nodes are graph elements that store data, and edges are the connections between them; however, the edges can store data as well. The edges between nodes can describe any connection between individuals (called adjacency). The nodes can contain any amount of data chosen to be stored in this application, and the edges include data regarding the connection strength.
Networks have additional properties, i.e., edges can have direction, which means that the relationship between two nodes only applies in one direction, not the other. A “directed network” is a network that shows a direction. In the present model, however, we used an undirected network, featuring edges with no sense of direction because with a network of individuals and edges that indicate two individuals that have met, directed edges may be unnecessary. Another essential property of this structure is connectivity. A disconnected network has some vertices (nodes) that cannot be reached by other vertices (right side of Figure 6).
A disconnected network may feature one vertex that is off to the side and has no edges. It could also have two so-called “connected components,” which form a connected network on their own but have no connections between them. Thus, a connected network has no disconnected vertices, which could be a criterion for describing a network as a whole, called connectivity. The fulfillment of this criterion would depend on the information contained in the graph, usually controlled by the number of nodes and number of connections.
An object-oriented language was used to enable the creation of vertex and edge objects and assign properties to them. A vertex is identified by the list of edges that it is connected to, and the converse is true for edges. However, operations involving networks may be inconvenient if one must search through vertex and edge objects. Thus, we represent connections in networks that simply use a list of edges (left side of Figure 7). The edges are each represented with an identifier of two elements. Those elements are usually numbers corresponding to the ID numbers of vertices. Thus, this list simply shows two nodes with an edge between them, and an edge list encompasses all smaller lists. As an edge list contains other lists, it is sometimes called a two-dimensional list. We represent the edge list in a network as an adjacency list. Our vertices normally exhibit the ID number that corresponds to the index in an array (right side of Figure 7).
In this array, each space is used to store a list of nodes, such that the node with a given ID is adjacent to the index with the same number. For instance, an opening at index 0 represents a vertex with an ID of 0. This vertex shares an edge with one node, so that the reference to that node is stored in the first spot in the array. Thus, because the list contains other lists, the adjacency list is two-dimensional, enabling an adjacency matrix to be used that is essentially a two-dimensional array; however, all lists within it have the same length.
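The edge-list and adjacency-list representations described above can be sketched as follows (the toy edge list is our own; any undirected graph works the same way):

```python
# Building the adjacency list of Section 2.5 from a 2D edge list.
edges = [(0, 1), (1, 2), (2, 3), (3, 1)]   # pairs of vertex IDs
n_vertices = 4

# Adjacency list: index = vertex ID, entry = list of adjacent vertex IDs.
adjacency = [[] for _ in range(n_vertices)]
for u, v in edges:
    adjacency[u].append(v)   # undirected: store the edge in both directions
    adjacency[v].append(u)

print(adjacency)  # → [[1], [0, 2, 3], [1, 3], [2, 1]]
```

Each inner list plays the role of a row of the two-dimensional structure described above: the list at index 0 names the neighbors of the vertex with ID 0, and so on.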

2.6. Social Ties from Eigen Centrality Algorithm

Owing to the collection of nodes influenced by connection probabilities corresponding to the adjacency list, the distribution of connections in the network can be used for the social characteristics (the degree distribution), as follows
Σ_{v ∈ V(G)} deg(v) = deg(v_1) + ⋯ + deg(v_n) = 2 |E(G)|.
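The degree-sum identity above (the handshake lemma) is easy to check on a small graph (the edge list is an illustrative assumption):

```python
# Handshake lemma: the sum of all vertex degrees equals 2|E|,
# because every edge contributes to the degree of exactly two vertices.
edges = [(0, 1), (1, 2), (2, 3), (3, 1)]   # illustrative undirected graph
degree = {}
for u, v in edges:
    degree[u] = degree.get(u, 0) + 1
    degree[v] = degree.get(v, 0) + 1

assert sum(degree.values()) == 2 * len(edges)   # 8 == 2 * 4
```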
In this context, one may consider whether the individuals in the network interact with each other more often than with others, the conditions under which social beings are willing to cooperate, and the algorithm that can be characterized by the influence of a node in a network (eigenvector centrality of a graph). When considering the network as an adjacency matrix of A (Table 8), the eigenvector centrality (Table 9) must satisfy the following equation
A x = λ x
We can normalize the vector to its maximum value, bringing the vector components closer to 1. Moreover, they must be able to adjust their own changes to thrive. To understand a cooperative network of interaction, both the evolution of the network and strategies within it should be considered simultaneously.
dx_i/dt = x_i ( P_i − P̄ ) + [ μ(1 − x_i) − 2μx_i ] [ (2 s_t / N) / ( N(N − 1)/2 ) ],
where s_t represents social ties due to the influence of eigenvector centrality ( λx ) between individuals, and N is the number of nodes [21] in the sample population. Furthermore, (2 s_t)/N denotes the actual connections in the network, and N(N − 1)/2 denotes the potential connections. A potential connection is one that may exist between two individuals, regardless of whether it actually does: one individual may know another, and the corresponding objects may be connected.
Whether the connection actually exists is irrelevant for a potential connection. In contrast, an actual connection is one that actually exists (a social tie), i.e., two individuals know each other, and the objects are connected. In relation to these small linear contributions and their dynamics, structural instability can be interpreted as a characteristic of the network, influenced by the exploration rate, which corresponds to the idea of mutation in genetics.
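A sketch of the eigenvector-centrality computation (power iteration on A x = λx, normalized to the maximum component as described above) together with the density factor is given below; the toy graph, function names, and iteration count are our own assumptions:

```python
# Power iteration for the leading eigenvector of an adjacency matrix
# (eigenvector centrality), normalized so the largest component is 1,
# followed by the density factor that scales the mutation term.
def eigen_centrality(adjacency, iterations=100):
    n = len(adjacency)
    x = [1.0] * n
    for _ in range(iterations):
        # One multiplication by A using the adjacency list: A @ x.
        x_new = [sum(x[j] for j in adjacency[i]) for i in range(n)]
        m = max(x_new)
        x = [v / m for v in x_new]       # normalize to max component = 1
    return x

adjacency = [[1], [0, 2, 3], [1, 3], [1, 2]]     # undirected toy graph
centrality = eigen_centrality(adjacency)
print(centrality)                                 # hub node 1 scores 1.0

# Density factor from the modified replicator-mutator equation:
# actual connections (2*st/N) over potential connections N(N-1)/2.
N = len(adjacency)
st = sum(len(nbrs) for nbrs in adjacency) / 2     # edge count (social ties)
density_factor = (2 * st / N) / (N * (N - 1) / 2)
print(density_factor)
```

A denser network yields a larger factor, so the same exploration rate μ produces a stronger effective mutation pressure, matching the sensitivity pattern described for Figure 8.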
After grouping the network characteristics that incorporate the decisions of individuals by establishing new links or giving up existing ones [22], we propose a version of evolutionary game theory and discuss the dynamic coevolution of individual strategies and the network structure [23]. In this model, the dynamics operate such that the population moves over time as a function of payoffs, and the proportions based on replicator–mutator dynamics are multiplied by its network density (Figure 8).
In Figure 8, the exploration of the individual determines its sensitivity, according to the exploratory trait of the mutation rate (left and right sides of Figure 8). However, the designated network density (as the social ties of the individuals) can mediate this sensitivity. Thus, when the network density (influenced by the eigenvector centrality λx) is sufficiently high relative to the exploratory mutation rate, individuals are sensitized by mutation (center-top panels of Figure 8); however, with low network density, the phase portraits are not sufficiently sensitive to changes in the mutation rate (center-bottom panels of Figure 8). This systemic sensitivity to external influence produces more interesting evolutionary patterns.

3. Discussion

In this study, the proposed model for the PGG represents a highly nonlinear system of replicator equations that can be evaluated via purely analytical means. For a large incentive (r > 2), stable oscillations are observed, but when the cost of participation is too high (g > 1), no one will participate [24]. Accordingly, various combinations of time averages of the frequencies and payoffs of the three strategies follow. It is impossible to increase cooperation by increasing participation costs (or decreasing incentives), as this favors defectors and loners [25]. To promote cooperation, the incentives should be increased or the participation costs decreased, which favors cooperation at significant interest rates, consistent with experimental results [26].
The simulation with the present model reveals that the dynamics exhibit a wide variety of adaptive mechanisms corresponding to many different combinations, leading to various oscillations in the frequencies of the three strategies. The option to drop out of these dynamics depends on the mutation rate multiplied by the network density as a social influence. Similar situations may occur in many societies, where small mutations are plausible risks in every network system and make marginal contributions to jeopardizing the entire system (e.g., COVID-19 mutations) [27,28]. Additional incentives attract larger participatory groups, but growth may inherently cause decline through mutations in any situation. However, as this simulation suggests, the average effect on an individual's payoff still depends on the network characteristics, that is, on his or her social ties.
Cooperation is necessary in societies to achieve the required collaboration of thousands of people, most of whom live very differently. How does cooperation occur, and under what conditions will cooperation emerge in a world of egoists without central authority? Social institutions, as with anything that evolves, have likely been affected by accidental developments caused by novel combinations of ordinary features [29]. This situation makes it more difficult to address several questions pertaining to emergence in decision making. Will conflict or cooperation be more prevalent in the future? Will the complexity that results from interdependence make us unable to think and foster bias [30]? Do the current troubles between neighbors point to a more alarming direction, in which more interconnected neighbors can disagree more easily, enter into disputes that are difficult to resolve, and ultimately despise one another? Perhaps it is easier to address such questions with a complexity tool. Simulation using models is a worthwhile pursuit in the context of several problems related to decision making.
Understanding the complexity of game dynamics is one such challenge, for which we need not only new technologies and methodologies but also new ways of thinking if we are to make progress. A good tool or model is not immediately applicable or even understandable, because ground-breaking ideas challenge current paradigms and penetrate new intellectual areas. This is illustrated by a simple application such as the zero-sum contribution to game dynamics, which required many years to be noticed and widely cited [31]. Although we used numerical simulation to show that the dynamics of the trajectories of the components have direct physical interpretations, the quantitative model is adequate for obtaining qualitative results. Thus, the interpretation of mechanical values is useful in a comparative manner. Ultimately, uncertainty can be accounted for by gathering the parameters in a model-driven context. The proposed model is therefore robust with respect to the set of opinions developed in their actual fields (i.e., data driven).

4. Practical Application

We investigated how cooperation and defection change with the network characteristics, taking into account the overall social heterogeneity [32]. For weak ties among individuals, heterogeneity remains low because the players react only slowly to social influences. On the other hand, as relationships grow, the dynamics develop rapidly enough to promote the social trap of defection [33]. According to the results of this simulation, greater cooperation at times produces additional competition, which reduces the overall network heterogeneity. The increase in the heterogeneity of the pattern depends on the underlying societal organization; a considerable amount of interaction prevents the elimination of the common trap, which is not quickly removed by cooperators [34]. Thus, the survival of cooperation relies on the capacity of individuals to adjust to adverse ties, even when the rate of mutation (or systemic risk) is high. The results indicate that the simple adaptations of social relations introduced herein, coupled with the PGG, make a marginal contribution to mitigating the systemic risk observed in realistic networks.
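One way to read the "adjust to adverse ties" mechanism above is a rewiring rule in which a player linked to a defector breaks that tie with some probability and reconnects elsewhere. The helper name, dictionary representation, and probability value below are assumptions for illustration, not the article's implementation.

```python
import random

def adjust_ties(adjacency, strategies, p_break=0.3, rng=random.Random(1)):
    """Rewire links to defectors: with probability p_break, a node drops an
    edge to a defector neighbor and reconnects to a random non-neighbor.

    adjacency  -- dict node -> set of neighbors (undirected)
    strategies -- dict node -> 'C', 'D', or 'L'
    """
    nodes = list(adjacency)
    for u in nodes:
        for v in list(adjacency[u]):                 # copy: set mutates below
            if strategies[v] == 'D' and rng.random() < p_break:
                adjacency[u].discard(v)              # drop the adverse tie
                adjacency[v].discard(u)
                candidates = [w for w in nodes
                              if w != u and w not in adjacency[u]]
                if candidates:
                    w = rng.choice(candidates)       # reconnect at random
                    adjacency[u].add(w)
                    adjacency[w].add(u)
    return adjacency

out = adjust_ties({0: {1}, 1: {0, 2}, 2: {1}},
                  {0: 'C', 1: 'D', 2: 'C'}, p_break=1.0)
```

The rule preserves the undirected structure (both endpoints of every edge agree) while letting cooperators escape exploitative neighborhoods, which is the capacity the survival of cooperation relies on in the paragraph above.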
In the earlier discussion of PGGs, many references were made to observations of situations in which social dilemmas, such as cooperation and defection, depend on the scale of interconnection between participants [35]. However, most reported data concern the global level (macro scale) over many individuals with different parameter settings (i.e., participation cost, incentive, and mutation) rather than the changes within individuals (micro scale) over a series of rounds [36]. In addition to these general responses, the evolutionary variability (i.e., imitation and exploration) of individuals is another indicator of their behavioral integrity; from these concepts, it is clear that autonomous behaviors (i.e., bias) and features (i.e., utility) differ according to the relative weights (i.e., network–agent dynamics) of their micro-variable potentials [37]. Thus, under the interconnected condition, computerized simulation is expected to become more reliable with regard to the key features mechanically framed in this model for increasing bias levels [38]. The factors developed in this study may be useful criteria for decision making and suggest a broader hypothesis for further research into the interconnectedness of social dilemmas in the mathematical decisions of (public good) game theory.

Funding

This research was supported by the Basic Science Research Program through the National Research Foundation of Korea funded by the Ministry of Education [Grant Number: 2020R1l1A1A01056967] (PI: Chulwook Park). This work was also supported under the framework of the international cooperation program managed by the National Research Foundation of Korea [Grant Number: 2021K2A9A1A0110218711] (PI: Chulwook Park).

Institutional Review Board Statement

The study, for which written informed consent was obtained, was approved by the local ethics committee (SNUIRB No. 1509/002-002) and conformed to the ethical standards of the 1964 Declaration of Helsinki (Collaborative Institutional Training Initiative Program, report ID 20481572).

Informed Consent Statement

Not applicable.

Data Availability Statement

All data and materials are our own. The materials and data used to support the findings of this study are included in the Supplementary Information file.

Acknowledgments

The author sincerely thanks the members of the Evolution and Ecology Program at the International Institute for Applied Systems Analysis (IIASA) for their valuable support during the development of this article.

Conflicts of Interest

The author declares no conflict of interest.

References

1. Rockenbach, B.; Milinski, M. The efficient interaction of indirect reciprocity and costly punishment. Nature 2006, 444, 718.
2. Fowler, J.H. Altruistic punishment and the origin of cooperation. Proc. Nat. Acad. Sci. USA 2005, 102, 7047–7049.
3. Boyd, R.; Richerson, P.J. Punishment allows the evolution of cooperation (or anything else) in sizable groups. Ethol. Sociobiol. 1992, 13, 171–195.
4. Kinateder, M.; Merlino, L.P. The evolution of networks and local public good provision: A potential approach. Games 2021, 12, 55.
5. Laland, K.N. Social learning strategies. Anim. Learn. Behav. 2004, 32, 4–14.
6. Di Muro, M.A.; La Rocca, C.E.; Stanley, H.E.; Havlin, S.; Braunstein, L.A. Recovery of interdependent networks. Sci. Rep. 2016, 6, 22834.
7. Bonabeau, E. Agent-based modeling: Methods and techniques for simulating human systems. Proc. Nat. Acad. Sci. USA 2002, 99, 7280–7287.
8. Vespignani, A. Twenty years of network science. Nature 2018, 558, 528–529.
9. Pacheco, J.M.; Traulsen, A.; Nowak, M.A. Coevolution of strategy and structure in complex networks with dynamical linking. Phys. Rev. Lett. 2006, 97, 258103.
10. Jackson, M.O.; Yariv, L. Diffusion of behavior and equilibrium properties in network games. Am. Econ. Rev. 2007, 97, 92–98.
11. Jackson, M.O.; Rogers, B.W.; Zenou, Y. The economic consequences of social-network structure. J. Econ. Lit. 2017, 55, 49–95.
12. Aristodemou, L.; Tietze, F. The state-of-the-art on intellectual property analytics (IPA): A literature review on artificial intelligence, machine learning and deep learning methods for analysing intellectual property (IP) data. World Pat. Inf. 2018, 55, 37–51.
13. Gao, Z.; Chen, M.Z.Q.; Zhang, D. Special Issue on “Advances in condition monitoring, optimization and control for complex industrial processes”. Processes 2021, 9, 664.
14. Sasaki, T.; Brännström, Å.; Dieckmann, U.; Sigmund, K. The take-it-or-leave-it option allows small penalties to overcome social dilemmas. Proc. Nat. Acad. Sci. USA 2012, 109, 1165–1169.
15. Cimini, G.; Squartini, T.; Saracco, F.; Garlaschelli, D.; Gabrielli, A.; Caldarelli, G. The statistical physics of real-world networks. Nat. Rev. Phys. 2019, 1, 58–71.
16. Avital, E.; Jablonka, E. Social learning and the evolution of behaviour. Anim. Behav. 1994, 48, 1195–1199.
17. Traulsen, A.; Hauert, C. Stochastic evolutionary game dynamics. Rev. Nonlin. Dyn. Complex. 2009, 2, 25–61.
18. Sigmund, K.; De Silva, H.; Traulsen, A.; Hauert, C. Social learning promotes institutions for governing the commons. Nature 2010, 466, 861.
19. Nowak, M.A.; Sigmund, K. Evolutionary dynamics of biological games. Science 2004, 303, 793–799.
20. Erdős, P.; Rényi, A. On Cantor’s series with convergent ∑ 1/q_n. Ann. Univ. Sci. Budapest. Eötvös Sect. Math. 1959, 2, 93–109.
21. Santos, F.C.; Santos, M.D.; Pacheco, J.M. Social diversity promotes the emergence of cooperation in public goods games. Nature 2008, 454, 213.
22. Traulsen, A.; Hauert, C.; De Silva, H.; Nowak, M.A.; Sigmund, K. Exploration dynamics in evolutionary games. Proc. Nat. Acad. Sci. USA 2009, 106, 709–712.
23. Zhang, X.; Zou, B.; Feng, Z.; Wang, Y.; Yan, W. A review on remanufacturing reverse logistics network design and model optimization. Processes 2022, 10, 84.
24. Sigmund, K. The Calculus of Selfishness; Princeton University Press: Princeton, NJ, USA, 2010; Volume 6.
25. Li, X.; Jusup, M.; Wang, Z.; Li, H.; Shi, L.; Podobnik, B.; Stanley, H.E.; Havlin, S.; Boccaletti, S. Punishment diminishes the benefits of network reciprocity in social dilemma experiments. Proc. Nat. Acad. Sci. USA 2018, 115, 30–35.
26. West, S.A.; Griffin, A.S.; Gardner, A. Evolutionary explanations for cooperation. Curr. Biol. 2007, 17, R661–R672.
27. Helbing, D. Globally networked risks and how to respond. Nature 2013, 497, 51.
28. World Health Organization. Coronavirus disease 2019 (COVID-19): Situation report. In Weekly Epidemiological and Operational Updates; World Health Organization: Geneva, Switzerland, 2020; Volume 82.
29. Scholz, R.W.; Bartelsman, E.J.; Diefenbach, S.; Franke, L.; Grunwald, A.; Helbing, D.; Viale Pereira, G. Unintended side effects of the digital transition: European scientists’ messages from a proposition-based expert round table. Sustainability 2018, 10, 2001.
30. Roth, S.; Valentinov, V.; Heidingsfelder, M.; Pérez-Valls, M. CSR beyond economy and society: A post-capitalist approach. J. Bus. Ethics 2020, 165, 411–423.
31. Davidai, S.; Ongis, M. The politics of zero-sum thinking: The relationship between political ideology and the belief that life is a zero-sum game. Sci. Adv. 2019, 5, eaay3761.
32. Santos, F.C.; Pacheco, J.M.; Lenaerts, T. Cooperation prevails when individuals adjust their social ties. PLoS Comp. Biol. 2006, 2, e140.
33. Burton-Chellew, M.N.; West, S.A. Payoff-based learning best explains the rate of decline in cooperation across 237 public-goods games. Nat. Hum. Behav. 2021, 5, 1330–1338.
34. Axelrod, R. An evolutionary approach to norms. Am. Pol. Sci. Rev. 1986, 80, 1095–1111.
35. Jaeggi, A.V.; Burkart, J.M.; Van Schaik, C.P. On the psychology of cooperation in humans and other primates: Combining the natural history and experimental evidence of prosociality. Phil. Trans. R. Soc. Lond. B Biol. Sci. 2010, 365, 2723–2735.
36. Park, C. Role of recovery in evolving protection against systemic risk: A mechanical perspective in network-agent dynamics. Complexity 2021, 2021, 4805404.
37. Nowak, M.A. Five rules for the evolution of cooperation. Science 2006, 314, 1560–1563.
38. Asprilla-Echeverria, J. The social drivers of cooperation in groundwater management and implications for sustainability. Groundwater Sustain. Dev. 2021, 15, 100668.
Figure 1. Schematic illustration of model (PGG) with replicator dynamics. Colors indicate prototypes as a proportion of implementation by cooperators (blue = contribute), defectors (red = not contribute), and loners (yellow to green = not participate). The dotted line denotes the relative frequencies, where defectors dominate cooperators, loners dominate defectors, and cooperators dominate loners.
Figure 2. Simulation 1 (prototype) with replicator dynamics: (a) the plot on the left side shows a well-mixed population with a finite number of strategies (cooperators: blue arrow, defectors: red arrow, loners: yellow arrow) with incentive. (b) The parameters are N = 5, c = 1, and g = 0.5, and the interest rate is r = 3 (see Table 2, Table 3 and Table 4 for more detail regarding the parameters). The plot on the right side shows the categorized oscillation of strategies (cooperators = blue, defectors = red, loners = yellow).
Figure 3. The upper (a) and bottom left (b) plots include the participation cost (g). The parameters are N = 5, r = 3, c = 1, and g = 2. The right bottom plot (c) corresponds to g = 3. Both plots on the bottom side (b,c) show the categorized oscillation of the strategies (cooperators = blue, defectors = red, loners = yellow).
Figure 4. Cultural evolutionary dynamics with incentive. The parameters are N = 5, c = 1, g = 0.5, r = 3 (plot on the left = a), and r = 1.8. (plot on the right = b), with individuals attracted by the social trap. Both plots (a,b) show the categorized strategies (cooperators = blue, defectors = red).
Figure 6. Schematic representations of (a) nodes and (b) lines and (c,d) their connectivity.
Figure 7. Schematic representations of edge (left side) and adjacency (right side) lists. (a) Edge list (2D), ((0, 1), (1, 2), (1, 3), (2, 3)); (b) Adjacency list (2D), 0 = (0, 1), 1 = (1, 2), 2 = (1, 3), 3 = (2, 3).
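The two representations in Figure 7 are interconvertible; as a minimal sketch, the adjacency list can be built from the caption's edge list as follows (undirected graph assumed).

```python
# Build an undirected adjacency list from the edge list in Figure 7a.
edges = [(0, 1), (1, 2), (1, 3), (2, 3)]

adjacency = {}
for u, v in edges:
    adjacency.setdefault(u, []).append(v)   # record v as a neighbor of u
    adjacency.setdefault(v, []).append(u)   # and u as a neighbor of v

# Node 1 ends up adjacent to nodes 0, 2, and 3, matching Figure 7b.
```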
Figure 8. Cultural evolutionary dynamics using replicator–mutator, including network characteristics. In the left plot, r = 3, c = 1, g = 0.5, and μ = 1 × 10−10. In the center-left plot (ND = network density; see Table 8 for more detail about the added parameters), r = 3, c = 1, g = 0.5, μ = 1 × 10−10, upper ND = 0.9, and bottom ND = 0.1. In the center-right plot, r = 3, c = 1, g = 0.5, μ = 1 × 10−10, upper ND = 0.9, and bottom ND = 0.1. In the right plot, r = 3, c = 1, g = 0.5, and μ = 1 × 10−3. The dotted lines show the homoclinic orbit for three assessment strategies (with different initial points). U_D denotes an unstable region of defection and cooperation (U_C).
Table 1. Strategies of potential players.
Cooperators (ready to join the group and contribute to its effort): P_c
Defectors (who join but do not contribute): P_d
Loners (unwilling to join the PGG): P_l
Table 2. Strategies of potential players (C: cooperation; D: defection; L: no participation).
Definition | Parameter | Range
Cooperator | C | n.a.
Defector | D | n.a.
Loner | L | n.a.
Table 3. Model variables (prototype), parameters, and default parameter values.
Definition | Parameter | Range (Applied)
Number of individuals | M | 100
Number of samples | N | 5
Rounds per generation | t_t | 1
Number of generations | t | 10,000
Participation cost | g | 0.5
Investment of participation | c | 1
Participation benefit (interest rate) | r | 3
Table 4. Imitation parameter values.
Definition | Parameter | Range
Selection intensity | s | (0, ∞)
Imitation probability | p_r | (0, 1)
Table 5. Code book; applied Gillespie algorithm.
import math
from random import random

# C, D, L are the current strategy counts, M the population size, and
# PC and PD the payoff functions of cooperators and defectors.
for ii in range(t):                          # steps with for loop
    pc = PC(C, D, M)                         # payoff of a cooperator
    pd = PD(C, D, M)                         # payoff of a defector
    pl = 0                                   # payoff of a loner
    a1 = L/M + L*D/(M**2*(1+math.exp(pl-pd)))   # rate of changing L to D
    a2 = L/M + L*C/(M**2*(1+math.exp(pl-pc)))   # rate of changing L to C
    a3 = D/M + D*C/(M**2*(1+math.exp(pd-pc)))   # rate of changing D to C
    a4 = D/M + D*L/(M**2*(1+math.exp(pd-pl)))   # rate of changing D to L
    a5 = C/M + C*D/(M**2*(1+math.exp(pc-pd)))   # rate of changing C to D
    a6 = C/M + C*L/(M**2*(1+math.exp(pc-pl)))   # rate of changing C to L
    atot = a1 + a2 + a3 + a4 + a5 + a6       # total rate
    ran = random()
    if 0 <= ran < a1/atot:                                       # L -> D
        D = D + 1
        L = L - 1
    elif a1/atot <= ran < (a1+a2)/atot:                          # L -> C
        L = L - 1
        C = C + 1
    elif (a1+a2)/atot <= ran < (a1+a2+a3)/atot:                  # D -> C
        D = D - 1
        C = C + 1
    elif (a1+a2+a3)/atot <= ran < (a1+a2+a3+a4)/atot:            # D -> L
        D = D - 1
        L = L + 1
    elif (a1+a2+a3+a4)/atot <= ran < (a1+a2+a3+a4+a5)/atot:      # C -> D
        C = C - 1
        D = D + 1
    else:                                                        # C -> L
        C = C - 1
        L = L + 1
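The loop in Table 5 calls payoff functions PC and PD that are not listed in the code book. A minimal stand-in, assuming mean-field payoffs of an optional PGG with the Table 6 defaults (investment c = 1, interest r = 3, participation cost g = 0.5) and a loner baseline payoff of zero, might look as follows; these formulas are illustrative assumptions, not the article's exact payoff definitions.

```python
def PC(C, D, M, r=3.0, c=1.0, g=0.5):
    """Illustrative mean-field payoff of a cooperator (assumption): share
    of the multiplied common pool, minus the own contribution c and the
    participation cost g."""
    participants = C + D
    if participants == 0:
        return 0.0
    return r * c * C / participants - c - g

def PD(C, D, M, r=3.0, c=1.0, g=0.5):
    """Illustrative payoff of a defector (assumption): share of the pool
    produced by cooperators, minus only the participation cost g."""
    participants = C + D
    if participants == 0:
        return 0.0
    return r * c * C / participants - g

advantage = PD(50, 0, 100) - PC(50, 0, 100)  # defector advantage = c
```

With these stand-ins, pc − pd = −c for any mixed group, so defectors always hold a payoff advantage over cooperators, which drives the imitation pressure toward defection in the loop above.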
Table 6. Model variables, parameters, and default parameter values.
Definition | Parameter | Range (Applied)
Number of individuals | M | 100
Number of samples | N | 5
Rounds per generation | t_t | 1
Number of generations | t | 10,000
Participation cost | g | 0.5
Investment of participation | c | 1
Participation benefit | r | 3
Mutation rate | μ | 1 × 10−10
Table 7. Imitation and exploration of parameter values.
Group | Definition | Parameter | Range
Imitation dynamics | Selection intensity | s | (0, ∞)
Imitation dynamics | Imitation probability | p_r | (0, 1)
Exploration dynamics | Exploration probability | p_e | (0, 1)
Exploration dynamics | Mean for normal increment | m_u | (0, 1)
Exploration dynamics | Standard deviation for normal increment | σ | (0, 1)
Table 8. Created network characteristics of parameter values.
Definition | Parameter | Range
Number of individuals | n | (0, ∞)
Connection probability | p | (0, 1)
Adjacency matrix (linear dimension) | m × n | row, col
(The "Created Random Network" column shows an example random graph generated with n = 10 and p = 0.9; image omitted.)
Table 9. Calculated social ties (s_t) from λ_x = s_t ∈ (0, 1) of a random graph.
Row | n_0 | n_1 | n_2 | n_3 | n_4 | n_5 | n_6 | n_7 | n_8 | n_9
p = 0.2: λ_x = s_t | 0.148 | 0.312 | 0.427 | 0.174 | 0.164 | 0.402 | 0.207 | 0.362 | 0.509 | 0.207
p = 0.9: λ_x = s_t | 0.302 | 0.334 | 0.305 | 0.269 | 0.302 | 0.334 | 0.334 | 0.334 | 0.334 | 0.305
Note: For a synthetic network produced using n = 10 (n_0–n_9) as an example.
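The social-tie values in Table 9 can be reproduced in spirit (exact numbers depend on the random draw) by computing the principal eigenvector of an Erdős–Rényi adjacency matrix via power iteration. This pure-Python sketch assumes unit-norm normalization and a fixed seed for reproducibility.

```python
import math
import random

def random_graph(n, p, rng=random.Random(0)):
    """Adjacency matrix of an undirected Erdos-Renyi graph G(n, p)."""
    A = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                A[i][j] = A[j][i] = 1
    return A

def eigenvector_centrality(A, iterations=200):
    """Power iteration for the principal eigenvector (social ties s_t)."""
    n = len(A)
    x = [1.0 / math.sqrt(n)] * n
    for _ in range(iterations):
        y = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        norm = math.sqrt(sum(v * v for v in y)) or 1.0  # guard empty graph
        x = [v / norm for v in y]
    return x

st = eigenvector_centrality(random_graph(10, 0.9))  # as in Table 9, p = 0.9
```

As in the p = 0.9 row of Table 9, a dense graph yields nearly uniform centralities, whereas a sparse graph (p = 0.2) spreads the values more widely.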
