Learning and Evolution in Games, 1st Edition

A special issue of Games (ISSN 2073-4336). This special issue belongs to the section "Learning and Evolution in Games".

Deadline for manuscript submissions: 31 July 2024 | Viewed by 9616

Special Issue Editors


Dr. Heinrich H. Nax
Guest Editor
Behavioral Game Theory, University of Zurich & ETH Zürich, 8050 Zürich, Switzerland
Interests: learning in games; behavioral and experimental game theory; cooperative game theory

Prof. Dr. Yuval Heller
Guest Editor
1. Department of Economics, Bar-Ilan University, Ramat Gan 5290002, Israel
2. Department of Economics, University of California, San Diego, CA 92161, USA
Interests: evolutionary game theory; learning in games; social learning; multi-dimensional learning; replicator dynamics; bounded rationality; non-monotone dynamics; equilibrium selection; evolutionary foundation of economic behavior

Special Issue Information

Dear Colleagues,

This Special Issue aims to provide a platform for diverse scientific contributions that may previously have been rejected by standard publication outlets or omitted from papers owing to length restrictions.

We welcome papers, remarks on previously published works, generalizations, corrections, discussions of negative experimental outcomes, methodological observations, and interesting excerpts from earlier works. We particularly invite contributions on strategic interactions in which agents use practical heuristics to learn about their environment and about the behavior of others, and on how what they learn shapes their actions. Submissions with broad economic or biological applications, whether theoretical, applied, or experimental, are especially welcome.

We warmly invite authors to submit papers that have been rejected elsewhere. Such resubmissions should include the reviews received in the previous submission, a detailed account of the revisions made in response to those comments, and a declaration of honor confirming the authenticity and completeness of the provided reviews. We look forward to receiving your manuscript submissions for this Special Issue of Games.

Dr. Heinrich H. Nax
Prof. Dr. Yuval Heller
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, you can access the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for the submission of manuscripts is available on the Instructions for Authors page. Games is an international, peer-reviewed, open access journal published bimonthly by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • evolutionary game theory
  • learning in games
  • Bayesian learning
  • behavioral game theory
  • belief-based learning
  • best-response dynamics
  • conditional cooperation
  • cultural evolution
  • decentralized control
  • dynamic matching
  • dynamic systems
  • emergence of conventions
  • equilibrium selection
  • evolution of cooperation
  • evolution of social norms
  • group selection
  • imitation
  • kin selection
  • learning algorithms
  • machine learning
  • misspecified learning
  • preference evolution
  • reciprocity
  • reinforcement learning

Published Papers (6 papers)


Editorial


2 pages, 173 KiB  
Editorial
The “Black Box” Method for Experimental Economics
by Heinrich H. Nax
Games 2023, 14(2), 23; https://doi.org/10.3390/g14020023 - 01 Mar 2023
Viewed by 1182
Abstract
How humans behave in repeated strategic interactions, how they learn, how their decisions adapt, and how their decision-making evolves is a topic of fundamental interest in behavioral economics and behavioral game theory [...] Full article
(This article belongs to the Special Issue Learning and Evolution in Games, 1st Edition)

Research


22 pages, 340 KiB  
Article
On a Special Two-Person Dynamic Game
by Akio Matsumoto, Ferenc Szidarovszky and Maryam Hamidi
Games 2023, 14(6), 67; https://doi.org/10.3390/g14060067 - 24 Oct 2023
Viewed by 1013
Abstract
The asymptotic properties of a special dynamic two-person game are examined under best-response dynamics in both discrete and continuous time scales. The direction of strategy changes by the players depends on the best responses to the strategies of the competitors and on their own strategies. Conditions are given first for the local asymptotic stability of the equilibrium if instantaneous data are available to the players concerning all current strategies. Next, it is assumed that only delayed information is available about one or more strategies. In the discrete case, the presence of delays affects only the order of the governing difference equations. Under continuous time scales, several possibilities are considered: each player has a delay in the strategy of its competitor; player 1 has identical delays in both strategies; the players have identical delays in their own strategies; player 1 has different delays in both strategies; and the players have different delays in their own strategies. In all cases, it is assumed that the equilibrium is asymptotically stable without delays, and we examine how delays can make the equilibrium unstable. For small delays, stability is preserved. In the one-delay models, the critical value of the delay at which stability changes to instability is determined. In the cases of two and three delays, the stability-switching curves are determined in the two-dimensional space of the delays, where stability is lost if the delay pair crosses this curve. The methodology differs for the one-, two-, and three-delay cases outlined in this paper. Full article
(This article belongs to the Special Issue Learning and Evolution in Games, 1st Edition)
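To make the setup concrete, a generic continuous-time best-response dynamic with a single information delay can be written as below. The adjustment speeds k_n, the best-response maps R_n, and the delay τ are illustrative placeholders; the paper's exact specification may differ.

```latex
% Illustrative one-delay adjustment: player 1 observes player 2's strategy
% only with delay \tau (generic form, not the paper's exact model).
\[
\begin{aligned}
  \dot{x}_1(t) &= k_1 \bigl( R_1\bigl(x_2(t-\tau)\bigr) - x_1(t) \bigr),\\
  \dot{x}_2(t) &= k_2 \bigl( R_2\bigl(x_1(t)\bigr) - x_2(t) \bigr).
\end{aligned}
\]
% For \tau = 0 the equilibrium is assumed asymptotically stable; stability is
% preserved for small \tau and can be lost when \tau crosses a critical value.
```

In multi-delay variants of this form, the critical values trace out stability-switching curves in the space of delays, which is the kind of analysis the paper carries out.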

12 pages, 274 KiB  
Article
On the Nash Equilibria of a Duel with Terminal Payoffs
by Athanasios Kehagias
Games 2023, 14(5), 62; https://doi.org/10.3390/g14050062 - 21 Sep 2023
Viewed by 958
Abstract
We formulate and study a two-player duel game as a terminal payoffs stochastic game. Players P1 and P2 are standing in place and, in every turn, each may shoot at the other (in other words, abstention is allowed). If Pn shoots at Pm (m ≠ n), they either hit and kill them (with probability pn) or they miss and Pm is unaffected (with probability 1 − pn). The process continues until at least one player dies; if no player ever dies, the game lasts an infinite number of turns. Each player receives a positive payoff upon killing their opponent and a negative payoff upon being killed. We show that the unique stationary equilibrium is for both players to always shoot at each other. In addition, we show that the game also possesses “cooperative” (i.e., non-shooting) non-stationary equilibria. We also discuss a certain similarity that the duel has to the iterated Prisoner’s Dilemma. Full article
(This article belongs to the Special Issue Learning and Evolution in Games, 1st Edition)
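As a toy companion to the abstract, the Monte Carlo sketch below simulates the “always shoot” stationary profile. The simultaneous-turn structure, the payoff values, and the convention that both the kill and death payoffs accrue when both players are hit in the same turn are illustrative assumptions, not details taken from the paper.

```python
import random

def simulate_duel(p1, p2, kill_payoff=1.0, death_payoff=-1.0, max_turns=10_000):
    """One play of the duel under the 'always shoot' profile (illustrative).

    Assumptions (not from the paper): shots are simultaneous each turn, and if
    both players are hit in the same turn, each collects both payoffs.
    """
    u1 = u2 = 0.0
    for _ in range(max_turns):
        hit1 = random.random() < p1   # P1 hits P2 with probability p1
        hit2 = random.random() < p2   # P2 hits P1 with probability p2
        if hit1:
            u1 += kill_payoff
            u2 += death_payoff
        if hit2:
            u2 += kill_payoff
            u1 += death_payoff
        if hit1 or hit2:              # at least one player has died
            break
    return u1, u2

def expected_payoffs(p1, p2, trials=100_000):
    """Average payoffs over many simulated duels."""
    total1 = total2 = 0.0
    for _ in range(trials):
        a, b = simulate_duel(p1, p2)
        total1 += a
        total2 += b
    return total1 / trials, total2 / trials

if __name__ == "__main__":
    print(expected_payoffs(0.3, 0.5))
```
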
12 pages, 637 KiB  
Article
Social Learning for Sequential Driving Dilemmas
by Xu Chen, Xuan Di and Zechu Li
Games 2023, 14(3), 41; https://doi.org/10.3390/g14030041 - 11 May 2023
Viewed by 1297
Abstract
Autonomous driving (AV) technology has elicited discussion on social dilemmas where trade-offs between individual preferences, social norms, and collective interests may impact road safety and efficiency. In this study, we aim to identify whether social dilemmas exist in AVs’ sequential decision making, which we call “sequential driving dilemmas” (SDDs). Identifying SDDs in traffic scenarios can help policymakers and AV manufacturers better understand under what circumstances SDDs arise and how to design rewards that incentivize AVs to avoid SDDs, ultimately benefiting society as a whole. To achieve this, we leverage a social learning framework, where AVs learn through interactions with random opponents, to analyze their policy learning when facing SDDs. We conduct numerical experiments on two fundamental traffic scenarios: an unsignalized intersection and a highway. We find that SDDs exist for AVs at intersections, but not on highways. Full article
(This article belongs to the Special Issue Learning and Evolution in Games, 1st Edition)
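A minimal sketch of the kind of learning-with-random-opponents setup the abstract describes: agents repeatedly play a two-action stage game (say, “yield” vs. “go” at an unsignalized intersection) against randomly drawn opponents and update action values from realized payoffs. The payoff matrix, the simple Q-learning rule, and all parameters below are illustrative assumptions, not the paper's model.

```python
import random

# Hypothetical stage-game payoffs keyed by (my_action, opponent_action);
# actions: 0 = yield, 1 = go. The numbers are illustrative only.
PAYOFF = {
    (0, 0): 1.0,   # both yield: mild delay for both
    (0, 1): 0.5,   # I yield, opponent goes
    (1, 0): 2.0,   # I go, opponent yields: best individual outcome
    (1, 1): -5.0,  # both go: collision risk
}

def social_learning(n_agents=50, rounds=20_000, alpha=0.1, epsilon=0.1):
    """Each agent keeps per-action values and learns from random pairwise play."""
    q = [[0.0, 0.0] for _ in range(n_agents)]
    for _ in range(rounds):
        i, j = random.sample(range(n_agents), 2)        # random matching
        acts = []
        for agent in (i, j):
            if random.random() < epsilon:               # explore
                acts.append(random.randint(0, 1))
            else:                                       # exploit current values
                acts.append(0 if q[agent][0] >= q[agent][1] else 1)
        a_i, a_j = acts
        q[i][a_i] += alpha * (PAYOFF[(a_i, a_j)] - q[i][a_i])
        q[j][a_j] += alpha * (PAYOFF[(a_j, a_i)] - q[j][a_j])
    # fraction of agents whose learned policy is "go"
    return sum(1 for v in q if v[1] > v[0]) / n_agents

if __name__ == "__main__":
    print("share of 'go' policies:", social_learning())
```
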

15 pages, 1197 KiB  
Article
The Black Box as a Control for Payoff-Based Learning in Economic Games
by Maxwell N. Burton-Chellew and Stuart A. West
Games 2022, 13(6), 76; https://doi.org/10.3390/g13060076 - 16 Nov 2022
Cited by 2 | Viewed by 1987
Abstract
The black box method was developed as an “asocial control” to allow for payoff-based learning while eliminating social responses in repeated public goods games. Players are told they must decide how many virtual coins they want to input into a virtual black box that will provide uncertain returns. However, in truth, they are playing with each other in a repeated social game. By “black boxing” the game’s social aspects and payoff structure, the method creates a population of self-interested but ignorant or confused individuals that must learn the game’s payoffs. This low-information environment, stripped of social concerns, provides an alternative, empirically derived null hypothesis for testing social behaviours, as opposed to the theoretical predictions of rational self-interested agents (Homo economicus). However, a potential problem is that participants can unwittingly affect the learning of other participants. Here, we test a solution to this problem in a range of public goods games by making participants interact, unknowingly, with simulated players (“computerised black box”). We find no significant differences in rates of learning between the original and the computerised black box; therefore, either method can be used to investigate learning in games. These results, along with the fact that simulated agents can be programmed to behave in different ways, mean that the computerised black box has great potential for complementing studies of how individuals and groups learn under different environments in social dilemmas. Full article
(This article belongs to the Special Issue Learning and Evolution in Games, 1st Edition)
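A minimal sketch of what a “computerised black box” round might look like: the participant chooses how many coins to put into the box, the groupmates are simulated players, and the returned payoff follows a standard linear public goods function. The group size, multiplier, endowment, and the simulated players' contribution rule are hypothetical values for illustration, not the authors' implementation.

```python
import random

def black_box_round(contribution, endowment=20, multiplier=1.6, group_size=4,
                    simulated_rule=lambda: random.randint(0, 20)):
    """One round of a linear public goods game framed as a 'black box'.

    The participant sees only their own input and the coins returned; the
    other group members are simulated (the 'computerised black box'). All
    parameter values here are hypothetical.
    """
    others = [simulated_rule() for _ in range(group_size - 1)]
    pot = contribution + sum(others)
    share = multiplier * pot / group_size      # equal share of the multiplied pot
    return endowment - contribution + share    # kept coins plus returned share

if __name__ == "__main__":
    # A participant who inputs 10 coins in each of 5 rounds and can learn
    # only from the payoffs the box returns.
    for r in range(1, 6):
        print(f"round {r}: payoff = {black_box_round(10):.2f}")
```
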

10 pages, 1209 KiB  
Article
The Strategy Method Risks Conflating Confusion with a Social Preference for Conditional Cooperation in Public Goods Games
by Maxwell N. Burton-Chellew, Victoire D’Amico and Claire Guérin
Games 2022, 13(6), 69; https://doi.org/10.3390/g13060069 - 25 Oct 2022
Cited by 2 | Viewed by 2162
Abstract
The strategy method is often used in public goods games to measure an individual’s willingness to cooperate depending on the level of cooperation by their groupmates (conditional cooperation). However, while the strategy method is informative, it risks conflating confusion with a desire for fair outcomes, and its presentation may risk inducing elevated levels of conditional cooperation. This problem was highlighted by two previous studies which found that the strategy method could also detect equivalent levels of cooperation even among those grouped with computerized groupmates, indicative of confusion or irrational responses. However, these studies did not use large samples (n = 40 or 72) and only made participants complete the strategy method one time, with computerized groupmates, preventing within-participant comparisons. Here, in contrast, 845 participants completed the strategy method two times, once with human and once with computerized groupmates. Our research aims were twofold: (1) to check the robustness of previous results with a large sample under various presentation conditions; and (2) to use a within-participant design to categorize participants according to how they behaved across the two scenarios. Ideally, a clean and reliable measure of conditional cooperation would find participants conditionally cooperating with humans and not cooperating with computers. Worryingly, only 7% of participants met this criterion. Overall, 83% of participants cooperated with the computers, and the mean contributions towards computers were 89% as large as those towards humans. These results, robust to the various presentation and order effects, pose serious concerns for the measurement of social preferences and question the idea that human cooperation is motivated by a concern for equal outcomes. Full article
(This article belongs to the Special Issue Learning and Evolution in Games, 1st Edition)
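A rough sketch of the within-participant categorization described in the abstract: a participant counts as a “clean” conditional cooperator only if they cooperate when matched with humans and not when matched with computers. The threshold below is a hypothetical cut-off, not the authors' exact criterion.

```python
def classify(mean_with_humans, mean_with_computers, threshold=1.0):
    """Categorize a participant's two strategy-method schedules (mean coins).

    The threshold is a hypothetical cut-off for 'cooperating at all'.
    """
    coop_humans = mean_with_humans > threshold
    coop_computers = mean_with_computers > threshold
    if coop_humans and not coop_computers:
        return "conditionally cooperates with humans only"  # the 'clean' pattern
    if coop_humans and coop_computers:
        return "cooperates with humans and computers"       # consistent with confusion
    if coop_computers:
        return "cooperates with computers only"
    return "cooperates with neither"

# Example: averaging 9 coins with humans and 8 with computers would not
# count as clean conditional cooperation under this (hypothetical) rule.
print(classify(9.0, 8.0))
```
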
