Article

Logics for Strategic Reasoning of Socially Interacting Rational Agents: An Overview and Perspectives

Department of Philosophy, Stockholm University, 10691 Stockholm, Sweden
Logics 2023, 1(1), 4-35; https://doi.org/10.3390/logics1010003
Submission received: 26 June 2022 / Revised: 25 November 2022 / Accepted: 19 January 2023 / Published: 6 February 2023

Abstract
This paper is an overview of some recent and ongoing developments of formal logical systems designed for reasoning about systems of rational agents who act in pursuit of their individual and collective goals, explicitly specified in the language as arguments of the strategic operators, in a socially interactive context of collective objectives and attitudes which guide and constrain the agents’ behavior.

1. Introduction

The concepts of agency and rationality are fundamental for describing and understanding modern society. Rational agents can be humans, intelligent robots, automated devices, or other autonomous entities that act and interact in pursuit of their individual and collective preferences and goals. In accordance with their various essential attributes, such as beliefs, knowledge, and abilities to observe, learn, remember, reason, and communicate, rational agents plan and execute strategies (in AI, often also called “policies”) to achieve their goals. The shared environment in which agents act and interact is commonly called a multiagent system. This generic concept covers a variety of entities, including teams of agents, social groups and networks, organizations, markets, and entire societies. Nowadays, the modeling and analysis of systems of rational agents is a broad interdisciplinary field of extensive research, bringing together ideas, approaches, and methods from the humanities, social sciences, artificial intelligence, decision and game theory, mathematics, and computer science.
Agents and groups (also often called “coalitions”) of rational agents in multiagent systems typically have competing, and sometimes mutually adversarial, interests and goals. When the goals are purely quantitative, such systems are modeled as multiplayer games, studied in game theory, where rational players act so as to optimize their payoffs. Agents may also have purely qualitative (“win–lose”) or combined goals. A fundamental aspect of reasoning both within and about multiagent systems is reasoning about the abilities of agents and coalitions to design and execute strategies to achieve their goals, or to prevent others from achieving theirs, hereafter called (multiagent) strategic reasoning.
Importantly, the behavior of rational agents is usually guided not only by their individual and group goals, but also by those of the others, as well as by common (e.g., societal) goals. Furthermore, their behavior is constrained by various individual, collective, and societal norms (such as rights, obligations, and prohibitions). I will call the totality of common goals, attitudes, norms, and other constraints under which agents act and interact the socially interactive context. That context leads to a complex interplay of cooperation and competition between the agents in the system.
Since the 1980s, formal logic has emerged as a useful and efficient methodological and technical tool for modeling, analysis, and reasoning about multiagent rational interaction. A rich variety of logical frameworks for strategic reasoning in, and about, multiagent systems has been developed. Early formal logical systems capturing agents’ abilities were developed with philosophical motivations and applications in mind, including Parikh’s Game Logic [1], where he also suggested perhaps the first applications of logic to multiagent social scenarios, such as cake-cutting, exchanging secrets (see further Example 10), etc.; Brown’s modal logic of ability [2]; and Belnap and Perloff’s STIT logics [3]. In the late 1990s–early 2000s, two important logical systems for multiagent strategic reasoning were introduced independently: Pauly’s coalition logic CL, introduced in [4,5], and Alur, Henzinger, and Kupferman’s alternating time temporal logic ATL, introduced first in [6], and then, in its final version, in [7] (cf. [8,9] for more recent and detailed discussions of these and related works).
Some remarks are due on conceptual issues arising with regard to the terminology used in this paper, which is also generally adopted in the literature on the topic. To begin with, let me emphasize that other important concepts related to rational agency that are not discussed in the present work, such as beliefs, desires, intentions, goals, and commitments, have been extensively explored in the philosophy and AI literature. Among the most influential such approaches are [10,11,12], where (especially in the latter) agents’ goals and what they imply are explicitly indicated in the language. While important and influential, these works are not about strategic reasoning regarding the agents’ goals and their achievement, which is the focus of the present paper, so they will not be discussed further here.
Now, let me explain the meaning and use of some key concepts and terms used in this paper, to avoid possible misunderstandings, especially against the background of the alternative approaches mentioned above. First, the use of the word “goal” in the context of these logical frameworks and formal systems should be understood in a precise and possibly somewhat narrow sense. Agents’ goals are not some intentional objects permanently or temporarily ascribed to the agents, or intrinsically associated with them. The agents’ goals in the present work are not explicitly specified as such in the language, but are determined by the context and identified with properties (respectively, sets) of states in the model—or, more generally, with properties (respectively, sets) of plays in that model—which the agents are assumed to pursue with their strategic behavior. These sets of states, or plays, are represented in the language by formulae which extensionally specify them by means of their formal semantics. Thus, the logical languages considered here cannot express statements such as “the agent a has as its goal to realize G”, nor even “the goal of agent a is to realize G”, but rather “the agent a has a strategy that guarantees (under the assumed interactive context) the truth of the formula G”, where G is regarded as “a’s goal” in this context. “Coalitional goals” should be understood likewise. Thus, agents’ goals are only contextually specified and can freely change, even within the same formalized statement, but this should be taken only as a terminological choice, rather than a conceptual commitment to what “goal” means. Next, the “rationality” of agents here should also be understood in a concrete and precise sense: once a goal (in the sense described above) is assigned to a rational agent, that agent is assumed to act in pursuit of that goal, and that assumption naturally constrains the agent’s behavior. This is all that “rationality” is assumed to mean here. Lastly, the phrase “socially interactive context” should also be considered merely as a convenient and sufficiently intuitive term, rather than a precise social concept. Though not precisely defined, that term is hopefully well illustrated by the various logical frameworks presented here.
There are other interesting and important approaches to logics for games and strategic behavior, which—mostly for practical reasons—I will only mention here, but will not discuss further in this paper:
  • The STIT-based approach, building on the theory of “Seeing To It That” (STIT), originating from the seminal work of Belnap and Perloff [3]. For exploring the closer links of the present paper with STIT, the reader is referred to [13] and the overview chapter [14] on using STIT for strategic reasoning, as well as to [15] for temporal extensions of STIT and applications to normative reasoning, to [16] for using STIT for reasoning about social influence, and to [17] for providing semantics of temporal STIT in concurrent game models used in this paper.
  • The dynamic logic based approach, originating from Parikh’s Game Logic [1], and more recently explored further in [18,19] and other related works.
For further references and a broader view on logics for analyzing games, I also refer the reader to [20].
Now, a few words about the main precursors of the logics presented here. The logic CL was introduced in [4,5] with the explicit intention to formalize reasoning about one-step (local) strategic abilities of agents and coalitions, i.e., strategic abilities to guarantee the achievement of their explicitly specified goals in the immediate outcome of their (individual or collective) action, regardless of the respective actions of all remaining agents. The logic ATL was initially introduced in [6,7] as a logical formalism for formal specification and verification of open (i.e., interacting with the environment) computer systems, where the agents represent concurrently executed processes. Still, it was gradually adopted as one of the most popular and standard logical systems for reasoning about long-term strategic abilities of agents and coalitions in extended concurrent multiplayer games. Technically, ATL can be described as an extension of CL with the long-term temporal operators G, F, and U, which were previously adopted, inter alia, in the branching-time temporal logic CTL (which can be regarded as a single-agent fragment of ATL). Both CL and ATL feature a special type of modal operators, denoted by [C] in CL and by ⟨⟨C⟩⟩ in ATL, parameterized with a coalition of agents C, such that, given a coalitional goal of C formalized as (ensuring the truth of) a formula ϕ, the formula [C]ϕ, respectively ⟨⟨C⟩⟩ϕ, intuitively says that the coalition C has a collective strategy σ_C that guarantees the satisfaction of ϕ in every outcome state (for CL), respectively in every outcome play (for ATL), that can occur when the agents in C execute their strategies in σ_C, regardless of the choices (strategic or not) of actions of the agents that are not in C.
Thus, both CL and ATL capture reasoning about unconditional strategic abilities of agents within coalitions, acting in full cooperation with each other in pursuit of their coalitional goals and aiming to succeed against any possible behavior of their opponents, who are thus regarded as complete adversaries (in the context of CL) or as a randomly behaving environment (in the original context of ATL). This, however, is a rather narrow and extreme perspective, as agents in the real world are seldom either complete allies or complete adversaries to each other. In reality, all agents acting in the system (except for the environment, or absolute adversaries) have their own goals and act in pursuit of their fulfillment, rather than just to prevent the proponents from achieving their goals. Thus, the strategic interactions of rational agents usually involve a complex interplay of cooperation and competition, both driven by the individual and collective goals of all agents, which may be partly or fully allied or adversarial to each other. Adequately modeling and capturing the behavior and interactions of rational agents within a socially interactive context requires much richer and more versatile formal logical frameworks than those provided by the logics CL, ATL and their close variations. Several richer frameworks have been proposed more recently, including strategy logics [21,22,23,24]; the socially friendly coalitional logic SFCL and the group protecting coalitional logic GPCL [25], with the latter subsequently extended with temporal operators for long-term goals to the temporal logic of coalitional goal assignments TLCGA in [26]; the logic for conditional strategic reasoning ConStR (cf. [27,28]); and several recently proposed more special-purpose logics for strategic reasoning, focusing on know-how strategies [29], ethical dilemmas [30], responsibility [31], blameworthiness [32], etc.
This paper aims to provide an accessible and not too technical, yet fairly detailed, overview of the recent developments in the area and to outline some perspectives and challenges for further research. To keep the paper within reasonable scope and size, I will not discuss here the aspects of socially interactive context related to agents’ knowledge, nor to normative constraints, but will focus on the strategic reasoning guided only by various patterns of cooperation and competition between agents and coalitions. I will usually assume here complete knowledge of the agents about the system and their full observability (with or without memory) and ability to communicate and coordinate during the “play”. These assumptions are sometimes not justified, and then substantial further complications arise, both conceptual and computational, with respect to analyzing the agents’ strategic abilities and designing their strategies in the context of agents’ incomplete or imperfect information. For more on these issues, the reader is referred to, e.g., [8,9,33], and the entire volume [34], where the latter is published.

Structure of the paper

Section 2 provides some technical preliminaries on concurrent game models, including plays and strategies in them, and gives an illustrative example. Then, Section 3 presents a brief informal preview of the main logical systems presented in the next five sections: the basic coalition logic CL in Section 4, its temporal extension ATL in Section 5, the logic of conditional strategic reasoning ConStR in Section 6, the socially friendly coalition logic SFCL in Section 7, and the logic of local coalitional goal assignments LCGA and its temporal extension TLCGA in Section 8. Finally, Section 9 proposes a uniform logical framework, called basic strategy logic, which involves variables over strategies and quantification over them, and thus enables uniform embedding of all the other strategic operators mentioned here. I end with brief concluding remarks in Section 10.

2. Technical Preliminaries

The reader is expected to have a basic background in propositional classical and modal logics, as covered in the introductory chapters of, e.g., [35]. In addition, this section provides the technical preliminaries needed for defining and understanding the formal semantics of the various strategic operators and logics presented and discussed further on. Most of it can be omitted by a reader who is only interested in the informal aspects of the topics, and consulted only if, and when, necessary for understanding the further content. For more technical details on these preliminaries, the reader is referred to [8,9], as well as to [36] (Chapter 9) on concurrent game models and ATL.

2.1. Concurrent Game Models

Consider a fixed finite nonempty set of agents Agt = {a_1, …, a_n}, also called here players, and a fixed countable set of atomic propositions Prop. Subsets of Agt will also be called coalitions. The Cartesian product of a family of sets {X_a}_{a∈Agt} is the set of tuples (x_1, …, x_n) where x_i ∈ X_{a_i} for each i = 1, …, n, denoted, as usual, by ∏_{a∈Agt} X_a.
Definition 1. 
Let  O  be any nonempty set. A (strategic) game form for the set of agents  Agt  over the set of outcomes  O  is a tuple
G = ( Act , act , O , out )
 where
  • Act is a nonempty set of actions;
  • act : Agt → P⁺(Act) is a mapping assigning to each a ∈ Agt a nonempty set act(a) of actions available to the player a;
  • out : ∏_{a∈Agt} act(a) → O is a map assigning to every action profile ζ ∈ ∏_{a∈Agt} act(a) a unique outcome in O.
Definition 2. 
A concurrent game model (CGM) is a tuple
M = ( Agt , S , Act , g , Prop , L )
 where
  • S  is a nonempty set of (game) states;
  • Act  is a nonempty set of actions;
  • g is a game map, assigning to each state w ∈ S a strategic game form
    g(w) = (Act, act_w, S, out_w) over the set of outcomes S;
  • Prop   is a countable set of atomic propositions;
  • L : S → P(Prop) is a labeling function, assigning to every state in S the set of atomic propositions true at that state.
Thus, for each a ∈ Agt and w ∈ S, the set act_w(a) consists of the locally available actions for a in w. We can now define the global action function act : Agt × S → P⁺(Act) by setting act(a, w) := act_w(a). We also define the set Act_a := ⋃_{w∈S} act_w(a) of possible actions for a.
Given a concurrent game model M = ( Agt , S , Act , g , Prop , L ) , we define the following.
  • An action profile ζ ∈ ∏_{a∈Agt} Act_a is available at the state w if it consists of locally available actions, i.e., if ζ ∈ ∏_{a∈Agt} act_w(a).
    The set of all action profiles available at w will be denoted by ActProf_w.
  • out_M is the global outcome function assigning to every state w and action profile ζ ∈ ActProf_w the unique outcome out_M(w, ζ) := out_w(ζ).
    When M is fixed by the context, it will be omitted from the subscript.
  • More generally, given a coalition C ⊆ Agt, a joint action for C in M is a tuple of individual actions ζ_C ∈ ∏_{a∈C} act(a). In particular, for any action profile ζ ∈ ∏_{a∈Agt} act(a), ζ|_C is the joint action obtained by restricting ζ to C. The joint action ζ|_C is locally available at the state w iff every individual action in it is locally available for the respective agent in w.
  • For any w ∈ S, C ⊆ Agt, and joint action ζ_C that is available at w, we define the set of possible outcomes from the application of the joint action ζ_C at the state w:
    Out[w, ζ_C] = { u ∈ S ∣ there exists ζ ∈ ActProf_w such that ζ|_C = ζ_C and out(w, ζ) = u }.
    In the special case when C = Agt, ζ_Agt is a full available action profile, so Out[w, ζ_Agt] consists of the single outcome out(w, ζ_Agt).
Remark 3. 
Note that we only consider deterministic models, where every available action profile produces a single outcome. This is not an essential restriction, because every nondeterministic model can be made deterministic by adding a fictitious extra agent (say, nature) whose actions resolve the nondeterminism and settle the outcome. However, it should be noted that whether or not nondeterministic models are considered makes a difference to the validities of the coalition logic CL, as shown in Section 4.4, and in the various other logics where strategic abilities of coalitions can be expressed.
It is sometimes more convenient to eliminate the “local view” from the definition of concurrent game models, given by the game map assigning the local strategic game forms, and to redefine these models globally, as follows:
M = ( Agt , S , Act , act , Out , Prop , L )
where the components are defined as above. Clearly, the two definitions are equivalent and will be used interchangeably.
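To make these definitions concrete, here is a minimal Python sketch of the global presentation of a concurrent game model; the class and method names are illustrative, not taken from the paper:

```python
from itertools import product

class CGM:
    """A concurrent game model M = (Agt, S, Act, act, out, Prop, L), global presentation."""

    def __init__(self, agents, states, act, out, label):
        self.agents = list(agents)   # Agt, in a fixed order
        self.states = set(states)    # S
        self.act = act               # act(a, w): nonempty set of actions available to a at w
        self.out = out               # out(w, zeta): the unique outcome of profile zeta at w
        self.label = label           # label(w): set of atomic propositions true at w

    def action_profiles(self, w):
        """All action profiles available at state w (the set ActProf_w)."""
        choices = [self.act(a, w) for a in self.agents]
        for combo in product(*choices):
            yield dict(zip(self.agents, combo))

    def outcomes(self, w, joint_c):
        """Out[w, zeta_C]: outcomes of all available profiles extending the joint action of C."""
        return {self.out(w, zeta) for zeta in self.action_profiles(w)
                if all(zeta[a] == joint_c[a] for a in joint_c)}
```

A joint action for a coalition is represented here simply as a dictionary mapping its members to actions, and Out[w, ζ_C] is computed by brute-force enumeration of the available action profiles extending it.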
Example 4 
(Concurrent game model: Adam and Eve in hotel Life). Figure 1 pictures a concurrent game model formalizing the following story, which the reader is advised to take at face value and with a grain of salt. While it has a clearly allegoric flavor, it is not meant to formalize some meaningful (real or fictional) life story, but just to give an intuitively appealing and sufficiently nontrivial, yet not too complicated, example of a concurrent game model on which to illustrate the components of that concept. Thus, questions about the meaning or idea behind this or that state or transition do not necessarily have good answers.
Two agents, Adam and Eve, live in hotel “Life”. It has (at least) three rooms: R1, R2, and R3. Initially, both Adam and Eve are in room R1. Every day, each of them is able to choose either to stay in the same room (action stay, denoted by s), or to move to another room (action move, denoted by m), with some restrictions described further. Whichever room each of them chooses, they stay there for the night, and then each chooses again to either stay in the same room or move to another room, as specified in the model. In some cases, they also have the option to “retreat” (action retreat, denoted by r).
Here are the formal components of the model, with some brief explanations:
  • The set of agents is Agt = { Adam , Eve } .
  • The set of game states is St = {A1E1, A1E2, A2E1, A2E2, A3E1, A1E3, A3E2, A2E3}, and the names of the states represent the current locations of the agents, e.g., A1E2 means that Adam is in R1 and Eve is in R2, etc. Thus, the names of the states can also be interpreted as pairs of atomic propositions saying “Adam is in room X” and “Eve is in room Y”. These atomic propositions, however, will not feature in the example.
  • There are three atomic propositions of interest for us: Prop = {T, H_A, H_E}, where:
    - T is true in a state iff Adam and Eve are in the same room (“together”), i.e., in A1E1 and A2E2, as indicated in their labels;
    - H_A, meaning “Adam is happy”, is true in the states where H_A is listed in the label;
    - H_E, meaning “Eve is happy”, is true in the states where H_E is listed in the label.
  • The global set of actions is Act = {stay, move, retreat}, denoted in the figure by s, m, r, respectively. The transitions caused by the respective ordered pairs of actions (the first for Adam, the second for Eve) are indicated in the figure. For example, if at state A1E1 both Adam and Eve choose to stay, then the transition, labeled with (s,s), loops back to the same state, whereas if Adam chooses to stay and Eve chooses to move, the transition, labeled with (s,m), leads to the state A1E2, etc. These transitions also define the global outcome function Out.
  • The global action function act (defining the actions available to every agent at every state) can be readily extracted from the figure. For example, act(Adam, A1E2) = {s, m}, act(Eve, A1E2) = {s, m, r}, etc.
  • Lastly, the labeling function L : St → P(Prop) is explicitly given in the figure, where the label of each state is given in { }.
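To connect the example with the sketch above, the fragment of the model that is explicitly described in the text could be encoded as plain data. Only the transitions and labels mentioned in the text are included (the full transition structure is in Figure 1), and the happiness labels for A2E2 and A1E3 are inferred from the induced plays in Section 2.2 below, so they should be checked against the figure:

```python
# Transitions explicitly mentioned in the text;
# keys are (state, (Adam's action, Eve's action)).
TRANS = {
    ("A1E1", ("s", "s")): "A1E1",  ("A1E1", ("s", "m")): "A1E2",
    ("A1E1", ("m", "m")): "A2E2",  ("A1E2", ("m", "m")): "A2E1",
    ("A1E2", ("m", "r")): "A1E3",  ("A2E1", ("m", "m")): "A1E2",
    ("A2E1", ("s", "m")): "A2E2",  ("A1E3", ("m", "s")): "A2E3",
    ("A1E3", ("m", "m")): "A2E2",  ("A2E3", ("m", "s")): "A1E3",
    ("A2E2", ("s", "s")): "A2E2",
}
# A sample of the global action function, as given in the text.
ACT = {("Adam", "A1E2"): {"s", "m"}, ("Eve", "A1E2"): {"s", "m", "r"}}
# Labels: T where the agents are together (A1E1, A2E2);
# happiness labels inferred from the plays in Section 2.2.
LABELS = {"A1E1": {"T"}, "A2E2": {"T", "HA", "HE"}, "A1E3": {"HE"}}
```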

2.2. Plays and Strategies in Concurrent Game Models

Given a (globally defined) concurrent game model M = (Agt, S, Act, act, Out, Prop, L), a partial play, or history, in M is either an element of S or a finite word of the form:
h = w_0 ζ_0 w_1 ⋯ w_{n−1} ζ_{n−1} w_n
where w_0, …, w_n ∈ S and, for each i < n, ζ_i is a locally available action profile in ActProf_{w_i}.
Some partial plays in Example 4 are:
A1E1;  A1E1 (s,m) A1E2;  A1E1 (s,m) A1E2 (m,m) A2E1 (s,m) A2E2;  etc.
The last state in a history h will be denoted by l ( h ) . The set of histories in M is denoted by Hist ( M ) . The sequence of states w 0 w 1 w n obtained by forgetting the action profiles in the history h is called the (initial) path induced by h and denoted path ( h ) .
A (memory-based) strategy for player a is a map σ_a assigning to each history h = w_0 ζ_0 ⋯ ζ_{n−1} w_n in Hist(M) an action σ_a(h) from act(a, w_n). A strategy σ_a is path-based if it assigns actions only based on the path generated by the history (not taking into account the intermediate action profiles), i.e., σ_a(h) = σ_a(h′) whenever path(h) = path(h′). A strategy σ_a is memoryless, or positional, if it assigns actions only based on the current (last) state, i.e., σ_a(h) = σ_a(h′) whenever l(h) = l(h′).
Given a coalition C Agt , a joint strategy for C in the model M is a tuple Σ C of individual strategies, one for each player in C. For every joint strategy Σ C and a history h, we denote by Σ C ( h ) the joint action prescribed by Σ C on h.
A (global) strategy profile  Σ is a joint strategy for the grand coalition Agt , i.e., an assignment of a strategy to each player. We denote the set of all strategy profiles in the model M by StratProf M and the set of all joint strategies for a coalition C in M by StratProf M ( C ) . Thus, StratProf M = StratProf M ( Agt ) .
Given a strategy profile Σ, the play induced by Σ at w ∈ S is the unique infinite word
play(w, Σ) = w_0 ζ_0 w_1 ζ_1 w_2 ζ_2 ⋯
such that w_0 = w and, for each n < ω, ζ_n = Σ(w_0 ζ_0 ⋯ ζ_{n−1} w_n) and w_{n+1} = out(w_n, ζ_n).
The infinite word w 0 w 1 w 2 obtained by forgetting the action profiles in this infinite play is called the path induced by  Σ  at w, denoted path ( Σ , w ) .
Here are some simple, informally described, positional strategy profiles and the plays induced by them in Example 4:
  • Consider the strategy σ1 which prescribes the action s if Adam and Eve are currently together, or else m, if possible, otherwise, again, s. If both Adam and Eve follow that strategy starting from A1E1, the induced play is
    play(A1E1, (σ1, σ1)) = A1E1 (s,s) A1E1 (s,s) ⋯
    Respectively, the induced play starting from A1E2 is
    play(A1E2, (σ1, σ1)) = A1E2 (m,m) A2E1 (m,m) A1E2 (m,m) ⋯
  • Consider the strategy σ2 which prescribes the action s if the player is currently happy, or else r if possible, otherwise m, if possible, otherwise s.
    If both Adam and Eve follow σ2 starting from A1E1, the induced play is
    play(A1E1, (σ2, σ2)) = A1E1 (m,m) A2E2 (s,s) A2E2 (s,s) ⋯
  • If Adam follows σ1 and Eve follows σ2, the induced play starting from A1E1 is
    play(A1E1, (σ1, σ2)) = A1E1 (s,m) A1E2 (m,r) A1E3 (m,s) A2E3 (m,s) A1E3 (m,s) ⋯
  • Lastly, if Adam follows σ1 and Eve follows the strategy σ3 which prescribes the action s only if both players are currently happy or if no other action is available, or else r if possible, otherwise m, then the induced play starting from A1E1 is this happy ending:
    play(A1E1, (σ1, σ3)) = A1E1 (s,m) A1E2 (m,r) A1E3 (m,m) A2E2 (s,s) A2E2 (s,s) ⋯
More generally, given a coalition C ⊆ Agt, a state w ∈ S, and a joint strategy Σ_C for C, the set of outcome plays induced by the joint strategy Σ_C at w is the set of plays
Plays(w, Σ_C) = { play(w, Σ) ∣ Σ ∈ StratProf_M such that Σ(a) = Σ_C(a) for all a ∈ C }
Given a strategy profile Σ, we also denote Plays(w, Σ, C) := Plays(w, Σ|_C). We will likewise use the notation paths(w, Σ, C) for the set of paths obtained from the plays in Plays(w, Σ, C). Since these only depend on the strategies assigned to the players in C, I shall freely use the notation Plays(w, Σ, C) and paths(w, Σ, C) even when Σ is defined for all members of C but not necessarily for all other players.
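For positional strategies, the induced play is straightforward to compute. Here is a small sketch continuing the hypothetical CGM class from Section 2.1, with a positional strategy represented as a function from states to actions:

```python
def induced_play(M, w, sigma, steps=5):
    """A finite prefix of play(w, Sigma) for a positional strategy profile.

    `sigma` maps each agent to a function from states to actions."""
    play = [w]
    for _ in range(steps):
        zeta = {a: sigma[a](w) for a in M.agents}  # positional: depends only on l(h)
        w = M.out(w, zeta)                         # deterministic outcome of the profile
        play.extend([zeta, w])
    return play
```

For instance, encoding σ1 for both Adam and Eve over the model fragment encoded earlier reproduces the looping play at A1E1 shown above.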

3. A Variety of Strategic Abilities

This short section is a brief informal preview of the types of strategic abilities and logical operators and systems formalizing them that will be presented further in the paper.
Most of the natural claims about strategic abilities of agents and coalitions (for convenience, hereafter I will often treat individual agents as singleton coalitions) can be stated in terms of existence of a strategy profile Σ (in particular, an action profile ζ ) that satisfies certain requirements formalizing these abilities. Here is a natural hierarchy of types of strategic abilities and associated claims about them, which will be discussed further.
  • Strictly competitive and unconditional, where all agents, respectively, coalitions, act only in pursuit of their own goals and can be assumed to regard all others either as adversaries, or as behaving randomly. An alternative way of thinking here is that these are abilities of a given agent, respectively, coalition, to achieve goals independently of the actions of all other agents. Both interpretations make good sense in the context of this paper. The typical claim for such kind of local (immediate, one step) abilities is:
    “The coalition A has a joint action to ensure satisfaction of the coalitional goal of A in every outcome state that may result from that joint action”.
    This is the informal semantic reading of the strategic operator [ A ] in the coalition logic CL [4,5], which will be presented in Section 4.
    Respectively, here is a typical long-term strategic ability claim:
    “The coalition A has a joint strategy to ensure satisfaction of the coalitional goal of A in every outcome play resulting from A following that strategy”.
This is the informal reading of the strategic operator ⟨⟨A⟩⟩ in the alternating time temporal logic ATL and family [7], which will be presented in Section 5.
  • Competitive, but conditional on the other agents’ expected actions, where coalitions (respectively, agents) still act only in pursuit of their own goals, but, when deciding on their course of action, they take into account the goals and the respective expected actions of the other players, so they are not treated as adversaries (or, behaving randomly), but as rational agents pursuing their own goals. Here is a typical such claim expressing conditional strategic ability:
“For some (or, every) joint action of the coalition A that ensures satisfaction of its goal γ_A, the coalition B has a joint action of its own to ensure satisfaction of its goal γ_B”.
    There are (at least) two naturally arising readings of such conditional claims, as “proactive ability” and as “reactive ability”, and two respective versions of local strategic operators formalizing them. These were first introduced in [27] (respectively, as “de dicto” and “de re” abilities) and further studied in [28]. Conditional abilities will be discussed in more detail in Section 6.
  • Socially cooperating abilities, where agents and coalitions still act in pursuit of their own goals, but when deciding on their course of action take into account the goals of other agents in the system and make allowance, if possible, for their satisfaction, too. Thus, agents and coalitions are assumed to be not only rational but also cooperative, reconciling their interests with those of the others whenever possible. Two natural examples of strategic operators formalizing such abilities are:
    (a)
    the “cooperative ability” operator O c , again introduced and studied in [27,28], which, when applied to (disjoint) coalitions A and B with respective goals ϕ and ψ , formalizes the statement saying that
    A has a joint action σ A which guarantees the satisfaction of ϕ and enables B to also apply a joint action σ B that guarantees ψ ”.
    This will be presented in more detail in Section 6.
    (b)
    The “socially friendly” coalitional operator SF, introduced and studied in [25], which is a somewhat more general version of O c , informally says that
    A has a joint action σ A that guarantees satisfaction of its goal and also enables the complementary coalition A ¯ to realize any one of a list of secondary goals by applying a respectively suitable joint action”.
    The operator SF and the logic SFCL built on it will be presented in more detail in Section 7.
  • Abilities for cooperation, protecting the interests of agents and coalitions. These capture the idea, complementary to that of SFCL, that, while socially responsible rational agents and coalitions contribute with their individual and collective actions to the society of all agents, they wish to do that in a way that protects their individual and collective interests and goals. Such abilities are expressed by means of the coalitional goal assignment operator [·], introduced in [25] as a local operator and extended with temporalized long-term goals in [26]. A coalitional goal assignment γ is a mapping which assigns a goal formula to each possible coalition, and the operator [γ] formalizes the claim that there is a strategy profile σ of Agt, the restriction of which to every coalition C is a joint strategy σ_C that guarantees the satisfaction of its goal γ(C) regardless of any possible behavior of the remaining agents. The coalitional goal assignment operator and the logics LCGA and TLCGA built on it will be presented in more detail in Section 8.

4. Coalition Logic and Unconditional Local Strategic Reasoning

The case of strictly competitive (noncooperative) and unconditional abilities, presented in this section, is the extreme case of strategic abilities, where agents, respectively, coalitions, act only in pursuit of their own goals and regard all others either as adversaries or as behaving randomly. In this section, I will present briefly the basic multiagent logic CL formalizing these abilities and will illustrate its use with some examples.

4.1. The Basic Logic for Coalitional Strategic Reasoning CL

As noted earlier, a typical claim for such kind of local strategic abilities is of the following type: “The coalition C has a joint action to guarantee satisfaction of the coalitional goal of C in every outcome state that may occur as a result of that joint action”.
This is the informal semantic reading of the strategic operator [C] in the coalition logic CL introduced in [4]; cf. also [5,37]. CL extends the classical propositional logic with coalitional strategic modal operators [C], for any coalition of agents C. The formulae of CL are defined by the following formal grammar:
φ ::= p ∣ ⊤ ∣ ¬φ ∣ (φ ∧ φ) ∣ [C]φ
All other standard propositional connectives and constants, including ⊥, ∨, →, and ↔, are defined as usual. I will write [i] instead of [{i}].

4.2. Expressing Claims about Strategic Abilities in CL

Here are some examples of formalizing statements about agents’ strategic abilities in CL , based on the story in Example 4. The reader may wish to check whether they are all intuitively true in the model in Figure 1, say, at state A 1 E 1 .
1.
[Eve]H_E → ¬[Adam]¬H_E
If Eve has an action ensuring that she becomes happy, then Adam cannot prevent Eve from reaching a state where she is happy.
2.
The statement above naturally generalizes (with Win being an atomic proposition) to
[i]Win → ¬[Agt∖{i}]¬Win
If the agent  i  has an action to guarantee reaching a “winning” outcome, then the coalition of all other agents cannot prevent  i  from reaching a “winning” outcome.
It should be intuitively clear that this statement expresses a valid principle of CL . Indeed, we will see shortly that this is the case.
3.
¬[Adam]T ∧ ¬[Eve]T ∧ [{Adam, Eve}]T
Neither Adam nor Eve has an action ensuring that they stay together, but they have a joint action ensuring that.
4.
¬[Adam]H_A ∧ ¬[Eve]H_E ∧ [{Adam, Eve}](H_A ∧ H_E)
Neither Adam nor Eve has an action ensuring happiness for himself/herself, but they have a joint action ensuring happiness for both.
5.
¬[Adam](H_A ∧ ¬[Eve]H_E)
Adam cannot act so as to ensure at the outcome state that both Adam is happy and Eve does not have an action to ensure reaching her happiness.
6.
[{Adam, Eve}]([Adam]H_A → [Eve]H_E)
Adam and Eve can act jointly so that at the outcome state Adam has an action to ensure reaching his happiness only if Eve has an action to ensure reaching her happiness.
7.
[Adam][Eve](H_A ∧ H_E ∧ [Adam]H_E ∧ [Eve]H_A)
Adam can act to ensure (at the outcome state), that Eve can then act to ensure that they are both happy and that each of them can act so as to keep the other happy.
Thus, we see that the language of CL allows expressing meaningful and nontrivial claims about agents’ local (immediate) strategic abilities. However, CL does not allow expressing long-term (temporal) goals and claims, e.g., that Adam and Eve have a joint strategy to stay together forever, or to eventually reach happiness. For that, a more expressive logic is needed, also involving temporal operators. It is introduced in Section 5.

4.3. Formal Semantics of CL

The formulae of CL are interpreted in concurrent game models. Consider a CGM M = (Agt, S, Act, act, Out, Prop, L). Truth of a CL-formula ψ at a state s in M, denoted M, s ⊨ ψ, is defined, as in modal logic, by structural induction on formulae via the clauses:
  • M, s ⊨ p iff p ∈ L(s), for p ∈ Prop.
  • M, s ⊨ ¬ϕ iff M, s ⊭ ϕ.
  • M, s ⊨ (ϕ1 ∧ ϕ2) iff M, s ⊨ ϕ1 and M, s ⊨ ϕ2.
  • M, s ⊨ [C]ϕ iff there exists a joint action σ_C available at s,
    such that M, u ⊨ ϕ for each u ∈ Out[s, σ_C].
Note that this definition makes no implicit assumption that the agents in C know the goals of the remaining agents, nor the actions that they may possibly use to achieve these goals.
Here are examples of truth of some of the CL formulae listed in Section 4.2 in the CGM for Example 4, given in Figure 1, which are left for the reader to formally verify.
  • M, A1E1 ⊨ ¬[Adam]T;  M, A1E1 ⊨ ¬[Eve]T.
  • M, A1E1 ⊨ [{Adam, Eve}]T.
  • M, A1E1 ⊨ ¬[Adam]H_A ∧ ¬[Eve]H_E ∧ [{Adam, Eve}](H_A ∧ H_E).
  • M, A1E1 ⊨ [{Adam, Eve}]([Eve]H_E ∧ ¬[Adam]H_A ∧ [{Adam, Eve}](H_A ∧ H_E)).
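Claims like these can also be verified mechanically. Below is a sketch of a brute-force evaluator implementing the semantic clauses above over the hypothetical CGM class from Section 2, with formulae encoded as nested tuples (an illustrative encoding, not from the paper):

```python
from itertools import product

def holds(M, s, phi):
    """Truth of a CL formula at state s, by the semantic clauses above."""
    op = phi[0]
    if op == "atom":                      # ("atom", "p")
        return phi[1] in M.label(s)
    if op == "not":                       # ("not", phi1)
        return not holds(M, s, phi[1])
    if op == "and":                       # ("and", phi1, phi2)
        return holds(M, s, phi[1]) and holds(M, s, phi[2])
    if op == "coal":                      # ("coal", C, psi) encodes [C]psi
        _, C, psi = phi
        for combo in product(*[M.act(a, s) for a in C]):
            joint = dict(zip(C, combo))   # a joint action for C, available at s
            if all(holds(M, u, psi) for u in M.outcomes(s, joint)):
                return True
        return False
    raise ValueError(f"unknown connective: {op}")
```

Given a full encoding of the model from Figure 1, the first claim above would then be checked as holds(M, "A1E1", ("not", ("coal", ["Adam"], ("atom", "T")))).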

4.4. Axiomatizing the Validities of CL

A CL formula ϕ is called:
  • (Logically) valid if M, s ⊨ ϕ for every CGM M and every state s ∈ M.
  • Satisfiable if M, s ⊨ ϕ for some CGM M and some state s ∈ M.
Pauly [4,5] obtained a complete axiomatization of the valid formulae of CL , by extending the classical propositional logic PL with the following axiom schemes and rule:
  • Agt-maximality: ¬[Ø]¬φ → [Agt]φ
    This axiom says that the grand coalition Agt can act collectively so as to guarantee any goal that is satisfied in at least one outcome state. Note that the validity of this axiom presupposes that the models are deterministic, implying that the coalition of all agents Agt can enforce any particular possible outcome.
  • Safety: ¬[C]⊥
    No coalition has the ability to ensure the falsum will become true.
  • Liveness: [C]⊤
    Every coalition has the ability to ensure the truth will become true.
  • Superadditivity: for any C1, C2 ⊆ Agt such that C1 ∩ C2 = Ø:
    ([C1]φ1 ∧ [C2]φ2) → [C1 ∪ C2](φ1 ∧ φ2)
    This axiom scheme says that two disjoint coalitions, each of which has a joint action to guarantee satisfaction of their own goal, can join forces (simply by each of them following their respective joint action) to guarantee satisfaction of both goals.
  • [C]-Monotonicity Rule:
    from φ1 → φ2, infer [C]φ1 → [C]φ2
Some important derivable validities include:
  • Outcome Monotonicity: [C](φ1 ∧ φ2) → [C]φ1.
    This is derived directly by applying the [C]-monotonicity rule.
  • Coalition Monotonicity: [C1]φ → [C1 ∪ C2]φ.
    This is derived by applying the superadditivity axiom scheme to the coalitions C1 and C2∖C1 with respective goals φ and ⊤. Note that this validity, together with Agt-maximality, also implies the validity of [Agt]φ ∨ [Agt]¬φ for any formula φ.
  • Coalition Complementarity: [C]φ → ¬[C̄]¬φ, where C̄ = Agt∖C.
    This is derived by applying the superadditivity axiom scheme to the coalitions C and C̄ with respective goals φ and ¬φ, and then the safety axiom scheme.
The problem of deciding whether a given formula in the logic CL is valid, respectively, satisfiable, is decidable [4,5]. This and other technical results about CL are subsumed by respective results for the more expressive logic ATL , presented in the next section.
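The axioms themselves can be sanity-checked semantically by brute force on small models, using the evaluator sketched in Section 4.3. A minimal illustration on a throwaway two-agent model (all details here are invented for the test):

```python
# A tiny two-agent model: the outcome is t iff the two chosen actions agree.
M = CGM(
    agents=["a", "b"],
    states={"s", "t"},
    act=lambda ag, w: {"0", "1"},
    out=lambda w, zeta: "t" if zeta["a"] == zeta["b"] else "s",
    label=lambda w: {"p"} if w == "t" else set(),
)

def implies(phi, psi):  # material implication, encoded via "not" and "and"
    return ("not", ("and", phi, ("not", psi)))

# An instance of Superadditivity: ([a]p and [b]p) -> [{a,b}]p.
inst = implies(("and", ("coal", ["a"], ("atom", "p")),
                       ("coal", ["b"], ("atom", "p"))),
               ("coal", ["a", "b"], ("atom", "p")))
assert all(holds(M, s, inst) for s in M.states)
```

This is, of course, only a test of one instance on one model, not a proof of validity; validity checking proper requires the decision procedures mentioned above.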

5. Logics for Unconditional Long-Term Strategic Reasoning

The logic CL can only capture strategic reasoning about abilities to achieve local, i.e., one-step goals. It can be naturally extended with temporal operators to reason about long-term unconditional strategic abilities, of the type:
“The coalition C has a joint strategy to guarantee achievement of the coalitional goal of C in every outcome play resulting from C following that strategy”.
This is the informal reading of the strategic operator ⟨⟨C⟩⟩ in the alternating time temporal logic ATL and family, introduced (independently from the logic CL) in [7].
I will present and illustrate here the family of ATL -related logics. For more detailed and technically involved surveys on these logics, see [8,9].

5.1. The Alternating-Time Temporal Logic ATL*

The full alternating-time temporal logic ATL*, introduced in [7], involves long-term temporal operators, the standard repertoire being “next-time” X, “always” G, and “until” U.
The strategic operator in ATL* is ⟨⟨C⟩⟩ϕ, denoting the claim that
“The coalition C has a joint strategy that guarantees the satisfaction of the goal ϕ on every outcome play induced by that joint strategy”,
where ϕ is any formula of ATL * . The language of ATL * is formally defined as follows. It involves two sorts of formulae: state formulae, that are evaluated at game states, and path formulae, that are evaluated on game plays. These are, respectively, defined by the following grammars, where p Prop and C Agt :
State formulae: φ ::= p ∣ ¬φ ∣ (φ ∧ φ) ∣ ⟨⟨C⟩⟩γ.
Path formulae: γ ::= φ ∣ ¬γ ∣ (γ ∧ γ) ∣ Xγ ∣ Gγ ∣ (γ U γ).
Thus, the path formulae are used to formalize goals.
The language of ATL* is too expressive, and that both creates problems with its intuitive interpretation (see discussions in [38,39,40]) and raises the complexity cost of computing the truth of an ATL* formula to 2EXPTIME (cf. [7]). To alleviate these, the fragment ATL was also introduced in [7], which only contains state formulae, defined by the grammar:
φ ::= p ∣ ¬φ ∣ (φ ∧ φ) ∣ ⟨⟨C⟩⟩Xφ ∣ ⟨⟨C⟩⟩Gφ ∣ ⟨⟨C⟩⟩(φ U φ),
Thus, the coalitional goals in ATL only involve simple patterns with a single temporal operator, which will be called simple temporal goals. We define ⟨⟨C⟩⟩Fφ as ⟨⟨C⟩⟩(⊤ U φ).
The logic CL embeds into ATL as the fragment where the goals can only be of the type Xφ, for a state formula φ (cf. [41]), i.e., [C]φ ≡ ⟨⟨C⟩⟩Xφ.
Hereafter, in this section, I will focus on the logic ATL .

5.2. Formal Semantics of ATL

Similarly to CL, the formulae of ATL* are interpreted in concurrent game models. To keep the presentation less technical, I will only give the formal semantics for the fragment ATL, which consists only of state formulae; for full details of the semantics of ATL*, see, e.g., [8].
Consider a CGM M = (Agt, S, Act, act, Out, Prop, L). Truth of an ATL-formula ψ at a state s in M, denoted M, s ⊨ ψ, is defined by structural induction on formulae. The propositional cases are as in CL, and the only new clauses are those for the strategic operators. They all follow the same pattern, which is essentially the semantic clause for ⟨⟨·⟩⟩ in ATL*:
M, s ⊨ ⟨⟨C⟩⟩γ iff there exists a joint strategy Σ_C for C such that γ is true on every play in Plays(s, Σ_C), i.e., every play starting at s and induced by Σ_C.
The specific clauses for the temporal operators defining the goal γ are as follows, where s 0 = s :
  • M, s ⊨ ⟨⟨C⟩⟩Xφ iff there exists a joint strategy Σ_C such that M, s_1 ⊨ φ for every play s_0, s_1, … ∈ Plays(s, Σ_C).
  • M, s ⊨ ⟨⟨C⟩⟩Gφ iff there exists a joint strategy Σ_C such that M, s_i ⊨ φ for every play s_0, s_1, … ∈ Plays(s, Σ_C) and every i ≥ 0.
  • M, s ⊨ ⟨⟨C⟩⟩(φ U ψ) iff there exists a joint strategy Σ_C such that for every play s_0, s_1, … ∈ Plays(s, Σ_C) there is i ≥ 0 for which M, s_i ⊨ ψ and M, s_j ⊨ φ for all j such that 0 ≤ j < i.
Satisfiability and validity in ATL are defined similarly to CL.
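On finite models, the semantics of the ATL modalities can be computed by the standard fixpoint constructions. For instance, the states satisfying ⟨⟨C⟩⟩Gφ form the greatest fixpoint of the one-step “force” operator. A sketch over the hypothetical CGM class from Section 2:

```python
from itertools import product

def pre_force(M, C, target):
    """One-step force: states where C has a joint action sending all outcomes into target."""
    result = set()
    for w in M.states:
        for combo in product(*[M.act(a, w) for a in C]):
            joint = dict(zip(C, combo))
            if M.outcomes(w, joint) <= target:   # every outcome lands in target
                result.add(w)
                break
    return result

def globally(M, C, phi_states):
    """States satisfying <<C>>G phi: greatest fixpoint of Z = phi_states ∩ Pre_C(Z)."""
    Z = set(phi_states)
    while True:
        Z_new = set(phi_states) & pre_force(M, C, Z)
        if Z_new == Z:
            return Z
        Z = Z_new
```

The until modality ⟨⟨C⟩⟩(φ U ψ) is computed dually, as a least fixpoint, and the strategies extracted from these fixpoints are positional, which is consistent with the observation below.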
It can be shown (cf., e.g., [36]) that memoryless strategies suffice for the semantics of ATL formulae, but that is no longer the case even when just conjunctions of simple temporal goals are allowed. Indeed, consider the ATL* formula ϕ = ⟨⟨a⟩⟩(Fp ∧ Fq) and the concurrent game model in Figure 2 for the only agent a. Clearly, there is no positional strategy satisfying ϕ at s_0, as such a strategy must always prescribe the same action, L or R, there, so either s_1 or s_2, but not both, will be visited. On the other hand, there is a simple strategy for a using a bit of memory that satisfies ϕ, e.g., prescribing the action L the first time s_0 is visited, and the action R the next time.

5.3. Expressing Claims about Strategic Abilities in ATL

Here are some examples of formalizing in ATL statements about agents’ long-term strategic abilities.
1.
⟨⟨Eve⟩⟩F H_E → ¬⟨⟨Adam⟩⟩G ¬H_E
If Eve has a strategy ensuring that she eventually becomes happy, then Adam cannot forever prevent Eve from reaching a state where she is happy.
It should be intuitively clear that this statement expresses a valid principle of ATL . Indeed, this is the case.
2.
(⟨⟨Yin⟩⟩G Safe ∧ ⟨⟨Yin⟩⟩F Goal) → ⟨⟨Yin⟩⟩(Safe U Goal)
If  Yin  has a strategy to keep the system in safe states forever and has a strategy to eventually achieve its goal, then  Yin  has a strategy to keep the system in safe states until it achieves its goal.
The formula above is not logically valid. Indeed, the strategies of Yin to keep the system in safe states forever and to eventually achieve its goal may be incompatible.
3.
(⟨⟨Yin⟩⟩G Safe ∧ ⟨⟨Yang⟩⟩F Goal) → ⟨⟨{Yin, Yang}⟩⟩(Safe U Goal)
If  Yin  has a strategy to keep the system in safe states forever and  Yang  has a strategy to eventually reach a goal state, then  Yin  and  Yang  together have a strategy to stay in safe states until a goal state is reached.
Assuming that Yin and Yang are distinct agents, the formula above is logically valid, unlike the previous one. Indeed, this is due to the independence of the actions of the two agents, and hence of their abilities to execute their respective strategies.
4.
⟨⟨A⟩⟩F ⟨⟨B⟩⟩G ¬φ
The coalition A has a joint strategy to eventually ensure that the coalition B has a strategy to prevent φ from ever happening.
This example raises the question of how the semantics works in the case of nested strategic operators. Suppose the coalitions A and B intersect and a is an agent in both of them. Then, the claim of the external strategic operator ⟨⟨A⟩⟩ requires, inter alia, the existence of a strategy for the agent a within a joint strategy σ_A for A that guarantees the eventual satisfaction of the subformula ⟨⟨B⟩⟩G¬φ. However, when evaluating that subformula, to justify its truth one has to identify a respective joint strategy σ_B for B. Now, the question arises whether the strategy of a within σ_B should not be assumed to be already fixed by σ_A or, conversely, whether the strategy of a within σ_A should not be assumed to be already fixed by σ_B. Note that the standard formal semantics for ATL* (in particular, for ATL) presented here does not impose any such constraints but, rather, treats these strategies independently. That is, the standard semantics of ATL* does not commit the agents in A to the strategies they adopt in order to bring about the truth of the formula ⟨⟨A⟩⟩γ. That creates some conceptual issues with the very concept of “strategy”, independently addressed in different ways in [38,39,42,43], where several proposals were made to incorporate strategic commitment or uncommitment and persistent strategies into the syntax and semantics of ATL*. These raise a number of still open technical problems regarding constructing provably complete axiomatizations, proving decidability, and designing decision procedures for the variations of ATL and ATL* mentioned above.

6. The Logic of Conditional Strategic Reasoning ConStR

Let us now look at the reasoning of (and about) an agent, conditional on that agent’s knowledge of the goals and choices of actions of the other agents. Note that, while knowledge is not explicitly included in the syntax of the logics considered here, it is implicit in the agents’ reasoning, as well as in the reasoning of external observers about the agents’ abilities, and it is also implicitly assumed in the formal semantics.

6.1. Conditional Strategic Reasoning: An Informal Discussion

I first focus on a simple case: two agents, Alice and Bob, acting independently, and possibly concurrently with other agents. Alice has a goal, γ A , to achieve. Suppose that Alice has several possible choices of action that would possibly, or certainly, ensure the achievement of her goal. Bob also has a goal, γ B , of his own and has several possible actions (there may be other agents, besides Alice and Bob, also acting in pursuit of their own goals, but we will ignore them here). Now, based on his knowledge about Alice’s goal and possible choices of actions she may take towards that goal, Bob decides on his own choice of action in pursuit of his goal.
Here is a simple illustrating scenario, discussed in [27]. Alice and Bob are students at Downtown University. Alice is coming to campus today to meet with her supervisor. Bob wants to meet with Alice somewhere on campus today. Alice may not know that, and they may have no direct communication. In turn, Bob may, or may not, know what Alice is going to do on campus. Now, using his knowledge of what Alice intends to do there, and where and when, Bob wants to come up with a plan of where and how to meet her.
This calls for a conditional strategic reasoning with statements of the type:
For some/every action of Alice that guarantees achievement of her goal  γ A ,
Bob has/does not have an action of his own to guarantee achievement of his goal  γ B .
I will focus here on local conditional strategic reasoning, which only refers to the immediate actions of the agents, not to their long-term global strategies.
As we will see further, the standard logics for multiagent strategic reasoning, such as CL and ATL , cannot capture this type of conditional reasoning. A more expressive logic for conditional strategic reasoning is needed here.
Depending on Bob’s knowledge about Alice’s goal and of her possible or expected choices of action, there can be several possible cases for Bob’s reasoning.

6.1.1. Case 1: Bob Does Not Know Alice’s Goal or Actions

The simplest case is when Bob does not know Alice’s goal, or her available actions, and therefore has no a priori expectations about her choice of action. Then, Bob can only ensure that γ B will occur if he has an action to make γ B true, regardless of how Alice (and all others) act. For instance, in our scenario, suppose the campus has only one entrance. If Bob is standing by the only entrance of the campus, then he is sure to meet Alice when she comes, no matter what she will do there. This can be simply expressed in coalition logic as [ B ] γ B .

6.1.2. Case 2: Bob Knows Alice’s Goal and Possible Actions

Suppose now, that Bob knows Alice’s goal, as well as all possible actions of Alice that can ensure the satisfaction of her goal. Thus, Bob knows that Alice will perform one of these actions, but possibly does not know which one. For instance, in our scenario, Bob knows that Alice is coming to campus to meet with her supervisor and she can meet with him either in his office, or in the lecture room, or in the café.
We can express Bob’s conditional ability to achieve his goal as follows:
Whichever way Alice acts towards achieving her goal  γ A ,
Bob can act so as to ensure achievement of his goal  γ B ”.
This claim can no longer be expressed in coalition logic, except in the special case when the satisfaction of Alice’s goal guarantees the satisfaction of Bob’s goal, too (thus, Bob need not do anything about that). That can again be simply expressed in CL as [Ø](γ_A → γ_B).

6.1.3. Reactive and Proactive Ability

The case above admits two essentially different readings, discussed further: as a reactive ability and as a proactive ability. (In [27], these were called ability de dicto and ability de re.)
Bob’s reactive ability. Suppose first, that Bob will know Alice’s choice of action when he is to choose his action. Then, Bob’s ability to achieve his goal is reactive, meaning that for every action of Alice that ensures her goal γ A , Bob has an action of his, possibly dependent on Alice’s action, that would also ensure his goal, γ B . For instance, in our scenario, suppose that Alice’s supervisor tells Bob where and when he is going to meet with Alice. Then Bob can wait for Alice at the respective place. This claim cannot be expressed in CL , so a new operator is needed for it.
Bob’s proactive ability. Suppose now that Bob will not know Alice’s action when he is to choose his. For instance, in our scenario, suppose all that Bob knows is that Alice will meet with her supervisor either in his office, or in the lecture room, or in the café, but he does not know where. Now, for Bob to ensure that his goal will be achieved, he must have a uniform choice of action that makes γ_B true when applied together with any action of Alice that ensures the truth of her goal γ_A. For instance, in our scenario, suppose that all possible meeting places for Alice and her supervisor are in the same building; then Bob can wait for her at the only entrance of that building. That is, Bob must have a proactive ability to achieve his goal.
This cannot be expressed in CL , either, so a new operator is needed again.
Remark 5. 
The notions of proactive and reactive ability, respectively, generalize the notions of α-effectivity and β-effectivity in game theory (cf., e.g., [44]). Indeed, in the context of the Alice and Bob story above, the proactive and reactive ability of Bob correspond precisely, respectively, to α-effectivity and β-effectivity either when γ_A ≡ ⊤ (so, any action of Alice is equally good for achieving γ_A) or when Bob does not know γ_A and hence can expect any action from Alice. Furthermore, one can still associate proactive and reactive ability with α-effectivity and β-effectivity if one explicitly assumes individual rationality of Alice, implying that she would only choose actions that would ensure γ_A. Thus, one can regard proactive and reactive ability, respectively, as conditional α-effectivity and conditional β-effectivity, terminology which will be used further.
Lastly, an important point: even though the knowledge of the agent (here, Bob) about the other’s goals and possible actions is essential, it will not feature in our formal logical language, nor in the formal semantics, but only in the external reasoner’s analysis of which case of conditional strategic ability applies.

6.1.4. Case 3: Assuming Alice’s Cooperation

Suppose now that Alice also knows Bob’s goal and can choose to cooperate with Bob by selecting a suitable action σ_A that would not only guarantee achievement of her goal γ_A, but would also enable Bob to supplement σ_A with an action σ_B of his, which would then also guarantee achievement of his goal γ_B (we also assume that Alice knows enough about Bob’s possible actions). This scenario cannot be formalized in CL, either.

6.2. Modal Operators for Conditional Strategic Reasoning

I will now present three new binary modal operators for conditional strategic reasoning, for any coalitions A and B , with intuitive semantics corresponding to the three reasoning cases in Section 6.1, i.e., formalizing, respectively, reactive abilities, proactive abilities, and abilities under cooperation.
Below, A and B are coalitions and γ A and γ B are their respective goals. In the special case when A and B are singletons, they are assumed to be different agents.
( O α )
[A]^α(γ_A; B γ_B) means that the coalition B∖A of agents who are in B but not in A has a joint action σ_{B∖A} such that if A applies any joint action that guarantees the truth of γ_A, then B∖A can ensure the truth of γ_B by applying σ_{B∖A}.
This operator formalizes the notion of a coalition’s proactive ability, discussed in the special case of single-agent coalitions in Section 6.1.3, and corresponds to the game-theoretic notion of conditional α-effectivity, hence the notation. Note that the agents in B ∩ A (if any) are assumed to act on behalf of A in its pursuit of γ_A.
( O β )
[A]^β(γ_A; B γ_B) means that for any joint action σ_A of A that, when applied by A, guarantees the truth of γ_A, there is a joint action σ_{B∖A} that guarantees γ_B when additionally applied by B∖A. Note that [A]^β(⊥; B γ_B) is vacuously true for any A, B, and γ_B, as then there are no joint actions σ_A that would enable satisfying ⊥. This may sound odd, but it is no special phenomenon in ConStR, as the same effect occurs in FOL with universal quantification over an empty set of objects.
This operator formalizes a claim of the ability of the coalition B to choose a suitable joint action so as to achieve the goal γ_B, assuming that A acts so as to achieve the goal γ_A, if B is to choose its joint action after B learns the joint action of A. In this case, the actions of the agents in B ∩ A (if any) are assumed to be already fixed by σ_A.
This corresponds to the notion of agents’ reactive ability discussed in Section 6.1.3 and to the game-theoretic notion of conditional β-effectivity, hence the notation.
( O c )
⟨A⟩_c(γ_A; B γ_B) means that A has a joint action σ_A which, when applied, guarantees the truth of γ_A and enables B∖A to additionally apply a joint action σ_{B∖A} that then guarantees γ_B.
This operator formalizes Case 3 discussed in Section 6.1, where A knows the goal of B and can choose to cooperate with B by selecting an action among those that ensure satisfaction of γ A which is also suitable for B .

6.3. The Logic ConStR : Language and Some Definable Operators

We fix a finite nonempty set of agents Agt and a countable set of atomic propositions Prop. The formulae of ConStR, where p ∈ Prop and A, B ⊆ Agt, are defined as follows:
ϕ ::= p ∣ ¬ϕ ∣ (ϕ ∧ ϕ) ∣ [A]α(ϕ; B ϕ) ∣ [A]β(ϕ; B ϕ) ∣ ⟨A⟩c(ϕ; B ϕ)
Here are some definable operators and expressions in ConStR , which can be easily seen from the informal semantics above, and can also be easily verified with the formal semantics introduced further.
  • The coalitional operator from CL is definable by means of each of Oc, Oα, Oβ, as follows:
    (Oα) [C]ϕ ≡ [Ø]α(⊤; C ϕ).
    This claims an unconditional ability of C to choose an action that guarantees ϕ.
    (Oβ) [C]ϕ ≡ [Ø]β(⊤; C ϕ), or [C]ϕ ≡ [C̄]β(⊤; C ϕ).
    The only strategy of the empty coalition (vacuously) guarantees the satisfaction of ⊤.
    (Oc) [C]ϕ ≡ ⟨C⟩c(ϕ; C ϕ), or [C]ϕ ≡ ⟨C⟩c(ϕ; C ⊤).
  • ⟨A⟩c(γ_A; B γ_B) is equivalent to ⟨⟨A⟩⟩(γ_A ∧ ⟨⟨B\A⟩⟩γ_B) in ATL*.
  • The negated Oc operator, ¬⟨A⟩c(ϕ; B ψ), says that every joint action of A that, when applied, guarantees the truth of ϕ would prevent B from acting additionally so as to guarantee ψ. This formalizes the conditional reasoning scenario where the goals of A and B are conflicting, so whichever way A acts towards its goal would block B from acting to guarantee the achievement of B's goal.
  • [A]β(⊤; B ψ) essentially formalizes the case when the agents in B are not informed about the goal of A, but have to choose their action after learning the action of A.
  • On the other hand, [A](ϕ | ψ) := [A]β(ϕ; Ø ψ), also equivalent to [A]α(ϕ; Ø ψ), says that any joint strategy of A that guarantees ϕ to be true also guarantees ψ to be true. This formalizes the claim of an observer, who knows both the goal ϕ and the possible joint actions of A, that the outcome of the joint action of A will also satisfy ψ.
  • ⟨A⟩(ϕ | ψ) := ¬[A](ϕ | ¬ψ) says that there is a joint strategy of A that ensures the truth of ϕ and also enables the satisfaction of ψ.
    ⟨A⟩(ϕ | ψ) is also definable as ⟨A⟩c(ϕ; Ā ψ), where Ā = Agt \ A.
  • The coalitional operator [A] from CL is a special case of the above: [A]ϕ ≡ ⟨A⟩(ϕ | ⊤).

6.4. Formal Semantics of ConStR

Given coalitions A, B ⊆ Agt and joint actions σ_A for A and σ_B for B, we define the ordered join of σ_A and σ_B to be the joint action σ_A ⊔ σ_B for A ∪ B which equals σ_A when restricted to A and equals σ_B when restricted to B \ A. Thus, in particular, σ_A ⊔ σ_B = σ_A for any B ⊆ A ⊆ Agt, and σ_A ⊔ σ_B = σ_A ∪ σ_B when A and B are disjoint.
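Concretely, if a joint action is represented as a map from agents to actions, the ordered join is simply a dictionary merge with priority given to σ_A. Here is a minimal Python sketch of that idea (the dict representation is an assumption of this illustration, not part of the formal definition):

```python
# Joint actions as dicts mapping agents to actions (illustrative encoding only).
# The ordered join keeps sigma_A on A and falls back to sigma_B on B \ A,
# matching the definition of the ordered join above.
def ordered_join(sigma_A: dict, sigma_B: dict) -> dict:
    return {**sigma_B, **sigma_A}  # keys of sigma_A override the overlap

sigma_A = {"a1": "left", "a2": "up"}
sigma_B = {"a2": "down", "a3": "right"}
assert ordered_join(sigma_A, sigma_B) == {"a1": "left", "a2": "up", "a3": "right"}
```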
Now, let M = ( Agt , S , Act , g , Prop , L ) be a concurrent game model.
The formal semantics of ConStR extend the semantics of CL to the new operators as follows (recall the definition of Out [ s , σ ] from Section 2):
  • M,s ⊨ [A]α(ϕ; B ψ) iff
    B has a joint action σ_B such that, for every joint action σ_A of A:
    if M,u ⊨ ϕ for every u ∈ Out[s, σ_A], then M,u ⊨ ψ for every u ∈ Out[s, σ_A ⊔ σ_B].
  • M,s ⊨ [A]β(ϕ; B ψ) iff
    for every joint action σ_A of A such that M,u ⊨ ϕ for every u ∈ Out[s, σ_A],
    B has a joint action σ_B (generally, dependent on σ_A) such that
    M,u ⊨ ψ for every u ∈ Out[s, σ_A ⊔ σ_B].
  • M,s ⊨ ⟨A⟩c(ϕ; B ψ) iff
    A has a joint action σ_A such that M,u ⊨ ϕ for every u ∈ Out[s, σ_A], and
    B has a joint action σ_B such that M,u ⊨ ψ for every u ∈ Out[s, σ_A ⊔ σ_B].
Remark 6. 
The semantics of each of the operators above can be restated to consider joint actions for B \ A rather than the whole B. For instance, it can be easily verified that M,s ⊨ [A]α(ϕ; B ψ) iff B \ A has a joint action σ_{B\A} such that, for every joint action σ_A of A, if M,u ⊨ ϕ for every u ∈ Out[s, σ_A], then M,u ⊨ ψ for every u ∈ Out[s, σ_A ⊔ σ_{B\A}].
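To make these clauses concrete, here is a small brute-force Python sketch that evaluates the three operators at the (single) initial state of a toy two-agent, one-step model. The model, its valuation, and all names here are illustrative assumptions; the valuation is chosen so that the sketch reproduces the separation between Oβ and Oα illustrated in Example 8 below:

```python
from itertools import product

# A direct transcription of the three semantic clauses above, over a tiny
# hypothetical two-agent one-step model (all names are illustrative assumptions).
AGENTS = ("a", "b")
ACTIONS = {"a": ("a1", "a2"), "b": ("b1", "b2")}
TRANS = {(x, y): f"t_{x}_{y}" for x in ACTIONS["a"] for y in ACTIONS["b"]}
LABEL = {"t_a1_b1": {"p"}, "t_a1_b2": {"p", "q"},
         "t_a2_b1": {"p", "q"}, "t_a2_b2": {"p"}}

def out(sigma):  # Out[s0, sigma_C]: successors under all completions of sigma
    rest = [ag for ag in AGENTS if ag not in sigma]
    return {TRANS[tuple({**sigma, **dict(zip(rest, compl))}[ag] for ag in AGENTS)]
            for compl in product(*(ACTIONS[ag] for ag in rest))}

def sat(goal, states):  # an atomic goal holds at all of the given states
    return all(goal in LABEL[u] for u in states)

def acts(C):  # all joint actions of coalition C
    return [dict(zip(C, js)) for js in product(*(ACTIONS[ag] for ag in C))]

def join(sA, sB):  # the ordered join: sigma_A takes precedence on the overlap
    return {**sB, **sA}

def alpha(A, phi, B, psi):  # [A]alpha(phi; B psi) at s0
    return any(all(not sat(phi, out(sA)) or sat(psi, out(join(sA, sB)))
                   for sA in acts(A)) for sB in acts(B))

def beta(A, phi, B, psi):   # [A]beta(phi; B psi) at s0
    return all(not sat(phi, out(sA)) or
               any(sat(psi, out(join(sA, sB))) for sB in acts(B))
               for sA in acts(A))

def coop(A, phi, B, psi):   # <A>c(phi; B psi) at s0
    return any(sat(phi, out(sA)) and
               any(sat(psi, out(join(sA, sB))) for sB in acts(B))
               for sA in acts(A))

print(beta(["a"], "p", ["b"], "q"), alpha(["a"], "p", ["b"], "q"))  # True False
```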
As usual, a formula ϕ of ConStR is called valid in ConStR, denoted ⊨_ConStR ϕ, iff M,u ⊨ ϕ for every concurrent game model M and state u in M; ϕ is satisfiable in ConStR iff M,u ⊨ ϕ for some concurrent game model M and state u in M.
Here are some easy observations on the relationships between the operators in ConStR :
Proposition 7. 
The following validities and nonvalidities hold in ConStR:
1. ⊨_ConStR [a]α(p; b q) → [a]β(p; b q).
2. ⊭_ConStR [a]β(p; b q) → [a]α(p; b q).
3. ⊭_ConStR ⟨a⟩c(p; b q) → [a]β(p; b q).
4. ⊭_ConStR ⟨a⟩c(p; b q) → [a]α(p; b q).
5. ⊭_ConStR [a]α(p; b q) → ⟨a⟩c(p; b q).
6. ⊭_ConStR [a]β(p; b q) → ⟨a⟩c(p; b q).
7. ⊨_ConStR ([a]p ∧ [a]β(p; b q)) → ⟨a⟩c(p; b q).
Proof. 
1. Follows immediately from the semantic definitions, since ∃∀ implies ∀∃.
2. A counter-model is shown further, in Example 8.
3. A counter-model is also shown in Example 8.
4. Follows from 1 and 3 above.
5. This holds for the trivial reason that if, in a given model, the agent a has no action that ensures the truth of p at the given state, then the antecedent is vacuously true, whereas the consequent is false there.
6. Likewise.
7. An easy semantic exercise.
Example 8. 
Consider the game model M in Figure 3, with two players, a and b, where at state s0, a has 3 actions, a1, a2, and a3, and b has 2 actions, b1 and b2.
Note that:
1. M,s0 ⊭ [b]q, whereas M,s0 ⊨ ⟨a⟩c(p; b q). Thus, an agent may have only a conditional ability to achieve its goal.
2. M,s0 ⊭ [a]α(p; b q). Indeed, neither b1 nor b2 ensures q against both choices a1 and a2 of a. Thus, b does not have a uniform action to ensure q against any action of a that ensures p.
3. M,s0 ⊨ [a]β(p; b q). Indeed, a has two actions at state s0 that ensure p: a1 and a2. For each of them, b has an action to ensure q: choose b2 if a chooses a1, and choose b1 if a chooses a2. Therefore, ⊭_ConStR [a]β(p; b q) → [a]α(p; b q). Thus, the claim made by the proactive ability operator Oα is stronger than the claim made by the reactive ability operator Oβ.
4. On the other hand, if the outcomes of (a2, b1) and (a2, b2) are swapped, then [a]α(p; b q) becomes true at s0 in the resulting model.
5. Furthermore, clearly, [a]α(p; b q) → [a]β(p; b q) is a valid formula of ConStR.
6. If the model is modified by making p true also at s6, then in the resulting model M′ we have M′,s0 ⊨ ⟨a⟩c(p; b q) and M′,s0 ⊭ [a]β(p; b q). Therefore, ⊭_ConStR ⟨a⟩c(p; b q) → [a]β(p; b q).
7. Note also that (M, s0) does not satisfy the ATL formula [[a]](Xp → ⟨⟨b⟩⟩Xq) (where [[C]]ϕ := ¬⟨⟨C⟩⟩¬ϕ); hence, that formula is not equivalent to [a]β(p; b q).
For more examples and observations on the mutual nonexpressiveness of the operators in ConStR , see [28]. Moreover, using the notions of bisimulation defined there for each of these operators, one can show that none of the three binary operators in ConStR can be defined in terms of the other two.

6.5. On the Axiomatization and Decidability for ConStR

A system of axioms Ax_ConStR for ConStR is presented in [28]. It involves lists of axioms for each of the operators Oc, Oα, and Oβ, as well as the following two interaction axiom schemes:
( ConStR  1) 
[A]α(ϕ; B ψ) → [A]β(ϕ; B ψ).
( ConStR  2) 
([A]ϕ ∧ [A]β(ϕ; B ψ)) → ⟨A⟩c(ϕ; B ψ).
These schemes are generated by uniform substitutions in the respective validities 1 and 7 in Proposition 7. Notably, the only axiom scheme identified there which distinguishes the operators O α and O β is the following antimonotonicity (with respect to A ) scheme:
[A ∪ C]α(ϕ; B ψ) → [A]α(ϕ; B ψ), for any C ⊆ Agt,
which is valid for O α but not O β . For proof of this claim, and for the other axiom schemes and inference rules for each of the operators of ConStR , see [28].
The only additional (to modus ponens) inference rules are the O-monotonicity rules, for O being each of Oα, Oβ, and Oc. For instance, here is the Oc-monotonicity rule:
from ϕ → ϕ′ and ψ → ψ′, infer ⟨A⟩c(ϕ; B ψ) → ⟨A⟩c(ϕ′; B ψ′).
The completeness proof for Ax_ConStR is still under development, so its completeness is currently only a conjecture, as is the completeness of each of the three main subsystems, one for each of the respective operators. For all I currently know, these are quite challenging problems.
Another conjecture is the decidability of satisfiability in ConStR, and even in each of its three main subsystems. Here, I only claim that it can be proved by a model-theoretic argument based on the bounded tree-model property, meaning that every satisfiable formula of ConStR is satisfiable in a finite tree-like model whose height and branching factor are effectively bounded in terms of the size of the formula. (A detailed proof of this claim is currently under construction, so the claim is, at present, still a working conjecture.)

7. The Socially Friendly Coalition Logic SFCL

The “socially friendly coalition logic” SFCL was introduced in [25] with the aim of capturing and formalizing the following idea: while socially responsible (or, “friendly”) rational agents and coalitions act in pursuit of their own goals, when deciding on their course of action they can take into account the goals of the other agents in the system and, whenever their goals can be reconciled with those of the others, make allowance for cooperation enabling the satisfaction of the others' goals, too. Formally, the idea is implemented by introducing in the formal language a “socially friendly coalitional operator”, presented here.

7.1. Socially Friendly Coalitional Operators

SF 
The socially friendly coalitional operator SF takes any positive number of formulae ϕ, ψ1, …, ψk as arguments and combines them into a single formula as follows:
[C](ϕ; ψ1, …, ψk).
I will call the formula ϕ above the primary goal of the formula (and of the coalition C), and the formulae ψ1, …, ψk its secondary goals.
The intuitive meaning of the formula [C](ϕ; ψ1, …, ψk) is that
“C has a joint action σ_C that guarantees the satisfaction of ϕ and also enables the complementary coalition C̄ to satisfy any one of the goals ψ1, …, ψk by applying a respectively suitable joint action”.
The operator SF is a multiagent extension of the modal operator □(ψ1, …, ψk; ϕ) (note the different order of the arguments) in the instantial neighborhood logic (INL) introduced and studied in [45].
The special case of the “socially friendly coalitional operator” with a single secondary goal, [A](ϕ; ψ), is equivalent to the operator ⟨A⟩(ϕ | ψ) defined in Section 6.
In particular, the strategic operator of CL is also definable here, as [C]ϕ ≡ [C](ϕ; ⊤).
SF1 
A refinement of SF: [C; C1, …, Cn](ϕ; ϕ1, …, ϕn), meaning:
“C has a collective action σ_C that guarantees ϕ and is such that, when fixed,
each C_i has a collective action that guarantees ϕ_i”.
This definition presumes that if C intersects with C_i, then the agents in C ∩ C_i are already committed to σ_C. On the other hand, the collective actions claimed to exist for the respective C_i need not be mutually compatible, similarly to the intuitive semantics of [C](ϕ; ψ1, …, ψk), which is a special case of SF1 where C1 = ⋯ = Cn = Agt \ C.
SF2 
[C1 ϕ1; …; Ck ϕk], meaning: “C1 has a collective action to guarantee ϕ1, and given that action, …, Ck has a collective action to guarantee ϕk”.
This is a sequential version of SF1, where the coalitions C1, …, Ck are arranged in decreasing priority order.
Hereafter, I will consider SF only.

7.2. Syntax and Semantics of the Logic SFCL

The formulae of SFCL are defined by the following grammar:
ϕ ::= p ∣ ¬ϕ ∣ (ϕ ∧ ϕ) ∣ [C](ϕ; ϕ, …, ϕ)   (p ∈ Prop)
The standard definitions of the other propositional connectives apply.
In addition, I define [C]ϕ := [C](ϕ; ⊤). Given a finite list of formulae Ψ = ψ1, …, ψk, I will also write [C](ϕ; Ψ).
The fragment of SFCL containing only formulae [C](ϕ; ψ), where ψ is a single formula, will be denoted by SFCL1.
The formal semantics of SFCL is given in terms of truth of an  SFCL -formula at a state s of a concurrent game model  M inductively, as in CL , with the following main clause:
  • M,s ⊨ [C](ϕ; Ψ) iff
    there exists a joint action σ_C of C available at s,
    such that M,u ⊨ ϕ for each state u in its outcome set O[s, σ_C],
    and for each ψ ∈ Ψ there is a state v ∈ O[s, σ_C] such that M,v ⊨ ψ.
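Continuing the illustrative Python sketch from Section 6.4, this clause can be transcribed directly, reusing the hypothetical helpers out, sat, acts, and the labelling LABEL defined there (again, a toy illustration over atomic goals, not an implementation of the formal semantics in full generality):

```python
def sf(C, phi, secondary):  # [C](phi; psi_1, ..., psi_k), evaluated at s0
    # some joint action of C makes phi true at all outcomes and, for each
    # secondary goal, leaves at least one outcome where that goal holds
    return any(sat(phi, out(sC)) and
               all(any(psi in LABEL[v] for v in out(sC)) for psi in secondary)
               for sC in acts(C))

print(sf(["a"], "p", ["q"]))  # True: e.g., action a1 ensures p and enables q
```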

7.3. Example 1: Negotiating the Family Budget

Example 9. 
This example is an adapted and extended version of one from [25]. Consider the couple Ann and Bill and their son Charlie. Each of them has saved some money. Now, they are sitting in a family meeting, negotiating how to spend their savings. Ann wishes for a complete kitchen renovation, Bill wants a new car, and Charlie dreams of a holiday trip to Disneyland.
Ann considers two options for a kitchen: a cheaper IKEA version, or a designer’s luxury version. Bill thinks of three options for a car: a cheap used black Ford, a more expensive new green Tesla, or an even more expensive, vintage pink Cadillac. As for Charlie, he would prefer the whole family to go for an expensive week-long family excursion to Disneyland in Paris, but could also settle for a cheaper 2-day car trip to Disneyland Park in California.
The possible actions of every family member or group are to pay for any option of their wish that they can afford, and then to leave the rest of their savings in the family money pool for the other(s) to use. Let us denote the respective goals by:
  • CK (cheap kitchen), EK (expensive kitchen), K = CK ∨ EK (any kitchen);
  • CC (cheap car), AC (average car), EC (expensive car), C = CC ∨ AC ∨ EC (any car);
  • CT (cheap trip), ET (expensive trip), T = CT ∨ ET (any trip).
Calculations show that Ann's choices and money give her the following strategic powers:
  • Ann can afford to pay for an expensive kitchen and then let the others choose some car or some trip; formally: [Ann](EK; C, T).
  • Alternatively, Ann can settle for a cheap kitchen and then let the others choose between a cheap car plus a cheap trip, or an expensive car, or an expensive trip; formally: [Ann](CK; CC ∧ CT, EC, ET).
  • However, if Ann opts for an expensive kitchen, then the family cannot afford an average car plus a trip; formally: ¬[Ann](EK; AC ∧ T).
Likewise, it turns out that Bill has the following strategic powers:
  • [Bill](C; K, T);
  • [Bill](CC; EK, ET).
  • However, ¬[Bill](EC; K ∧ T).
Charlie alone cannot afford to pay in full for any trip; he can only leave his savings in the family pool and try to negotiate some trip with his parents. Formally: ¬[Charlie](T; ⊤).
Lastly, here are some coalitional powers:
  • [{Ann, Bill}](C ∧ EK; CT);
  • [{Ann, Charlie}](CK ∧ ET; CC);
  • [{Bill, Charlie}](AC ∧ CT; K).
Now, one may ask, for instance, whether it can be derived from all of the above that the whole family can afford to buy the pink Cadillac and also drive it to Disneyland in California, i.e., whether [{Ann, Bill, Charlie}](EC ∧ CT; ⊤) holds; or whether Ann can obtain her dream designer's kitchen and still enable Bill to buy a new Tesla or the family to go to Disneyland in Paris, i.e., [Ann](EK; AC, ET), etc.
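To see the arithmetic behind such statements, here is a small feasibility check in Python for two of Ann's statements, under purely illustrative savings and prices (the text gives no numbers; these are invented so that Ann's statements above come out as stated, and nothing more is claimed for them):

```python
SAVINGS = {"Ann": 50, "Bill": 45, "Charlie": 10}   # hypothetical savings
COST = {"CK": 20, "EK": 60,                        # kitchens
        "CC": 20, "AC": 40, "EC": 80,              # cars
        "CT": 15, "ET": 35}                        # trips

pool = sum(SAVINGS.values())   # the whole family pool: 105
rest = pool - COST["EK"]       # what remains if Ann pays for the expensive kitchen

# [Ann](EK; C, T): after EK, the others can still afford some car, and some trip
print(rest >= min(COST[c] for c in ("CC", "AC", "EC")) and
      rest >= min(COST[t] for t in ("CT", "ET")))              # True

# not-[Ann](EK; AC & T): after EK, an average car plus any trip is unaffordable
print(all(rest < COST["AC"] + COST[t] for t in ("CT", "ET")))  # True
```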

7.4. Example 2: Job Applicants

Consider four job candidates, Alice, Bob, Carl, and Diana, who have applied to three companies: Banana, Megasoft, and Fakebook, each advertising one position.
Suppose no candidate has strong preferences between these jobs.
All candidates have been interviewed by each of the companies.
Each company has selected some of the candidates and ranked them in priority order; in that order, they are offered the job, which they can accept or decline. If an offer is declined, it passes to the next-ranked candidate, until the position is filled (if at all).
The rankings have been communicated to the candidates, as follows:
  • Banana: 1. Alice, 2. Bob, 3. Carl.
  • Megasoft: 1. Diana, 2. Bob.
  • Fakebook: 1. Alice, 2. Diana.
Each of the candidates must now make their strategic choice of which offer to accept, if and when given the option. Suppose the candidates can communicate with each other.
Here are some true statements formalized in SFCL , where Alice, Bob, Carl, and Diana are labeled as A , B , C , D and e X means “X obtains a job”:
  • [A](e_A; e_B ∧ e_D, e_C ∧ e_D, e_B ∧ e_C)
    Indeed, Alice can take the offer from Fakebook and leave it to the others to decide on the other two positions. Note that Bob can then choose either offer and enable either Carl or Diana to obtain a job, but he can also decline both offers and thus enable both of them to obtain a job.
  • [A](⊤; e_B ∧ e_C ∧ e_D)
    Alternatively, Alice can act selflessly by declining both offers, and thus enable all of the others to obtain a job (namely, by Diana taking the Fakebook offer, Bob declining Banana and taking Megasoft, and Carl then accepting the Banana offer).
  • ¬[C](e_C; ⊤) ∧ [C̄](⊤; e_C)
    Carl cannot be sure to obtain a job, but all of the others together can enable this.
  • ¬[D](e_D; e_B ∧ e_C) ∧ [D](e_D; e_B, e_C)
    Diana cannot be sure to obtain a job while enabling the others to see to it that both Bob and Carl obtain a job, but she can ensure that she obtains a job (by accepting the offer from Megasoft) while enabling the others to ensure that either Bob or Carl obtains a job, too.
  • [{A, D}](e_A ∧ e_D; e_B, e_C)
    Alice and Diana together can ensure that they both obtain a job and then either of the other two can obtain a job, too, up to their choice.
  • [{A, D}](e_A ∧ e_D ∧ ¬e_B ∧ ¬e_C)
    Alice and Diana can also be mean and act together so that only they obtain jobs, by accepting the respective offers from Banana and Megasoft and leaving the Fakebook position unfilled.
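To make the offer dynamics concrete, here is a small Python simulation of the selfless-Alice scenario above. The precise mechanics (companies fill their positions one at a time, in a fixed order, and a candidate who already holds a job declines further offers) are assumptions of this illustration, since the text leaves those details open:

```python
RANKINGS = {
    "Banana":   ["Alice", "Bob", "Carl"],
    "Megasoft": ["Diana", "Bob"],
    "Fakebook": ["Alice", "Diana"],
}

def run(accepts):
    """accepts[c] is the set of companies that candidate c is willing to accept."""
    hired = {}  # candidate -> company
    for company, ranking in RANKINGS.items():
        for candidate in ranking:
            if candidate not in hired and company in accepts[candidate]:
                hired[candidate] = company
                break  # the position is filled; the company stops offering
    return hired

# Alice declines everything; Bob holds out for Megasoft; Diana for Fakebook;
# Carl takes whatever he is offered.
profile = {"Alice": set(), "Bob": {"Megasoft"},
           "Diana": {"Fakebook"}, "Carl": {"Banana", "Megasoft", "Fakebook"}}
print(run(profile))  # Bob, Carl, and Diana each end up with a job
```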

7.5. Socially Friendly Coalition Logic SFCL: A System of Axioms

The system of axioms A x SFCL for SFCL , proposed in [25], combines A x CL with a multiagent extension of the axiomatization of the instantial neighborhood logic (INL) [45], plus some additional axiom schemes. Here is just one of these additional schemes from A x SFCL , where Ψ is a finite list of SFCL -formulae and Ψ , θ is the list obtained by appending θ to Ψ :
(INL5) 
[C](ϕ; Ψ) → [C](ϕ ∧ ¬θ; Ψ) ∨ [C](ϕ; Ψ, θ)
This is a valid scheme. Indeed, for any formula θ, a strategy of C that ensures ϕ and enables each formula in the list Ψ either also ensures ¬θ, in which case ¬θ can be added conjunctively to ϕ, or else enables θ, in which case θ can be added to the list Ψ.
The only additional (to modus ponens) inference rule is [C]-monotonicity:
from ϕ → ϕ′, ψ1 → ψ1′, …, ψk → ψk′, infer [C](ϕ; ψ1, …, ψk) → [C](ϕ′; ψ1′, …, ψk′).
Note that the rule RE in the axiomatic system for INL in [45] is an admissible rule here; this can be proved by a routine induction on the structure of φ in that rule. The Boolean cases follow from the completeness of the purely propositional fragment (where MP suffices), and the case φ = [C](…) is handled by using the [C]-monotonicity rule.
A completeness proof for Ax_SFCL was presented in [25], but an error was subsequently found in it (and an erratum has been posted). A corrected proof of completeness is still under development; therefore, the completeness claim is currently only a working conjecture.
Another conjecture is the decidability of satisfiability in SFCL . Here, I only claim that it can be proved by a model-theoretic argument, similar to the decidability of ConStR , based on the bounded tree-model property, meaning that every satisfiable formula of SFCL is satisfiable in a finite tree-like model of height and branching factor effectively bounded above in terms of the size of the formula. A proof of this claim, which is, at present, formally still a working conjecture, will be presented in a subsequent work.
Further open problems and directions for further research are related to various natural generalizations of SF, including SF1 and SF2. To my knowledge, they have not been explored and no technical results are even conjectured for them yet, except for some special cases explored in the context of conditional strategic reasoning in Section 6.

8. The Logic of Local Coalitional Goal Assignments (LCGA)

The logic LCGA was introduced in [25] as the “group protecting coalition logic” (GPCL), with the aim of formalizing the idea, complementary to that of SFCL, that while socially responsible rational agents and coalitions contribute with their individual and collective actions to the society of all agents, they wish to do so in a way that protects their individual and collective interests and goals. Formally, the idea is implemented by introducing in the formal language a “coalitional goal assignment operator” (called “group protecting coalitional operator” in [25]), presented here.

8.1. The Coalitional Goal Assignments Operator

The coalitional goal assignments operator takes a list of coalitions C1, …, Ck and a list of formulae ϕ1, …, ϕk representing their respective goals, and produces a formula of the type:
[C1 ϕ1, …, Ck ϕk].
The intuitive meaning of the claim formalized by the formula above is that there is an action profile σ of Agt such that, for each i, the restriction of σ to the coalition C_i is a joint action σ_i that guarantees the satisfaction of ϕ_i, even if some, or all, of the other agents deviate from the action profile σ. In particular, the CL formula [C]ϕ is equivalent to [C ϕ].
Now, I will define a more general and succinct representation of that operator. First, a coalitional goal assignment is a mapping γ: P(Agt) → Γ, where Γ is a set of goal formulae (which may, but need not, be the full logical language under consideration). Thus, for every coalition C, γ(C) expresses the goal of C.
Usually, most of the possible coalitions do not have coalitional goals and hence do not form. That can be formalized by assigning the trivially true goal ⊤ to each of them.
Now, the operator defined above naturally generalizes to one expressed simply as [γ], which is a concise representation of [C1 γ(C1), …, C_{2^n} γ(C_{2^n})], where C1, …, C_{2^n} is a (fixed) canonical enumeration of the set P(Agt) of all subsets (coalitions) of Agt.
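One concrete way to represent such a goal assignment, say in Python, is as a mapping from coalitions (frozensets of agents) to goal formulas, with the trivial goal ⊤ as the default value, so that only the nontrivial goals need to be stored. The encoding below is an illustrative assumption, and the goals shown anticipate Example 10 below:

```python
from collections import defaultdict

TOP = "⊤"                                 # the trivial goal
gamma = defaultdict(lambda: TOP)          # gamma(C) = ⊤ unless set otherwise
gamma[frozenset({"A", "B"})] = "H_A ∧ H_B"
gamma[frozenset({"A"})] = "H_B → H_A"
gamma[frozenset({"B"})] = "H_A → H_B"

print(gamma[frozenset({"A", "C"})])  # "⊤": coalitions without goals need no entry
```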

8.2. Syntax and Semantics of the Logic LCGA

The formulae of LCGA are given by the following grammar:
ϕ ::= p ∣ ¬ϕ ∣ (ϕ ∧ ϕ) ∣ [γ]   (p ∈ Prop)
where p Prop and γ is a coalitional goal assignment. All other standard propositional connectives are defined as usual.
I will use the notation [C1 ϕ1, …, Ck ϕk] as an explicit record of [γ], where γ is the (unique) coalitional goal assignment defined by γ(C1) = ϕ1, …, γ(Ck) = ϕk, and γ(C) = ⊤ for every other C ∈ P(Agt). Clearly, the two notations for the coalitional goal assignment operator are expressively equivalent. Furthermore, while the latter option seems generally more succinct, that gain vanishes if [γ] is defined as a partial goal assignment, assigning only the nontrivial goals.
The semantics of LCGA is given in terms of truth of an LCGA -formula at a state s of a concurrent game model  M . The only new semantic clause is the one for [ · ] :
M,s ⊨ [γ] iff there exists an action profile σ ∈ Σ_Agt, available at s,
such that M,u ⊨ γ(C) for every C ⊆ Agt and every u ∈ O[s, σ|_C].
The intuitive reading is spelled out in more detail as follows:
There is a strategy profile of the grand coalition, in which all agents participate with their individual strategies in such a way that each agent or coalition guarantees the satisfaction of its own goal against any possible deviations of all other agents, thus protecting its individual (respectively, coalitional) interests.
The clause for [γ] can also be given in terms of extensions, where Σ_Agt = ∏_{a ∈ Agt} Σ_a:
[[ [γ] ]]^M = { s ∈ S ∣ ∃σ ∈ Σ_Agt: O[s, σ|_C] ⊆ [[γ(C)]]^M for all C ⊆ Agt }

8.3. Some Observations and Examples of the Expressiveness of LCGA

  • The operator ⟨A⟩c(ϕ; B ψ), introduced in Section 6, is definable in terms of [·]: ⟨A⟩c(ϕ; B ψ) ≡ [A ϕ, A∪B ψ]. Nevertheless, it has a different motivation and intuitive interpretation there.
  • The fragment SFCL1 of SFCL (subsuming CL) embeds into LCGA by defining [C](ϕ; ψ) equivalently as [C ϕ, Agt ψ].
    However, this does not generalize to [C](ϕ; ψ1, …, ψk) for k ≥ 2.
    Conversely, the operator [C1 ϕ1, …, Cn ϕn] cannot be expressed in terms of the SFCL operators [C](ϕ; ψ1, …, ψk), either.
    These nonexpressiveness claims can be proved by using the respective notions of bisimulations introduced for each of these operators in [25].
  • [C1 ϕ1, C1∪C2 ϕ2, …, C1∪⋯∪Ck ϕk] is equivalent to the sequential version SF2 of the operator SF, mentioned in Section 7.1 as [C1 ϕ1; …; Ck ϕk], where the coalitions C1, …, Ck and their goals are arranged in decreasing priority order.
  • On the other hand, [C1 ϕ1, C2 ϕ2, …, Ck ϕk] is essentially expressible, up to a natural transformation of the models and semantics, in the group strategic STIT (cf. [13,14]), as ◇([C1 stit]Xϕ1 ∧ ⋯ ∧ [Ck stit]Xϕk). This observation (suggested by a reviewer) leads to a new computationally well-behaved fragment of the group strategic STIT, which is known to be not only undecidable but even nonaxiomatizable; cf. [14].
The following example is from [25], where it was adapted from [1].
Example 10. 
[password protected data sharing] Consider a scenario involving two players, Alice (A) and Bob (B). Each of them owns a server storing some data, to which access is password-protected. The two players want to exchange passwords, but neither player is sure whether to trust the other. Thus, their common goal is to successfully cooperate and exchange passwords, but each player also has the private goal not to give away their password in case the other one turns out to be untrustworthy and refuses access to their data (a side remark: nowadays, such deals are arranged by smart contracts).
Let us write H_A for “Alice has access to the data on Bob's server” and H_B for “Bob has access to the data on Alice's server”. Thus, the best possible outcome for Alice is H_A ∧ H_B and the worst possible one is ¬H_A ∧ H_B; symmetrically for Bob. So, can the two players cooperate to exchange passwords in a way that satisfies the provisos above? Clearly, we cannot answer that question in any definitive way without knowing the precise details of the actual setup.
For example, suppose we define the game as follows: each player chooses a password, which may or may not be the correct one, and sends it to the other player, and they perform that simultaneously. Then, the game ends. In this game, the coalition { A , B } can certainly force an outcome where each player has the other’s password, i.e., the LCGA -formula
[{A, B} (H_A ∧ H_B)]
 holds true. However, the strategy profile satisfying this goal assignment does not satisfy the players’ individual private goals, since each runs the risk of giving away their password without obtaining the other’s in return, if the other deviates. Therefore, the problem is more correctly described by the following stronger formula of LCGA :
[{A, B} (H_A ∧ H_B), A (H_B → H_A), B (H_A → H_B)]
Clearly, in the simple setup above, such a strategy profile does not exist. However, suppose we add a second round to the game, in which each player can first verify the other's password received in the first round (without yet being able to access the data) and can then either confirm the password they have sent, if the other's password is correct, or else change their password (thus making the shared one useless) if it is not. Then, there is a strategy profile that satisfies the specification above, and it thus enables the players to exchange passwords in a way that also protects their individual interests.
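The negative part of this analysis for the one-round game is small enough to verify by brute force. Here is an illustrative Python sketch (the encoding of actions and outcomes is an assumption made for this example):

```python
from itertools import product

ACTS = ("real", "fake")  # each player sends either the real or a fake password

def outcome(a, b):  # H_A: Alice gets access to Bob's data; H_B: vice versa
    return {"H_A": b == "real", "H_B": a == "real"}

def goal_AB(o): return o["H_A"] and o["H_B"]           # H_A ∧ H_B
def goal_A(o):  return o["H_A"] if o["H_B"] else True  # H_B → H_A
def goal_B(o):  return o["H_B"] if o["H_A"] else True  # H_A → H_B

def satisfies_assignment(a, b):
    # the grand coalition's goal must hold under the full profile, while each
    # player's private goal must withstand any deviation of the other player
    return (goal_AB(outcome(a, b))
            and all(goal_A(outcome(a, b2)) for b2 in ACTS)
            and all(goal_B(outcome(a2, b)) for a2 in ACTS))

print(any(goal_AB(outcome(a, b)) for a, b in product(ACTS, ACTS)))          # True
print(any(satisfies_assignment(a, b) for a, b in product(ACTS, ACTS)))      # False
```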

8.4. Axiomatic System for LCGA

The axiomatic system Ax_LCGA for the logic LCGA presented here was first proposed in [25] (for the logic GPCL) and then refined into its present form in [26]. Note, however, that the notation [C ϕ] here corresponds to the notation used in [25], not to the one in [26].
In addition, I will use further the following notation from [26]:
γ[C ↦ ψ], denoting the coalitional goal assignment obtained from γ by reassigning the goal of the coalition C to be ψ; and
γ|_C, denoting the coalitional goal assignment obtained from γ by “restricting it to C”, i.e., reassigning the trivial goal ⊤ to all coalitions not included in C.
Here are the axiom schemes of A x LCGA with brief explanations:
  • (Triv) [γ], where γ is the trivial goal assignment, mapping each coalition to ⊤.
  • (Safe) ¬[Agt ⊥]
    Even the grand coalition of all agents cannot make the falsum become true.
  • (Merge) ([C1 θ1] ∧ ⋯ ∧ [Cn θn]) → [C1 θ1, …, Cn θn],
    where C1, …, Cn are pairwise disjoint.
    This axiom scheme generalizes the superadditivity axiom of coalition logic (cf. Section 4.4). It is valid because, if the coalitions C1, …, Cn are pairwise disjoint, then they can join together their collective strategies for their respective coalitional goals into one strategy profile that ensures the satisfaction of all these collective goals.
  • (GrandCoalition) [γ] → ([γ[Agt ↦ (φ ∧ ψ)]] ∨ [γ[Agt ↦ (φ ∧ ¬ψ)]]), where γ(Agt) = φ.
    This axiom scheme is valid because any strategy profile for Agt generates a unique successor state. If a state formula ψ is true there, then it can be added to the coalitional goal of Agt; otherwise, its negation can be added instead.
  • (Case) [γ] → ([γ[C ↦ (φ ∧ ψ)]] ∨ [γ|_C [Agt ↦ ¬ψ]]), where γ(C) = φ.
    This scheme says that for any coalition C, state formula ψ, and strategy profile Σ, either the projection Σ_C of Σ to C ensures the truth of ψ in all successor states enabled by Σ_C, in which case ψ can be added to the goal of C enforced by Σ; or, otherwise, ¬ψ is true in some of these successor states, in which case ¬ψ can be added to γ|_C as the goal of the grand coalition Agt enforced by Σ.
  • (Con) [γ] → [γ[C ↦ (φ ∧ ψ)]], where γ(C) = φ and γ(C′) = ψ for some C′ ⊆ C.
    Given any coalition C and subcoalition C′, this scheme says that the goal of C′ can be added for free to the goal of C. Indeed, if there is a strategy profile Σ that ensures that C and C′ can force their respective goals, then Σ also ensures that C can force the conjunction of these goals.
The rules of inference are modus ponens (MP) and goal monotonicity (G-Mon):
from γ(C) → ψ, infer [γ] → [γ[C ↦ ψ]].
The axiomatic system Ax_LCGA was first proved sound and complete in [25] (for the logic GPCL) and then, in its present form, in [26]. Moreover, the satisfiability problem for LCGA is decidable, which follows from the completeness results together with the finite model property, as shown in [25,26].

8.5. The Temporal Logic of Coalitional Goal Assignments TLCGA

LCGA can only express local goals, referring to the immediate outcome states. Usually, however, agents and coalitions have long-term goals, which need explicit temporal operators to be formalized. This was the motivation for introducing and studying the temporal extension TLCGA of LCGA in [26]. It uses the standard temporal operators X, U, and G to express temporalized goals, similarly to ATL (Section 5).
It is convenient to classify the formulae of TLCGA in two sorts, state formulae and path formulae, and to define their sets by mutual induction, as follows:
StateFor: φ ::= p ∣ ¬φ ∣ (φ ∧ φ) ∣ [γ]
PathFor: θ ::= Xφ ∣ (φ U φ) ∣ Gφ
where p ∈ Prop and γ: P(Agt) → PathFor is a coalitional goal assignment.
Thus, the path formulae are auxiliary, used to express temporalized goals. As before, I will use [ C 1 ϕ 1 , , C k ϕ k ] as an explicit notation for [ γ ] with support { C 1 , , C k } .
Note the change in the notation from LCGA: the goals ϕ1, …, ϕk, from now on, refer to the current state, not to a successor. Thus, LCGA is embedded in TLCGA as its X-fragment XCGA, where the path formulae are only of the type Xφ.
TLCGA also extends ATL: ⟨⟨C⟩⟩θ ≡ [C θ], for any θ ∈ PathFor.

8.6. An Aside: Equilibria and Co-Equilibria

The logic TLCGA can be used to express (see [26]) the fundamental game-theoretic concept of Nash equilibrium: a strategy profile ensuring that no player can deviate so as to improve the outcome with respect to his/her private goal. This concept works well for quantitative goals, but not for qualitative (win/lose) ones, because rational players whose goals are not satisfied (the “losers”) can still deviate in weak equilibria. Thus, the concept of co-equilibrium, arguably more suitable for qualitative goals, was proposed in [26], meaning a strategy profile ensuring that every player (and coalition) achieves their private goal, even if all other agents deviate. The existence of a co-equilibrium is expressed precisely by the TLCGA formula [a1 ϕ1, …, ak ϕk].

8.7. An Example

The following example, illustrating the use of TLCGA to express natural statements, is adapted from [26].
Example 11. 
Three sheep and three wolves are standing on a river bank and all animals want to cross the river (say, because a pride of lions is approaching). There is just one boat, which takes up to two animals. There is no boatman, but the animals can sail the boat across the river. However, if on either side of the river, at any time the wolves outnumber the sheep, then they eat them up there. Of course, the sheep do not want that to happen. Therefore, do the animals have a “safe” strategy to cross the river without any being eaten?
Let us first look at a simple specification in ATL of the claim that such a strategy exists:
⟨⟨Sheep ∪ Wolves⟩⟩(¬e U c)
where
  • c is the atomic proposition “all animals have crossed the river”.
  • e is the atomic proposition “some sheep are eaten”.
Thus, the formula above says that all animals have a strategy to eventually all cross the river and meanwhile no sheep are eaten. Is this what we want? Not quite, as the strategy may be such that the wolves can deviate from it at an opportune moment and eat some of the sheep. Thus, here is a better specification, in  TLCGA :
[Sheep ∪ Wolves Fc, Sheep G¬e],
saying “all animals have a collective goal to cross the river, but the coalition of sheep has its coalitional goal that no sheep is ever eaten”, and this is what we really want. So, is there a solution, i.e., a strategy profile satisfying this specification? Well, it depends on a subtle but crucial detail: if all animals act simultaneously, where each animal can either stay on the river bank or board the boat (but if more than two animals board, the boat cannot sail), then it is not difficult to see that such a strategy does not exist. However, if at every round the wolves act first and then the sheep act, then such a strategy does exist, and I leave the fun of designing it to the reader. Note that the strategy of each sheep will now depend also on how the wolves have just acted, so any deviation of the wolves from the agreed-upon “safe” strategy can be counteracted by the strategies of the sheep, and this is the key point.

9. Basic Strategy Logic: A Unifying Formalism for Strategic Reasoning

The semantics of all strategic operators introduced here are defined in terms of quantification in the metalanguage over actions and strategies of the agents and coalitions involved. Thus, the natural idea arises to add explicit quantification over strategies in the object language and have all these operators definable there. That idea was realized in the family of strategy logics, the first (two-agent) version of which was introduced in [21], and later extended and further studied in [22,23,24]. The common feature of these strategy logics is that they formally treat strategies as primitive objects, similar to elements of standard structures for first-order logic. I will refer to these generically as standard strategy logic, denoted SSL.
Since quantification over strategies is essentially a second-order quantification over functions, the language of SSL is very expressive and it should not be surprising that the standard systems of SSL are not recursively axiomatizable in general. Still, even some restricted fragments of SSL are quite expressive and enable specifying important properties of non-zero-sum games in a simple and natural way. In particular, the “one-alternation” fragment of SSL already subsumes, by expressiveness, the logic ATL * and is strong enough to express, inter alia, the existence of Nash equilibria and secure equilibria (cf. [21]).
I will define and use here a simple generic version of a strategy logic, which I call basic strategy logic, further denoted BSL. It builds on some temporalized language for expressing goals, such as LTL (cf., e.g., [36] (Chapter 6)), which defines a set of goal formulae Γ, evaluated on plays in concurrent game models. BSL extends Γ with the standard Boolean connectives, with variables associated with agents and ranging over strategies for them (interpreted by means of strategy assignments), and with quantification over such variables within the formulae. More precisely, with each agent a, the language of BSL associates a strategy variable for a, denoted by x_a. Let StrVar be the set of strategy variables for all agents. Then, for every nonempty coalition of agents C = {c1, …, cm}, I will use x_C to denote the tuple of variables (x_{c1}, …, x_{cm}). Furthermore, I will write ∃x_C as a shorthand for the string ∃x_{c1} … ∃x_{cm}, and likewise for ∀x_C.
Formally, the language of BSL involves two sets of formulae: a set of goal formulae  Γ and a set of state formulae  StateFor , defined as follows:
StateFor: φ ::= p ∣ ψ ∣ ¬φ ∣ (φ ∨ φ) ∣ ∃x_a φ,
where p ∈ Prop, ψ ∈ Γ, and x_a ∈ StrVar. All other standard propositional connectives and constants, including ⊥, ∧, →, ↔, are defined as usual. In addition, ∀x_a φ := ¬∃x_a ¬φ.
The semantics of BSL is given in concurrent game models using, additionally, strategy assignments: functions defined on StrVar, assigning to every strategy variable x_a a strategy for the agent a of the type specified in the semantics (e.g., memory-based or positional). Given such a strategy assignment v and a strategy σ_a for agent a, I will denote by v[x_a ↦ σ_a] the modified strategy assignment, which reassigns the value of x_a to be σ_a. Likewise, v[x_C ↦ σ_C] is defined for any coalition C.
Given a state s in a CGM M , every strategy assignment v in M defines the strategy profile Σ ( v ) , where Σ ( v ) a = v ( x a ) , and therefore determines the unique play in M , generated by Σ ( v ) , which will be denoted by play ( s , v ) . Then, the truth of any goal formula can be evaluated on that play.
The semantics of BSL can now be defined in terms of truth of a formula at a state s in a given CGM M , for a given strategy assignment v , through the clauses:
  • M,s,v ⊨ ψ, for ψ ∈ Γ, iff M, play(s, v) ⊨ ψ
    (assuming that the truth of the goal formulae in Γ on plays has already been defined).
  • M,s,v ⊨ ∃x_a φ iff M,s,v[x_a ↦ σ_a] ⊨ φ for some strategy σ_a for a.
The latter clause extends in a straightforward way to ∃x_C φ and ∀x_C φ, for any coalition C.
Note that, similarly to classical first-order logic, every bound occurrence of a variable x_a is bound by the innermost occurrence of a quantifier binding x_a; so, in quantification patterns such as Q1 x_a … Qk x_a … ϕ, where each Q_i is ∃ or ∀, all occurrences of Q_i x_a containing the innermost one in their scope are vacuous.
Now, one can uniformly translate the semantic conditions of the various strategic operators introduced here into formulae in the language of BSL , by using a uniform compositional translation τ , as follows:
  • CL/ATL*:
    τ(⟨⟨C⟩⟩φ) := ∃x_C ∀x_C̄ τ(φ), where C̄ = Agt \ C; likewise for the dual [[C]]φ.
  • ConStR (Oα):
    τ([A]α(ϕ; B ψ)) := ∃x_{B\A} ∀x_A (∀x_Ā τ(ϕ) → ∀x_{Agt\(A∪B)} τ(ψ)).
    Note that the joint strategy for B\A assigned to x_{B\A} is supposed to ensure, together with the joint strategy for A assigned to x_A, the truth of the (translation of the) goal formula ψ, whenever the joint strategy for A assigned to x_A ensures the truth of the (translation of the) goal formula ϕ against any joint strategy of the agents in Ā, including those in B\A.
  • ConStR (Oβ):
    τ([A]β(ϕ; B ψ)) := ∀x_A (∀x_Ā τ(ϕ) → ∃x_{B\A} ∀x_{Agt\(A∪B)} τ(ψ)).
    Note the difference with the previous translation, reflecting the difference between the semantics of (Oα) and (Oβ): the joint strategy for B now depends on the joint strategy for A, and the strategies of the agents in B ∩ A may not differ from those already fixed in the latter.
  • ConStR (Oc):
    τ(⟨A⟩c(ϕ; B ψ)) := ∃x_A (∀x_Ā τ(ϕ) ∧ ∃x_{B\A} ∀x_{Agt\(A∪B)} τ(ψ)).
    This formula captures the semantics of Oc in a straightforward way.
  • SFCL (SF):
    τ([C](ϕ; ψ1, …, ψk)) := ∃x_C (∀x_C̄ τ(ϕ) ∧ ⋀_{m=1}^{k} ∃x_C̄ τ(ψm)).
    Likewise, this formula formalizes precisely the semantic definition of SF.
  • SFCL (SF1):
    τ([C; C1, …, Ck](ϕ; ϕ1, …, ϕk)) := ∃x_C (∀x_C̄ τ(ϕ) ∧ ⋀_{m=1}^{k} ∃x_{Cm\C} ∀x_{Agt\(C∪Cm)} τ(ϕm)).
    This formula says that C has a collective strategy (in particular, a collective action) assigned to x_C that guarantees (the translation of) ϕ and is such that, when fixed, each Cm has a collective action (already fixed for the agents in Cm ∩ C) that guarantees (the translation of) ϕm against any behavior of the noncommitted agents, i.e., those in Agt\(C∪Cm). This is precisely the semantics of SF1, defined in Section 7.1.
  • SFCL (SF2): τ([C1 ϕ1; …; Ck ϕk]) :=
    (for simplicity, assuming that C1, …, Ck are pairwise disjoint)
    ∃x_{C1} (∀x_{Agt\C1} τ(ϕ1) ∧ ∃x_{C2\C1} (∀x_{Agt\(C1∪C2)} τ(ϕ2) ∧ ⋯ ∧ ∃x_{Ck\(C1∪⋯∪Ck−1)} ∀x_{Agt\(C1∪⋯∪Ck)} τ(ϕk)) ⋯).
    This formula likewise expresses the semantics of the sequential version SF2 of SF1, defined in Section 7.1. To achieve this, every time it claims the existence of a joint strategy of Ci, it assumes that the strategies of all agents in the previously mentioned coalitions are already fixed, and that joint strategy must succeed only against any behavior of the not-yet-committed agents.
  • LCGA:
    τ([γ]) := ∃x_Agt ⋀_{C⊆Agt} ∀x_C̄ τ(γ(C)).
    Again, this formula captures the formal semantics of [γ] in a straightforward way.
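To illustrate the compositional character of τ, here is a small string-level Python sketch that assembles the BSL translations of the CL/ATL* and Oβ clauses above; the encoding and all names are hypothetical, chosen only for this illustration:

```python
AGT = frozenset({"a", "b", "c"})

def ex(C):  # a block of existential quantifiers over the strategy variables of C
    return " ".join(f"∃x_{ag}" for ag in sorted(C))

def fa(C):  # a block of universal quantifiers over the strategy variables of C
    return " ".join(f"∀x_{ag}" for ag in sorted(C))

def tau_coalition(C, goal):    # tau(<<C>> goal), as in the first clause above
    return f"{ex(C)} {fa(AGT - C)} {goal}"

def tau_beta(A, phi, B, psi):  # tau([A]beta(phi; B psi)), as in the Obeta clause
    return (f"{fa(A)} ({fa(AGT - A)} {phi} → "
            f"{ex(B - A)} {fa(AGT - (A | B))} {psi})")

print(tau_coalition(frozenset({"a"}), "F p"))
# ∃x_a ∀x_b ∀x_c F p
print(tau_beta(frozenset({"a"}), "X p", frozenset({"b"}), "X q"))
# ∀x_a (∀x_b ∀x_c X p → ∃x_b ∀x_c X q)  -- the inner ∃x_b rebinds x_b, cf. above
```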
Thus, the logic BSL is more expressive than any of the other logics introduced here. Still, it is not as expressive as the versions of SSL introduced in [23,24], which decouple the strategy variables, and the quantification over them, from the agents to which they are assigned, and thus enable assigning the same strategy to different agents, which BSL cannot do (and which, in my view, is unnatural). Nevertheless, second-order quantification over strategies is fully enabled in BSL, so it is to be expected that BSL is not recursively axiomatizable in general. However, in the natural special case where only finite models over a fixed finite set of agents, and with positional strategies, are considered, there are clearly only finitely many such strategies in any given model, so model checking of BSL in such models is decidable, whereas satisfiability is recursively enumerable. The complexity of model checking, as well as the questions of axiomatizability and decidability of the validities of BSL in the semantics restricted to such models, are left to future work.

10. Concluding Remarks: Outlook and Perspectives

Logic-based strategic reasoning in socially interactive context is still in a relatively early stage of development, and the logical systems presented in this paper are just a representative sample of such special-purpose-driven developments. While these systems were introduced with different motivations, they share the common purpose of formalizing natural and important patterns of strategic interaction between rational agents. Furthermore, as seen in the previous section, these can all be treated as fragments of the basic strategy logic BSL, so the natural question arises: why, instead of the many bespoke alternatives presented here, is BSL not adopted as a uniform logical language for formalizing strategic reasoning? Of course, that can be done, and the study of some fragments of strategy logic has shown this to be a viable approach. However, as I indicated earlier, there are computational reasons in favor of staying within the purely modal framework of the logics introduced here, where actions and strategies are not explicitly referred to and quantified over in the language, but are only present in the semantics. There are many aspects of strategic reasoning, and a logical language that can adequately capture them all would be too heavy to use; so, if one only wants to reason about some specific such aspects, then one can naturally look for a minimal language that suffices for that purpose. The standoff between the two approaches (propositional modal logics vs. quantified strategy logics) is quite analogous to the standoff between plain modal logic and first-order logic, and the pros and cons of one approach over the other are very similar in both cases. Still, it would be interesting and useful to compare in deeper detail the expressiveness of these two approaches in the case of logics for strategic reasoning, and the tradeoff between expressiveness and computational complexity, to a degree closer to the very-well-explored parallels between modal logic and first-order logic (cf. the relevant chapters in [46] for ample details and further references).
Some major open problems and directions of further exploration of the logic-based strategic reasoning in socially interactive context include:
  • Adding agents’ knowledge in the semantics, and explicitly in the language, by assuming that the agents reason and act under imperfect information.
  • Taking into account the normative aspects and constraints of the socially interactive context, including obligations, permissions, and prohibitions, which socially responsible rational agents must respect in their strategic behavior.
  • Analysis of the expressiveness and computational complexity of the basic strategy logic BSL, which should eventually determine whether and to what extent BSL may be regarded as a viable alternative to the propositional modal approach behind the logical systems for strategic reasoning presented here.
  • In particular, an interesting and currently open question is whether there is a finite set of modal strategic operators, with semantics translatable to BSL, which provides expressive completeness, if not for the full language of BSL, then at least for natural and reasonably expressive fragments of it.
  • Completeness results for some of the systems of axioms mentioned here, including the three main fragments of ConStR , the entire ConStR , and SFCL . Ultimately, a complete axiomatic system for BSL , if possible, or for substantially rich fragments of it.
  • Finite tree-model property and decidability results for the logical systems presented here, as well as for other fragments of BSL. In particular, it would be worthwhile to develop tableau-based deductive systems and decision methods for the logics studied here, by adapting systems such as those developed in, e.g., [47,48].
From a more general perspective, this work can be regarded as a step towards developing a unifying and optimally rich, yet computationally feasible, technical framework for logic-based strategic reasoning of rational agents in a socially interactive context.

Funding

This research received no external funding.

Data Availability Statement

Not applicable.

Acknowledgments

I thank the anonymous reviewers for their useful comments and corrections.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Parikh, R. The logic of games and its applications. North-Holl. Math. Stud. 1985, 102, 111–139.
  2. Brown, M.A. On the Logic of Ability. J. Philos. Log. 1988, 17, 1–26.
  3. Belnap, N.; Perloff, M. Seeing to it that: A canonical form for agentives. Theoria 1988, 54, 175–199.
  4. Pauly, M. Logic for Social Software. Ph.D. Thesis, University of Amsterdam, Amsterdam, The Netherlands, 2001.
  5. Pauly, M. A Modal Logic for Coalitional Power in Games. J. Log. Comput. 2002, 12, 149–166.
  6. Alur, R.; Henzinger, T.A.; Kupferman, O. Alternating-Time Temporal Logic. In Proceedings of the 38th IEEE Symposium on Foundations of Computer Science, Miami Beach, FL, USA, 19–22 October 1997; pp. 100–109.
  7. Alur, R.; Henzinger, T.A.; Kupferman, O. Alternating-Time Temporal Logic. J. ACM 2002, 49, 672–713.
  8. Bulling, N.; Goranko, V.; Jamroga, W. Logics for Reasoning About Strategic Abilities in Multi-player Games. In Models of Strategic Reasoning: Logics, Games, and Communities; van Benthem, J., Ghosh, S., Verbrugge, R., Eds.; Springer: Berlin/Heidelberg, Germany, 2015; pp. 93–136.
  9. Ågotnes, T.; Goranko, V.; Jamroga, W.; Wooldridge, M. Knowledge and Ability. In Handbook of Epistemic Logic; van Ditmarsch, H., Halpern, J., van der Hoek, W., Kooi, B., Eds.; College Publications: London, UK, 2015; pp. 543–589.
  10. Bratman, M. Intentions, Plans, and Practical Reason; Harvard University Press: Cambridge, MA, USA, 1987.
  11. Rao, A.S.; Georgeff, M.P. Modeling Rational Agents within a BDI-Architecture. In Proceedings of the 2nd International Conference on Principles of Knowledge Representation and Reasoning (KR'91), Cambridge, MA, USA, 22–25 April 1991; Allen, J.F., Fikes, R., Sandewall, E., Eds.; Morgan Kaufmann: Burlington, MA, USA, 1991; pp. 473–484.
  12. Cohen, P.R.; Levesque, H.J. Intention is Choice with Commitment. Artif. Intell. 1990, 42, 213–261.
  13. Broersen, J.M. A stit-Logic for Extensive Form Group Strategies. In Proceedings of the 2009 IEEE/WIC/ACM International Conference on Web Intelligence and International Conference on Intelligent Agent Technology—Workshops, Milan, Italy, 15–18 September 2009; IEEE Computer Society: Washington, DC, USA, 2009; pp. 484–487.
  14. Broersen, J.M.; Herzig, A. Using STIT Theory to Talk About Strategies. In Models of Strategic Reasoning—Logics, Games, and Communities; Lecture Notes in Computer Science; van Benthem, J., Ghosh, S., Verbrugge, R., Eds.; Springer: Berlin/Heidelberg, Germany, 2015; Volume 8972, pp. 137–173.
  15. Lorini, E. Temporal STIT logic and its application to normative reasoning. J. Appl. Non Class. Logics 2013, 23, 372–399.
  16. Lorini, E.; Sartor, G. A STIT Logic for Reasoning About Social Influence. Stud. Log. 2016, 104, 773–812.
  17. Boudou, J.; Lorini, E. Concurrent Game Structures for Temporal STIT Logic. In Proceedings of the 17th International Conference on Autonomous Agents and MultiAgent Systems, AAMAS 2018, Stockholm, Sweden, 10–15 July 2018; André, E., Koenig, S., Dastani, M., Sukthankar, G., Eds.; International Foundation for Autonomous Agents and Multiagent Systems: Richland, SC, USA, 2018; pp. 381–389.
  18. Ramanujam, R.; Simon, S. Dynamic Logic on Games with Structured Strategies. In Proceedings of the Eleventh International Conference on Principles of Knowledge Representation and Reasoning, Sydney, Australia, 16–19 September 2008; AAAI Press: Palo Alto, CA, USA, 2008; pp. 49–58.
  19. van Benthem, J.; Ghosh, S.; Liu, F. Modelling simultaneous games in dynamic logic. Synthese 2008, 165, 247–268.
  20. van Benthem, J.; Klein, D. Logics for Analyzing Games. In The Stanford Encyclopedia of Philosophy, Summer 2020 ed.; Zalta, E.N., Ed.; Metaphysics Research Lab, Stanford University: Stanford, CA, USA, 2020.
  21. Chatterjee, K.; Henzinger, T.A.; Piterman, N. Strategy logic. Inf. Comput. 2010, 208, 677–693.
  22. Mogavero, F.; Murano, A.; Vardi, M.Y. Reasoning About Strategies. In Proceedings of the FSTTCS 2010; Schloss Dagstuhl—Leibniz-Zentrum fuer Informatik (LIPIcs): Wadern, Germany, 2010; Volume 8, pp. 133–144.
  23. Mogavero, F.; Murano, A.; Perelli, G.; Vardi, M.Y. Reasoning About Strategies: On the Model-Checking Problem. ACM Trans. Comput. Log. 2014, 15, 34:1–34:47.
  24. Mogavero, F.; Murano, A.; Perelli, G.; Vardi, M.Y. Reasoning about Strategies: On the Satisfiability Problem. Log. Methods Comput. Sci. 2017, 13.
  25. Goranko, V.; Enqvist, S. Socially Friendly and Group Protecting Coalition Logics. In Proceedings of the International Conference on Autonomous Agents and Multiagent Systems AAMAS 2018, Stockholm, Sweden, 10–15 July 2018; International Foundation for Autonomous Agents and Multiagent Systems: Richland, SC, USA, 2018; pp. 372–380.
  26. Enqvist, S.; Goranko, V. The temporal logic of coalitional goal assignments in concurrent multi-player games. ACM Trans. Comput. Log. 2022, 23, 1–58.
  27. Goranko, V.; Ju, F. Towards a Logic for Conditional Local Strategic Reasoning. In Proceedings of the 7th International Workshop LORI 2019, Chongqing, China, 18–21 October 2019; Blackburn, P., Lorini, E., Guo, M., Eds.; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2019; Volume 11813, pp. 112–125.
  28. Goranko, V.; Ju, F. A Logic for Conditional Local Strategic Reasoning. J. Log. Lang. Inf. 2022, 31, 167–188.
  29. Naumov, P.; Yuan, Y. Intelligence in Strategic Games. J. Artif. Intell. Res. 2021, 71, 521–556.
  30. Naumov, P.; Yew, R. Ethical Dilemmas in Strategic Games. In Proceedings of the Thirty-Fifth AAAI Conference on Artificial Intelligence, AAAI 2021, Virtual, 2–9 February 2021; AAAI Press: Palo Alto, CA, USA, 2021; pp. 11613–11621.
  31. Naumov, P.; Tao, J. Two Forms of Responsibility in Strategic Games. In Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence, IJCAI 2021, Virtual Event, Montreal, QC, Canada, 19–27 August 2021; Zhou, Z., Ed.; pp. 1989–1995.
  32. Naumov, P.; Tao, J. Blameworthiness in Security Games. In Proceedings of the Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, New York, NY, USA, 7–12 February 2020; AAAI Press: Palo Alto, CA, USA, 2020; pp. 2934–2941.
  33. van der Hoek, W.; Wooldridge, M. Cooperation, knowledge, and time: Alternating-time temporal epistemic logic and its applications. Stud. Log. 2004, 75, 125–157.
  34. van Benthem, J.; Ghosh, S.; Verbrugge, R. (Eds.) Models of Strategic Reasoning—Logics, Games, and Communities; LNCS Volume 8972; Springer: Berlin/Heidelberg, Germany, 2015.
  35. Blackburn, P.; de Rijke, M.; Venema, Y. Modal Logic; Cambridge University Press: Cambridge, UK, 2001.
  36. Demri, S.; Goranko, V.; Lange, M. Temporal Logics in Computer Science; Cambridge Tracts in Theoretical Computer Science; Cambridge University Press: Cambridge, UK, 2016.
  37. van der Hoek, W.; Pauly, M. Modal Logic for Games and Information. In Handbook of Modal Logic; Blackburn, P., van Benthem, J., Wolter, F., Eds.; Elsevier: Amsterdam, The Netherlands, 2006; pp. 1077–1148.
  38. Ågotnes, T.; Goranko, V.; Jamroga, W. Alternating-time Temporal Logics with Irrevocable Strategies. In Proceedings of the 11th Conference on Theoretical Aspects of Rationality and Knowledge, Brussels, Belgium, 25–27 June 2007; Samet, D., Ed.; pp. 15–24.
  39. Ågotnes, T.; Goranko, V.; Jamroga, W. Strategic Commitment and Release in Logics for Multi-Agent Systems (Extended Abstract); Technical Report IfI-08-01; Clausthal University of Technology: Clausthal-Zellerfeld, Germany, 2008.
  40. Laroussinie, F.; Markey, N.; Oreiby, G. On the Expressiveness and Complexity of ATL. Log. Methods Comput. Sci. 2008, 4.
  41. Goranko, V. Coalition Games and Alternating Temporal Logics. In Proceedings of the 8th Conference on Theoretical Aspects of Rationality and Knowledge (TARK VIII), Siena, Italy, 8–10 July 2001; van Benthem, J., Ed.; Morgan Kaufmann: Burlington, MA, USA, 2001; pp. 259–272.
  42. Brihaye, T.; Lopes, A.D.C.; Laroussinie, F.; Markey, N. ATL with Strategy Contexts and Bounded Memory. In Proceedings of the Logical Foundations of Computer Science: International Symposium, LFCS 2009, Deerfield Beach, FL, USA, 3–6 January 2009; Nerode, A., Ed.; Springer: Berlin/Heidelberg, Germany, 2009; LNCS Volume 5407, pp. 92–106.
  43. Walther, D.; van der Hoek, W.; Wooldridge, M. Alternating-time Temporal Logic with Explicit Strategies. In Proceedings of the 11th Conference on Theoretical Aspects of Rationality and Knowledge TARK XI, Brussels, Belgium, 25–27 June 2007; Samet, D., Ed.; Presses Universitaires de Louvain: Louvain, Belgium, 2007; pp. 269–278.
  44. Abdou, J.; Keiding, H. Effectivity Functions in Social Choice; Springer: Berlin/Heidelberg, Germany, 1991.
  45. van Benthem, J.; Bezhanishvili, N.; Enqvist, S.; Yu, J. Instantial Neighbourhood Logic. Rev. Symb. Log. 2017, 10, 116–144.
  46. Blackburn, P.; van Benthem, J.; Wolter, F. (Eds.) Handbook of Modal Logic; Studies in Logic and Practical Reasoning; North-Holland: Amsterdam, The Netherlands, 2007; Volume 3.
  47. Goranko, V.; Shkatov, D. Tableau-based decision procedures for logics of strategic ability in multiagent systems. ACM Trans. Comput. Log. 2009, 11, 1–49.
  48. Cerrito, S.; David, A.; Goranko, V. Optimal Tableau Method for Constructive Satisfiability Testing and Model Synthesis in the Alternating-Time Temporal Logic ATL+. ACM Trans. Comput. Log. 2015, 17, 4:1–4:34.
1. These remarks are prompted by an exchange and apparent disagreement with an anonymous reviewer of this paper, who argues that the logical frameworks discussed here do not really deal with goals, rationality, or sociality. In my understanding, the core of that disagreement lies in the essentially different perspectives which we apparently have on the topic and on some basic concepts in this paper, as explained further in the text.
Figure 1. A concurrent game model: Adam and Eve in hotel Life.
Figure 2. The formula ⟨⟨a⟩⟩(Fp ∧ Fq) is true at state s0 only if the agent a can use some memory.
Figure 3. The concurrent game model M in Example 8.