Article

Matrices Based on Descriptors for Analyzing the Interactions between Agents and Humans

1 Université Polytechnique Hauts-de-France, CNRS, UMR 8201-LAMIH, F-59313 Valenciennes, France
2 LIMAD-EDMI, University of Fianarantsoa, Fianarantsoa BP 1264, Madagascar
* Author to whom correspondence should be addressed.
Information 2023, 14(6), 313; https://doi.org/10.3390/info14060313
Submission received: 17 March 2023 / Revised: 22 May 2023 / Accepted: 26 May 2023 / Published: 29 May 2023
(This article belongs to the Special Issue Feature Papers in Information in 2023)

Abstract:
The design of agents interacting with human beings is becoming a crucial problem in many real-life applications. Different methods have been proposed in the research areas of human–computer interaction (HCI) and multi-agent systems (MAS) to model teams of participants (agents and humans). It is then necessary to build models analyzing their decisions when interacting, while taking into account the specificities of these interactions. This paper, therefore, aimed to propose an explicit model of such interactions based on game theory, taking into account not only environmental characteristics (e.g., criticality) but also human characteristics (e.g., workload and experience level) for the intervention (or not) of agents to help the humans. Game theory is a well-known approach to studying such social interactions between different participants. Existing works on the construction of game matrices required different ad hoc descriptors, depending on the application studied. Moreover, they generally focused on the interactions between agents, without considering human beings in the analysis. We show that these descriptors can be classified into two categories, related to their effect on the interactions. The set of descriptors to use is thus based on an explicit combination of all interactions between agents and humans (a weighted sum of 2-player matrices). We propose a general model for the construction of game matrices based on any number of participants and descriptors. It is then possible to determine, using Nash equilibria, whether agents decide (or not) to intervene during the tasks concerned. The model is also evaluated through the determination of the gains obtained by the different participants. Finally, we illustrate and validate the proposed model using a typical scenario (involving two agents and two humans), while describing the corresponding equilibria.

1. Introduction

Interactions between humans and software agents in the context of complex tasks have been studied in different research domains, notably in human–computer interaction (HCI) and in multi-agent systems (MAS). For example, we find works in the fields of road traffic management [1,2], autonomous cooperative robotics [3], and workflow modeling [4], and more recently on the problems of explanation of reasoning [5]. From the point of view of MAS, (autonomous) agents are usually defined as entities capable of acting (and interacting) without human intervention [6,7,8,9,10]. From the point of view of human–machine interaction, and particularly intelligent interaction, studies focus on the characterization of interactions between a human being and an intelligent system [11,12,13,14]. Beyond the visual aspects of interfaces, interaction models involve software mechanisms based on an adaptation principle, to ensure intelligent interactions. The cross-fertilization between these two fields of research has led to different perspectives, considering teams composed of different human beings and intelligent agents [15,16,17,18]. Research has led to models of new interactions (and tools to enable these interactions), making them more explicit. Collaboration and cooperation between software agents and humans seem to be a promising solution. Indeed, Badeig et al. [19] highlighted that these interactions require essential properties (autonomy, proactiveness, context awareness, and situatedness) to model real applications. For example, these properties supported tangible human–agent interactions with an interactive tabletop [20]. In [21], the authors proposed a model of interaction between these different participants, with the objective of improving the artificial intelligence component by relying on human expertise. The design of such (human-compatible) agents is still an open issue, as underlined by [22,23,24].
Overall, few research works have taken into account the explicit relationships between humans and software agents, with each participant having its own characteristics. The majority have taken ad hoc approaches, hence our interest in an explicit formulation of these interactions. In our previous work, we modeled interactions between an agent (driving a simulated vehicle) and a human being (driving a vehicle in a virtual environment) [25]. We also studied the context of road traffic congestion: agents and humans were trying to reduce the number of conflicting situations in a road traffic simulation [1,2]. These works focused on the modeling of particular interactions between agents and humans, which were described using matrix games.
The game theory approach is a well-known method for studying and understanding different types of interaction between individuals, and particularly their social relations [26]. The idea is based on finding a strategy that helps a group of players to maximize their own benefits (utilities). This mathematical approach has been explored in different applications based on agents (without direct intervention of human beings); for example, in modeling (i) land change and the spatial and temporal dynamics of the urban environment [27,28]; (ii) specific behaviors of people (e.g., extremist behaviors [29], spatial segregation [30,31], and dissemination of culture [32]); (iii) social networks [33,34,35]; and (iv) resource allocation [36,37,38]. In their study, Kaviari et al. [27] claimed that "urban land development is the result of the game between different players representing different human behaviors, thus game theory can improve the efficiency of the simulation of such a problem". These authors showed that a game theory model (in this context, to predict the growth of Zanjan city) with temporal resolution gave better results for urban planning. In this context, the decision of resident agents (to seek the best land relative to the income level of people) was based on different criteria, such as accessibility, land price, etc. Another illustrative example was introduced by [29]: the authors proposed an agent-based model of the emergence and escalation of anxiety in situations in which individuals from two different groups encounter various hazards. This model is characterized by different criteria, called social identity and identity fusion. The model is not directly based on game theory; nevertheless, the agents make their decision by estimating a utility, which is based on the evaluation of the anxiety level and the perception of a hazard in the group (individual and collective criteria). Lemos et al. [33] studied network formation for agents from different groups. The game proposed in this approach was defined by the payoff of a social dilemma game, a particular case of a 2-player matrix. This study showed that the formation of social networks depends on the size of the minority group, the frequency with which agents react to adversarial agents, and a cooperation barrier. Noori et al. [38] dealt with water allocation policies and demand management. More specifically, the water demand and the interactions of agricultural agents (concerning products such as rice and citrus) were estimated using utilities corresponding to the level of satisfaction of stakeholders. In these studies, the authors proposed different criteria for estimating the utilities, but their approaches, which were well-suited for these applications, relied on ad hoc criteria, without the direct intervention of humans in the loop. The objective of this paper was to build these decision matrices into a more general structure, with a consideration of direct interactions between agents and human participants. The problem is thus to allow agents to decide whether or not to assist the human participants.
The paper is organized as follows: Section 2 presents the state of the art related to different concepts useful for modeling human–agent interactions. Section 3 proposes a model based on game matrices for two players/participants. Section 4 generalizes the 2-player model to any number of participants. We show that matrices depend not only on the interactions between participants, but also on predefined criteria/descriptors. The model is then evaluated in Section 5, through a scenario involving two agents and two humans. Section 6 discusses our approach in a more general context. Finally, the last section concludes and gives some perspectives.

2. Background

We propose to list, in a non-exhaustive manner, the main descriptors that can be involved in intelligent assistance and that are available in the literature (Section 2.1). We also show that these descriptors can be classified into two categories, associated with the interpretation of their numerical value. We then describe the main steps of the principles considered for modeling interactions based on descriptors (Section 2.2). Finally, we explain the concept of a matrix game and the way to determine an equilibrium (Section 2.3).

2.1. Main Existing Criteria (or Descriptors)

Privacy is the first criterion (or descriptor) that can be taken into consideration. The greater the need to respect privacy, the less relevant it will be for a system to interact with a human to provide assistance, given the risk of transmission of confidential information [39]. For example, assistive systems in smart homes, for people in general or people with disabilities in particular, must protect their privacy. The disability of the human, whether physical (e.g., visual) or cognitive (difficulties in understanding, memorizing, etc.), is also an important criterion. For example, the greater the visual impairment (ranging from visually impaired to blind), the more crucial is the need for assistance when interacting with the system [40]. In addition, the weaker the performance of a user, the more useful the assistance of an agent [41]. For example, human performance with office software consisting of hundreds of functions and/or options may be poor for many users for complex or non-routine tasks. Another example is the saturation of road traffic (involving inter-blocking situations) controlled by human operators. In general, the lower the usability of an interactive system, the more useful an assistive system should be for the user [42]. For example, an intelligent aid can be associated with interactive software composed of multiple functionalities (for instance, a CAD interactive system) and can guide the user according to the tasks to be performed. The user can be confronted with an environment having a stochastic character, leading to random events. This is the case, for instance, in power plants, production lines, or multimodal transportation networks. In this case, the higher the level of stochasticity, the more help in anticipating events should be useful for the human [43]. In the same way, the higher the criticality of a situation, the more useful it is to help the human with interventions. The goal can be, for example, to avoid possible incidents or accidents, or even disasters [44] (air traffic control is a typical example). Indeed, in a highly sensitive environment, any human error can have serious consequences, and it is useful to have assistance in detecting or even anticipating them [45,46]. Depending on the field of application, human error can lead to the loss of a document, an erroneous financial transaction, or an explosion at a chemical plant. In an uncertain environment, an agent can act to evaluate the reliability of the information transmitted to the human. Depending on the experience level of the user of a system, assistance may be more or less useful. This can be the case in certain social networks or in war situations. For example, if the user is a novice, assistance may be crucial [47]. The higher the workload of a user, the more useful the assistance can be in reducing it [48,49]. This is the case for control tasks with complex dynamic systems composed of hundreds or sometimes thousands of variables (multimodal transportation networks, nuclear power plants, etc.).
These criteria are called descriptors in the following. They can be classified into two categories of descriptors:
  • Category 1: an increase in the value associated with the descriptor requires a cooperative situation;
  • Category 2: a descriptor with a low value implies a situation of assistance (an intervention is recommended).
Table 1 classifies the main descriptors introduced previously, according to these two categories. This list of descriptors is not exhaustive. It is representative and could certainly be extended by analyzing in depth the specificities of certain fields of application. It aims, above all, to help readers appreciate the complexity of the problem domain and the diversity of possible descriptors.
Hypothetically, numerical values can indicate the relative importance of each category of descriptors. Indeed, for some descriptors (such as criticality), the higher the value, the more useful it will be to set up cooperation between the different participants (in order to reduce criticality). For example, let us imagine values ranging from 1 to 5, with 5 being the maximum value of the descriptor of Category 1. If the criticality has the maximum value (here 5), then an intervention with an assistance goal can be considered essential. For other descriptors (Category 2), a high value (e.g., experience level) would not necessarily require cooperation between participants. For example, if the experience level has the maximum value (here 5), then an intervention with an assistance goal can be considered unnecessary.
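To make this distinction concrete, the following sketch (ours, not part of the paper; the 1-to-5 scale and the threshold of 3 are illustrative assumptions) encodes the decision orientation of the two categories for the three descriptors explicitly classified in the text.

```python
# Illustrative sketch (not from the paper): how the two descriptor categories orient
# the assistance decision on an assumed 1..5 scale with an assumed threshold of 3.
CATEGORY_1 = {"criticality", "workload"}      # high value -> cooperation is useful
CATEGORY_2 = {"experience_level"}             # low value  -> assistance is recommended

def suggests_assistance(descriptor: str, value: int, threshold: int = 3) -> bool:
    """Return True when the descriptor value argues for an agent intervention."""
    if descriptor in CATEGORY_1:
        return value >= threshold
    if descriptor in CATEGORY_2:
        return value <= threshold
    raise ValueError(f"unknown descriptor: {descriptor}")

if __name__ == "__main__":
    print(suggests_assistance("criticality", 5))       # True: hazardous situation
    print(suggests_assistance("experience_level", 5))  # False: expert user
```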
The characteristics of the environment, as well as the essential aspects considered by the different actors, are then defined using a set of descriptors. They thus describe information that is crucial for the decision-making processes and the joint actions selected by the participants (with the aim of achieving global effectiveness). These descriptors are proposed by the designer. In order to decide which descriptors to exploit, the designer can start with a global analysis of the application domain [50]. A literature review can also be conducted in parallel, to study the descriptors implemented with the purpose of assistance in the domain concerned (e.g., assistance in power plant supervision). It is also necessary to have discussions with experts of the domain, as well as with users having experience of situations in which assistance may be necessary.
Each descriptor should be computable (and/or estimable) in a reasonable time. Similarly, each descriptor is assigned a weight defined a priori by the designer. For example, a designer may consider that criticality is more important than performance in a risk area (e.g., control of major-accident hazards involving dangerous substances [51]).
Initially, our previous studies dealt with three descriptors: workload, experience level, and criticality. These studies concerned a concrete application domain: traffic management by humans assisted by software agents [1,2]. In this paper, even if we have shown that many possible descriptors exist (classified into two categories), we propose to use these three descriptors. They will be sufficient to show the feasibility of the proposed model.

2.2. Methodology Principles

We assume that the environment has its own temporal dynamics, which is correlated with the actions of the different participants. It is therefore necessary to define a reasonable temporal window for the decision regarding the assistance (or not) provided by the different agents. We will assume that this temporal window is defined by the designer. The principle of the model is shown in Figure 1. In the general case (for each cycle), the participants evaluate/assess the selected descriptors, build the game matrix, and then select a Nash equilibrium to make their decision. However, the proposed principle is necessarily partial, since the human actors do not have to build the game matrix. Indeed, we try to propose the best decision (according to the Nash equilibrium determination) for the agents. In parallel, human beings make their own decisions, independently of the other actors. Figure 1 gives an overview of the steps at the decision level, and Figure 2 complements it at the temporal level.
This principle assumes a cyclical process for the reasoning of the agents (Figure 2). In this figure, we consider a temporal window $[t, t+\Delta t]$ for Agent $A_i$ and Human $H_j$; the principle remains similar for any number of agents and humans. As such, in each cycle, the participants are able to initialize and evaluate their own descriptors: for example, the workload (a descriptor from Category 1 according to Table 1) and the experience level (a descriptor from Category 2). It is also assumed that each actor (agent or human) is able to evaluate the criticality of the environment. $Maj^{A_i,t}$ (resp. $Maj^{H_j,t}$) represents the update at time $t$ of a descriptor $Maj$ for $A_i$ (resp. $H_j$). Let us therefore decompose the temporal window $\Delta t$ into the following phases (a minimal sketch of one such cycle is given after the list):
1. The different actors perceive the evolution of the environment and determine the level of criticality: $crt^{A_i,t}$ for $A_i$ (in the same way, $crt^{H_j,t}$ for $H_j$);
2. Agent $A_i$ estimates the criticality from the point of view of $H_j$ (denoted $\widehat{crt}^{H_j,t}$), which may differ from that determined by $H_j$ (there is no reason why the equality $\widehat{crt}^{H_j,t} = crt^{H_j,t}$ should hold). In the following, we assume that the criticality (based on the evaluation of the environment) of the human and that of the agent are identical;
3. In the same way, $A_i$ estimates the workload and the experience level of each $H_j$, denoted by $\widehat{wl}^{H_j,t}$ and $\widehat{expl}^{H_j,t}$. In the following, we assume that these two estimates of the agent are identical to the descriptors of $H_j$;
4. Each agent $A_i$ builds the matrix $M_{A_x H_y,t}$. We will come back to this in the following;
5. $A_i$ determines the Nash equilibrium for the matrix computed from the $S_{s_i, s_{\bar{i}}}^t$ strategies, where $s_i$ is the strategy of Agent $A_i$ and $s_{\bar{i}}$ the strategy of any actor other than $A_i$;
6. We assume that there are exchanges between the different actors (for example, informative acts for the chosen strategies);
7. We also assume that there are exchanges between the different actors, for example, requests about the action to be carried out (the strategy that an actor would like to be selected by another actor). Note that these last two phases are a cyclical process that should converge fairly quickly to a consensus;
8. The actors perform their respective actions (doing nothing is also an action), which take a certain time;
9. We assume that the workload and the experience level can be updated by $H_j$. The most difficult problem is to consider an update of these two descriptors for $A_i$. Depending on the strategy selected $S_{s_i, s_{\bar{i}}}^t$, the experience level may increase (a failure could also bring additional knowledge) depending on the success or failure of the action chosen by the players. Similarly, in the previous steps of estimating the two descriptors of $H_j$, Agent $A_i$ could propose an estimation of the experience level according to the success/failure of $H_j$, as well as the strategy considered optimal. In the end, the updating of these two descriptors leads to their valuation at $t+1$.
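The phases above can be summarized by a per-cycle decision function. The following sketch (ours; the threshold rule is a deliberately simplistic placeholder for the matrix construction and Nash analysis detailed in Sections 3 and 4) shows one such cycle for an assistant agent.

```python
# Minimal, self-contained sketch of the cyclical decision process described above.
# The decision rule is a placeholder; Sections 3 and 4 replace it by the analysis of
# the game matrix and its Nash equilibria.

def run_cycle(agent_view, human_view, delta=0.6, v_max=5):
    """One temporal window: estimate the descriptors, decide C (assist) or D (do nothing)."""
    v_fixed = delta * v_max
    # Phases 1-3: the agent perceives criticality and estimates the human's descriptors.
    crt = agent_view["criticality"]
    wl = agent_view["workload_estimate"]      # estimate of the human's workload
    expl = agent_view["experience_estimate"]  # estimate of the human's experience level

    # Phases 4-5 (placeholder): a threshold rule standing in for the matrix + Nash analysis.
    assist = (crt >= v_fixed) or (wl >= v_fixed) or (expl <= v_fixed)
    strategy = "C" if assist else "D"

    # Phases 6-9: the strategies would be exchanged, actions performed, descriptors updated.
    human_view["last_agent_strategy"] = strategy
    return strategy

if __name__ == "__main__":
    agent_view = {"criticality": 4, "workload_estimate": 2, "experience_estimate": 5}
    print(run_cycle(agent_view, {}))  # 'C': high criticality triggers assistance
```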
The principle of the interaction model thus leads to the determination of numerical values for each descriptor during the building of the matrices. These matrices allow the agents to make a rational decision about the necessity to cooperate or not with the humans. We briefly detail the equilibrium search model, and thus the way to select an action.

2.3. Hypothesis and Concepts of Equilibria for a Matrix Game

We assume that assistant agents and humans share the same common environment. It is accepted that, like humans, assistant agents also have a limited competence for the task to be completed. This task is in fact a priori a cooperative task. However, nothing prevents us from thinking that the humans may be in a competitive interaction. Let us note $n$ assistant agents defined by $A = \{A_1, A_2, \ldots, A_i, \ldots, A_n\}$, and $m$ humans defined by $H = \{H_1, H_2, \ldots, H_j, \ldots, H_m\}$. We also define the set of participants $p_k \in P$ such that $P = A \cup H$.
Let us take just two players, a software agent and a human being. We consider by convention that the actions of the first player (Assistant Agent A) are represented in the rows, and those of the second player (Human H) in the columns. Each actor can decide to cooperate or not, so we will use the usual convention of actions (considering the usual notation used to deal with the prisoner's dilemma problem): C (for cooperation) in the first row and column; D (for defection, the willingness not to cooperate) in the second row and column. For example, the first row and first column are associated with the strategy $CC$ (this notation first indicates the agent strategy and then the human one). Each player can choose between two actions: $\forall i, S_i = \{C, D\}$. A matrix game for a 2-player 2-action game is defined as:
$$\begin{array}{c|cc} A \backslash H & C & D \\ \hline C & (v_A^{cc}, v_H^{cc}) & (v_A^{cd}, v_H^{cd}) \\ D & (v_A^{dc}, v_H^{dc}) & (v_A^{dd}, v_H^{dd}) \end{array} \tag{1}$$
We note that the different utilities or gains ($v_A^{cc}, v_H^{cc}, \ldots, v_A^{dd}, v_H^{dd}$) take positive values. When Agent A chooses the strategy D and Human H the strategy C, we can determine the utilities of these two players using the couple $(v_A^{dc}, v_H^{dc})$, i.e., $u_A(DC) = v_A^{dc}$ and $u_H(DC) = v_H^{dc}$. Moreover, as mentioned in the previous subsection, we reinitialize the matrix at each cycle by calculating/estimating the different utilities, as in the work proposed in [17,25]. This is called an iterated game.
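As an illustration, such a bimatrix can be stored as a mapping from joint strategies to payoff pairs; the sketch below (ours, with arbitrary payoff values on the 1..5 scale) shows the utility lookup $u_A(DC) = v_A^{dc}$ and $u_H(DC) = v_H^{dc}$.

```python
# Sketch of the bimatrix representation used in the text: each joint strategy
# (agent action, human action) maps to a pair (agent payoff, human payoff).
# The numeric values are arbitrary examples on the 1..5 scale, not taken from the paper.
game = {
    ("C", "C"): (4, 4),   # (v_A^cc, v_H^cc)
    ("C", "D"): (4, 3),   # (v_A^cd, v_H^cd)
    ("D", "C"): (3, 4),   # (v_A^dc, v_H^dc)
    ("D", "D"): (1, 1),   # (v_A^dd, v_H^dd)
}

def u_A(s_agent, s_human):
    return game[(s_agent, s_human)][0]

def u_H(s_agent, s_human):
    return game[(s_agent, s_human)][1]

if __name__ == "__main__":
    print(u_A("D", "C"), u_H("D", "C"))  # 3 4 : utilities when A defects and H cooperates
```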
To ensure the rational behavior of the agents, one method for finding the right decision is to look for Nash equilibria [26,52,53,54,55,56]. To determine these equilibria, we choose the algorithm described in [57,58]. This algorithm essentially consists of two consecutive steps: (i) the gradual elimination of dominated strategies (if we obtain a single profile by successively eliminating (strictly) dominated strategies); (ii) the determination of the different equilibria. Its application is possible to search for pure strategies, which seems satisfactory in this context. For a non-zero-sum game, there may be several Nash equilibria in pure strategies, and we assume that the agent then selects an equilibrium from those obtained. Note that human beings do not calculate their respective matrices (the game can be considered partially uncooperative), but these matrices are necessary for the software agents.
Recall that a Nash equilibrium is defined using a joint strategy $s^* = (s_i^*, s_{\bar{i}}^*) \in S_1 \times S_2 \times \cdots \times S_n$ for $n$ players, where, knowing the strategy chosen by the other players, each player seeks to maximize their gain, i.e., $\forall i, \forall s_i \in S_i, u_i(s_i^*, s_{\bar{i}}^*) \geq u_i(s_i, s_{\bar{i}}^*)$. In this context, a player $i$ has no interest in changing strategy unilaterally. Therefore, for two players, H and A, we consider the following inequalities according to the pair of winning strategies (a small computational check of these conditions is sketched after the list):
  • $CC$: $v_A^{cc} \geq v_A^{dc}$ and $v_H^{cc} \geq v_H^{cd}$; knowing that H cooperates, A has an interest in cooperating (for example, the task is complex enough that H felt the need to call on A, and A detects an interest in cooperating with the human);
  • $CD$: $v_A^{cd} \geq v_A^{dd}$ and $v_H^{cd} \geq v_H^{cc}$; knowing that H is defecting (does not cooperate), A has an interest in cooperating (for example, the task is complex enough for A to feel an interest in cooperating with the human, even if the latter was acting individually);
  • $DC$: $v_A^{dc} \geq v_A^{cc}$ and $v_H^{dc} \geq v_H^{dd}$; knowing that H cooperates, A has an interest in not cooperating (for example, H felt the need to call on A, but A does not consider the task complex enough and, occupied by other tasks, A does not detect an interest in cooperating with the human);
  • $DD$: $v_A^{dd} \geq v_A^{cd}$ and $v_H^{dd} \geq v_H^{dc}$; knowing that H is defecting (does not cooperate), it is in A's interest not to cooperate (for example, H did not feel the need to call on A, and A does not consider the task complex enough to offer cooperation or assistance).
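These four conditions can be verified by direct enumeration. The sketch below (an illustrative brute-force check, not the elimination algorithm of [57,58]) returns the pure-strategy Nash equilibria of a bimatrix stored as in the previous sketch.

```python
# Illustrative brute-force search for pure-strategy Nash equilibria in a 2-player game.
# For each joint strategy, it checks that no player can gain by deviating unilaterally
# (the four inequalities listed above).
ACTIONS = ("C", "D")

def pure_nash_equilibria(game):
    """game: dict {(s_A, s_H): (u_A, u_H)} -> list of equilibrium joint strategies."""
    equilibria = []
    for s_a in ACTIONS:
        for s_h in ACTIONS:
            u_a, u_h = game[(s_a, s_h)]
            a_ok = all(u_a >= game[(alt, s_h)][0] for alt in ACTIONS)  # A cannot improve
            h_ok = all(u_h >= game[(s_a, alt)][1] for alt in ACTIONS)  # H cannot improve
            if a_ok and h_ok:
                equilibria.append((s_a, s_h))
    return equilibria

if __name__ == "__main__":
    # Example with arbitrary payoffs: CC and DD are both equilibria here.
    game = {("C", "C"): (4, 4), ("C", "D"): (1, 3),
            ("D", "C"): (3, 1), ("D", "D"): (2, 2)}
    print(pure_nash_equilibria(game))  # [('C', 'C'), ('D', 'D')]
```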
For reasons of simplification and without loss of generality, we assume that the descriptors from Category 1, as well as the parameters of the matrix ($v_A^{cc}, v_H^{cc}, \ldots, v_A^{dd}, v_H^{dd}$), are defined according to the same ordered scale of discrete values $v_{min}..v_{max}$ (we assume in the following that $v_{min} = 1$ and $v_{max} = 5$). The challenge is then to know whether the intervention of the agent remains necessary for the intermediate values. We, therefore, also assume that there is a threshold set a priori (which we note $v_{fixed}$) that could trigger the cooperative intervention of the agent. The definition of this threshold depends on the designer and the level of freedom desired by the human actors (for example, reducing the amount of intervention of an assistant agent allows humans to act more and progress in their learning of the complex system they manipulate), which is in line with the learning-by-doing approach [59]. We thus define $v_{fixed} = \delta \cdot v_{max}$. Considering that the human beings must not lose their expertise, this parameter $\delta$ (in the following, we assume that this parameter $\delta$ is identical for the different descriptors) thus sets the threshold from which the agent can begin to intervene. A value $\delta = 0$ means that it intervenes very quickly; its intervention is delayed when $\delta = 1$.
The construction of the matrix for the second category is based on the notations described previously. Suppose the following notations: (i) the value of any descriptor varies from $v_{min}$ to $v_{max}$; (ii) there exists a threshold set a priori that can trigger a software agent intervention, defined by $v_{fixed}$; (iii) $v_A$ is the current value for Agent A, and $v_H$ that for Human H.
We will now describe how to construct decision matrices for two participants/players.

3. Building Two-Player Matrix Game

We have shown that the construction of matrices depends on the interpretation of the descriptors, which have been defined in two categories (cf. Table 1). We present matrix models for two players (Section 3.1 and Section 3.2). Then, we study the behavior of the agent for various descriptors (Section 3.3), in the case where there are two players. Finally, we propose three illustrations combining two and three descriptors (Section 3.4).

3.1. Representation of the Two-Player Matrix for Category 1

We propose to study the first category of descriptors, and in particular its matrix game (Section 3.1.1). We also illustrate our point for the following two descriptors: criticality and workload (Section 3.1.2).

3.1.1. Building the Matrix Game for Category 1

To the notations already proposed, we add $v_A$ for the current value of the descriptor for assistant Agent A, and $v_H$ for that of Human H. For convenience, these are also considered to be the utility values: the payoffs that can be obtained by A and H, respectively, in the matrix game. Let us consider the two corresponding extreme situations:
  • When the value of the descriptor is equal to $v_{max}$ for A, the agent should decide to intervene to assist the human. The strategy in this case would be a cooperative situation, denoted $CC$ (or $CD$ if this value is low for the human). A high value of the descriptor should therefore lead to a cooperative strategy on the part of the agent. Using the notations of Equation (1), we have: $v_A^{cc} = v_A$, $v_A^{dc} = v_{fixed}$, $v_H^{cc} = v_H$, and $v_H^{cd} = v_{fixed}$. Thus, as soon as the inequality $v_A \geq v_{fixed}$ holds (a value greater than or equal to the fixed threshold), $s_A = \{C\}$ will be the chosen strategy for Agent A;
  • Similarly, when the value of the descriptor is low for A, it is not in the interest of the agent to intervene. Strategies such as $DC$ and $DD$ are then necessary, to indicate its non-intervention. To obtain the $DC$ strategy, the first inequality $v_A^{dc} \geq v_A^{cc}$ is satisfied as soon as the current value of the descriptor, for A, is less than (or equal to) the fixed threshold. The second inequality $v_H^{dc} \geq v_H^{dd}$ must also be satisfied. If we consider the current value ($v_H^{dc} = v_H$), this will be verified by setting $v_H^{dd} = v_{min}$ (H considers the descriptor to be of relative importance). Strategy $DD$ supposes the satisfaction of $v_A^{dd} \geq v_A^{cd}$, with $v_A^{cd} = v_A$ and $v_A^{dd} = v_{min}$ (H partially considers the descriptor).
Therefore, the assignment of matrix values associated with constructing a descriptor from Category 1 is defined as follows, where only the values $v_A$ and $v_H$ are free ($v_{fixed}$ being defined by $\delta \cdot v_{max}$):
$$\begin{pmatrix} (v_A, v_H) & (v_A, v_{fixed}) \\ (v_{fixed}, v_H) & (v_{min}, v_{min}) \end{pmatrix}$$
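This assignment is straightforward to implement; the following sketch (ours) builds the Category 1 bimatrix for given values $v_A$, $v_H$, and $\delta$, assuming the 1..5 scale used in this paper.

```python
# Sketch: construction of the Category 1 two-player matrix defined above,
# with v_fixed = delta * v_max; values are assumed to lie on the 1..5 scale.

def category1_matrix(v_a, v_h, delta, v_min=1, v_max=5):
    v_fixed = delta * v_max
    return {
        ("C", "C"): (v_a, v_h),
        ("C", "D"): (v_a, v_fixed),
        ("D", "C"): (v_fixed, v_h),
        ("D", "D"): (v_min, v_min),
    }

if __name__ == "__main__":
    # High criticality perceived by the agent: cooperation becomes attractive for A.
    print(category1_matrix(v_a=5, v_h=3, delta=0.6))
```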
Making assumptions about the possible values of the descriptors, a reasonable setting of $\delta$ seems to be $\delta \in [0.6, 0.7]$. For small values of $\delta$ (i.e., of $v_{fixed}$), the Nash equilibrium is the cooperative strategy for A for any $v_H$, if A deems the descriptor higher than the minimum. For values of $\delta$ close to 1, there are many winning strategies, and a cooperative strategy is more difficult to obtain. It should also be noted that the choice of $v_{min} = 0$ as the lowest value would increase the willingness (i.e., the intervention rate) of the assistant agent to cooperate. More generally, we want to evaluate the percentage of intervention of the agent by varying $v_A$ on the same scale of values. For each value of $v_A$, we set the value of $\delta$, while varying $v_H$ (see Figure 3). We can see that an increase in the value of $v_A$ increases the rate of intervention of the agent, which settles at a rate of approximately 50% for $\delta = 1$. Therefore, the more A deems the descriptor to be important, the more it decides to cooperate (the more A finds an interest in it). Note that for a zero value of $v_A$, the intervention rate converges quickly towards the non-intervention of the agent. We also note that the variation of $v_A$ relative to the value of $v_H$ tends to reduce the intervention of the agent.
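In principle, the curves of Figure 3 can be approximated by sweeping $v_H$ for a fixed $v_A$ and $\delta$, and counting how often the cooperative action of A appears among the pure Nash equilibria. The sketch below (ours; it restates the Category 1 construction so as to be self-contained, and the exact counting convention used for the figures is an assumption) illustrates this procedure.

```python
# Sketch of the sweep behind Figure 3: for fixed v_A and delta, vary v_H and count
# how often the agent's cooperative action C appears in a pure Nash equilibrium.
ACTIONS = ("C", "D")

def category1_matrix(v_a, v_h, delta, v_min=1, v_max=5):
    v_fixed = delta * v_max
    return {("C", "C"): (v_a, v_h), ("C", "D"): (v_a, v_fixed),
            ("D", "C"): (v_fixed, v_h), ("D", "D"): (v_min, v_min)}

def is_nash(game, s_a, s_h):
    u_a, u_h = game[(s_a, s_h)]
    return (all(u_a >= game[(a, s_h)][0] for a in ACTIONS)
            and all(u_h >= game[(s_a, a)][1] for a in ACTIONS))

def intervention_rate(v_a, delta, values=range(1, 6)):
    cooperative, total = 0, 0
    for v_h in values:
        game = category1_matrix(v_a, v_h, delta)
        eqs = [(a, h) for a in ACTIONS for h in ACTIONS if is_nash(game, a, h)]
        cooperative += sum(1 for a, _ in eqs if a == "C")
        total += len(eqs)
    return cooperative / total if total else 0.0

if __name__ == "__main__":
    for v_a in range(1, 6):
        print(v_a, round(intervention_rate(v_a, delta=0.6), 2))
```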

3.1.2. Illustration of the Criticality and Workload Descriptors

The criticality level of the environment ranges from 1 (normal state) to 5 (hazardous state). This assessment essentially depends on the application, and therefore we will admit that the common criticality is perceived in the same way by assistant agents as by human beings: $v_A = v_H = crt$. In the same way, we set the notations: (i) $crt_{fixed}$ for a fixed criticality, (ii) $crt$ for the current criticality, and (iii) $crt_{min}$ for the minimal criticality (likewise $crt_{max}$ for the maximal criticality).
The workload depends on the different actors and their analysis of their ability to carry out the task. A value of 5 for the workload means that the actor feels overwhelmed by how the system works; a minimum value of 1 characterizes a task that the actor can perform without stress and/or difficulty. We note $wl_A$ and $wl_H$ for the values of the current workload of the assistant agent and the human being; $wl_{fixed}$ is the threshold of acceptability of the workload (above this threshold, agents will have to intervene); and $wl_{min}$ (respectively $wl_{max}$) is a minimum (resp. maximum) workload value. By applying the analysis of Section 3.1.1, the matrices associated with criticality and workload for two players are defined by:
$$\begin{pmatrix} (crt, crt) & (crt, crt_{fixed}) \\ (crt_{fixed}, crt) & (crt_{min}, crt_{min}) \end{pmatrix} \qquad \begin{pmatrix} (wl_A, wl_H) & (wl_A, wl_{fixed}) \\ (wl_{fixed}, wl_H) & (wl_{min}, wl_{min}) \end{pmatrix}$$
In the following, we consider that software agents have no personal workload; they are always able to perform a task at each iteration. At worst, we consider that the agent estimates its workload like the human, and so we set $wl_A = wl_H$. The strategic behavior of the assistant agent for these two descriptors is the one described by $v_A = v_H$ in Figure 3.

3.2. Representation of the Two-Player Matrix for Category 2

This category of descriptors (in which the experience level falls, for example) assumes that the maximum value is not critical, while the minimum value may raise a particular concern for the proper completion of the task. Using an approach similar to the previous category, we study this second category of descriptors, and in particular its matrix game (Section 3.2.1), and we illustrate our proposal with a particular descriptor, the experience level of the participants (Section 3.2.2).

3.2.1. Building the Matrix Game for Category 2

A reasoning similar to the previous case (Category 1) leads, for this one, to the study of the two extreme situations:
  • When the value of the descriptor is low, the assistant agent may have an interest in intervening (strategies $CC$ or $CD$), before the global system deteriorates. To respect these constraints, one solution would be to swap the values proposed in Section 3.1.1. Let us then take $v_A^{cc} = v'_{fixed}$, $v_A^{dc} = v_A$, $v_H^{cc} = v'_{fixed}$, and $v_H^{cd} = v_H$;
  • When the value of the descriptor tends towards $v_{max}$, the agent will select a non-intervention action. The strategies in this case would be $DC$ and $DD$. Similarly, the permutation of the values proposed in Section 3.1.1 follows the same analysis. For $v_A^{cd}$ and $v_H^{dc}$, several values are then possible; for example, two values: $v_A^{cd} = v_H^{dc} = v'_{min}$ or $v_A^{cd} = v_H^{dc} = v'_{fixed}$.
For the descriptors from Category 2, the two-player matrix game is therefore represented by:
$$\begin{pmatrix} (v'_{fixed}, v'_{fixed}) & (v', v_H) \\ (v_A, v') & (v_A, v_H) \end{pmatrix} \quad \text{where } v' \in \{v'_{min}, v'_{fixed}\}$$
As we use the same scale and the same parameter $\delta$, we can assume that $v'_{fixed} = v_{fixed}$ and $v'_{min} = v_{min} = 1$. Figure 4 thus presents the rate of intervention of the agent for any value $v_A$. For each of these values, we vary $v_H$ while setting $\delta$. The different values of $v_A$ tend to converge towards the expected values; i.e., we obtain an approximately 50% chance that the agent will intervene and therefore decide to assist the humans.
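As for Category 1, the construction is direct; the sketch below (ours) builds the Category 2 bimatrix, with the free value $v'$ chosen among the two admissible options discussed above.

```python
# Sketch: construction of the Category 2 two-player matrix defined above.
# The free value v_prime is assumed to be either v_min or v_fixed (v_fixed by default).

def category2_matrix(v_a, v_h, delta, v_prime=None, v_min=1, v_max=5):
    v_fixed = delta * v_max
    if v_prime is None:
        v_prime = v_fixed  # one of the two admissible choices discussed above
    return {
        ("C", "C"): (v_fixed, v_fixed),
        ("C", "D"): (v_prime, v_h),
        ("D", "C"): (v_a, v_prime),
        ("D", "D"): (v_a, v_h),
    }

if __name__ == "__main__":
    # A novice user (low experience level) makes cooperation attractive for the agent.
    print(category2_matrix(v_a=1, v_h=1, delta=0.6))
```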

3.2.2. Illustration of Experience Level

Experience level is a descriptor from Category 2. Remember that an experience level of 1 means that the actor does not really know the system and its evolution, while an experience level of 5 indicates that this actor has perfect mastery of the evolution of the system. We set $expl_A$ and $expl_H$ for the current experience level of the agent and the human being; $expl_{fixed}$ is the threshold of acceptability of the experience level (below this threshold, agents should/could intervene); and $expl_{min}$ (respectively $expl_{max}$) denotes a low (resp. high) value of experience. Continuing the analysis of Section 3.2.1, the matrix concerning the experience level for two players is represented by:
$$\begin{pmatrix} (expl_{fixed}, expl_{fixed}) & (expl', expl_H) \\ (expl_A, expl') & (expl_A, expl_H) \end{pmatrix} \quad \text{where } expl' \in \{expl_{min}, expl_{fixed}\}$$
In the following, we consider that software agents do not have a variable level of experience, unlike human beings; they are always able to intervene wisely. Assuming then $expl_A = expl_H$, the strategic behavior of the agent corresponds to that shown in Figure 4 for $v_A = v_H$.
By calculating the Nash equilibria for each value of $v_H$ and $v_A$ in $[0, 5]$ for a given $\delta$, and by accumulating the occurrences of the cooperative strategy (C), we obtain Figure 5 (the source code that produced these figures can be found here: https://github.com/EmmanuelADAM/Julia/blob/main/majJeux.jl, accessed on 17 March 2023). Figure 5a (respectively, Figure 5b) accumulates, for all $\delta$ in $[0, 1]$, the cooperation distributions of A for Category 1 (respectively, Category 2); the darker the color, the greater the level of agent involvement.
  • For Category 1, we notice that the agent cooperates when it evaluates the descriptor characterizing the situation more strictly than the human. In the opposite case, the agent finds it more useful not to cooperate, even if the human asks for it, because it judges the descriptor to be weaker than the human does. For example, if the agent judges the situation as critical, whatever the human says, it will offer assistance.
  • For Category 2, we note that the agent cooperates when it evaluates the descriptor describing the situation as weaker than the human. In the opposite case, the agent thinks it is more useful not to cooperate, even if the human asks for it, because it judges the descriptor more strictly than the human. For example, if the agent judges the user to be very inexperienced, whatever the human says, it will offer assistance.
The proposed matrix game is therefore in line with what was expected: the agent assists the human when it feels the need. On the other hand, it lets the human act alone, and therefore learn and gain experience when the agent does not judge the situation to be problematic.

3.3. Combination of Descriptors for the Two-Player Matrix

Combining all the descriptors to obtain a single matrix, while giving a reasonable/acceptable interpretation of the Nash equilibria, is not obvious. Indeed, these descriptors are independent features, and the combination depends on the correspondence of the different scales of values. Nevertheless, we consider an empirical approach to their construction. Thus, we can establish a matrix (denoted $M_{A_x H_y}$ for an agent $A_x$ and a human $H_y$) as a weighted sum of the 2-player matrices $M_i$ associated with $k > 0$ descriptors (cf. Section 3.1.1 and Section 3.2.1):
$$M_{A_x H_y} = \frac{1}{\sum_{i=1}^{k} \lambda_i} \cdot \sum_{i=1}^{k} \lambda_i \cdot M_i \quad \text{with } \lambda_i \in \mathbb{N}$$
where $\lambda_i$ is the parameter of a descriptor $desc_i$ (associated with the 2-player matrix $M_i$) fixed by the designer. As the matrix has a different behavior according to the category, it is essential to differentiate the two-player matrices $M_i$. Let us consider, for example, that among $k = k_1 + k_2$ descriptors, $k_1$ descriptors (respectively $k_2$) belong to the first category (resp. the second category).
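In practice, this weighted sum is computed cell by cell on the bimatrices. The sketch below (ours; the numeric matrices in the example correspond to a criticality and an experience-level descriptor with $\delta = 0$ on the 1..5 scale) illustrates the combination.

```python
# Sketch of the weighted combination M_{AxHy} = (1 / sum(lambda_i)) * sum(lambda_i * M_i):
# each cell of the combined bimatrix is the weighted average of the corresponding cells.

def combine(matrices, weights):
    """matrices: list of dicts {(s_A, s_H): (u_A, u_H)}; weights: list of lambda_i."""
    total = sum(weights)
    combined = {}
    for key in matrices[0]:
        u_a = sum(w * m[key][0] for m, w in zip(matrices, weights)) / total
        u_h = sum(w * m[key][1] for m, w in zip(matrices, weights)) / total
        combined[key] = (u_a, u_h)
    return combined

if __name__ == "__main__":
    # Criticality (Category 1, value 4) and experience level (Category 2, value 2),
    # built with delta = 0 (so v_fixed = 0) on the 1..5 scale.
    crt = {("C", "C"): (4, 4), ("C", "D"): (4, 0), ("D", "C"): (0, 4), ("D", "D"): (1, 1)}
    expl = {("C", "C"): (0, 0), ("C", "D"): (0, 2), ("D", "C"): (2, 0), ("D", "D"): (2, 2)}
    print(combine([crt, expl], weights=[1, 1]))
```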
For a given matrix $M_i$, we made different assumptions ($\forall i, v_{fixed}^i = v'^{i}_{fixed} = \delta \cdot v_{max}$ and $\forall i, v_{min}^i = v'^{i}_{min} = v_{min} = 1$) that lead us to evaluate the two following quantities, in order to simplify our notations:
$$\sum_{i=1}^{k_1} \lambda_i \cdot v_{fixed}^i - \sum_{j=1}^{k_2} \lambda'_j \cdot v'^{j}_{fixed} = \left(\sum_{i=1}^{k_1} \lambda_i - \sum_{j=1}^{k_2} \lambda'_j\right) \cdot \delta \cdot v_{max} \qquad \sum_{i=1}^{k_1} \lambda_i \cdot v_{min}^i - \sum_{j=1}^{k_2} \lambda'_j \cdot v'^{j}_{min} = \left(\sum_{i=1}^{k_1} \lambda_i - \sum_{j=1}^{k_2} \lambda'_j\right) \cdot v_{min} \tag{2}$$
We can then state, to simplify the writing: $\Lambda^k = (\lambda_1, \ldots, \lambda_k)^T$, $\Lambda'^k = (\lambda'_1, \ldots, \lambda'_k)^T$, $V_A^k = (v_A^1, \ldots, v_A^k)$, $V'^k_A = (v'^1_A, \ldots, v'^k_A)$, $V_H^k = (v_H^1, \ldots, v_H^k)$, $V'^k_H = (v'^1_H, \ldots, v'^k_H)$, and $V_{value}^k = (v_{value}, \ldots, v_{value})$, with $card(V_{value}^k) = k$ and $value \in \{min, max, fixed\}$. Equation (2) then becomes:
$$\Lambda^{k_1} \cdot V_{fixed}^{k_1} - \Lambda'^{k_2} \cdot V'^{k_2}_{fixed} = \left(\Lambda^{k_1} \cdot V_{max}^{k_1} - \Lambda'^{k_2} \cdot V_{max}^{k_2}\right) \cdot \delta \qquad \Lambda^{k_1} \cdot V_{min}^{k_1} - \Lambda'^{k_2} \cdot V'^{k_2}_{min} = \Lambda^{k_1} \cdot V_{min}^{k_1} - \Lambda'^{k_2} \cdot V_{min}^{k_2}$$
The general framework describes the use of different descriptors with different interpretations. We thus give the results associated with the four possible strategies:
$$CC: \quad \Lambda^{k_1}\cdot V_A^{k_1} - \Lambda'^{k_2}\cdot V'^{k_2}_A \geq \left(\Lambda^{k_1}\cdot V_{max}^{k_1} - \Lambda'^{k_2}\cdot V_{max}^{k_2}\right)\cdot\delta \quad \text{and} \quad \Lambda^{k_1}\cdot V_H^{k_1} - \Lambda'^{k_2}\cdot V'^{k_2}_H \geq \left(\Lambda^{k_1}\cdot V_{max}^{k_1} - \Lambda'^{k_2}\cdot V_{max}^{k_2}\right)\cdot\delta \tag{3}$$
$$CD: \quad \begin{cases} \text{If } v' = v'_{fixed}: & \Lambda^{k_1}\cdot V_A^{k_1} - \Lambda'^{k_2}\cdot V'^{k_2}_A \geq \Lambda^{k_1}\cdot V_{min}^{k_1} - \Lambda'^{k_2}\cdot V_{max}^{k_2}\cdot\delta \quad \text{and} \quad \left(\Lambda^{k_1}\cdot V_{max}^{k_1} - \Lambda'^{k_2}\cdot V_{max}^{k_2}\right)\cdot\delta \geq \Lambda^{k_1}\cdot V_H^{k_1} - \Lambda'^{k_2}\cdot V'^{k_2}_H \\ \text{If } v' = v'_{min}: & \Lambda^{k_1}\cdot V_A^{k_1} - \Lambda'^{k_2}\cdot V'^{k_2}_A \geq \Lambda^{k_1}\cdot V_{min}^{k_1} - \Lambda'^{k_2}\cdot V_{min}^{k_2} \quad \text{and} \quad \left(\Lambda^{k_1}\cdot V_{max}^{k_1} - \Lambda'^{k_2}\cdot V_{max}^{k_2}\right)\cdot\delta \geq \Lambda^{k_1}\cdot V_H^{k_1} - \Lambda'^{k_2}\cdot V'^{k_2}_H \end{cases} \tag{4}$$
$$DC: \quad \begin{cases} \text{If } v' = v'_{fixed}: & \left(\Lambda^{k_1}\cdot V_{max}^{k_1} - \Lambda'^{k_2}\cdot V_{max}^{k_2}\right)\cdot\delta \geq \Lambda^{k_1}\cdot V_A^{k_1} - \Lambda'^{k_2}\cdot V'^{k_2}_A \quad \text{and} \quad \Lambda^{k_1}\cdot V_H^{k_1} - \Lambda'^{k_2}\cdot V'^{k_2}_H \geq \Lambda^{k_1}\cdot V_{min}^{k_1} - \Lambda'^{k_2}\cdot V_{max}^{k_2}\cdot\delta \\ \text{If } v' = v'_{min}: & \left(\Lambda^{k_1}\cdot V_{max}^{k_1} - \Lambda'^{k_2}\cdot V_{max}^{k_2}\right)\cdot\delta \geq \Lambda^{k_1}\cdot V_A^{k_1} - \Lambda'^{k_2}\cdot V'^{k_2}_A \quad \text{and} \quad \Lambda^{k_1}\cdot V_H^{k_1} - \Lambda'^{k_2}\cdot V'^{k_2}_H \geq \Lambda^{k_1}\cdot V_{min}^{k_1} - \Lambda'^{k_2}\cdot V_{min}^{k_2} \end{cases} \tag{5}$$
$$DD: \quad \begin{cases} \text{If } v' = v'_{fixed}: & \Lambda^{k_1}\cdot V_{min}^{k_1} - \Lambda'^{k_2}\cdot V_{max}^{k_2}\cdot\delta \geq \Lambda^{k_1}\cdot V_A^{k_1} - \Lambda'^{k_2}\cdot V'^{k_2}_A \quad \text{and} \quad \Lambda^{k_1}\cdot V_{min}^{k_1} - \Lambda'^{k_2}\cdot V_{max}^{k_2}\cdot\delta \geq \Lambda^{k_1}\cdot V_H^{k_1} - \Lambda'^{k_2}\cdot V'^{k_2}_H \\ \text{If } v' = v'_{min}: & \Lambda^{k_1}\cdot V_{min}^{k_1} - \Lambda'^{k_2}\cdot V_{min}^{k_2} \geq \Lambda^{k_1}\cdot V_A^{k_1} - \Lambda'^{k_2}\cdot V'^{k_2}_A \quad \text{and} \quad \Lambda^{k_1}\cdot V_{min}^{k_1} - \Lambda'^{k_2}\cdot V_{min}^{k_2} \geq \Lambda^{k_1}\cdot V_H^{k_1} - \Lambda'^{k_2}\cdot V'^{k_2}_H \end{cases} \tag{6}$$
We discuss the two special cases, for which the descriptors are of the same nature: (i) $k = k_1$ (with $k_2 = 0$) and (ii) $k = k_2$ (with $k_1 = 0$). For these particular cases, we can see that the behavior of the matrix $M_{A_x H_y}$ is similar to the initial two-player matrices for a given descriptor. Furthermore, the different Nash equilibria lead to a weighted sum of the evaluations relative to the thresholds (more precisely, to the minimum or fixed thresholds).

3.4. Illustration with Combinations of Two and Three Descriptors

We propose to illustrate different cases for two descriptors of the same or different categories (Section 3.4.1 and Section 3.4.2), and for three descriptors (Section 3.4.3). Note that the approach is not limited to three descriptors. It should be remembered that the model proposed in the previous section accepts any number of descriptors, each associated with a weighted two-player matrix.

3.4.1. Combination of Two Descriptors of the Same Category

Let us take the two previous descriptors from the same category (Category 1), namely the criticality and workload levels ($k_1 = 2$ and $k_2 = 0$). The ratio $\alpha = \frac{\lambda_{wl}}{\lambda_{crt}}$ allows us to evaluate the sensitivity of the two descriptors, according to the parameter $\delta$. We show that a ratio of 1000 corresponds exactly, not only to the number of interventions of the agent, but also to the min/max/average number of equilibria, according to the value of $\delta$, obtained for a single descriptor of this category. Above this value, the intervention rate of the agent remains the same. Figure 6 shows the influence of the weights on the agent's intervention rate. An increase of $\delta$ leads to a decrease in the number of interventions of the agent, corresponding to the importance of preserving knowledge learning for the human being (the closer $\delta$ is to 1, the closer the agent's intervention percentage is to 50%). For example, for $\lambda_{wl} = \lambda_{crt} = 1$, with the same assumptions on the valuations of the two descriptors, we shift from 96% (for $\delta = 0$) to 50% (for $\delta = 1$). The difference between the two extreme configurations of the ratio of the two coefficients associated with the two descriptors, (i) $\frac{\lambda_{crt}}{\lambda_{wl}} = 1$ and (ii) $\frac{\lambda_{crt}}{\lambda_{wl}} = 1000$, is maximal (approximately 18%) for $\delta = 0.2$. For values of $\delta > 0.8$, the different weightings of these coefficients no longer have a significant effect.

3.4.2. Combination of Two Descriptors from Different Categories

We now illustrate the combination of two descriptors from different categories ($k_1 = k_2 = 1$): criticality or workload level (from Category 1) and experience level (from Category 2). Since the choice of criticality or workload does not change the following analysis, we select criticality to illustrate the first category.
To address the sensitivity of the coefficients of the two descriptors ($\lambda_{crt}$ and $\lambda_{expl}$), we have to provide different values for the weight coefficients. The results (assuming $v' = v'_{fixed}$) obtained for the different weights of the coefficients associated with the descriptors again show that a ratio of $\frac{\lambda_{crt}}{\lambda_{expl}} = 1000$ allows us to find the same percentages as for the criticality descriptor alone (a binary search allows us to again find this value). We see that the min/max/average number of equilibria is also identical to that of the criticality level. Above this value, we observe that the percentages do not change (it is thus not relevant to take values higher than 1000). Figure 7 shows the agent intervention rates for the three configurations: (i) $\lambda_{crt} = 1$ and $\lambda_{expl} = 1$, (ii) $\lambda_{crt} = 1000$ and $\lambda_{expl} = 1$, and (iii) $\lambda_{crt} = 1$ and $\lambda_{expl} = 1000$. The differences tend to decrease significantly for $\delta = 0.6$, but they are almost 40% for a value of $\delta = 0$. It is then possible to describe a range of values up to a ratio of 1000 between these two coefficients.
For the hypothesis $v' = v'_{min} = 1$ and coefficients equal to 1, the result obtained is identical for any value of the parameter $\delta$. The behaviors are thus stable, with a 50% chance of intervening for the agent. Indeed, we show from Equations (3)–(6) that: (i) Strategy $CC$ is a Nash equilibrium when the criticality level is greater than or equal to the experience level; (ii) if the values of the descriptors are equal, Strategy $DC$ wins; and (iii) the experience level is greater than the criticality level for $DD$. In contrast, the results obtained by adjusting the different parameters (including $v' = v'_{fixed}$), and with the contextual conditions of these two descriptors, show that the level of assistance of the agent increases with the parameter $\delta$ from 44% to 65% (essentially due to the decreasing number of $DD$ strategies as Nash equilibria).

3.4.3. Illustration with a Combination of Three Descriptors

We adapt the general matrix $M_{A_x H_y}$ for $k_1 = 2$, $k_2 = 1$, and the values $v_A^{crt}$, $v_A^{wl}$, $v_A^{expl}$, $v_H^{crt}$, $v_H^{wl}$, and $v_H^{expl}$. The different simulations underline the consistency between the analytical result and its contextual interpretation. Figure 8 gives some results obtained for the agent's intervention rate.
Figure 9a illustrates the degree of cooperation of the agent when $k_1 = 2$ and $k_2 = 0$, with $\lambda_1 = \lambda_2 = 1$, for the criticality and workload descriptors. Logically enough, the degree of cooperation of the agent follows the same curve as in the case of a unique descriptor. Figure 9b represents the degree of cooperation of the agent when $k_1 = 2$ and $k_2 = 1$ (workload, experience level, and criticality descriptors), with $\lambda_1 = \lambda_2 = 1$ for the two Category 1 descriptors and a weight of 2000 for the Category 2 descriptor. We observe a lower degree of cooperation of the agent. Indeed, the descriptor from Category 2 (here, the experience level of the human) is strongly weighted.
We now propose to generalize to a team of n agents and m humans.

4. Generalization for N Agents and M Humans

We are only interested in agent–human interactions, and thus in the construction of a general matrix (denoted $\bar{M}_{A^n H^m}$ for $n$ agents and $m$ humans) determining the different interactions between the different participants. Two approaches can be defined, namely a centralized approach (Section 4.1) and a distributed approach (Section 4.2), for building the resulting matrices.

4.1. Centralized Approach for the Decision of Agents

We obtain a formulation that seems sufficient (see Equation (7)) for the centralized matrix, which we will designate by $\bar{M}_{A^n H^m}$:
$$\bar{M}_{A^n H^m} = \frac{1}{\sum_{i=1,j=1}^{n,m} \mu_{i,j}} \cdot \overline{\sum_{i=1,j=1}^{n,m}} \; \mu_{i,j} \cdot M_{A_i H_j} \tag{7}$$
Let us set the different parameters $\mu_{i,j}$ corresponding to the degrees of trust related to the joint activities of the participating actors $A_i$ and $H_j$. This degree could evolve according to a positive or negative interaction between them (1 would indicate a positive relationship, 0 no interaction, and −1 a negative interaction). We could also imagine, as a future perspective, using reinforcement learning to determine these coefficients according to the result of the joint activity. Let us denote by $\bar{+}$ (the operator used in the summation of Equation (7)) the sum of the valuations computed by generating the combination of strategies of a set of players. For example, for three participants (Agents $A_1$ and $A_2$, Human $H_1$), $M_{A_1 H_1} \,\bar{+}\, M_{A_2 H_1}$ generates a $(2, 4)$ matrix where the rows correspond to the strategies C and D for Agent $A_1$, and the columns correspond to the different strategies ($CC$, $CD$, $DC$, $DD$) of the other participants (Agent $A_2$, and then Human $H_1$). Moreover, we denote by $M_{A^n H^m}[s_i, s_{\bar{i}}]$ the value obtained by the projection of the strategy $s = (s_i, s_{\bar{i}})$ onto the matrix. In the following, we distinguish the strategy of the assistant agent $A_i$, denoted $s_i$, from that of the human being $H_j$, denoted $s'_j$. The gains obtained for the assistant agents (Section 4.1.1) and human beings (Section 4.1.2) are given for a centralized approach.

4.1.1. Determination of Gains for Assistant Agents

Let us consider the creation of an intermediate matrix that aggregates, for each agent $A_i$, the information on the different humans $H_k$; this matrix thus aggregates its matrices against each human, weighted by its trust in each relation. This matrix, which we call $M_{A_i \bar{H}}$, allows us to determine the gains for the assistant agents (cf. Equation (8)):
$$M_{A_i \bar{H}} = \frac{1}{\sum_{j=1}^{m} \mu_{i,j}} \cdot \left( \mu_{i,1} \cdot M_{A_i H_1} \,\bar{+}\, \mu_{i,2} \cdot M_{A_i H_2} \,\bar{+}\, \cdots \,\bar{+}\, \mu_{i,m} \cdot M_{A_i H_m} \right) \tag{8}$$
Let us build an intermediate matrix that aggregates, for each human $H_k$, the information about the different assistant agents. This matrix, which we call $M_{\bar{A} H_k}$, determines the gains of the assistant agents (Equation (9)) for a human being $H_k$:
$$M_{\bar{A} H_k} = \frac{1}{\sum_{i=1}^{n} \mu_{i,k}} \cdot \left( \mu_{1,k} \cdot M_{A_1 H_k} \,\bar{+}\, \mu_{2,k} \cdot M_{A_2 H_k} \,\bar{+}\, \cdots \,\bar{+}\, \mu_{n,k} \cdot M_{A_n H_k} \right) \tag{9}$$
We are mainly interested in the control of the cooperation of the agents. Thus, we propose a matrix representing the global behavior of the agents regarding the group of humans:
$$M_{\bar{A}\bar{H}} = \frac{1}{\sum_{i=1,j=1}^{n,m} \mu_{i,j}} \cdot \left( \sum_{j=1}^{m} \mu_{1,j} \cdot M_{A_1 \bar{H}} \,\bar{+}\, \cdots \,\bar{+}\, \sum_{j=1}^{m} \mu_{n,j} \cdot M_{A_n \bar{H}} \right) \tag{10}$$
Recall the viewpoint consistency assumption: $v_{A_i}^{desc} = v_{A_j}^{desc} = v_{H_k}^{desc}$ for any two assistant agents $A_i$ and $A_j$ and one human being $H_k$. We can easily show that:
$$M_{A_i H_k} = M_{A_j H_k}, \quad \text{for all } i, j, \text{ and } k \tag{11}$$
Equation (9) then changes by using Equation (11), and in this case, $M_{\bar{A} H_k} = M_{A_1 H_k}$. Considering this assumption, the utility of a strategy, compared to the strategies taken by the human players, can then be simplified:
$$u(s_i, s_{\bar{i}}) = \frac{1}{\sum_{i=1,j=1}^{n,m} \mu_{i,j}} \cdot \Big( \mu_{1,1}\cdot M_{A_1 H_1}[s_1, s'_1] + \cdots + \mu_{1,m}\cdot M_{A_1 H_m}[s_1, s'_m] + \mu_{2,1}\cdot M_{A_1 H_1}[s_2, s'_1] + \cdots + \mu_{2,m}\cdot M_{A_1 H_m}[s_2, s'_m] + \cdots + \mu_{n,1}\cdot M_{A_1 H_1}[s_n, s'_1] + \cdots + \mu_{n,m}\cdot M_{A_1 H_m}[s_n, s'_m] \Big)$$
It is then possible to determine the number of cooperative agents and those who do not want to intervene, for a given strategy. Thus, we could group the values according to the strategies of the assisting agents, in order to obtain a reformulation:
$$u(s_i, s_{\bar{i}}) = \frac{1}{\sum_{i=1,j=1}^{n,m} \mu_{i,j}} \cdot \left( \sum_{i=1 / s_i = C}^{n} \Big( \sum_{j=1 / s'_j = C}^{m} \mu_{i,j}\cdot M_{A_1 \bar{H}}[C, C] + \sum_{j=1 / s'_j = D}^{m} \mu_{i,j}\cdot M_{A_1 \bar{H}}[C, D] \Big) + \sum_{i=1 / s_i = D}^{n} \Big( \sum_{j=1 / s'_j = C}^{m} \mu_{i,j}\cdot M_{A_1 \bar{H}}[D, C] + \sum_{j=1 / s'_j = D}^{m} \mu_{i,j}\cdot M_{A_1 \bar{H}}[D, D] \Big) \right)$$
Let us set the following parameters, such that $n = c_a + d_a$ and $m = c_h + d_h$:
  • $c_a$: number of agents wishing to intervene;
  • $d_a$: number of agents not wishing to intervene;
  • $c_h$: number of humans wishing to cooperate with agents;
  • $d_h$: number of humans not wishing to be assisted by agents.
Let us assume that the coefficients $\mu_{i,j}$ are identical (for example, unitary weights $\mu_{i,j} = 1$ for all $i, j$). In this case, we can again simplify the previous formulation:
$$u(s_i, s_{\bar{i}}) = \frac{1}{n\cdot m}\cdot\Big( c_a\cdot\big(c_h\cdot M_{A_1\bar{H}}[C, C] + d_h\cdot M_{A_1\bar{H}}[C, D]\big) + d_a\cdot\big(c_h\cdot M_{A_1\bar{H}}[D, C] + d_h\cdot M_{A_1\bar{H}}[D, D]\big)\Big) \tag{12}$$
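Under these simplifying assumptions, the agents' gain only depends on the number of cooperating agents and humans. The sketch below (ours; the payoff values of the aggregated matrix are arbitrary examples) computes this simplified form of Equation (12).

```python
# Sketch of the simplified utility (equal trust coefficients): the gain only depends on
# how many agents (c_a of n) and humans (c_h of m) choose C, via the aggregated matrix M.

def agent_side_utility(M, n, m, c_a, c_h):
    """M: dict {(s_A, s_H): value} giving the agents' payoff in the aggregated 2x2 game."""
    d_a, d_h = n - c_a, m - c_h
    return (1.0 / (n * m)) * (
        c_a * (c_h * M[("C", "C")] + d_h * M[("C", "D")])
        + d_a * (c_h * M[("D", "C")] + d_h * M[("D", "D")])
    )

if __name__ == "__main__":
    # Example payoffs for the agents in the aggregated matrix.
    M = {("C", "C"): 2.67, ("C", "D"): 1.83, ("D", "C"): 1.83, ("D", "D"): 1.33}
    # Two agents and two humans, everyone cooperating:
    print(agent_side_utility(M, n=2, m=2, c_a=2, c_h=2))  # 2.67
```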

4.1.2. Determination of Gains for Human Beings

To compute the Nash equilibrium, it is necessary to estimate the utility of each human participant $H_j$, depending on the strategies of the assistant agents and on their own strategy. A similar reasoning, considering the particular case where the coefficients $\mu_{i,j}$ are identical, leads to a simpler rewriting. Based on the notations $c_a$, $d_a$, $c_h$, and $d_h$, we obviously obtain a dual formulation of Equation (12):
$$u(s'_j, s'_{\bar{j}}) = \frac{1}{n\cdot m}\cdot\Big( c_h\cdot\big(c_a\cdot M_{\bar{A}\bar{H}}[C, C] + d_a\cdot M_{\bar{A}\bar{H}}[D, C]\big) + d_h\cdot\big(c_a\cdot M_{\bar{A}\bar{H}}[C, D] + d_a\cdot M_{\bar{A}\bar{H}}[D, D]\big)\Big) \tag{13}$$

4.2. Distributed Approach for the Decision of Each Agent

A distributed approach consists in studying the construction of the matrix from the point of view of a single assistant agent A i , which could be expressed as a weighted sum combining the coefficients of the initial two-player matrix. In this distributed approach, we determine the gains obtained for assistant agents (Section 4.2.1) and human beings (Section 4.2.2).

4.2.1. Determination of Gains for Assistant Agents

For $n$ agents and $m$ humans, let us denote the utility associated with agent $A_i$ by $u_{A_i}(s_i, s_{\bar{i}}) = u_{A_i}(s_1, \ldots, s_i, \ldots, s_n, s'_1, \ldots, s'_m)$. Moreover, we have previously seen that the initial matrices are identical for all assistant agents (cf. Equation (11)). Using similar reasoning, we can also simplify the computation of utilities for the assistant agents, if the different weights are equal. Thus, we can introduce the parameters $c_h$ and $d_h$, such that $c_h + d_h = m$:
$$u_{A_i}(s_i, s_{\bar{i}}) = \frac{1}{m}\cdot\Big( c_h\cdot M_{A_i\bar{H}}[s_i, C] + d_h\cdot M_{A_i\bar{H}}[s_i, D] \Big) \tag{14}$$
Note that this formulation is the same as the general equation defined previously (cf. Equation (12)). Indeed, it is only necessary to take $c_a = 1$, $d_a = 0$ if the agent $A_i$ chooses Strategy C (similarly, for Strategy D, $c_a = 0$, $d_a = 1$). This calculation is, in fact, the outcome of an analysis from the point of view of a single agent $A_i$, and not of all the assistant agents (note that $n = c_a + d_a = 1$). Moreover, if the parameter $m = 1$ (only one human being), the formulation allows us to find the initial matrix by playing with the values of $c_h$ and $d_h$.

4.2.2. Determination of Gains for Human Beings

As expected, the utility of $H_j$ only depends on its own descriptors, whatever the agent $A_i$. We can thus simplify the calculation of utilities for the human beings, if the different coefficients are equal. Thus, we can introduce the parameters $c_a$ and $d_a$, such that $c_a + d_a = n$. The formulation becomes:
$$u_{H_j}(s'_j, s'_{\bar{j}}) = \frac{1}{n}\cdot\Big( c_a\cdot M_{\bar{A} H_j}[C, s'_j] + d_a\cdot M_{\bar{A} H_j}[D, s'_j] \Big) \tag{15}$$
Note that this formulation is similar to the general equation defined previously (cf. Equation (13)). Indeed, it is sufficient to take $c_h = 1$, $d_h = 0$ if $H_j$ chooses Strategy C (similarly, for Strategy D, $c_h = 0$, $d_h = 1$). This parameter setting is the consequence of an analysis from the point of view of a single participant $H_j$, and not of all humans ($m = c_h + d_h = 1$). Let us also underline that, if the parameter $n = 1$ (only one agent), this formulation allows us to obtain the initial matrix by playing with the values of $c_a$ and $d_a$.
In the studies proposed above, we are able to construct corresponding matrices, and thus determine their Nash equilibria. We will now illustrate this analysis using a scenario.

5. Case Study: Scenario Based on Two Agents and Two Humans

Let us illustrate our proposal for a team of two agents and two humans by describing a scenario. We present a description of the scenario (Section 5.1). It is necessary to determine and build the initial two-player matrix (Section 5.2). Then, we determine the Nash equilibria for the two approaches, a centralized approach with a single matrix (Section 5.3) and a distributed approach according to the point of view of each participant (Section 5.4).

5.1. Description of Scenario

We propose a scenario with four configurations, assuming $\delta = 0$ for these illustrations; i.e., the agents' decision is very sensitive to the minimum valuation (Table 2). For each configuration, we present the workload, the experience, and the criticality levels.
Initially, the descriptors are positioned at low levels (Configuration $c_1$). As a result of their action/inaction, probably due to their average experience, the environment has deteriorated, and the criticality level becomes very high, as well as the workload (Configuration $c_2$). Following the actions of the participants, the situation returns to a normal mode (Configuration $c_3$), assuming an increased experience level for one of the two human operators. Their actions, depending on their experience, allow the danger of the environment to decrease; one of the two humans thus increases his/her experience level. Finally, the last situation (Configuration $c_4$) is again degraded.

5.2. Determination of the Initial Two-Player Matrix According to the Three Predefined Descriptors

We calculate the initial matrix based on the three descriptors, using the previous formulation and the usual assumptions: identical weightings $\lambda_{crt} = \lambda_{wl} = \lambda_{expl} = 1$ for the three descriptors; $v_{min} = 1$ and $v_{max} = 5$; fixed thresholds $v_{fixed}^{crt} = v_{fixed}^{wl} = v_{fixed}^{expl} = \delta \cdot v_{max}$, with $\delta = 0$; and the valuations of the agents equal to those of the human being, i.e., $v_A^{crt} = v_H^{crt} = crt$, $v_A^{wl} = v_H^{wl}$, and $v_A^{expl} = v_H^{expl}$. These assumptions define the initial matrix (Equation (16)):
$$M_{A_x H_y} = \begin{pmatrix} \left( \dfrac{v_H^{crt} + v_H^{wl}}{3}, \dfrac{v_H^{crt} + v_H^{wl}}{3} \right) & \left( \dfrac{v_H^{crt} + v_H^{wl}}{3}, \dfrac{v_H^{expl}}{3} \right) \\[2ex] \left( \dfrac{v_H^{expl}}{3}, \dfrac{v_H^{crt} + v_H^{wl}}{3} \right) & \left( \dfrac{2 + v_H^{expl}}{3}, \dfrac{2 + v_H^{expl}}{3} \right) \end{pmatrix} \qquad (16)$$
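Under these assumptions, the entries of Equation (16) follow directly from the three descriptor values. The helper below is our own sketch (the division by 3 reflects the equal weighting of the three descriptors); it builds the two-player matrix for given criticality, workload, and experience levels, here for Configuration $c_1$ of Table 2.

```python
def initial_matrix(crt, wl, expl):
    """Initial two-player matrix of Equation (16), under the stated assumptions
    (equal weights, delta = 0, agent valuations equal to human valuations).
    Keys are (agent_strategy, human_strategy); values are (agent, human) payoffs."""
    return {('C', 'C'): ((crt + wl) / 3, (crt + wl) / 3),
            ('C', 'D'): ((crt + wl) / 3, expl / 3),
            ('D', 'C'): (expl / 3, (crt + wl) / 3),
            ('D', 'D'): ((2 + expl) / 3, (2 + expl) / 3)}

# Configuration c1 of Table 2: wl = expl = 1 for both humans and crt = 1.
print(initial_matrix(crt=1, wl=1, expl=1))
# approx. {(C,C): (0.67, 0.67), (C,D): (0.67, 0.33), (D,C): (0.33, 0.67), (D,D): (1.0, 1.0)}
```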

5.3. Building the Centralized Matrix for Two Agents and Two Humans

For this particular case, a team of two agents and two humans ($x \in \{1, 2\}$; $y \in \{1, 2\}$) is considered. The resulting matrix is defined by $\overline{M}_{A_2 H_2} = M_{A_1 H_1} \,\overline{+}\, M_{A_1 H_2} \,\overline{+}\, M_{A_2 H_1} \,\overline{+}\, M_{A_2 H_2}$. Table 3 presents a summary of the Nash equilibria for the four studied configurations. Recall, for example, that $CDDC$ denotes the following distribution of strategies: $s_{A_1} = C$, $s_{A_2} = D$, $s_{H_1} = D$, and $s_{H_2} = C$.
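The construction of the centralized game and the search for its Nash equilibria can be reproduced numerically. The sketch below encodes our reading of the equal-weight combination (the helper names are ours): the four two-player matrices are averaged entrywise, every agent receives the average agent-side payoff over all agent–human pairs (and symmetrically for the humans), and the pure-strategy Nash equilibria are found by exhaustive deviation checks. With identical matrices for all pairs (Configuration $c_1$), this reproduces the equilibria reported in Table 3.

```python
from itertools import product

def initial_matrix(crt, wl, expl):
    """Two-player matrix of Equation (16) (same helper as in the previous sketch)."""
    return {('C', 'C'): ((crt + wl) / 3, (crt + wl) / 3),
            ('C', 'D'): ((crt + wl) / 3, expl / 3),
            ('D', 'C'): (expl / 3, (crt + wl) / 3),
            ('D', 'D'): ((2 + expl) / 3, (2 + expl) / 3)}

def centralized_payoffs(matrices, n=2, m=2):
    """Centralized A2H2 game (our reading of the equal-weight combination).
    `matrices[(i, j)]` is the initial matrix of the pair (A_i, H_j); the matrices
    are averaged entrywise, and all agents (resp. humans) share the average
    payoff of their type."""
    keys = [('C', 'C'), ('C', 'D'), ('D', 'C'), ('D', 'D')]
    avg = {k: (sum(matrices[(i, j)][k][0] for i in range(n) for j in range(m)) / (n * m),
               sum(matrices[(i, j)][k][1] for i in range(n) for j in range(m)) / (n * m))
           for k in keys}
    payoffs = {}
    for profile in product('CD', repeat=n + m):      # (s_A1, s_A2, s_H1, s_H2)
        agents, humans = profile[:n], profile[n:]
        u_a = sum(avg[(s, t)][0] for s in agents for t in humans) / (n * m)
        u_h = sum(avg[(s, t)][1] for s in agents for t in humans) / (n * m)
        payoffs[profile] = (u_a,) * n + (u_h,) * m
    return payoffs

def pure_nash_equilibria(payoffs, eps=1e-9):
    """A profile is a Nash equilibrium if no single player can strictly improve
    its own payoff by switching between C and D (eps absorbs rounding noise)."""
    equilibria = []
    for profile, utilities in payoffs.items():
        improving = False
        for k in range(len(profile)):
            deviant = list(profile)
            deviant[k] = 'D' if profile[k] == 'C' else 'C'
            if payoffs[tuple(deviant)][k] > utilities[k] + eps:
                improving = True
                break
        if not improving:
            equilibria.append(''.join(profile))
    return equilibria

# Configuration c1: all descriptors at 1, hence identical matrices for all pairs.
M = initial_matrix(crt=1, wl=1, expl=1)
game = centralized_payoffs({(i, j): M for i in range(2) for j in range(2)})
print(pure_nash_equilibria(game))
# -> ['CCCC', 'CDCD', 'CDDC', 'DCCD', 'DCDC', 'DDDD'], as listed in Table 3
```

Under the same reading, feeding in the Configuration $c_2$ matrices (one matrix per human, built from the descriptor values of Table 2) leaves only $CCCC$ as a pure equilibrium, which is again consistent with Table 3.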
Configuration $c_1$ admits several Nash equilibria, since the intervention of one or more agents is not necessary. The matrix is as follows:
$$\begin{array}{c|cccccccc}
 & CCC & CCD & CDC & CDD & DCC & DCD & DDC & DDD \\ \hline
C & (0.67, 0.67, 0.67, 0.67) & (0.67, 0.67, 0.5, 0.5) & (0.67, 0.67, 0.5, 0.5) & (0.67, 0.67, 0.33, 0.33) & (0.5, 0.5, 0.67, 0.67) & (0.67, 0.67, 0.67, 0.67) & (0.67, 0.67, 0.67, 0.67) & (0.83, 0.83, 0.67, 0.67) \\
D & (0.5, 0.5, 0.67, 0.67) & (0.67, 0.67, 0.67, 0.67) & (0.67, 0.67, 0.67, 0.67) & (0.83, 0.83, 0.67, 0.67) & (0.33, 0.33, 0.67, 0.67) & (0.67, 0.67, 0.83, 0.83) & (0.67, 0.67, 0.83, 0.83) & (1, 1, 1, 1) \\
\end{array}$$
Configuration $c_2$ changes the utility calculations (expressed by Equations (12) and (13)) and gives the final matrix:
$$\begin{array}{c|cccccccc}
 & CCC & CCD & CDC & CDD & DCC & DCD & DDC & DDD \\ \hline
C & (2.67, 2.67, 2.67, 2.67) & (2.67, 2.67, 1.67, 1.67) & (2.67, 2.67, 1.67, 1.67) & (2.67, 2.67, 0.67, 0.67) & (1.67, 1.67, 2.67, 2.67) & (1.83, 1.83, 1.83, 1.83) & (1.83, 1.83, 1.83, 1.83) & (2, 2, 1, 1) \\
D & (1.67, 1.67, 2.67, 2.67) & (1.83, 1.83, 1.83, 1.83) & (1.83, 1.83, 1.83, 1.83) & (2, 2, 1, 1) & (0.67, 0.67, 2.67, 2.67) & (1, 1, 2, 2) & (1, 1, 2, 2) & (1.33, 1.33, 1.33, 1.33) \\
\end{array}$$
Configuration $c_3$ gives only two Nash equilibria: either both agents intervene, or neither does. Configuration $c_4$ leads to a single Nash equilibrium: the cooperation of both agents is then necessary.

5.4. Building the Distributed Matrix According to the Point of View of Each Agent

In the context of our illustration (two agents and two humans), the resulting matrix for Agent $A_i$ reduces (apart from the weighting) to the following calculation: $\overline{M}^{\,i}_{A_2 H_2} = M^{\,i}_{A_i H_1} \,\overline{+}\, M^{\,i}_{A_i H_2}$. This implicitly leads to a specific matrix for each agent: for Agent $A_1$, the calculation is defined by $M_{A_1 H_1} \,\overline{+}\, M_{A_1 H_2}$ (and similarly for $A_2$). Table 4 presents the results in terms of the Nash equilibria for the four configurations considered.
Configuration $c_1$, corresponding to the initial state of our scenario, assumes that all descriptors are at their minimum value. It is described by the following matrix, whose Nash equilibria leave the assistant agents free to intervene or not. Given this situation, the analysis matches the intuitive interpretation (there is no real reason for the agents to intervene).
$$\begin{array}{c|cccccccc}
 & CCC & CCD & CDC & CDD & DCC & DCD & DDC & DDD \\ \hline
C & (0.67, 0.67, 0.67, 0.67) & (0.67, 0.67, 0.67, 0.33) & (0.67, 0.67, 0.33, 0.67) & (0.67, 0.67, 0.33, 0.33) & (0.67, 0.33, 0.67, 0.67) & (0.67, 0.67, 0.67, 0.67) & (0.67, 0.67, 0.67, 0.67) & (0.67, 1, 0.67, 0.67) \\
D & (0.33, 0.67, 0.67, 0.67) & (0.67, 0.67, 0.67, 0.67) & (0.67, 0.67, 0.67, 0.67) & (0.67, 0.67, 0.67, 0.67) & (1, 0.67, 0.67, 0.67) & (0.33, 0.33, 0.67, 0.67) & (0.67, 0.67, 0.67, 1) & (1, 1, 1, 1) \\
\end{array}$$
For Configuration $c_2$, the environment evolves relatively quickly towards a critical situation; this configuration differs from the previous one in that the criticality is at its maximum value. Given the experience levels, and despite a low workload, the assistant agents must intervene (the selected joint action, i.e., the Nash equilibrium, is $CCCC$). Intuitively, given the urgency of the situation, the intervention of the two agents becomes necessary. We thus obtain the following matrix:
$$\begin{array}{c|cccccccc}
 & CCC & CCD & CDC & CDD & DCC & DCD & DDC & DDD \\ \hline
C & (2.67, 2.67, 2.33, 3) & (2.67, 2.67, 2.33, 0.33) & (2.67, 2.67, 1, 3) & (2.67, 2.67, 1, 0.33) & (2.67, 0.67, 2.33, 3) & (2.67, 1, 2.33, 0.67) & (2.67, 1, 1.33, 3) & (2.67, 1.33, 1.33, 0.67) \\
D & (0.67, 2.67, 2.33, 3) & (1, 2.67, 2.33, 0.67) & (1, 2.67, 1.33, 3) & (1.33, 2.67, 1.33, 0.67) & (0.67, 0.67, 2.33, 3) & (1, 1, 2.33, 1) & (1, 1, 1.67, 3) & (1.33, 1.33, 1.67, 1) \\
\end{array}$$
Over time, let us imagine that one of the two humans increases his/her experience level (as a result of learning from mistakes or following training, for example). This does not change the intuitive interpretation of the actions of the assistant agents. Configuration $c_3$, with a low criticality level, requires the intervention of one or both assistant agents. Configuration $c_4$ is a slight modification of the previous one, in which the criticality level increases from 1 to 3. The assistant agents then have to choose between intervening or not (strategy profiles $CCCC$ and $DDDD$).

6. Discussion

Theoretically, we assume that the descriptors are defined for all the agents, and we proposed evaluations showing their respective influence on the agents' strategic behaviors. In this study, we considered three descriptors (workload, experience level, and criticality), which were combined and weighted; the illustrations used to evaluate our approach relied on these three descriptors. We therefore introduced three examples of the construction of two-player (agent–human) matrices, each illustrating our proposal for a given descriptor. The first example used two descriptors from Category 1 that can be considered in many application domains: criticality and workload. The second example involved a descriptor from Category 2 that is also widely used in many fields: experience level.
However, the state of the art highlights a larger set of descriptors that could be taken into account, and we showed that our model remains valid for any number of descriptors. We thus completed our validation by analyzing the results for combinations of descriptors: two descriptors from the same category (criticality and workload), two descriptors from different categories (criticality and experience level), and, finally, the three descriptors mentioned above. Currently, when several descriptors are involved (e.g., workload and criticality), the same importance is given to each one, in order to simplify the presented examples. It is, of course, entirely possible to give each descriptor its own weighting, and a method could be developed to define the priorities of the descriptors according to the desired objective. Thus, in the case of high criticality, the intervention of the agent could be the priority, whether the human workload is medium or low (not just high). In this case, each agent could consider only a restricted subset of descriptors; the coefficients $\lambda_i$ concerned and the choice of these descriptors would then explicitly guide the agents' decisions. In the same way, the set of thresholds modifying the behavior of the agents for a particular descriptor could also change the agents' strategic behavior. Indeed, the sensitivity of the assistant agents has been defined according to a fixed threshold depending on a parameter $\delta$; the value of this parameter (varying from 0 to 1) determines the thresholds for their interventions. In our model, we assume that this parameter is unique for all descriptors, but it would be possible to associate a specific value with each descriptor (or even with each agent).
Generalization to any number of participants has also been investigated. We proposed a weighting of the different initial matrices by coefficients $\mu_{i,j}$. These coefficients make it possible to define a level of trust in the other interacting participants [60,61,62]. We could then propose a dynamic evolution of these coefficients according to the level of trust built up over the different interactions.

7. Conclusions

Interactions between humans and agents have been studied in different research domains (human–computer interaction, artificial intelligence, and multi-agent systems) and for different applications (for example, autonomous cooperative robotics and workflow modeling). Different approaches have been proposed to model teams of participants (agents and humans). The models described in these studies are often challenged by the specificities of humans (workload and experience level, for example) and of software agents (autonomy, for example). Recently, studies on AI and MAS have considered human-compatible agents. We think that game theory is a well-suited approach for studying social interactions in such teams. The idea is based on the equilibrium concept, i.e., finding the strategies that allow a group of participants to maximize their own benefits (utilities). This mathematical approach has also been investigated for various real applications based on interactions between agents (e.g., traffic management, resource allocation, social networks, and urban planning). These approaches are often based on the empirical construction of a two-player matrix, and the corresponding studies only verify the good behavior of the constructed matrix. Human intervention is usually not directly considered, nor included in the analyses of these matrices. We argue that human characteristics should be defined in the decision processes from the outset, and that the agents should also be allowed to decide whether or not to cooperate. The assistance of one or more humans by one or more agents was the subject of this paper.
The main objective was to model the agents' interventions, in order to assist humans in the context of task realization. This modeling can be qualified as generic with regard to the intervention criteria used by the agents. To determine the decisions of the agents (and, therefore, whether or not they intervene), we proposed considering matrix games. These depend on the descriptors (acting as criteria) defined by the designer for a given application. These descriptors are classified into two categories. For the first category, the minimum value of the scale does not require any particular attention, whereas the maximum value generally requires an intervention by the agent(s); the second category of descriptors admits the dual interpretation. The set of descriptors we propose to use is thus based on an explicit combination of all interactions between agents and humans (represented by two-player matrices). We also proposed a model based on a weighted sum of the matrices (each associated with an agent–human interaction) corresponding to the predefined descriptors. Finally, we described a general model ($n$ agents and $m$ humans) allowing any number of participants and any number of descriptors. The exploitation of the model was illustrated using a typical scenario, functioning as a proof of concept. This scenario considered a matrix with two agents and two humans, using the same three descriptors. We examined the obtained equilibria according to two different approaches (centralized or distributed), depending on the point of view: global, or relative to a given agent.
The proposed model is generic but opens up two possible extensions, leading to complementary research. On the one hand, it would be possible to take into account the feedback of an agent's action and to interpret the result with respect to the decision determined by the selected Nash equilibrium. The action (with or without cooperation) could, in this case, be judged irrelevant (for example, instead of intervening to help the human, it might have been better for the agent to wait). On the other hand, instead of each agent intervening individually, multi-agent coalitions [63,64,65] could be formed to assist a human or a group of humans in the achievement of a task, allowing the other agents to focus on other tasks. We plan to implement and test such configurations in different application domains.

Author Contributions

Conceptualization, E.A., M.R., C.K. and R.M.; methodology, E.A., M.R., C.K. and R.M.; software, E.A. and R.M.; validation, E.A. and R.M.; writing—original draft preparation, E.A., M.R., C.K. and R.M.; writing—review and editing, E.A., M.R., C.K. and R.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors thank the reviewers for their constructive comments.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Razakatiana, M.; Kolski, C.; Mandiau, R.; Mahatody, T. Game theory-based human-assistant agent interaction model: Feasibility study for a complex task. In Proceedings of the HAI ’20: 8th International Conference on Human-Agent Interaction, Virtual Event, Australia, 10–13 November 2020; Obaid, M., Mubin, O., Nagai, Y., Osawa, H., Abdelrahman, Y., Fjeld, M., Eds.; ACM: New York, NY, USA, 2020; pp. 187–195. [Google Scholar] [CrossRef]
  2. Razakatiana, M.; Kolski, C.; Mandiau, R.; Mahatody, T. Human-agent interaction based on game theory: Case of a road traffic supervision task. In Proceedings of the 13th International Conference on Human System Interaction, HSI 2020, Tokyo, Japan, 6–8 June 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 88–93. [Google Scholar] [CrossRef]
  3. Murphy, R.R. Human-robot interaction in rescue robotics. IEEE Trans. Syst. Man Cybern. Part C 2004, 34, 138–153. [Google Scholar] [CrossRef]
  4. Adam, E.; Mandiau, R. Design of a MAS into a Human Organization: Application to an Information Multi-agent System. In Proceedings of the Agent-Oriented Information Systems, 5th International Bi-Conference Workshop, AOIS 2003, Melbourne, Australia, 14 July 2003; Chicago, IL, USA, 13 October 2003. Giorgini, P., Henderson-Sellers, B., Winikoff, M., Eds.; Revised Selected Papers, Lecture Notes in Computer Science. Springer: Berlin/Heidelberg, Germany, 2003; Volume 3030, pp. 1–15. [Google Scholar] [CrossRef]
  5. Rosenfeld, A.; Richardson, A. Explainability in human–agent systems. Auton. Agents Multi-Agent Syst. 2019, 33, 673–705. [Google Scholar] [CrossRef]
  6. Chaib-draa, B.; Moulin, B.; Mandiau, R.; Millot, P. Trends in distributed artificial intelligence. Artif. Intell. Rev. 1992, 6, 35–66. [Google Scholar] [CrossRef]
  7. Müller, J.P. Architecture and application of intelligent agent: A survey. Knowl. Eng. Rev. 1998, 13, 353–380. [Google Scholar] [CrossRef]
  8. van der Hoek, W.; Wooldridge, M.J. Multi-agent systems. In Handbook of Knowledge Representation; van Harmelen, F., Lifschitz, V., Porter, B.W., Eds.; Foundations of Artificial Intelligence; Elsevier: Amsterdam, The Netherlands, 2008; Volume 3, pp. 887–928. [Google Scholar] [CrossRef]
  9. Hutter, M. Open Problems in Universal Induction & Intelligence. Algorithms 2009, 2, 879–906. [Google Scholar]
  10. Wooldridge, M. An introduction to MultiAgent Systems; Wiley: Hoboken, NJ, USA, 2009. [Google Scholar]
  11. Maybury, M.; Wahlster, W. Readings in Intelligent User Interfaces; Morgan Kaufmann: Burlington, MA, USA, 1998. [Google Scholar]
  12. Lew, M.; Bakker, E.M.; Sebe, N.; Huang, T.S. Human-computer intelligent interaction: A survey. In Proceedings of the International Workshop on Human-Computer Interaction, Rio de Janeiro, Brazil, 20 October 2007. [Google Scholar]
  13. Boy, G.A. Human-centered design of complex systems: An experience-based approach. Des. Sci. 2017, 3, 147–154. [Google Scholar] [CrossRef]
  14. Völkel, S.T.; Schneegass, C.; Eiband, M.; Buschek, D. What is “intelligent” in intelligent user interfaces?: A meta-analysis of 25 years of IUI. In Proceedings of the IUI ’20: 25th International Conference on Intelligent User Interfaces, Cagliari, Italy, 17–20 March 2020; Paternò, F., Oliver, N., Conati, C., Spano, L.D., Tintarev, N., Eds.; ACM: New York, NY, USA, 2020. [Google Scholar]
  15. Mandiau, R.; Kolski, C.; Chaib-Draa, B.; Millot, P. A new approach for the cooperation between human(s) and assistance system(s): A system based on intentional states. In Proceedings of the World Congress on Expert Systems, Orlando, FL, USA, 16–19 December 1991. [Google Scholar]
  16. Millot, P.; Mandiau, R. Man-Machine Cooperative Organizations: Formal and Pragmatic Implementation Methods; Chapter Expertise and Technology: Cognition & Human-Computer Cooperation; Lawrence Erlbaum Associates: London, UK, 1995; pp. 213–228. [Google Scholar]
  17. Azaria, A.; Gal, Y.; Kraus, S.; Goldman, C.V. Strategic advice provision in repeated human-agent interactions. Auton. Agent Multiagent Syst. 2015, 30, 4–29. [Google Scholar] [CrossRef]
  18. Kolski, C.; Boy, G.; Mélançon, G.; Ochs, M.; Vanderdonckt, J. Cross-fertilisation between human-computer interaction and artificial intelligence. In A Guided Tour of Artificial Intelligence Research; Marquis, P., Papini, O., Prade, H., Eds.; Springer: Berlin/Heidelberg, Germany, 2020; Volume 3, pp. 1117–1141. [Google Scholar]
  19. Badeig, F.; Adam, E.; Mandiau, R.; Garbay, C. Analyzing multi-agent approaches for the design of advanced interactive and collaborative systems. J. Ambient Intell. Smart Environ. 2016, 8, 325–346. [Google Scholar] [CrossRef]
  20. Kubicki, S.; Lebrun, Y.; Lepreux, S.; Adam, E.; Kolski, C.; Mandiau, R. Simulation in contexts involving an interactive table and tangible objects. Simul. Model. Pract. Theory 2013, 31, 116–131. [Google Scholar] [CrossRef]
  21. Holzinger, A.; Plass, M.; Kickmeier-Rust, M.; Holzinger, K. Interactive machine learning: Experimental evidence for the human in the algorithmic loop: A case study on Ant Colony Optimization. Appl. Intell. 2019, 49, 2401–2414. [Google Scholar] [CrossRef]
  22. Russell, S. It's Not Too Soon to Be Wary of AI. IEEE Spectrum 2019, 56, 47–51. [Google Scholar]
  23. Russell, S. Human-Compatible Artificial Intelligence. In Human-Like Machine Intelligence; Muggleton, S.H., Chater, N., Eds.; Oxford University Press: Oxford, UK, 2022; pp. 3–23. [Google Scholar]
  24. Russell, S. Artificial Intelligence and the Problem of Control. In Perspectives on Digital Humanism; Werthner, H., Prem, E., Lee, E.A., Ghezzi, C., Eds.; Springer: Berlin/Heidelberg, Germany, 2022; pp. 19–24. [Google Scholar]
  25. Mandiau, R.; Champion, A.; Auberlet, J.; Espié, S.; Kolski, C. Behaviour based on decision matrices for a coordination between agents in a urban traffic simulation. Appl. Intell. 2008, 28, 121–138. [Google Scholar] [CrossRef]
  26. Osborne, M.J. An Introduction to Game Theory; MIT Press: Cambridge, MA, USA, 2004. [Google Scholar]
  27. Kaviari, F.; Mesgari, M.S.; Seidi, E.; Motieyan, H. Simulation of urban growth using agent-based modeling and game theory with different temporal resolutions. Cities 2019, 95, 102387. [Google Scholar] [CrossRef]
  28. Tan, R.; Liu, Y.; Zhou, K.; Jiao, L.; Tang, W. A game-theory based agent-cellular model for use in urban growth simulation: A case study of the rapidly urbanizing Wuhan area of central China. Comput. Environ. Urban Syst. 2015, 49, 15–29. [Google Scholar] [CrossRef]
  29. Shults, F.L.; Gore, R.; Wildman, W.J.; Lynch, C.J.; Lane, J.E.; Toft, M.D. A Generative Model of the Mutual Escalation of Anxiety Between Religious Groups. J. Artif. Soc. Soc. Simul. 2018, 21, 1–25. [Google Scholar] [CrossRef]
  30. Schelling, T.C. Dynamic models of segregation. J. Math. Sociol. 1971, 1, 143–186. [Google Scholar] [CrossRef]
  31. Schelling, T.C. Micromotives and Macrobehavior; W. W. Norton and Company: New York, NY, USA, 2006. [Google Scholar]
  32. Axelrod, R. The dissemination of culture: A model with local convergence and global polarization. J. Confl. Resolut. 1997, 41, 203–226. [Google Scholar] [CrossRef]
  33. Lemos, C.M.; Gore, R.J.; Lessard-Phillips, L.; Shults, F.L. A network agent-based model of ethnocentrism and intergroup cooperation. Qual. Quant. 2020, 54, 463–489. [Google Scholar] [CrossRef]
  34. Santos, F.P.; Santos, F.C.; Pacheco, J.M. Social norms of cooperation in small-scale societies. PLoS Comput. Biol. 2016, 12, 1–13. [Google Scholar] [CrossRef]
  35. Watts, D.J.; Strogatz, S.H. Collective dynamics of ‘small-world’ networks. Nature 1998, 393, 440–442. [Google Scholar] [CrossRef]
  36. Akhbari, M.; Grigg, N.S. A framework for an agent-based model to manage water resources conflicts. J. Water Resour. Manag. 2013, 27, 4039–4052. [Google Scholar] [CrossRef]
  37. Anebagilu, P.K.; Dietrich, J.; Prado-Stuardo, L.; Morales, B.; Winter, E.; Arumi, J.L. Application of the theory of planned behavior with agent-based modeling for sustainable management of vegetative filter strips. J. Environ. Manag. 2021, 284, 112014. [Google Scholar] [CrossRef] [PubMed]
  38. Noori, M.; Emadi, A.; Fazloula, R. An agent-based model for water allocation optimization and comparison with the game theory approach. Water Supply 2021, 21, 3584–3601. [Google Scholar] [CrossRef]
  39. Kefi, H.; Besson, E.; Sokolova, K.; Chiraz, A.M. Privacy and intelligent virtual assistants usage across generations. Systèmes Inf. Manag. 2021, 26, 43–76. [Google Scholar] [CrossRef]
  40. Roentgen, U.R.; Gelderblom, G.J.; Soede, M.; de Witte, L.P. Inventory of electronic mobility aids for persons with visual impairments: A literature review. J. Vis. Impair. Blind. 2008, 102, 702–724. [Google Scholar] [CrossRef]
  41. Dhiman, H.; Wächter, C.; Fellmann, M.; Röcker, C. Intelligent assistants. Bus. Inf. Syst. Eng. 2022, 64, 645–665. [Google Scholar] [CrossRef]
  42. Wandke, H. Assistance in human–machine interaction: A conceptual framework and a proposal for a taxonomy. Theor. Issues Ergon. Sci. 2007, 6, 129–155. [Google Scholar] [CrossRef]
  43. Lecerf, U. Robust Learning for Autonomous Agents in Stochastic Environments. Ph.D. Thesis, Sorbonne University, Paris, France, 2022. [Google Scholar]
  44. Eckhoff, R.K. Explosion Hazards in the Process Industries; Gulf Professional Publishing: Houston, TX, USA, 2016. [Google Scholar]
  45. Gursel, E.; Reddy, B.; Khojandi, A.; Madadi, M.; Coble, J.B.; Agarwal, V.; Yadav, V.; Boring, R.L. Using artificial intelligence to detect human errors in nuclear power plants: A case in operation and maintenance. Nucl. Eng. Technol. 2023, 55, 603–622. [Google Scholar] [CrossRef]
  46. Masson, M.; de Keyser, V. Human error: Lesson learned from a field study for the specification of an intelligent error prevention system. In Proceedings of the Advances in Industrial Ergonomics and Safety IV; Taylor and Francis: London, UK, 1992; pp. 1085–1092. [Google Scholar]
  47. Bastien, J.M.C.; Scapin, D.L. Evaluating a user interface with ergonomic criteria. Int. J. Hum.-Comput. Interact. 1995, 7, 105–121. [Google Scholar] [CrossRef]
  48. Rubio, S.; Diaz, E.; Martin, J.; Puente, J.M. Evaluation of subjective mental workload: A comparison of SWAT, NASA-TLX, and workload profile methods. Appl. Psychol. 2004, 53, 61–86. [Google Scholar] [CrossRef]
  49. Maes, P. Agents that reduce work and information overload. Commun. ACM 1994, 37, 30–40. [Google Scholar] [CrossRef]
  50. Dennis, A.; Wixom, B.; Roth, R.M. Systems Analysis and Design, 8th ed.; Wiley: Hoboken, NJ, USA, 2021. [Google Scholar]
  51. EU. Directive 2012/18/EU of the European Parliament and of the Council of 4 July 2012 on the Control of Major-Accident Hazards Involving Dangerous Substances, Amending and Subsequently Repealing Council Directive 96/82/EC Text with EEA Relevance; Techreport; European Union: Brussels, Belgium, 2012. [Google Scholar]
  52. Adama, K.Y.; Konaté, J.; Maïga, O.Y.; Tembiné, H. Efficient Strategies Algorithms for Resource Allocation Problems. Algorithms 2020, 13, 270. [Google Scholar] [CrossRef]
  53. von Neumann, J.; Morgenstern, O. Theory of Games and Economic Behavior; Princeton University Press: Princeton, NJ, USA, 1944. [Google Scholar]
  54. Nash, J. Equilibrium points in n-person games. Proc. Natl. Acad. Sci. USA 1950, 36, 48–49. [Google Scholar] [CrossRef]
  55. Yuan, A.; Cao, L.; Wang, X. Game-theory-based multi-agent interaction model. Jisuanji Gongcheng/Comput. Eng. 2005, 31, 50–51. [Google Scholar]
  56. Szilagyi, M.N. Investigation of N-Person Games by Agent-based Modeling. Complex Syst. 2012, 21, 201–243. [Google Scholar] [CrossRef]
  57. Hamila, H.; Grislin-Le Strugeon, E.; Mandiau, R.; Mouaddib, A. Strategic dominance and dynamic programming for multi-agent planning, application to the multi-robot box-pushing problem. In Proceedings of the ICAART 2012, 4th International Conference on Agents and Artificial Intelligence, Vilamoura, Algarve, Portugal, 6–8 February 2012. [Google Scholar]
  58. Shoham, Y.; Leyton-Brown, K. Multiagent Systems: Algorithmic, Game-Theoretic, and Logical Foundations; Cambridge University Press: Cambridge, UK, 2008. [Google Scholar]
  59. Dufour, R.; Dufour, R. Learning by Doing: A Handbook for Professional Learning Communities at Work; Solution Tree Press: Bloomington, IN, USA, 2013. [Google Scholar]
  60. Ramchurn, S.D.; Huynh, D.; Jennings, N.R. Trust in multi-agent systems. Knowl. Eng. Rev. 2004, 19, 1–25. [Google Scholar] [CrossRef]
  61. Granatyr, J.; Botelho, V.; Lessing, O.R.; Scalabrin, E.E.; Barthes, J.P.; Enembreck, F. Trust and Reputation Models for Multi-Agent Systems. ACM Comput. Surv. 2015, 48, 27:1–27:42. [Google Scholar] [CrossRef]
  62. Chen, M.; Yin, C.; Zhang, J.; Nazarian, S.; Deshmukh, J.; Bogdan, P. A General Trust Framework for Multi-Agent Systems. In Proceedings of the AAMAS ’21: Proceedings of the 20th International Conference on Autonomous Agents and MultiAgent Systems, Virtual Event, UK, 3–7 May 2021; Endriss, U., Nowé, A., Dignum, F., Lomuscio, A., Eds.; IFAAMAS: Richland, SC, USA, 2021; pp. 332–340. [Google Scholar]
  63. Aknine, S.; Pinson, S.; Shakun, M. A Multi-Agent Coalition Formation Method Based on Preference Models. Group Decis. Negot. 2004, 13, 513–538. [Google Scholar] [CrossRef]
  64. Guéneron, J.; Bonnet, G. Un protocole de concessions monotones pour la formation distribuée de coalitions. In Proceedings of the SMA et Smart Cities—Trentièmes Journées Francophones sur les Systèmes Multi-Agents, JFSMA 2022, Saint-Etienne, France, 27–29 June 2022; Camps, V., Ed.; Cépaduès: Toulouse, France, 2022; pp. 31–40. [Google Scholar]
  65. Sarkar, S.; Malta, M.C.; Dutta, A. A survey on applications of coalition formation in multi-agent systems. Concurr. Comput. Pract. Exp. 2022, 34, e6876. [Google Scholar] [CrossRef]
Figure 1. UML activity diagram for the representation of interactions.
Figure 2. UML sequence diagram for the representation of interactions.
Figure 3. Evolution of the level of intervention (i.e., the decision is C) for the agent.
Figure 4. Evolution of an agent intervention rate ($v = v_{min} = 1$).
Figure 5. Degrees of cooperation for Agent A.
Figure 6. Evolution of the agent's intervention level for different values of $\lambda_{crt}$ and $\lambda_{wl}$.
Figure 7. Evolution of the agent's intervention level for different values of $\lambda_{crt}$ and $\lambda_{expl}$.
Figure 8. Evolution of the agent's intervention level for different values of $\lambda_{crt}$, $\lambda_{wl}$, and $\lambda_{expl}$.
Figure 9. Degree of cooperation for the agent; cases with combinations of descriptors.
Table 1. Classification of descriptors into two categories (those used to illustrate the proposed approach are criticality, workload, and experience level).

Category 1: Criticality, Workload, Disability, Stochastic environment, Human errors
Category 2: Experience level, Privacy, Usability, Performance, Reliability of the system
Table 2. Description of typical configurations for $A_2H_2$.

Configuration   wl_1   wl_2   expl_1   expl_2   crt
c_1              1      1      1        1        1
c_2              2      4      3        1        5
c_3              2      4      3        2        1
c_4              2      2      3        2        3
Table 3. Determination of equilibria for typical configurations in the $A_2H_2$ case (centralized approach).

Configuration   wl_1   wl_2   expl_1   expl_2   crt   Nash Equilibria
c_1              1      1      1        1        1    CCCC, CDCD, CDDC, DCCD, DCDC, DDDD
c_2              2      4      3        1        5    CCCC
c_3              2      4      3        2        1    CCCC, DDDD
c_4              2      2      3        2        3    CCCC
Table 4. Description of typical configurations for $A_2H_2$ (distributed view).

Configuration   wl_1   wl_2   expl_1   expl_2   crt   Nash Equilibria
c_1              1      1      1        1        1    CCCC, CDCD, CDDC, DCCD, DCDC, DDDD
c_2              2      4      3        1        5    CCCC
c_3              2      4      3        2        1    CCCC, CCDC
c_4              2      2      3        2        3    CCCC, DDDD