Article

Knowledge Dynamics and Behavioural Equivalences in Multi-Agent Systems

1 Institute of Computer Science, Romanian Academy, 700505 Iaşi, Romania
2 Faculty of Computer Science, Alexandru Ioan Cuza University, 700506 Iaşi, Romania
* Author to whom correspondence should be addressed.
Mathematics 2021, 9(22), 2869; https://doi.org/10.3390/math9222869
Submission received: 25 September 2021 / Revised: 25 October 2021 / Accepted: 8 November 2021 / Published: 11 November 2021
(This article belongs to the Special Issue Applied and Computational Mathematics for Digital Environments)

Abstract: We define a process calculus to describe multi-agent systems with timeouts for communication and mobility that are able to handle knowledge. The knowledge of an agent is represented as sets of trees whose nodes carry information; it is used to decide the interactions with other agents. The evolution of the system, with exchanges of knowledge between agents, is given by the operational semantics, which captures the concurrent executions by a multiset of actions in a labelled transition system. Several results concerning the relationship between the agents and their knowledge are presented. We introduce and study some specific behavioural equivalences in multi-agent systems, including a knowledge equivalence able to distinguish two systems based on the interaction of the agents with their local knowledge.

1. Introduction

Process calculi are used to describe concurrent systems, providing a high-level description of interactions, communications and synchronizations between independent processes or agents. The main features of a process calculus are: (i) interactions between agents/processes are by communication (message-passing), rather than modifying shared variables; (ii) large systems are described in a compositional way by using a small number of primitives and operators; (iii) processes can be manipulated by using equational reasoning and behavioural equivalences. The key primitive distinguishing the process calculi from other models of computation is the parallel composition. The compositionality offered by the parallel composition can help to describe large systems in a modular way, and to better organize their knowledge (for reasoning about them).
In this paper we define an extension of the process calculus TiMo [1] in order to model multi-agent systems and their knowledge. In this framework, the agents can move between locations and exchange information, having explicit timeouts for both migration and communication. Additionally, they have a knowledge of the network used to decide the next interactions with other agents. The knowledge of the agents is inspired by a model of semi-structured data [2] in which it is given by sets of unordered trees containing pairs of labels and values in each node. In our approach, the knowledge is described via sets of trees used to exchange information among agents about migration and communication. Overall, we present a formal way to describe the behaviour of mobile communicating agents and networks of agents in a compositional manner.
A network of mobile agents is a distributed environment composed of locations where several agents act in parallel. Each agent is represented by a process together with its knowledge, which is used to decide interactions with other agents. Taking advantage of the existing theory of parallel and concurrent systems, we define a prototyping language for multi-agent systems, presented as a process calculus in concurrency theory. Its semantics is given formally by a labelled transition system; in this way we describe the behaviour of the entire network and prove some useful properties.
In concurrency, the behavioural equality of two systems is captured by using bisimulations. Bisimulations are important contributions to computer science that appeared as refinements of ‘structure-preserving’ mappings (morphisms) in mathematics; they can be applied to new fields of study, including multi-agent systems. Bisimilarity is the finest behavioural equivalence; it abstracts from certain details of the systems, focusing only on the studied aspects. Equivalence relations should be compositional: if two systems are equivalent, then the systems obtained by composing each of them with a third system should also be equivalent. This compositional reasoning allows for the development of complex systems in which each component can be replaced by an equivalent one. Furthermore, bisimilarity enjoys efficient checking algorithms and compositionality properties; such algorithms are usually used to minimize the state space of systems. These are good reasons to define and study specific behavioural equivalences for multi-agent systems enhanced with a knowledge of the network used to decide the next interactions. To be more realistic, we consider systems of agents with timing constraints on migration and communication. Therefore, a notable advantage of using our framework to model systems of mobile agents is the possibility to naturally express compositionality, mobility, local communication, timeouts, knowledge, and equivalences between systems in a given interval of time (up to a timeout).
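To give a concrete flavour of bisimilarity checking, the following sketch implements a naive partition-refinement test for strong bisimilarity on a finite labelled transition system; the states, transitions and the function name are illustrative assumptions, not part of the calculus defined below.

```python
# Naive partition-refinement check for strong bisimilarity on a finite LTS.
# States, labels and transitions below are illustrative, not from the paper.

def bisimilar(states, trans, s, t):
    """trans is a set of (source, label, target) triples."""
    # Start from the coarsest partition: all states in one block.
    blocks = [set(states)]
    changed = True
    while changed:
        changed = False
        new_blocks = []
        for block in blocks:
            # Split a block by the set of (label, target-block) moves.
            def signature(q):
                return frozenset(
                    (a, i)
                    for (p, a, r) in trans if p == q
                    for i, b in enumerate(blocks) if r in b
                )
            groups = {}
            for q in block:
                groups.setdefault(signature(q), set()).add(q)
            if len(groups) > 1:
                changed = True
            new_blocks.extend(groups.values())
        blocks = new_blocks
    # Bisimilar states end up in the same block of the stable partition.
    return any(s in b and t in b for b in blocks)

states = {"p0", "p1", "q0", "q1", "q2"}
trans = {("p0", "a", "p1"), ("q0", "a", "q1"), ("q0", "a", "q2")}
print(bisimilar(states, trans, "p0", "q0"))  # True
```

Here p0 and q0 are bisimilar because every a-move of one can be matched by the other into deadlocked states; adding a b-move to q1 would break the equivalence.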
The paper is structured as follows: Section 2 presents the syntax and semantics of the new process calculus knowTiMo and provides some results regarding the timing and knowledge aspects of the evolution. In Section 3 we define and study various bisimulations for the multi-agent systems described in knowTiMo. The conclusion, related work and references end the article.

2. The New Process Calculus knowTiMo

In order to model the evolution of multi-agent systems handling knowledge, timed communication and timed migration, we define a process calculus named knowTiMo, where know stands for ‘knowledge’ and TiMo stands for the family of calculi introduced in [1] and developed in several articles.
In Table 1 we present the syntax of knowTiMo, where:
  • Loc = { l, l′, … } is a set of distributed locations or location variables, Chan = { a, b, … } is a set of channels used for communication among agents, Id = { id, … } is a set of names used to denote recursive processes, and N = { N, N′, … } is a set of networks;
  • a unique process definition id(u1, …, u_{m_id}) =def P_id is available for each id ∈ Id;
  • timeouts of actions are denoted by t ∈ ℕ; thresholds appearing in tests are denoted by k ∈ ℤ; variables are denoted by u; expressions (over values, variables and allowed operations) are denoted by v; fields are denoted by f; paths of fields are denoted by p and are used to retrieve/update the values of the fields. Also, if Q ∈ Id and Q(u) is a process definition, then for v1 ≠ v2 we obtain two different process instances Q(v1) and Q(v2).
An agent A is a pair P K, where A behaves as prescribed by P and K is the knowledge used by process P during its execution. An agent A = go t l′ then P K is ready to migrate from its current location to the location l′ by consuming the action go t l′ of agent A. In go t, the timer t indicates the fact that agent A is unavailable for t units of time at the current location; then, once the timer t expires, go t l′ then P executes process P at the new location l′. Since l′ can be a location variable, it may be instantiated after communication between agents. The use of location variables allows agents to adapt their behaviours based on the interactions among agents.
An agent A = a Δt ! v then P else Q K is available for up to t units of time to communicate on channel a the value v to another agent A′ = a Δt′ ? (u) then P′ else Q′ K′ available for communication at the same location and awaiting a value on the same communication channel a. In order to simplify the presentation in this paper, we consider a synchronous calculus; this means that when a communication takes place, the message sent by one process is instantly received by the other process. If the communication happens, then agent A executes process P, while agent A′ executes process P′ by making use of the received value v. If the timers t and t′ of the agents A and A′ expire, then they execute processes Q and Q′, respectively.
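The synchronous communication just described can be sketched as follows: an output prefix and an input prefix synchronise only when they share the location and the channel, and the receiver continues with the received value substituted for its variable. The tuple encoding and the helper name try_com are assumptions made for illustration, not the paper's syntax.

```python
# Illustrative sketch of a (Com)-style step: an output a!v and an input a?(u)
# at the same location synchronise.  The continuation of the receiver is
# modelled as a function of the received value, playing the role of {v/u}P.

def try_com(out_agent, in_agent):
    loc1, chan1, value = out_agent       # (location, channel, sent value)
    loc2, chan2, _, cont = in_agent      # (location, channel, variable, continuation)
    if loc1 == loc2 and chan1 == chan2:
        # The successful step is labelled channel!?@location, as in the semantics.
        return (chan1 + "!?@" + loc1, cont(value))
    return None                          # no synchronisation possible

sender = ("office", "b", "office1")
receiver = ("office", "b", "newloc", lambda v: "go 5 " + v)
print(try_com(sender, receiver))         # ('b!?@office', 'go 5 office1')
```

Note how the location variable of the receiver is instantiated only at communication time, which is what allows agents to adapt their migration targets dynamically.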
An agent A = if test then P else Q K uses its knowledge K to check the truth value of the test. If the value is true, then agent A executes process P, while if the value is false, then agent A executes process Q.
The agent A = create ( f v ; ) then P K extends its knowledge K by adding the new piece of knowledge f v ; in parallel with K, and then executes process P. The agent A = update ( p / f , v ) then P K updates its knowledge K by adding the value v into the field identified by f reached following path p / f , and then executes process P; if the field f does not exist, then the field is created and the value v is assigned to it. The agent A = 0 K has no actions to execute, and its evolution terminates.
The knowledge K of an agent A is used either for storing information needed for communication with other agents or for deciding what process to execute. We define the knowledge as sets of trees in which the nodes carrying the information are of two types: f ε ; K′ and f v ; K′. Both types of nodes contain a field f and a knowledge K′; they differ only in the value stored in the field f, which can be either the symbol ε indicating the empty value, or a non-empty value v. An agent A = P K can use the information stored in its knowledge K to perform tests. For example, a test K(p/f) > k is true only if, following a path p in knowledge K, the value stored in the field f is greater than k (otherwise, it is evaluated to false); a path is used to select a node in knowledge K. Predicates, always embedded in square brackets and attached to fields in a path, are used to analyze either the value of the current node by using p[test(p)] or the values of the inner nodes by using p[test(p/f)]. We say that a knowledge K is included in another knowledge K′ (denoted K ⊆ K′) if for all paths p appearing in K it holds that K(p) = K′(p).
In Table 1 there exists only one possibility to bind variables; namely, the variable u of the process a Δt ? (u) then P else Q is bound within process P, while it is not bound within process Q. We denote by fv(P) and fv(N) the sets of free variables appearing in process P and network N, respectively. Moreover, we impose that fv(P_id) ⊆ { u1, …, u_{m_id} }, where id(u1, …, u_{m_id}) =def P_id. We denote by {v/u}P the process P having all the free occurrences of the variable u replaced by value v, possibly after using α-conversion to avoid name clashes in process P.
A network is composed of distributed locations, where l [ [ A ˜ ] ] denotes a location l containing a set A ˜ of agents, while l [ [ 0 ] ] denotes a location without any agents. Over the set N of networks we define the structural equivalence ≡ as the smallest congruence satisfying the equalities:
l[[ Ã 0 ]] ≡ l[[ Ã ]],   l[[ Ã ]] | l[[ B̃ ]] ≡ l[[ Ã B̃ ]],
N ≡ N,   N | N′ ≡ N′ | N,   (N | N′) | N″ ≡ N | (N′ | N″).
The structural congruence ≡ is needed when using the operational semantics presented in Table 2 and Table 3 for either executing actions or indicating time passing. In Table 2 the relation N ⟶Λ N′ denotes the transformation of a network N into a network N′ by executing the actions from the multiset of actions Λ; if the multiset of actions Λ contains only a single action λ, namely Λ = {λ}, then we use N ⟶λ N′ instead of N ⟶{λ} N′.
The operational semantics of knowTiMo is presented in Table 2.
In rule (Stop), l [ [ 0 ] ] denotes a network without agents, and thus marks the fact that no action is available for execution. Rule (Com) is used if at location l two agents A 1 = a Δ t 1 ! v then P 1 else Q 1 K 1 and A 2 = a Δ t 2 ? ( u ) then P 2 else Q 2 K 2 can communicate successfully over channel a. After communication, both agents remain at the current location l with their knowledge unchanged; agent A 1 executes P 1 , while agent A 2 executes { v / u } P 2 . The successful communication over channel a at location l is marked by label a ! ? @ l .
Rules (Put0) and (Get0) are used for an agent A = a Δ0 ∗ then P else Q K (where ∗ ∈ { !v, ?(u) }) to remove the action a Δ0 ∗ when its timer expires. Afterwards, agent A is ready to execute Q. Knowledge K remains unchanged. Since rule (Com) can be applied even if t1 and t2 are zero, it follows that when a timer is 0, only one of the rules (Com), (Put0) and (Get0) is chosen for application in a nondeterministic manner.
Rule (Move0) is used when at location l an agent A = go 0 l′ then P K migrates to location l′ to execute process P. Rules (IfT) and (IfF) are used when an agent A = if test then P else Q K should decide what process to execute (P or Q) based on the Boolean value returned by test @ K; this value is determined by performing the test on the knowledge K of agent A. Notice that in order to perform a test, the agent A can only read its knowledge K.
Rule (Create) is used when an agent A = create ( f v ; ) then P K extends its knowledge K with f v ; ; afterwards, the agent A executes process P.
Rule (Update) is used when an agent A = update ( p / f , v ) then P K updates to v the value of K ( p / f ) of the existing field f, while rule (Extend) is used when the agent A = update ( p / f , v ) then P K expands (at the end of) an existing path p with a field f such that K ( p / f ) = v ; afterwards the agent A executes process P.
Rule (Call) is used when an agent A = id(v) K is ready to unfold the process id(v) into {v/u}P_id. Rule (Par) is used to put together the behaviour of smaller subnetworks, while rule (Equiv) is used to apply the structural congruence over networks.
Table 3 presents the rules describing time passing, during which the knowledge of the involved agents remains unchanged. The relation N ⇝t N′ indicates the transformation of a network N into a network N′ after t units of time.
In rule (DStop), l[[0]] denotes a network without agents; the passing of time does not affect such a network. Rules (DPut), (DGet) and (DMove) are used to decrease the timers of actions, while rule (DPar) is used to put together the behaviour of composed networks. In rule (DPar), N1 | N2 ↛ denotes that the network N1 | N2 cannot execute any action; this is possible because the use of negative premises in our operational semantics does not lead to inconsistencies.
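The time-passing rules can be sketched as a uniform decrement of timers, subject to the side condition that no timer becomes negative; a time step is therefore bounded by the smallest timer in the network. The list encoding and helper names below are assumptions made only for illustration.

```python
# Illustrative sketch of the time-passing rules: a timed prefix carries a
# timer, and a time step of t units decreases every timer by t, provided no
# timer would go below zero (the side condition t' - t >= 0).

def max_time_step(agents):
    """Largest t by which all timers can be decreased at once."""
    return min(timer for (_, timer) in agents)

def elapse(agents, t):
    """Apply a time step of t units to every timed prefix."""
    assert all(timer - t >= 0 for (_, timer) in agents)
    return [(act, timer - t) for (act, timer) in agents]

# Timers resembling the example network later in the section:
agents = [("go office", 10), ("b!", 5), ("go office1", 13)]
t = max_time_step(agents)
print(t)                  # 5
print(elapse(agents, t))  # [('go office', 5), ('b!', 0), ('go office1', 8)]
```

After such a maximal time step, at least one timer reaches 0, so an action rule of Table 2 (or a timeout rule such as (Put0)) becomes applicable again.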
Given a finite multiset of actions Λ = { λ1, …, λk } and a timeout t, a derivation N ⟶Λ,t N′ captures a complete computational step of the form:
N ⟶λ1 N1 ⟶λ2 … Nk−1 ⟶λk Nk ⇝t N′.
The fact that a knowTiMo network N is able to perform zero or more action steps and a time step in order to reach a network N′ is denoted by N ⟹* N′. Notice that the consumed actions and elapsed time are not recorded. By N ⟹λ* N′ we denote the fact that there exist networks N1 and N2 such that N ⟹* N1 ⟶λ N2 ⟹* N′; in this way we emphasize only the consumed action λ out of all consumed actions.
In our setting, at most one time passing rule can be applied for any arbitrary given process. This is the reason why, by inverting a rule, we can describe how the time passes in the subprocesses of a process. This result is useful when reasoning by induction on the structure of processes for which time passes.
Proposition 1.
Assume N ⇝t N′. Then exactly one of the following holds:
  • N = l[[0]] and N′ = l[[0]];
  • N = l[[ a Δt′ ! v then P else Q K ]] and N′ = l[[ a Δ(t′−t) ! v then P else Q K ]], where t′ − t ≥ 0;
  • N = l[[ a Δt′ ? (u) then P else Q K ]] and N′ = l[[ a Δ(t′−t) ? (u) then P else Q K ]], where t′ − t ≥ 0;
  • N = l[[ go t′ l′ then P K ]] and N′ = l[[ go (t′−t) l′ then P K ]], where t′ − t ≥ 0;
  • N = N1 | N2 such that N1 | N2 ↛, and there exist N1′ and N2′ such that N′ = N1′ | N2′, N1 ⇝t N1′ and N2 ⇝t N2′.
Proof. 
Straightforward, by observing that the time passing rules in Table 3 can be deterministically inverted; namely, each network of Table 1 performing a time step can use at most one rule of Table 3. □
The following theorem claims that time passing does not introduce nondeterminism in the evolution of a network.
Theorem 1.
The next two statements hold for any three networks N, N′ and N″:
1.
if N ⇝0 N′, then N = N′;
2.
if N ⇝t N′ and N ⇝t N″, then N′ = N″.
Proof. 
1.
We proceed by induction on the structure of N.
  • Case N = l[[0]]. Since N ⇝0 N′, by using Proposition 1, it holds that N′ = l[[0]], meaning that N = N′ (as desired).
  • Case N = l[[ a Δt ! v then P else Q K ]]. Since N ⇝0 N′, by using Proposition 1, it holds that N′ = l[[ a Δ(t−0) ! v then P else Q K ]] = l[[ a Δt ! v then P else Q K ]], meaning that N = N′ (as desired).
  • Case N = l[[ a Δt ? (u) then P else Q K ]]. Since N ⇝0 N′, by using Proposition 1, it holds that N′ = l[[ a Δ(t−0) ? (u) then P else Q K ]] = l[[ a Δt ? (u) then P else Q K ]], meaning that N = N′ (as desired).
  • Case N = l[[ go t l′ then P K ]]. Since N ⇝0 N′, by using Proposition 1, it holds that N′ = l[[ go (t−0) l′ then P K ]] = l[[ go t l′ then P K ]], meaning that N = N′ (as desired).
  • Case N = N1 | N2. Since N ⇝0 N′, by using Proposition 1, it holds that there exist N1′ and N2′ such that N′ = N1′ | N2′, together with N1 ⇝0 N1′ and N2 ⇝0 N2′. By induction, the reductions N1 ⇝0 N1′ and N2 ⇝0 N2′ imply that N1 = N1′ and N2 = N2′, respectively. Thus N′ = N1′ | N2′ = N1 | N2, meaning that N = N′ (as desired).
2.
We proceed by induction on the structure of N.
  • Case N = l[[0]]. Since N ⇝t N′ and N ⇝t N″, by using Proposition 1, it holds that N′ = l[[0]] and N″ = l[[0]], respectively, meaning that N′ = N″ (as desired).
  • Case N = l[[ a Δt′ ! v then P else Q K ]]. Since N ⇝t N′ and N ⇝t N″, by using Proposition 1, it holds that N′ = l[[ a Δ(t′−t) ! v then P else Q K ]] and N″ = l[[ a Δ(t′−t) ! v then P else Q K ]], respectively, meaning that N′ = N″ (as desired).
  • Case N = l[[ a Δt′ ? (u) then P else Q K ]]. Since N ⇝t N′ and N ⇝t N″, by using Proposition 1, it holds that N′ = l[[ a Δ(t′−t) ? (u) then P else Q K ]] and N″ = l[[ a Δ(t′−t) ? (u) then P else Q K ]], respectively, meaning that N′ = N″ (as desired).
  • Case N = l[[ go t′ l′ then P K ]]. Since N ⇝t N′ and N ⇝t N″, by using Proposition 1, it holds that N′ = l[[ go (t′−t) l′ then P K ]] and N″ = l[[ go (t′−t) l′ then P K ]], respectively, meaning that N′ = N″ (as desired).
  • Case N = N1 | N2. Since N ⇝t N′, by using Proposition 1, it holds that there exist N1′ and N2′ such that N′ = N1′ | N2′, together with N1 ⇝t N1′ and N2 ⇝t N2′. Similarly, since N ⇝t N″, by using Proposition 1, it holds that there exist N1″ and N2″ such that N″ = N1″ | N2″, together with N1 ⇝t N1″ and N2 ⇝t N2″. By induction, N1 ⇝t N1′ and N1 ⇝t N1″ imply that N1′ = N1″, while N2 ⇝t N2′ and N2 ⇝t N2″ imply that N2′ = N2″. Thus, N′ = N1′ | N2′ = N1″ | N2″, meaning that N′ = N″ (as desired).  □
The following theorem claims that whenever only the rules of Table 3 can be applied for two time steps of lengths t and t′, then the rules can also be applied for a single time step of length t + t′.
Theorem 2.
If N ⇝t N′ ⇝t′ N″, then N ⇝t+t′ N″.
Proof. 
We proceed by induction on the structure of N.
  • Case N = l[[0]]. Since N ⇝t N′, by using Proposition 1, it holds that N′ = l[[0]]. Similarly, since N′ ⇝t′ N″, by using Proposition 1, it holds that N″ = l[[0]]. Rule (DStop) can be used for network N, namely N ⇝t+t′ l[[0]] = N″ (as desired).
  • Case N = l[[ a Δt″ ! v then P else Q K ]]. Since N ⇝t N′, by using Proposition 1, it holds that N′ = l[[ a Δ(t″−t) ! v then P else Q K ]], where t″ − t ≥ 0. Similarly, since N′ ⇝t′ N″, by using Proposition 1, it holds that N″ = l[[ a Δ((t″−t)−t′) ! v then P else Q K ]], where t″ − t − t′ ≥ 0. Due to the fact that 0 ≤ t + t′ ≤ t″, rule (DPut) can be used for network N, namely N ⇝t+t′ l[[ a Δ(t″−(t+t′)) ! v then P else Q K ]] = N″ (as desired).
  • Case N = l[[ a Δt″ ? (u) then P else Q K ]]. Since N ⇝t N′, by using Proposition 1, it holds that N′ = l[[ a Δ(t″−t) ? (u) then P else Q K ]], where t″ − t ≥ 0. Similarly, since N′ ⇝t′ N″, by using Proposition 1, it holds that N″ = l[[ a Δ((t″−t)−t′) ? (u) then P else Q K ]], where t″ − t − t′ ≥ 0. Due to the fact that 0 ≤ t + t′ ≤ t″, rule (DGet) can be used for network N, namely N ⇝t+t′ l[[ a Δ(t″−(t+t′)) ? (u) then P else Q K ]] = N″ (as desired).
  • Case N = l[[ go t″ l′ then P K ]]. Since N ⇝t N′, by using Proposition 1, it holds that N′ = l[[ go (t″−t) l′ then P K ]], where t″ − t ≥ 0. Similarly, since N′ ⇝t′ N″, by using Proposition 1, it holds that N″ = l[[ go ((t″−t)−t′) l′ then P K ]], where t″ − t − t′ ≥ 0. Due to the fact that 0 ≤ t + t′ ≤ t″, rule (DMove) can be used for network N, namely N ⇝t+t′ l[[ go (t″−(t+t′)) l′ then P K ]] = N″ (as desired).
  • Case N = N1 | N2. Since N ⇝t N′, by using Proposition 1, it holds that N1 | N2 ↛ and there exist N1′ and N2′ such that N′ = N1′ | N2′, together with N1 ⇝t N1′ and N2 ⇝t N2′. Similarly, since N′ ⇝t′ N″, by using Proposition 1, it holds that there exist N1″ and N2″ such that N″ = N1″ | N2″, together with N1′ ⇝t′ N1″ and N2′ ⇝t′ N2″. By induction, N1 ⇝t N1′ and N1′ ⇝t′ N1″ imply that N1 ⇝t+t′ N1″, while N2 ⇝t N2′ and N2′ ⇝t′ N2″ imply that N2 ⇝t+t′ N2″. Since N1 ⇝t+t′ N1″, N2 ⇝t+t′ N2″ and N1 | N2 ↛, rule (DPar) can be used for network N, namely N ⇝t+t′ N1″ | N2″ = N″ (as desired).  □
Regarding the knowledge of an agent, we have the following result showing that any given agent can be obtained starting from an agent without any knowledge.
Proposition 2.
If N = l[[ P K ]] with K ≠ ∅, then
there exists N′ = l[[ P′ K′ ]] with K′ = ∅ such that N′ ⟹* N.
Proof. 
We proceed by induction on the structure of K.
  • Consider K = f v ; ∅. According to rule (Create), this knowledge can be obtained from a process P′ = create(f v ; ∅) then P. This implies that for N′ = l[[ P′ K′ ]] with K′ = ∅, it holds that N′ ⟶create f@l N (as desired).
  • Consider K = (f v ; ∅) K″, with K″ ≠ ∅. By induction, there exists a process P″ able to create the knowledge K″ starting from the empty knowledge. According to rule (Create), knowledge K can be obtained starting from knowledge K″ by using a process P′ = create(f v ; ∅) then P″. This implies that for N′ = l[[ P′ K′ ]] with K′ = ∅, it holds that N′ ⟶create f@l N1 ⟹* N for some network N1 (as desired).
  • Consider K = f v ; f′ v′ ; f″ v″ ; K1 K2 K3. By induction, there exists a process P″ able to create the knowledge K̄ = f v ; f′ v′ ; K1 K2 K3 starting from the empty knowledge. According to rule (Extend), knowledge K can be obtained starting from knowledge K̄ by using the process update(p/f″, v″) then P, where p = /f/f′. This implies that for N′ = l[[ P′ K′ ]] with K′ = ∅, it holds that N′ ⟹* N1 ⟶upd p@l N for some network N1 (as desired). □
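Proposition 2 can be illustrated operationally: starting from the empty knowledge, a sequence of create steps (adding top-level trees) and update steps (setting or extending a field at the end of a path) rebuilds any given knowledge. The dictionary encoding and the field names below are illustrative assumptions.

```python
# Rebuilding a knowledge from the empty one, in the spirit of Proposition 2.
# Each node maps a field to a pair (value, children); the helper names
# create/update mirror the two knowledge actions of the calculus.

def create(K, field, value):
    """Add a new top-level tree field -> (value, {})."""
    K = dict(K)
    K[field] = (value, {})
    return K

def update(K, path, value):
    """Set (or extend with) the field at the end of the path."""
    fields = path.strip("/").split("/")
    K = dict(K)
    node = K
    for f in fields[:-1]:
        v, children = node[f]
        children = dict(children)
        node[f] = (v, children)
        node = children
    old = node.get(fields[-1], (None, {}))
    node[fields[-1]] = (value, old[1])
    return K

# Rebuild a knowledge shaped like the travel agent knowledge of Example 1:
K = create({}, "work", "office")
K = update(K, "/work/dest", "destA1")
K = update(K, "/work/price", 100)
print(K)
```

The order matters exactly as in the proof: the top-level field must be created before update can extend the path below it.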
The next result is a consequence of the previous one; it claims that any given network in knowTiMo can be obtained starting from a network containing only agents without knowledge.
Theorem 3.
If N = l1[[ P11 K11 | … | P1n K1n ]] | … | lm[[ Pm1 Km1 | … | Pmn Kmn ]], then there exists N′ = l1[[ P11′ K11′ | … | P1n′ K1n′ ]] | … | lm[[ Pm1′ Km1′ | … | Pmn′ Kmn′ ]] with Kij′ = ∅ (1 ≤ i ≤ m, 1 ≤ j ≤ n) such that N′ ⟹* N.
The following example illustrates how agents communicate and make use of their knowledge.
Example 1.
To illustrate how multi-agent systems can be described in knowTiMo, we adapt the travel agency example from [3], where all the involved agents have a cyclic behaviour. Consider a travel agency with seven offices (one central and six locals) and five employees (two executives and three travel agents). As the agency is understaffed and all local offices need to be used from time to time, the executives meet with the agents daily at the central office in order to assign them local offices where they sell travel packages by interacting with potential customers. We consider two customers that are willing to visit the local offices closer to their homes. In what follows we show how each of the involved agents can be described by using the knowTiMo syntax.
Each day, agent A 1 executes the action go 10 office in order to move after 10 time units from location home A1 to the central office. After reaching the central office, in order to find out at which local office it will work for the rest of the day, it executes the action b Δ5 ? (newloc) to try to communicate with any of the executives in the next 5 time units. The location variable newloc is needed to model a dynamic evolution based on the local office assigned by an available executive. After successfully communicating with an executive, the agent A 1 moves to location office i after 5 time units in order to communicate with potential customers on channel a i, trying to sell a travel package towards location dest A1 at the cost of 100 monetary units. After each working day, the agent returns home by executing the action go 3 home A1. The agents A 2 and A 3 behave similarly to A 1, except that they begin and end their days at different locations, work locally at different offices, and the travel packages they advertise are different.
Formally, the travel agents are described by the recursive processes A X ( home AX ) K AX :
A X ( home AX ) = go 10 office then A X ( office )
A X ( office ) = b Δ 5 ? ( newloc )
then ( go 5 newloc then A X ( newloc ) )
else A X ( office )
A X ( office i ) = update ( / work , office i )
then a i Δ 9 ! K A X ( / work / dest ) , K AX ( / work / price )
then go 3 home AX then A X ( home AX )
else go 3 home AX then A X ( home AX )
K AX = work office ; dest dest AX price 100 · X .
The identifiers A X (1 ≤ X ≤ 3) are uniquely assigned to the three travel agents, and office i (1 ≤ i ≤ 6) indicate the six local offices.
Given the knowledge K A X defined above, we exemplify how it can be used for some queries:
  • K A X ( / work / price ) is used to retrieve the price value 100 · X by following the path / work / price in K A X ;
  • K A X ( / work [ K AX ( / work / price ) < 200 ] ) returns the local office in which the agent is trying to sell its travel package whenever the price of the package available by following the path / work / price is below 200 monetary units.
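A predicate query such as K AX(/work[K AX(/work/price) < 200]) can be sketched as a lookup guarded by a test on a subfield: the node is selected only if the test on its inner field holds. The dictionary encoding (each field maps to a pair (value, children)) and the function name are assumptions made for illustration.

```python
# Sketch of a predicate attached to a field in a path, as in
# K(/work[K(/work/price) < 200]): the value of the node is returned only if
# the test on its subfield holds, otherwise the query selects nothing.

def query_with_predicate(K, field, subfield, bound):
    """Return the value stored at `field` only if its `subfield` is < bound."""
    value, children = K.get(field, (None, {}))
    sub_value = children.get(subfield, (None, {}))[0]
    if sub_value is not None and sub_value < bound:
        return value
    return None

# Knowledge shaped like that of travel agent A1 in Example 1:
K_A1 = {"work": ("office1", {"dest": ("destA1", {}), "price": (100, {})})}
print(query_with_predicate(K_A1, "work", "price", 200))  # office1
print(query_with_predicate(K_A1, "work", "price", 50))   # None
```

With the price 100, the predicate bound 200 selects the office, while the bound 50 selects nothing, matching the intended reading of the second query above.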
Executives E 1 and E 2 are placed in the central office, being available for communication on channel b for 5 time units. In this way, they can assign to the travel agents (in a cyclic manner) the locations office 1, office 3, office 5, and the locations office 2, office 4, office 6, respectively. Formally, the executives are described by E X ( office Y ) K EX :
E X ( office Y ) = update ( / work , office Y )
then b Δ 5 ! K E X ( / work ) then E X ( office Y + 2 )
else E X ( office Y )
K EX = ∅.
The identifiers E X (with 1 ≤ X ≤ 2) are uniquely assigned to the two executives, while office Y (with Y ∈ { X, X + 2, X + 4 }) indicate the local offices that each executive E X can assign to travel agents. Defining the indices of the local offices in this way ensures that the executives assign the existing local offices in a cyclic way.
The client C 1 initially resides at location home C1; being interested in a travel package, client C 1 is willing to visit the local offices closest to his location, namely office 1, office 2, and office 3. For each of these three local offices, the visit has two possible outcomes: if client C 1 interacts with an agent, then it acquires a travel offer, while if the office is closed, then client C 1 moves to the next local office from its itinerary. Once its journey through the three local offices ends, client C 1 returns home whenever it was unable to collect any travel offer, while it travels to the destination for which it has to pay the lowest amount whenever it got at least one offer. After the holiday period ends, client C 1 returns home, where it can restart the process of searching for a holiday destination. Client C 2 behaves in a similar manner as client C 1, except looking for the most expensive travel package while visiting the local offices office 4, office 5 and office 6. Formally, the clients are described by C X ( home CX ) K CX :
C X ( home CX ) = go 13 office Z + 1 then C X ( office Z + 1 )
C X ( office Z + 1 ) = a Z + 1 Δ 4 ? ( dest CX , 1 , cost CX , 1 )
then update ( / agency [ test Z + 1 ] / dest , dest CX , 1 )
then update ( / agency [ test Z + 1 ] / price , cost CX , 1 )
then go 2 office Z + 2 then C X ( office Z + 2 )
else update ( / agency [ test Z + 1 ] / dest , ε)
then update ( / agency [ test Z + 1 ] / price , ε)
then go 2 office Z + 2 then C X ( office Z + 2 ) ,
where test Z + 1 = ( K CX ( / agency ) = office Z + 1 )
C X ( office Z + 2 ) = a Z + 2 Δ 4 ? ( dest CX , 2 , cost CX , 2 )
then update ( / agency [ test Z + 2 ] / dest , dest CX , 2 )
then update ( / agency [ test Z + 2 ] / price , cost CX , 2 )
then go 3 office Z + 3 then C X ( office Z + 3 )
else update ( / agency [ test Z + 2 ] / dest , ε)
then update ( / agency [ test Z + 2 ] / price , ε)
then go 2 office Z + 3 then C X ( office Z + 3 ) ,
where test Z + 2 = ( K CX ( / agency ) = office Z + 2 )
C X ( office Z + 3 ) = a Z + 3 Δ 4 ? ( dest CX , 3 , cost CX , 3 )
then update ( / agency [ test Z + 3 ] / dest , dest CX , 3 )
then update ( / agency [ test Z + 3 ] / price , cost CX , 3 )
then C X ( next CX )
else update ( / agency [ test Z + 3 ] / dest , ε)
then update ( / agency [ test Z + 3 ] / price , ε)
then C X ( next CX ) ,
where test Z + 3 = ( K CX ( / agency ) = office Z + 3 )
C X ( next CX ) = if testX then ( go 5 next CX then C X ( next CX ) )
else ( go 5 home CX then C X ( home CX ) )
C X ( dest CX , i ) = go 5 dest CX then C X ( home CX )
K CX = agency office Z + 1 ; dest ε price ε
agency office Z + 2 ; dest ε price ε
agency office Z + 3 ; dest ε price ε .
The identifiers C X (with 1 ≤ X ≤ 2) are uniquely assigned to the two clients, the identifiers dest CX,i uniquely identify the possible destinations the clients C X can visit, while Z = 3(X − 1) (with X ∈ { 1, 2 }) are used to identify the local offices for each of the clients.
The tests used above are:
testX = ¬ ( K CX ( / agency / price ) = ε ),
next CX =
  K CX ( / agency [ test_min ] / dest )   if X = 1 and K CX ( / agency / price ) = min_{j ∈ {1,2,3}} cost CX,j ∈ ℕ;
  K CX ( / agency [ test_max ] / dest )   if X = 2 and K CX ( / agency / price ) = max_{j ∈ {1,2,3}} cost CX,j ∈ ℕ;
  home CX   otherwise.
The initial state of the system given as the knowTiMo network N is:
home A 1 [ [ A 1 ( home A 1 ) K A 1 ] ] home A 2 [ [ A 2 ( home A 2 ) K A 2 ] ]
home A 3 [ [ A 3 ( home A 3 ) K A 3 ] ] office [ [ E 1 ( office 1 ) K E 1 E 2 ( office 2 ) K E 2 ] ]
home C 1 [ [ C 1 ( home C 1 ) K C 1 ] ] home C 2 [ [ C 2 ( home C 2 ) K C 2 ] ] N ′ ,
where N ′ stands for:
office 1 [ [ 0 ] ] office 2 [ [ 0 ] ] office 3 [ [ 0 ] ] office 4 [ [ 0 ] ] office 5 [ [ 0 ] ] office 6 [ [ 0 ] ]
dest 1 [ [ 0 ] ] dest 2 [ [ 0 ] ] dest 3 [ [ 0 ] ] .
In what follows we show how some of the rules of Table 2 and Table 3 are applied such that network N evolves. Since the network N is defined by means of recursive processes, in order to execute their actions we need to use the rules (Call) and (Par) for unfolding, namely
{ call , call , call , call , call , call , call }        (Call), (Par)
home A 1 [ [ ( go 10 office then A 1 ( office ) ) K A 1 ] ]
home A 2 [ [ ( go 10 office then A 2 ( office ) ) K A 2 ] ]
home A 3 [ [ ( go 10 office then A 3 ( office ) ) K A 3 ] ]
office [ [ update ( / work , office 1 )
then b Δ 5 ! K E 1 ( / work ) then E 1 ( office 3 )
else E 1 ( office 1 )
K E 1
update ( / work , office 2 )
then b Δ 5 ! K E 2 ( / work ) then E 2 ( office 4 )
else E 2 ( office 2 )
K E 2 ] ]
home C 1 [ [ go 13 office 1 then C 1 ( office 1 ) ] ]
home C 2 [ [ go 13 office 4 then C 2 ( office 4 ) ] ]
N .
The next step consists of the two updates performed by the executives; thus, the rules (Extend) and (Par) are applied several times. Since the existing knowledge of the two executives is currently ∅, these updates in fact extend their knowledge.
{ upd , upd }       (Extend),(Par)
home A 1 [ [ ( go 10 office then A 1 ( office ) ) K A 1 ] ]
home A 2 [ [ ( go 10 office then A 2 ( office ) ) K A 2 ] ]
home A 3 [ [ ( go 10 office then A 3 ( office ) ) K A 3 ] ]
office [ [ b Δ 5 ! K E 1 ( / work ) then E 1 ( office 3 )
else E 1 ( office 1 )
work office 1 ;
b Δ 5 ! K E 2 ( / work ) then E 2 ( office 4 )
else E 2 ( office 2 )
work office 2 ; ] ]
home C 1 [ [ go 13 office 1 then C 1 ( office 1 ) ] ]
home C 2 [ [ go 13 office 4 then C 2 ( office 4 ) ] ]
N .
Since the rules of Table 2 are not applicable to the above network, only time passing can take place, by using the rules of Table 3. The rules (DMove), (DGet) and (DPar) can be applied for t = 5 , namely the maximum number of time units that can pass.
5       (DMove),(DGet),(DPar)
home A 1 [ [ ( go 5 office then A 1 ( office ) ) K A 1 ] ]
home A 2 [ [ ( go 5 office then A 2 ( office ) ) K A 2 ] ]
home A 3 [ [ ( go 5 office then A 3 ( office ) ) K A 3 ] ]
office [ [ b Δ 0 ! K E 1 ( / work ) then E 1 ( office 3 )
else E 1 ( office 1 )
work office 1 ;
b Δ 0 ! K E 2 ( / work ) then E 2 ( office 4 )
else E 2 ( office 2 )
work office 2 ; ] ]
home C 1 [ [ go 8 office 1 then C 1 ( office 1 ) ] ]
home C 2 [ [ go 8 office 4 then C 2 ( office 4 ) ] ]
N .
Since after 5 time units of the evolution there are no agents able to communicate with the executives on channel b, the rules (Put0) and (Par) are applied, and so the else branches of the two executives are chosen to be executed next.
{ b ! Δ 0 @ office , b ! Δ 0 @ office }       (Put0), (Par)
home A 1 [ [ ( go 5 office then A 1 ( office ) ) K A 1 ] ]
home A 2 [ [ ( go 5 office then A 2 ( office ) ) K A 2 ] ]
home A 3 [ [ ( go 5 office then A 3 ( office ) ) K A 3 ] ]
office [ [ E 1 ( office 1 ) work office 1 ;
E 2 ( office 2 ) work office 2 ; ] ]
home C 1 [ [ go 8 office 1 then C 1 ( office 1 ) ] ]
home C 2 [ [ go 8 office 4 then C 2 ( office 4 ) ] ]
N .
Note that the evolution was deterministic during the first 5 time units. However, since there are two executives and three travel agents in the system, the communication on channel b takes place in a nondeterministic manner, and thus there exist several possible future evolutions of the system.
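The timeout behaviour driving this evolution can be sketched in a few lines of Python. This is only an illustration of the interplay between time passing and the (Put0)-style step choosing the else branch, not the formal semantics of Tables 2 and 3; the class name and its methods are our own.

```python
# Toy model of a timed output a!^{Delta t}: time passing decrements the
# timer, and once it reaches 0 the else branch is selected if no
# communication partner showed up (a (Put0)-style step).

class TimedPut:
    def __init__(self, channel, timeout, then_branch, else_branch):
        self.channel = channel
        self.timeout = timeout
        self.then_branch = then_branch
        self.else_branch = else_branch

    def elapse(self, t):
        """Let t time units pass (the timer cannot go below 0)."""
        self.timeout = max(0, self.timeout - t)

    def step(self, partner_ready):
        """Communicate if a partner is ready; at timer 0, take else."""
        if partner_ready:
            return self.then_branch
        if self.timeout == 0:
            return self.else_branch
        return None  # must wait for the timer or for a partner
```

An executive's output on channel b with deadline 5 would be modelled as `TimedPut("b", 5, ...)`: after `elapse(5)` with no partner, `step(False)` returns the else branch, matching the evolution above.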

3. Behavioural Equivalences in knowTiMo

In what follows, we define and study bisimulations for multi-agent systems that consider knowledge dynamics as well as explicit time constraints for communication and migration. Since a bisimilarity is the union of all bisimulations of the same type, in order to demonstrate that two knowTiMo networks N 1 and N 2 are bisimilar it is enough to exhibit a bisimulation relation containing the pair ( N 1 , N 2 ) . This standard bisimulation proof method is appealing for the following reasons:
  • checks are local (only immediate transitions are used);
  • no hierarchy exists between the pairs of a bisimulation, and thus bisimilarity can effectively be used to reason about infinite behaviours; this distinguishes it from inductive techniques, in which the required hierarchy restricts reasoning to finite behaviours.

3.1. Strong Timed Equivalences

Inspired by the approach taken in [4], we extend the standard notion of strong bisimilarity by allowing also timed transitions to be taken into account.
Definition 1
(Strong timed bisimulation).
Let R N × N be a symmetric binary relation over knowTiMo networks.
1.
R is a strong timed bisimulation if
  • ( N 1 , N 2 ) R and N 1 λ N 1 implies that there exists N 2 N such that N 2 λ N 2 and ( N 1 , N 2 ) R ;
  • ( N 1 , N 2 ) R and N 1 t N 1 implies that there exists N 2 N such that N 2 t N 2 and ( N 1 , N 2 ) R .
2.
The strong timed bisimilarity is the union ∼ of all strong timed bisimulations R .
Definition 1 treats timed transitions and labelled transitions in a similar manner, and so this notion of bisimilarity resembles the one originally given for labelled transition systems. We can prove that the relation ∼ is the largest strong timed bisimulation, and also an equivalence relation.
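On finite-state abstractions, the coinductive definition can be turned into the usual greatest-fixpoint computation: start from the full relation and repeatedly discard pairs whose transitions cannot be matched. The Python sketch below is a naive illustration in which both labelled and timed steps are encoded as plain labels (in line with Definition 1 treating them uniformly); it is not an implementation of the knowTiMo semantics, and all names are ours.

```python
# Naive greatest-fixpoint check of strong (timed) bisimilarity over a
# finite labelled transition system; timed steps are just labels here.

def bisimilar(states, trans, s1, s2):
    """trans maps each state to a set of (label, successor) pairs."""
    rel = {(p, q) for p in states for q in states}
    changed = True
    while changed:
        changed = False
        for (p, q) in list(rel):
            # every move of p must be answered by q, and vice versa,
            # with the residual states still related
            forward = all(
                any(lab2 == lab and (p2, q2) in rel
                    for (lab2, q2) in trans[q])
                for (lab, p2) in trans[p])
            backward = all(
                any(lab2 == lab and (p2, q2) in rel
                    for (lab2, p2) in trans[p])
                for (lab, q2) in trans[q])
            if not (forward and backward):
                rel.discard((p, q))
                changed = True
    return (s1, s2) in rel
```

Starting from the full relation and only ever removing pairs guarantees that the result is the largest relation satisfying the matching conditions, mirroring the claim of Proposition 3.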
Proposition 3.
1.
Identity, inverse, composition and union of strong timed bisimulations are strong timed bisimulations.
2.
∼ is the largest strong timed bisimulation.
3.
∼ is an equivalence.
Proof. 
1.
We treat each relation separately, showing that it satisfies the conditions from Definition 1 for being a strong timed bisimulation.
(a)
The identity relation I d R is a strong timed bisimulation.
  • Assume ( N , N ) I d R . Consider N λ N ; then ( N , N ) I d R .
  • Assume ( N , N ) I d R . Consider N t N ; then ( N , N ) I d R .
(b)
The inverse of a strong timed bisimulation is a strong timed bisimulation.
  • Assume ( N 1 , N 2 ) R 1 , namely ( N 2 , N 1 ) R . Consider N 2 λ N 2 ; then for some N 1 we have N 1 λ N 1 and ( N 2 , N 1 ) R , namely ( N 1 , N 2 ) R 1 . By similar reasoning, if N 1 λ N 1 then we can find N 2 such that N 2 λ N 2 and ( N 1 , N 2 ) R 1 .
  • Assume ( N 1 , N 2 ) R 1 , namely ( N 2 , N 1 ) R . Consider N 2 t N 2 ; then for some N 1 we have N 1 t N 1 and ( N 2 , N 1 ) R , namely ( N 1 , N 2 ) R 1 . By similar reasoning, if N 1 t N 1 then we can find N 2 such that N 2 t N 2 and ( N 1 , N 2 ) R 1 .
(c)
The composition of strong timed bisimulations is a strong timed bisimulation.
  • Assume ( N 1 , N 2 ) R 1 R 2 . Then for some N we have ( N 1 , N ) R 1 and ( N , N 2 ) R 2 . Consider N 1 λ N 1 ; then for some N , since ( N 1 , N ) R 1 , we have N λ N and ( N 1 , N ) R 1 . Also, since ( N , N 2 ) R 2 we have for some N 2 that N 2 λ N 2 and ( N , N 2 ) R 2 . Thus, ( N 1 , N 2 ) R 1 R 2 . By similar reasoning, if N 2 λ N 2 then we can find N 1 such that N 1 λ N 1 and ( N , N 2 ) R 2 .
  • Assume ( N 1 , N 2 ) R 1 R 2 . Then for some N we have ( N 1 , N ) R 1 and ( N , N 2 ) R 2 . Consider N 1 t N 1 ; then for some N , since ( N 1 , N ) R 1 , we have N t N and ( N 1 , N ) R 1 . Also, since ( N , N 2 ) R 2 we have for some N 2 that N 2 t N 2 and ( N , N 2 ) R 2 . Thus, ( N 1 , N 2 ) R 1 R 2 . By similar reasoning, if N 2 t N 2 then we can find N 1 such that N 1 t N 1 and ( N , N 2 ) R 2 .
(d)
The union of strong timed bisimulations is a strong timed bisimulation.
  • Assume ( N 1 , N 2 ) i I R i . Then for some i I we have ( N 1 , N 2 ) R i . Consider N 1 λ N 1 ; then for some N 2 , since ( N 1 , N 2 ) R i , we have N 2 λ N 2 and ( N 1 , N 2 ) R i . Thus, ( N 1 , N 2 ) i I R i . By similar reasoning, if N 2 λ N 2 then we can find N 1 such that N 1 λ N 1 and ( N 1 , N 2 ) R i , namely ( N 1 , N 2 ) i I R i .
  • Assume ( N 1 , N 2 ) i I R i . Then for some i I we have ( N 1 , N 2 ) R i . Consider N 1 t N 1 ; then for some N 2 , since ( N 1 , N 2 ) R i , we have N 2 t N 2 and ( N 1 , N 2 ) R i . Thus, ( N 1 , N 2 ) i I R i . By similar reasoning, if N 2 t N 2 then we can find N 1 such that N 1 t N 1 and ( N 1 , N 2 ) R i , namely ( N 1 , N 2 ) i I R i .
2.
By the previous case (the union part), ∼ is a strong timed bisimulation and includes any other strong timed bisimulation.
3.
Proving that relation ∼ is an equivalence requires proving that it satisfies reflexivity, symmetry and transitivity. We consider each of them in the following:
(a)
Reflexivity: For any network N, N N results from the fact that the identity relation is a strong timed bisimulation.
(b)
Symmetry: If N N , then ( N , N ) R for some strong timed bisimulation R . Hence ( N , N ) R 1 , and so N N because the inverse relation is a strong timed bisimulation.
(c)
Transitivity: If N N and N N then ( N , N ) R 1 and ( N , N ) R 2 for some strong timed bisimulations R 1 and R 2 . Thus, ( N , N ) R 1 R 2 , and so N N due to the fact that the composition relation is a strong timed bisimulation.
The next result claims that the strong timed equivalence ∼ is preserved even if the local knowledge of the agents is expanded. This is consistent with the fact that the processes affect the same portion of their knowledge. To simplify the presentation, in what follows we use the notations ∏ i = 1 n N i = N 1 ∣ ⋯ ∣ N n and ∏ i = 1 n A i = A 1 ∣ ⋯ ∣ A n .
Proposition 4.
If K i j K i j for 1 i n , 1 j m , then
i = 1 n l i [ [ j = 1 m P i j K i j ] ] i = 1 n l i [ [ j = 1 m P i j K i j ] ] .
Proof. 
We show that S is a strong timed bisimulation, where:
S = { ( i = 1 n l i [ [ j = 1 m P i j K i j ] ] , i = 1 n l i [ [ j = 1 m P i j K i j ] ] ) : K i j K i j , 1 i n , 1 j m } .
The proof is by induction on the last performed step:
  • Let us assume that i = 1 n l i [ [ j = 1 m P i j K i j ] ] λ N . Depending on the value of λ , there are several cases:
    -
Consider λ = a ! ? @ l 1 . Then there exists P 11 = a Δ t 1 ! v then P 11 else P 11 and P 12 =   a Δ t 2 ? ( u ) then P 12 else P 12 such that l 1 [ [ P 11 K 11 P 12 K 12 j = 3 m P 1 j K 1 j ] ] i = 2 n l i [ [ j = 1 m P i j K i j ] ] a ! ? @ l 1 l 1 [ [ P 11 K 11 P 12 K 12 j = 3 m P 1 j K 1 j ] ] i = 2 n l i [ [ j = 1 m P i j K i j ] ] = N . Then there exists N = l 1 [ [ P 11 K 11 P 12 K 12 j = 3 m P 1 j K 1 j ] ] i = 2 n l i [ [ j = 1 m P i j K i j ] ] such that i = 1 n l i [ [ j = 1 m P i j K i j ] ] a ! ? @ l 1 N . Since K i j K i j , 1 i n , 1 j m , clearly ( N , N ) S .
    -
Consider λ = a ! Δ 0 @ l 1 . Then there exists P 11 = a Δ 0 ! v then P 11 else P 11 such that l 1 [ [ P 11 K 11 j = 2 m P 1 j K 1 j ] ] i = 2 n l i [ [ j = 1 m P i j K i j ] ] a ! Δ 0 @ l 1 l 1 [ [ P 11 K 11 j = 2 m P 1 j K 1 j ] ] i = 2 n l i [ [ j = 1 m P i j K i j ] ] = N . Then there exists N = l 1 [ [ P 11 K 11 j = 2 m P 1 j K 1 j ] ] i = 2 n l i [ [ j = 1 m P i j K i j ] ] such that i = 1 n l i [ [ j = 1 m P i j K i j ] ] a ! Δ 0 @ l 1 N . Since K i j K i j , 1 i n , 1 j m , clearly ( N , N ) S .
    -
Consider λ = a ? Δ 0 @ l 1 . Then there exists P 11 = a Δ 0 ? ( u ) then P 11 else P 11 such that l 1 [ [ P 11 K 11 j = 2 m P 1 j K 1 j ] ] i = 2 n l i [ [ j = 1 m P i j K i j ] ] a ? Δ 0 @ l 1 l 1 [ [ P 11 K 11 j = 2 m P 1 j K 1 j ] ] i = 2 n l i [ [ j = 1 m P i j K i j ] ] = N . Then there exists N = l 1 [ [ P 11 K 11 j = 2 m P 1 j K 1 j ] ] i = 2 n l i [ [ j = 1 m P i j K i j ] ] such that i = 1 n l i [ [ j = 1 m P i j K i j ] ] a ? Δ 0 @ l 1 N . Since K i j K i j , 1 i n , 1 j m , clearly ( N , N ) S .
    -
    Consider λ = l 1 l 2 . Then there exists P 11 = go 0 l 2 then P 11 such that l 1 [ [ P 11 K 11 j = 2 m P 1 j K 1 j ] ] i = 2 n l i [ [ j = 1 m P i j K i j ] ] l 1 l 2 l 1 [ [ j = 2 m P 1 j K 1 j ] ] l 2 [ [ P 11 K 11 j = 1 m P 2 j K 2 j ] ] i = 3 n l i [ [ j = 1 m P i j K i j ] ] = N . Then there exists N = l 1 [ [ j = 2 m P 1 j K 1 j ] ] l 2 [ [ P 11 K 11 j = 1 m P 2 j K 2 j ] ] i = 3 n l i [ [ j = 1 m P i j K i j ] ] such that i = 1 n l i [ [ j = 1 m P i j K i j ] ] ) l 1 l 2 N . Since K i j K i j , 1 i n , 1 j m , clearly ( N , N ) S .
    -
    Consider λ = true @ l 1 . Then there exists P 11 = if test then P 11 else P 11 , where test @ K 11 = true , such that l 1 [ [ P 11 K 11 j = 2 m P 1 j K 1 j ] ] i = 2 n l i [ [ j = 1 m P i j K i j ] ] true @ l 1 l 1 [ [ P 11 K 11 j = 2 m P 1 j K 1 j ] ] i = 2 n l i [ [ j = 1 m P i j K i j ] ] = N . Then there exists N = l 1 [ [ P 11 K 11 j = 2 m P 1 j K 1 j ] ] i = 2 n l i [ [ j = 1 m P i j K i j ] ] such that i = 1 n l i [ [ j = 1 m P i j K i j ] ] ) true @ l 1 N . Since K i j K i j , 1 i n , 1 j m , clearly ( N , N ) S .
    -
Consider λ = false @ l 1 . Then there exists P 11 = if test then P 11 else P 11 , where test @ K 11 = false , such that l 1 [ [ P 11 K 11 j = 2 m P 1 j K 1 j ] ] i = 2 n l i [ [ j = 1 m P i j K i j ] ] false @ l 1 l 1 [ [ P 11 K 11 j = 2 m P 1 j K 1 j ] ] i = 2 n l i [ [ j = 1 m P i j K i j ] ] = N . Then there exists N = l 1 [ [ P 11 K 11 j = 2 m P 1 j K 1 j ] ] i = 2 n l i [ [ j = 1 m P i j K i j ] ] such that i = 1 n l i [ [ j = 1 m P i j K i j ] ] false @ l 1 N . Since K i j K i j , 1 i n , 1 j m , clearly ( N , N ) S .
    -
Consider λ = create f @ l 1 . Then there exists P 11 = create ( f v ; ) then P 11 such that l 1 [ [ P 11 K 11 j = 2 m P 1 j K 1 j ] ] i = 2 n l i [ [ j = 1 m P i j K i j ] ] create f @ l 1 l 1 [ [ P 11 K 11 f v ; j = 2 m P 1 j K 1 j ] ] i = 2 n l i [ [ j = 1 m P i j K i j ] ] = N . Then there exists N = l 1 [ [ P 11 K 11 f v ; j = 2 m P 1 j K 1 j ] ] i = 2 n l i [ [ j = 1 m P i j K i j ] ] such that i = 1 n l i [ [ j = 1 m P i j K i j ] ] create f @ l 1 N . Since K i j K i j , 1 i n , 1 j m , then also K 11 f v ; K 11 f v ; , and clearly ( N , N ) S .
    -
    Consider λ = upd p @ l 1 . Then there exists P 11 = update ( p / f , v ) then P 11 such that l 1 [ [ P 11 K 11 j = 2 m P 1 j K 1 j ] ] i = 2 n l i [ [ j = 1 m P i j K i j ] ] upd p @ l 1 l 1 [ [ P 11 K 11 u j = 2 m P 1 j K 1 j ] ] i = 2 n l i [ [ j = 1 m P i j K i j ] ] = N . Then there exists N = l 1 [ [ P 11 K 11 u j = 2 m P 1 j K 1 j ] ] i = 2 n l i [ [ j = 1 m P i j K i j ] ] ] such that i = 1 n l i [ [ j = 1 m P i j K i j ] ] ) upd p @ l 1 N . Since K i j K i j , 1 i n , 1 j m , then also K 11 u K 11 u , and clearly ( N , N ) S .
  • Let us assume that i = 1 n l i [ [ j = 1 m P i j K i j ] ]   t   N . Then there exists P i j , 1 i n ,   1 j m , such that i = 1 n l i [ [ j = 1 m P i j K i j ] ]   t i = 1 n l i [ [ j = 1 m P i j K i j ] ] = N . Then there exists N = i = 1 n l i [ [ j = 1 m P i j K i j ] ] such that i = 1 n l i [ [ j = 1 m P i j K i j ] ]   t N . Since K i j K i j , 1 i n , 1 j m , clearly ( N , N ) S .
The symmetric cases follow by similar arguments. □
The following result shows that strong timed bisimulation is preserved even after complete computational steps of two knowTiMo networks.
Proposition 5.
Let N 1 , N 2 be two knowTiMo networks.
If N 1 N 2 and N 1 Λ , t N 1 , then there exists N 2 N such that N 2 Λ , t N 2 and N 1 N 2 .
Proof. 
Assuming that the finite multiset of actions Λ contains the labels { λ 1 , , λ k } , then the complete computational step N 1 Λ , t N 1 can be detailed as N 1 λ 1 N 1 1 N 1 k 1 λ k N 1 k t N 1 . Since N 1 λ 1 N 1 1 and N 1 N 2 , then according to Definition 1 there exists N 2 1 N such that N 2 λ 1 N 2 1 and N 1 1 N 2 1 . The same reasoning can be applied for another k steps, meaning that there exist N 2 2 , , N 2 k , N 2 N such that N 2 λ 1 N 2 1 N 2 k 1 λ k N 2 k t N 2 and N 1 N 2 . By the definition of a complete computational step, it holds that N 2 λ 1 N 2 1 N 2 k 1 λ k N 2 k t N 2 can be written as N 2 Λ , t N 2 . Thus, we obtained that there exists N 2 N such that N 2 Λ , t N 2 and N 1 N 2 (as desired). □
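A complete computational step as used in Proposition 5 simply sequences the labelled steps of Λ and then lets t time units pass. As a small sketch (the step functions below are toy stand-ins, not the actual rules of the calculus):

```python
# Illustrative composition of a complete computational step: first the
# labelled steps lambda_1 ... lambda_k of the multiset, then one
# time-passing step of length t.

def complete_step(apply_label, apply_time, state, labels, t):
    for lab in labels:
        state = apply_label(state, lab)  # one labelled step
    return apply_time(state, t)          # the final time-passing step

# Toy step functions that just record what happened:
trace = complete_step(
    lambda s, lab: s + [lab],
    lambda s, t: s + [("wait", t)],
    [], ["b!@office", "b!@office"], 5)
```

Here `trace` ends with `("wait", 5)`, mirroring how the example network first performs its two (Put0) steps and only then lets 5 time units pass.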
The next example illustrates that the relation ∼ is able to distinguish between agents with different knowledge if update operations are performed.
Example 2.
Consider that client C 2 is at location office 4 , ready to communicate on channel a 4 . To simplify the presentation, we use the following shortened definition of C 2 :
C 2 ( office 4 ) = a 4 Δ 4 ? ( dest C 2 , 1 , cost C 2 , 1 )
then update ( / agency [ test 4 ] / dest , dest C 2 , 1 )
else update ( / agency [ test 4 ] / dest , ε).
Consider the following three networks in knowTiMo:
N 1 = office 4 [ [ C 2 ( office 4 ) K C 2 ] ] ,
N 1 ′ = office 4 [ [ C 2 ( office 4 ) K C 2 ′ ] ] ,
N 1 ″ = office 4 [ [ C 2 ( office 4 ) K C 2 ″ ] ] ,
where the knowledge of the agents is defined as:
K C 2 = agency office 4 ; dest ε price ε ,
K C 2 ′ = agency office 5 ; dest ε price ε ,
K C 2 ″ = ∅ .
According to Definition 1, it holds that N 1 ′ ∼ N 1 ″ , while N 1 ≁ N 1 ′ and N 1 ≁ N 1 ″ . This is due to the fact that while all three networks are able to perform a time step of length 4 and to choose the else branch, only network N 1 is able to perform the update operation. Formally:
N 1 4 N 2 false @ office 4 N 3 upd / agency [ test 4 ] / dest @ office 4 N 4
and
N 1 ′ ⟶ 4 N 2 ′ ⟶ false @ office 4 N 3 ′ and N 1 ″ ⟶ 4 N 2 ″ ⟶ false @ office 4 N 3 ″ (with no update step possible from N 3 ′ or N 3 ″ ),
where the networks N 2 , N 3 , N 4 , N 2 , N 3 , N 2 and N 3 in knowTiMo are obtained by using the rules of Table 2 and Table 3.

3.2. Strong Bounded Timed Equivalences

We provide some notations used in the rest of the paper:
  • A timed relation over the set N of networks is any relation R N × N × N .
  • The identity timed relation is
    ι = d f { ( N , t , N ) | N N , t N } .
  • The inverse of a timed relation R is
    R 1 = d f { ( N 2 , t , N 1 ) | ( N 1 , t , N 2 ) R } .
  • The composition of timed relations R 1 and R 2 is
    R 1 R 2 = d f { ( N , t , N ) | N N : ( N , t , N ) R 1 ( N , t , N ) R 2 } .
  • If R is a timed relation and t N , then
    R t = d f { ( N 1 , N 2 ) ( N 1 , t , N 2 ) R }
    is R ’s t-projection. We also denote R = d f t N R t .
  • A timed relation R is a timed equivalence if ⋃ t ∈ N R t is an equivalence relation, and is an equivalence up-to time t ∈ N if R t ′ is an equivalence relation for every 0 ≤ t ′ < t .
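These notations can be rendered directly as set operations; a minimal sketch (the function names are ours, and networks are just opaque values here):

```python
# A timed relation is a set of (N1, t, N2) triples; its t-projection
# keeps the network pairs recorded at time t, and flattening unions
# all the projections.

def projection(timed_rel, t):
    """R_t = { (N1, N2) : (N1, t, N2) in R }."""
    return {(n1, n2) for (n1, tt, n2) in timed_rel if tt == t}

def flatten(timed_rel):
    """The union over all t of the t-projections of R."""
    return {(n1, n2) for (n1, _t, n2) in timed_rel}
```

For instance, a triple ( N 1 , 4 , N 2 ) contributes the pair ( N 1 , N 2 ) to the 4-projection and to the flattened relation, but to no other projection.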
The equivalence ∼ requires an exact match of the transitions of two networks during their entire evolutions. Sometimes this requirement is too strong. In many situations it is relaxed [5], and real-time systems are only required to behave in an expected way up-to a certain amount t of time units. This motivates the definition of bounded timed equivalences up-to a given time t.
Definition 2
(Strong bounded timed bisimulation).
Let R N × N × N be a symmetric timed relation over networks in knowTiMo.
1.
R is a strong bounded timed bisimulation if
  • ( N 1 , t , N 2 ) R and N 1 λ N 1 implies that there exists N 2 N such that N 2 λ N 2 and ( N 1 , t , N 2 ) R ;
  • ( N 1 , t , N 2 ) R and N 1 t N 1 implies that there exists N 2 N such that N 2 t N 2 and ( N 1 , t t , N 2 ) R .
2.
The strong bounded timed bisimilarity is the union ≃ of all strong bounded timed bisimulations R .
The following results illustrate some properties of the strong bounded timed bisimulations. In particular, we prove that the relation ≃ (which strictly includes the relation ∼) is the largest strong bounded timed bisimulation.
Proposition 6.
1.
Identity, inverse, composition and union of strong bounded timed bisimulations are strong bounded timed bisimulations.
2.
≃ is the largest strong bounded timed bisimulation.
3.
≃ is a timed equivalence.
4.
∼ is strictly included in ⋃ t ∈ N ≃ t .
Proof. 
1.
We treat each relation separately, showing that it satisfies the conditions from Definition 2 for being a strong bounded timed bisimulation.
(a)
The identity relation ι is a strong bounded timed bisimulation.
  • Assume ( N , t , N ) ι . Consider N λ N ; then ( N , t , N ) ι .
  • Assume ( N , t , N ) ι . Consider N t N ; then ( N , t t , N ) ι .
(b)
The inverse of a strong bounded timed bisimulation is a strong bounded timed bisimulation.
  • Assume ( N 1 , t , N 2 ) R 1 , namely ( N 2 , t , N 1 ) R . Consider N 2 λ N 2 ; then for some N 1 we have N 1 λ N 1 and ( N 2 , t , N 1 ) R , namely ( N 1 , t , N 2 ) R 1 . By similar reasoning, if N 1 λ N 1 then we can find N 2 such that N 2 λ N 2 and ( N 1 , t , N 2 ) R 1 .
  • Assume ( N 1 , t , N 2 ) R 1 , namely ( N 2 , t , N 1 ) R . Consider N 2 t N 2 ; then for some N 1 we have N 1 t N 1 and ( N 2 , t t , N 1 ) R , namely ( N 1 , t t , N 2 ) R 1 . By similar reasoning, if N 1 t N 1 then we can find N 2 such that N 2 t N 2 and ( N 1 , t t , N 2 ) R 1 .
(c)
The composition of strong bounded timed bisimulations is a strong bounded timed bisimulation.
  • Assume ( N 1 , t , N 2 ) R 1 R 2 . Then for some N we have ( N 1 , t , N ) R 1 and ( N , t , N 2 ) R 2 . Consider N 1 λ N 1 ; then for some N , since ( N 1 , t , N ) R 1 , we have N λ N and ( N 1 , t , N ) R 1 . Also, since ( N , t , N 2 ) R 2 we have for some N 2 that N 2 λ N 2 and ( N , t , N 2 ) R 2 . Thus, ( N 1 , t , N 2 ) R 1 R 2 . By similar reasoning, if N 2 λ N 2 then we can find N 1 such that N 1 λ N 1 and ( N , t , N 2 ) R 2 .
  • Assume ( N 1 , t , N 2 ) R 1 R 2 . Then for some N we have ( N 1 , t , N ) R 1 and ( N , t , N 2 ) R 2 . Consider N 1 t N 1 ; then for some N , since ( N 1 , t , N ) R 1 , we have N t N and ( N 1 , t t , N ) R 1 . Also, since ( N , t , N 2 ) R 2 , for some N 2 we have N 2 t N 2 and ( N , t t , N 2 ) R 2 . Thus, ( N 1 , t t , N 2 ) R 1 R 2 . By similar reasoning, if N 2 t N 2 then we can find N 1 such that N 1 t N 1 and ( N , t t , N 2 ) R 2 .
(d)
The union of strong bounded timed bisimulations is a strong bounded timed bisimulation.
  • Assume ( N 1 , t , N 2 ) i I R i . Then for some i I we have that ( N 1 , t , N 2 ) R i . Consider N 1 λ N 1 ; then for some N 2 , since ( N 1 , t , N 2 ) R i , we have N 2 λ N 2 and ( N 1 , t , N 2 ) R i . Thus, ( N 1 , t , N 2 ) i I R i . By similar reasoning, if N 2 λ N 2 then we can find N 1 such that N 1 λ N 1 and ( N 1 , t , N 2 ) R i , namely ( N 1 , t , N 2 ) i I R i .
  • Assume ( N 1 , t , N 2 ) i I R i . Then for some i I we have that ( N 1 , t , N 2 ) R i . Consider N 1 t N 1 ; then for some N 2 , since ( N 1 , t , N 2 ) R i , we have N 2 t N 2 and ( N 1 , t t , N 2 ) R i . Thus, ( N 1 , t t , N 2 ) i I R i . By similar reasoning, if N 2 t N 2 then we can find N 1 such that N 1 t N 1 and ( N 1 , t t , N 2 ) R i , namely ( N 1 , t t , N 2 ) i I R i .
2.
By the previous case (the union part), ≃ is a strong bounded timed bisimulation and includes any other strong bounded timed bisimulation.
3.
Proving that relation ≃ is a timed equivalence requires proving that it satisfies reflexivity, symmetry and transitivity. We consider each of them in what follows:
(a)
Reflexivity: For any network N, N N results from the fact that the identity relation is a strong bounded timed bisimulation.
(b)
Symmetry: If N N , then ( N , t , N ) R for some strong bounded timed bisimulation R . Hence ( N , t , N ) R 1 , and so N N because the inverse relation is a strong bounded timed bisimulation.
(c)
Transitivity: If N N and N N then ( N , t , N ) R 1 and ( N , t , N ) R 2 for some strong bounded timed bisimulations R 1 and R 2 . Thus, it holds that ( N , t , N ) R 1 R 2 , and so N N due to the fact that the composition relation is a strong bounded timed bisimulation.
4.
We provide Example 3 below that illustrates the strict inclusion. □
The next result claims that strong bounded timed equivalence t over processes is preserved even if the local knowledge of the agents is expanded. This is consistent with the fact that the processes affect the same portion of their knowledge.
Proposition 7.
If K i j K i j , for 1 i n , 1 j m , then
i = 1 n l i [ [ j = 1 m P i j K i j ] ] i = 1 n l i [ [ j = 1 m P i j K i j ] ] .
Proof. 
We show that S is a strong bounded timed bisimulation, where:
S = { ( i = 1 n l i [ [ j = 1 m P i j K i j ] ] , t , i = 1 n l i [ [ j = 1 m P i j K i j ] ] ) : K i j K i j , 1 i n , 1 j m } .
The proof is by induction on the last performed step:
  • Let us assume that i = 1 n l i [ [ j = 1 m P i j K i j ] ] λ N . Depending on the value of λ , there are several cases:
    -
Consider λ = a ! ? @ l 1 . Then there exists P 11 = a Δ t 1 ! v then P 11 else P 11 and P 12 =   a Δ t 2 ? ( u ) then P 12 else P 12 such that l 1 [ [ P 11 K 11 P 12 K 12 j = 3 m P 1 j K 1 j ] ] i = 2 n l i [ [ j = 1 m P i j K i j ] ] a ! ? @ l 1 l 1 [ [ P 11 K 11 P 12 K 12 j = 3 m P 1 j K 1 j ] ] i = 2 n l i [ [ j = 1 m P i j K i j ] ] = N . Then there exists N = l 1 [ [ P 11 K 11 P 12 K 12 j = 3 m P 1 j K 1 j ] ] i = 2 n l i [ [ j = 1 m P i j K i j ] ] such that i = 1 n l i [ [ j = 1 m P i j K i j ] ] a ! ? @ l 1 N . Since K i j K i j , 1 i n , 1 j m , clearly ( N , t , N ) S .
    -
    Consider λ = a ! Δ 0 @ l 1 . Then there exists P 11 = a Δ 0 ! v then P 11 else P 11 such that l 1 [ [ P 11 K 11 j = 2 m P 1 j K 1 j ] ] i = 2 n l i [ [ j = 1 m P i j K i j ] ] a ! Δ 0 @ l 1 l 1 [ [ P 11 K 11 j = 2 m P 1 j K 1 j ] ] i = 2 n l i [ [ j = 1 m P i j K i j ] ] = N . Then there exists N = l 1 [ [ P 11 K 11 j = 2 m P 1 j K 1 j ] ] i = 2 n l i [ [ j = 1 m P i j K i j ] ] such that i = 1 n l i [ [ j = 1 m P i j K i j ] ] ) a ! Δ 0 @ l 1 N . Since K i j K i j , 1 i n , 1 j m , clearly ( N , t , N ) S .
    -
Consider λ = a ? Δ 0 @ l 1 . Then there exists P 11 = a Δ 0 ? ( u ) then P 11 else P 11 such that l 1 [ [ P 11 K 11 j = 2 m P 1 j K 1 j ] ] i = 2 n l i [ [ j = 1 m P i j K i j ] ] a ? Δ 0 @ l 1 l 1 [ [ P 11 K 11 j = 2 m P 1 j K 1 j ] ] i = 2 n l i [ [ j = 1 m P i j K i j ] ] = N . Then there exists N = l 1 [ [ P 11 K 11 j = 2 m P 1 j K 1 j ] ] i = 2 n l i [ [ j = 1 m P i j K i j ] ] such that i = 1 n l i [ [ j = 1 m P i j K i j ] ] a ? Δ 0 @ l 1 N . Since K i j K i j , 1 i n , 1 j m , clearly ( N , t , N ) S .
    -
    Consider λ = l 1 l 2 . Then there exists P 11 = go 0 l 2 then P 11 such that l 1 [ [ P 11 K 11 j = 2 m P 1 j K 1 j ] ] i = 2 n l i [ [ j = 1 m P i j K i j ] ] l 1 l 2 l 1 [ [ j = 2 m P 1 j K 1 j ] ] l 2 [ [ P 11 K 11 j = 1 m P 2 j K 2 j ] ] i = 3 n l i [ [ j = 1 m P i j K i j ] ] = N . Then there exists N = l 1 [ [ j = 2 m P 1 j K 1 j ] ] l 2 [ [ P 11 K 11 j = 1 m P 2 j K 2 j ] ] i = 3 n l i [ [ j = 1 m P i j K i j ] ] such that i = 1 n l i [ [ j = 1 m P i j K i j ] ] ) l 1 l 2 N . Since K i j K i j , 1 i n , 1 j m , clearly ( N , t , N ) S .
    -
    Consider λ = true @ l 1 . Then there exists P 11 = if test then P 11 else P 11 , where test @ K 11 = true , such that l 1 [ [ P 11 K 11 j = 2 m P 1 j K 1 j ] ] i = 2 n l i [ [ j = 1 m P i j K i j ] ] true @ l 1 l 1 [ [ P 11 K 11 j = 2 m P 1 j K 1 j ] ] i = 2 n l i [ [ j = 1 m P i j K i j ] ] = N . Then there exists N = l 1 [ [ P 11 K 11 j = 2 m P 1 j K 1 j ] ] i = 2 n l i [ [ j = 1 m P i j K i j ] ] such that i = 1 n l i [ [ j = 1 m P i j K i j ] ] ) true @ l 1 N . Since K i j K i j , 1 i n , 1 j m , clearly ( N , t , N ) S .
    -
    Consider λ = false @ l 1 . Then there exists P 11 = if test then P 11 else P 11 , where test @ K 11 = false , such that l 1 [ [ P 11 K 11 j = 2 m P 1 j K 1 j ] ] i = 2 n l i [ [ j = 1 m P i j K i j ] ] false @ l 1 l 1 [ [ P 11 K 11 j = 2 m P 1 j K 1 j ] ] i = 2 n l i [ [ j = 1 m P i j K i j ] ] = N . Then there exists N = l 1 [ [ P 11 K 11 j = 2 m P 1 j K 1 j ] ] i = 2 n l i [ [ j = 1 m P i j K i j ] ] such that i = 1 n l i [ [ j = 1 m P i j K i j ] ] ) false @ l 1 N . Since K i j K i j , 1 i n , 1 j m , clearly ( N , t , N ) S .
    -
    Consider λ = create f @ l 1 . Then there exists P 11 = create ( f v ; ) then P 11 such that l 1 [ [ P 11 K 11 j = 2 m P 1 j K 1 j ] ] i = 2 n l i [ [ j = 1 m P i j K i j ] ] create f @ l 1 l 1 [ [ P 11 K 11 f v ; j = 2 m P 1 j K 1 j ] ] i = 2 n l i [ [ j = 1 m P i j K i j ] ] = N . Then there exists N = l 1 [ [ P 11 K 11 f v ; j = 2 m P 1 j K 1 j ] ] i = 2 n l i [ [ j = 1 m P i j K i j ] ] such that i = 1 n l i [ [ j = 1 m P i j K i j ] ] ) create f @ l 1 N . Since K i j K i j , 1 i n , 1 j m , then also K 11 f v ; K 11 f v ; , and clearly ( N , t , N ) S .
    -
    Consider λ = upd p @ l 1 . Then there exists P 11 = update ( p / f , v ) then P 11 such that l 1 [ [ P 11 K 11 j = 2 m P 1 j K 1 j ] ] i = 2 n l i [ [ j = 1 m P i j K i j ] ] upd p @ l 1 l 1 [ [ P 11 K 11 u j = 2 m P 1 j K 1 j ] ] i = 2 n l i [ [ j = 1 m P i j K i j ] ] = N . Then there exists N = l 1 [ [ P 11 K 11 u j = 2 m P 1 j K 1 j ] ] i = 2 n l i [ [ j = 1 m P i j K i j ] ] ] such that i = 1 n l i [ [ j = 1 m P i j K i j ] ] upd p @ l 1 N . Since K i j K i j , 1 i n , 1 j m , then also K 11 u K 11 u , and clearly ( N , t , N ) S .
  • Let us assume that i = 1 n l i [ [ j = 1 m P i j K i j ] ]   t   N . Then there exists P i j , 1 i n ,   1 j m , such that i = 1 n l i [ [ j = 1 m P i j K i j ] ]   t i = 1 n l i [ [ j = 1 m P i j K i j ] ] = N . Then there exists N = i = 1 n l i [ [ j = 1 m P i j K i j ] ] such that i = 1 n l i [ [ j = 1 m P i j K i j ] ]   t N . Since K i j K i j , 1 i n , 1 j m , clearly ( N , t t , N ) S .
The symmetric cases follow by similar arguments. □
The following result shows that strong bounded timed bisimulation is preserved even after complete computational steps of two networks in knowTiMo.
Proposition 8.
Let N 1 , N 2 be two knowTiMo networks.
If N 1 t N 2 and N 1 Λ , t N 1 , then there is N 2 N such that N 2 Λ , t N 2 and N 1 t t N 2 .
Proof. 
Assuming that the finite multiset of actions Λ contains the labels { λ 1 , , λ k } , then the complete computational step N 1 Λ , t N 1 can be detailed as N 1 λ 1 N 1 1 N 1 k 1 λ k N 1 k t N 1 . Note that N 1 t N 2 means that ( N 1 , t , N 2 ) . Since N 1 λ 1 N 1 1 and ( N 1 , t , N 2 ) , then according to Definition 2 there exists N 2 1 N such that N 2 λ 1 N 2 1 and ( N 1 1 , t , N 2 1 ) . The same reasoning can be applied for another k steps, meaning that there exist N 2 2 , , N 2 k , N 2 N such that N 2 λ 1 N 2 1 N 2 k 1 λ k N 2 k t N 2 and ( N 1 , t t , N 2 ) , namely N 1 t t N 2 . The definition of a complete computational step implies that N 2 λ 1 N 2 1 N 2 k 1 λ k N 2 k t N 2 can be written as N 2 Λ , t N 2 . Thus, we obtained that there exists N 2 N such that N 2 Λ , t N 2 and N 1 t t N 2 (as desired). □
Strong bounded timed bisimulation satisfies the property that if two networks are equivalent up-to a certain deadline t, they are equivalent up-to any deadline t before t, i.e., t t .
Proposition 9.
If N t N and t t , then N t N .
Proof. 
Assume N ≃ t N ′ and that there exist the networks N 1 , , N k N , the sets of actions Λ 1 , , Λ k and the timers t 1 , , t k N such that N Λ 1 , t 1 N 1 Λ k , t k N k and also t = t 1 + + t k . According to Proposition 8, there exist the networks N 1 ′ , , N k ′ N such that N ′ Λ 1 , t 1 N 1 ′ Λ k , t k N k ′ , and also N 1 ≃ t t 1 N 1 ′ , , N k ≃ 0 N k ′ . Since t ′ ≤ t , there exist an l ≤ k and a t ″ N such that t 1 + + t l + t ″ = t ′ . By using Theorem 1, it holds that there exists N 1 ″ such that N Λ 1 , t 1 N 1 Λ l , t l N l Λ l + 1 , t ″ N 1 ″ . In a similar manner, by using Theorem 1, it holds that there exists N 2 ″ such that N ′ Λ 1 , t 1 N 1 ′ Λ l , t l N l ′ Λ l + 1 , t ″ N 2 ″ . Since N 1 ″ and N 2 ″ can perform only time passing steps of length at most t l + 1 − t ″ , this means that N 1 ″ ≃ 0 N 2 ″ . However, according to Definition 2, this means that we obtain the desired relation N ≃ t ′ N ′ because the networks N and N ′ can match their behaviour for t ′ steps. □
The next example illustrates that the relation t is able to treat as bisimilar some multi-agent systems that are not bisimilar using the relation ∼.
Example 3.
Let us consider the networks of Example 2, namely:
N 1 = office 4 [ [ C 2 ( office 4 ) K C 2 ] ] ,
N 1 ′ = office 4 [ [ C 2 ( office 4 ) K C 2 ′ ] ] ,
N 1 ″ = office 4 [ [ C 2 ( office 4 ) K C 2 ″ ] ] ,
where the knowledge of the agents is defined as:
K C 2 = agency office 4 ; dest ε price ε ,
K C 2 ′ = agency office 5 ; dest ε price ε ,
K C 2 ″ = ∅ .
Even if it holds that N 1 ′ ∼ N 1 ″ while N 1 ≁ N 1 ′ and N 1 ≁ N 1 ″ , by applying Definition 2 it results that N 1 , N 1 ′ and N 1 ″ are strong bounded timed bisimilar before the 4th time unit, since they have the same evolutions during this deadline, namely N 1 ≃ 4 N 1 ′ , N 1 ≃ 4 N 1 ″ and N 1 ′ ≃ 4 N 1 ″ . If t > 4 , we have that N 1 ′ ≃ t N 1 ″ , while N 1 ≄ t N 1 ′ and N 1 ≄ t N 1 ″ . Thus, both Definitions 1 and 2 return the same relations among N 1 , N 1 ′ and N 1 ″ for t > 4 .
This example illustrates also the strict inclusion relation from item 4 of Proposition 6.

3.3. Weak Knowledge Equivalences

Both equivalence relations ∼ and ≃ require an exact match of the transitions and time steps of two networks in knowTiMo; this makes them rather restrictive. We can introduce a weaker version of network equivalence by looking only at the steps that affect the knowledge data, namely the create and update steps. Thus, we introduce a knowledge equivalence in order to distinguish between networks based on the interaction of the agents with their local knowledge: the networks are equivalent if we observe only the create and update actions along the same paths, regardless of the values added to the knowledge.
Definition 3
(Weak knowledge bisimulation).
Let R N × N be a symmetric binary relation over networks in knowTiMo.
1.
R is a weak knowledge bisimulation if
  • (N_1, N_2) ∈ R and N_1 ⟹^{create f@l} N_1′ implies that there exists N_2′ ∈ N such that N_2 ⟹^{create f@l} N_2′ and (N_1′, N_2′) ∈ R;
  • (N_1, N_2) ∈ R and N_1 ⟹^{upd p@l} N_1′ implies that there exists N_2′ ∈ N such that N_2 ⟹^{upd p@l} N_2′ and (N_1′, N_2′) ∈ R.
2.
The weak knowledge bisimilarity is the union ≅ of all weak knowledge bisimulations R .
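As an informal illustration (not part of the paper), Definition 3 can be checked on a finite labelled transition system by refining the full relation down to a greatest fixpoint, observing only the create and upd labels. All function, state and label names below are hypothetical, and the transition relation is assumed to be already saturated so that each listed edge represents a weak step.

```python
# Illustrative sketch (not from the paper): deciding weak knowledge
# bisimilarity on a finite LTS by refining the full relation until the
# transfer property of Definition 3 holds for every remaining pair.
# Only create/upd labels are observable.

def observable(label):
    return label.startswith(("create", "upd"))

def weak_knowledge_bisimilar(states, trans, s1, s2):
    """trans maps a state to a set of (label, successor) pairs."""
    rel = {(p, q) for p in states for q in states}
    changed = True
    while changed:
        changed = False
        for (p, q) in set(rel):
            ok = all(
                any(lab2 == lab and (p2, q2) in rel
                    for (lab2, q2) in trans.get(q, set()))
                for (lab, p2) in trans.get(p, set()) if observable(lab)
            ) and all(
                any(lab2 == lab and (p2, q2) in rel
                    for (lab2, p2) in trans.get(p, set()))
                for (lab, q2) in trans.get(q, set()) if observable(lab)
            )
            if not ok:
                rel.discard((p, q))
                changed = True
    return (s1, s2) in rel
```

For instance, two clients that create the same field and then update the same path are related even when the stored values differ, because the values are not part of the observed label.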
The following results present some properties of the weak knowledge bisimulations. In particular, we prove that the equivalence relation ≅ (which strictly includes the relation ∼) is the largest weak knowledge bisimulation.
Proposition 10.
1.
Identity, inverse, composition and union of weak knowledge bisimulations are weak knowledge bisimulations.
2.
≅ is the largest weak knowledge bisimulation.
3.
≅ is an equivalence.
4.
∼ ⊊ ≅.
Proof. 
1.
We treat each relation separately showing that it respects the conditions from Definition 3 for being a weak knowledge bisimulation.
(a)
The identity relation Id is a weak knowledge bisimulation.
  • Assume (N, N) ∈ Id. Consider N ⟹^{create f@l} N′; then (N′, N′) ∈ Id.
  • Assume (N, N) ∈ Id. Consider N ⟹^{upd p@l} N′; then (N′, N′) ∈ Id.
(b)
The inverse of a weak knowledge bisimulation is a weak knowledge bisimulation.
  • Assume (N_1, N_2) ∈ R⁻¹, namely (N_2, N_1) ∈ R. Consider N_2 ⟹^{create f@l} N_2′; then for some N_1′ we have N_1 ⟹^{create f@l} N_1′ and (N_2′, N_1′) ∈ R, namely (N_1′, N_2′) ∈ R⁻¹. By similar reasoning, if N_1 ⟹^{create f@l} N_1′ then we can find N_2′ such that N_2 ⟹^{create f@l} N_2′ and (N_1′, N_2′) ∈ R⁻¹.
  • Assume (N_1, N_2) ∈ R⁻¹, namely (N_2, N_1) ∈ R. Consider N_2 ⟹^{upd p@l} N_2′; then for some N_1′ we have N_1 ⟹^{upd p@l} N_1′ and (N_2′, N_1′) ∈ R, namely (N_1′, N_2′) ∈ R⁻¹. By similar reasoning, if N_1 ⟹^{upd p@l} N_1′ then we can find N_2′ such that N_2 ⟹^{upd p@l} N_2′ and (N_1′, N_2′) ∈ R⁻¹.
(c)
The composition of weak knowledge bisimulations is a weak knowledge bisimulation.
  • Assume (N_1, N_2) ∈ R_1 R_2. Then for some N we have (N_1, N) ∈ R_1 and (N, N_2) ∈ R_2. Consider N_1 ⟹^{create f@l} N_1′; then, since (N_1, N) ∈ R_1, for some N′ we have N ⟹^{create f@l} N′ and (N_1′, N′) ∈ R_1. Also, since (N, N_2) ∈ R_2, for some N_2′ we have N_2 ⟹^{create f@l} N_2′ and (N′, N_2′) ∈ R_2. Thus, (N_1′, N_2′) ∈ R_1 R_2. By similar reasoning, if N_2 ⟹^{create f@l} N_2′ then we can find N_1′ such that N_1 ⟹^{create f@l} N_1′ and (N_1′, N_2′) ∈ R_1 R_2.
  • Assume (N_1, N_2) ∈ R_1 R_2. Then for some N we have (N_1, N) ∈ R_1 and (N, N_2) ∈ R_2. Consider N_1 ⟹^{upd p@l} N_1′; then, since (N_1, N) ∈ R_1, for some N′ we have N ⟹^{upd p@l} N′ and (N_1′, N′) ∈ R_1. Also, since (N, N_2) ∈ R_2, for some N_2′ we have N_2 ⟹^{upd p@l} N_2′ and (N′, N_2′) ∈ R_2. Thus, (N_1′, N_2′) ∈ R_1 R_2. By similar reasoning, if N_2 ⟹^{upd p@l} N_2′ then we can find N_1′ such that N_1 ⟹^{upd p@l} N_1′ and (N_1′, N_2′) ∈ R_1 R_2.
(d)
The union of weak knowledge bisimulations is a weak knowledge bisimulation.
  • Assume (N_1, N_2) ∈ ⋃_{i∈I} R_i. Then for some i ∈ I we have (N_1, N_2) ∈ R_i. Consider N_1 ⟹^{create f@l} N_1′; then, since (N_1, N_2) ∈ R_i, for some N_2′ we have N_2 ⟹^{create f@l} N_2′ and (N_1′, N_2′) ∈ R_i. Thus, (N_1′, N_2′) ∈ ⋃_{i∈I} R_i. By similar reasoning, if N_2 ⟹^{create f@l} N_2′ then we can find N_1′ such that N_1 ⟹^{create f@l} N_1′ and (N_1′, N_2′) ∈ R_i, namely (N_1′, N_2′) ∈ ⋃_{i∈I} R_i.
  • Assume (N_1, N_2) ∈ ⋃_{i∈I} R_i. Then for some i ∈ I we have (N_1, N_2) ∈ R_i. Consider N_1 ⟹^{upd p@l} N_1′; then, since (N_1, N_2) ∈ R_i, for some N_2′ we have N_2 ⟹^{upd p@l} N_2′ and (N_1′, N_2′) ∈ R_i. Thus, (N_1′, N_2′) ∈ ⋃_{i∈I} R_i. By similar reasoning, if N_2 ⟹^{upd p@l} N_2′ then we can find N_1′ such that N_1 ⟹^{upd p@l} N_1′ and (N_1′, N_2′) ∈ R_i, namely (N_1′, N_2′) ∈ ⋃_{i∈I} R_i.
2.
By the previous case (the union part), ≅ is a weak knowledge bisimulation and includes any other weak knowledge bisimulation.
3.
Proving that relation ≅ is an equivalence requires proving that it satisfies reflexivity, symmetry and transitivity. We consider each of them in what follows:
(a)
Reflexivity: For any network N, N N results from the fact that the identity relation is a weak knowledge bisimulation.
(b)
Symmetry: If N ≅ N′, then (N, N′) ∈ R for some weak knowledge bisimulation R. Hence (N′, N) ∈ R⁻¹, and so N′ ≅ N because the inverse relation is a weak knowledge bisimulation.
(c)
Transitivity: If N ≅ N′ and N′ ≅ N″, then (N, N′) ∈ R_1 and (N′, N″) ∈ R_2 for some weak knowledge bisimulations R_1 and R_2. Thus, (N, N″) ∈ R_1 R_2, and so N ≅ N″ due to the fact that the composition relation is a weak knowledge bisimulation.
4.
We provide Example 4 below illustrating the strict inclusion. □
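A hedged sketch of the closure argument in item 1: on a finite LTS one can test directly whether a candidate relation satisfies the transfer property of Definition 3, and then observe that both the identity relation and the union of two such relations pass the test. All names are illustrative, not the paper's.

```python
# Illustrative sketch: verify that a candidate relation satisfies the
# transfer property of Definition 3 on a finite LTS (edges are assumed
# to be weak steps; the relation is assumed symmetric).

def is_weak_knowledge_bisim(rel, trans):
    for (p, q) in rel:
        for (lab, p2) in trans.get(p, set()):
            if lab.startswith(("create", "upd")):
                # q must answer with the same create/upd label
                if not any(l2 == lab and (p2, q2) in rel
                           for (l2, q2) in trans.get(q, set())):
                    return False
    return True

trans = {"a0": {("create f@l", "a1")}, "b0": {("create f@l", "b1")}}
identity = {("a0", "a0"), ("a1", "a1"), ("b0", "b0"), ("b1", "b1")}
cross = {("a0", "b0"), ("b0", "a0"), ("a1", "b1"), ("b1", "a1")}
assert is_weak_knowledge_bisim(identity, trans)          # identity case
assert is_weak_knowledge_bisim(cross, trans)
assert is_weak_knowledge_bisim(identity | cross, trans)  # union case
```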
The next result states that the weak knowledge equivalence ≅ between networks is preserved even if the local knowledge of the agents is expanded. This is consistent with the fact that the processes affect the same portion of their knowledge.
Proposition 11.
If K_{ij} ⊆ K_{ij}′ for all 1 ≤ i ≤ n, 1 ≤ j ≤ m, then
∏_{i=1}^{n} l_i [[ ∏_{j=1}^{m} P_{ij} K_{ij} ]] ≅ ∏_{i=1}^{n} l_i [[ ∏_{j=1}^{m} P_{ij} K_{ij}′ ]].
Proof. 
We show that S is a weak knowledge bisimulation, where:
S = { ( ∏_{i=1}^{n} l_i [[ ∏_{j=1}^{m} P_{ij} K_{ij} ]], ∏_{i=1}^{n} l_i [[ ∏_{j=1}^{m} P_{ij} K_{ij}′ ]] ) : K_{ij} ⊆ K_{ij}′, 1 ≤ i ≤ n, 1 ≤ j ≤ m }.
The proof is by induction on the last performed step. Let us assume that
∏_{i=1}^{n} l_i [[ ∏_{j=1}^{m} P_{ij} K_{ij} ]] ⟹^{λ} N′. Depending on the value of λ, there are several cases:
  • Consider λ = create f@l_1. Then there exists P_{11}′ such that l_1[[ P_{11} K_{11} ∏_{j=2}^{m} P_{1j} K_{1j} ]] ∏_{i=2}^{n} l_i[[ ∏_{j=1}^{m} P_{ij} K_{ij} ]] ⟹^{create f@l_1} l_1[[ P_{11}′ (K_{11} f⟨v⟩;) ∏_{j=2}^{m} P_{1j} K_{1j} ]] ∏_{i=2}^{n} l_i[[ ∏_{j=1}^{m} P_{ij} K_{ij} ]] = N′. Then there exists N″ = l_1[[ P_{11}′ (K_{11}′ f⟨v⟩;) ∏_{j=2}^{m} P_{1j} K_{1j}′ ]] ∏_{i=2}^{n} l_i[[ ∏_{j=1}^{m} P_{ij} K_{ij}′ ]] such that ∏_{i=1}^{n} l_i[[ ∏_{j=1}^{m} P_{ij} K_{ij}′ ]] ⟹^{create f@l_1} N″. Since K_{ij} ⊆ K_{ij}′ for 1 ≤ i ≤ n, 1 ≤ j ≤ m, then also K_{11} f⟨v⟩; ⊆ K_{11}′ f⟨v⟩;, and clearly (N′, N″) ∈ S.
  • Consider λ = upd p@l_1. Then there exists P_{11}′ such that l_1[[ P_{11} K_{11} ∏_{j=2}^{m} P_{1j} K_{1j} ]] ∏_{i=2}^{n} l_i[[ ∏_{j=1}^{m} P_{ij} K_{ij} ]] ⟹^{upd p@l_1} l_1[[ P_{11}′ K_{11}^u ∏_{j=2}^{m} P_{1j} K_{1j} ]] ∏_{i=2}^{n} l_i[[ ∏_{j=1}^{m} P_{ij} K_{ij} ]] = N′, where K^u denotes the knowledge K after the update. Then there exists N″ = l_1[[ P_{11}′ (K_{11}′)^u ∏_{j=2}^{m} P_{1j} K_{1j}′ ]] ∏_{i=2}^{n} l_i[[ ∏_{j=1}^{m} P_{ij} K_{ij}′ ]] such that ∏_{i=1}^{n} l_i[[ ∏_{j=1}^{m} P_{ij} K_{ij}′ ]] ⟹^{upd p@l_1} N″. Since K_{ij} ⊆ K_{ij}′ for 1 ≤ i ≤ n, 1 ≤ j ≤ m, then also K_{11}^u ⊆ (K_{11}′)^u, and clearly (N′, N″) ∈ S.
The symmetric cases follow by similar arguments. □
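Proposition 11 restricted to the knowledge alone can be sketched as follows, encoding each knowledge tree as a set of labelled paths (an illustrative encoding of ours, not the paper's notation): if K ⊆ K′, then adding the same f⟨v⟩ to both sides preserves the inclusion.

```python
# Illustrative sketch of Proposition 11 at the level of knowledge only:
# a knowledge set is flattened into a set of labelled paths, and a
# create step adds the same root field f<v> to both sides.

def create(knowledge, field, value):
    # corresponds to appending f<v>; at the root of the knowledge
    return knowledge | {((field, value),)}

K  = {(("agency", "office4"), ("dest", "")),}
K2 = K | {(("agency", "office4"), ("price", ""))}   # K is included in K2

assert K <= K2
assert create(K, "f", "v") <= create(K2, "f", "v")  # inclusion preserved
```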
The following result shows that weak knowledge bisimulation is preserved after complete computational steps of two networks in knowTiMo only if the knowledge is modified at least once during such a step.
Proposition 12.
Let N_1, N_2 be two knowTiMo networks, and let Λ be a multiset of actions such that create f@l ∈ Λ or upd p@l ∈ Λ.
If N_1 ≅ N_2 and N_1 ⟹^{Λ,t} N_1′, then there exists N_2′ ∈ N such that N_2 ⟹^{Λ,t} N_2′ and N_1′ ≅ N_2′.
Proof. 
Assuming that the finite multiset of actions Λ contains the labels {λ_1, …, λ_k} that denote modifications to the knowledge, the complete computational step N_1 ⟹^{Λ,t} N_1′ can be detailed as N_1 ⟹^{λ_1} N_1^1 ⋯ N_1^{k−1} ⟹^{λ_k} N_1′. Since N_1 ⟹^{λ_1} N_1^1 and N_1 ≅ N_2, according to Definition 3 there exists N_2^1 ∈ N such that N_2 ⟹^{λ_1} N_2^1 and N_1^1 ≅ N_2^1. The same reasoning can be applied for the remaining k−1 labels, meaning that there exist N_2^2, …, N_2′ ∈ N such that N_2 ⟹^{λ_1} N_2^1 ⋯ N_2^{k−1} ⟹^{λ_k} N_2′ and N_1′ ≅ N_2′. By the definition of a complete computational step, N_2 ⟹^{λ_1} N_2^1 ⋯ N_2^{k−1} ⟹^{λ_k} N_2′ can be written as N_2 ⟹^{Λ,t} N_2′. Thus, we obtained that there exists N_2′ ∈ N such that N_2 ⟹^{Λ,t} N_2′ and N_1′ ≅ N_2′ (as desired). □
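The first step of the proof, isolating the knowledge-modifying labels λ_1, …, λ_k out of the multiset Λ of a complete computational step, can be sketched as a multiset filter (an illustrative encoding; the label strings are hypothetical).

```python
from collections import Counter

# Illustrative sketch: keep only the create/upd labels of a multiset
# of actions performed during one complete computational step.

def knowledge_labels(actions):
    """actions: a Counter (multiset) of action labels."""
    return Counter({lab: n for lab, n in actions.items()
                    if lab.startswith(("create", "upd"))})

step = Counter({"create f@l": 1, "upd p@l": 2, "a!?@l": 3, "call@l": 1})
assert knowledge_labels(step) == Counter({"create f@l": 1, "upd p@l": 2})
```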
The next example illustrates that the relation ≅ treats as bisimilar some systems that are not bisimilar under the relation ∼.
Example 4.
Consider the network N 1 of Example 2, and a network
N_1′ = office4 [[ C_2′(office4) K_{C_2} ]],
in which the client can perform only an update action:
C_2′(office4) = update(/agency[test4]/dest, dest_{C_2,1}).
According to Definition 1, it holds that N_1 ≁ N_1′. This is due to the fact that the network N_1 can perform a time step of length 4 and then choose the else branch, while the network N_1′ can perform only the update operation. Formally:
N_1 ⇝^4 N_2 ⟶^{false@office4} N_3 ⟶^{upd /agency[test4]/dest @ office4} N_4
and
N_1′ ⟶^{upd /agency[test4]/dest @ office4} N_2′.
The above reductions can also be written as
N_1 ⟹* N_3 ⟶^{upd /agency[test4]/dest @ office4} N_4,
and
N_1′ ⟹* N_1′ ⟶^{upd /agency[test4]/dest @ office4} N_2′.
By applying Definition 3, it results that N_1 and N_1′ are weak knowledge bisimilar because they are able to perform an update on the same path at the same location, i.e., N_1 ≅ N_1′.
This example also illustrates the strict inclusion relation from item 4 of Proposition 10.
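The situation of Example 4 can be sketched on an abstract transition system: under strong transitions the internal steps of N_1 (time passing, branch choice) are visible and the one-step behaviours already differ, while after saturating away internal steps both networks offer the same upd label. This is a simplified, illustrative comparison of observable offers, not a full bisimulation check; the state names are ours.

```python
# Illustrative sketch of Example 4 on an abstract LTS: strong transitions
# keep internal steps visible; weak transitions absorb them.

UPD = "upd /agency[test4]/dest @ office4"

strong = {"N1": {("time 4", "N2")}, "N2": {("false@office4", "N3")},
          "N3": {(UPD, "N4")}, "M1": {(UPD, "M2")}}
weak = {"N1": {(UPD, "N4")}, "M1": {(UPD, "M2")}}  # internal steps absorbed

def offered_labels(trans, state):
    return {lab for (lab, _) in trans.get(state, set())}

# weakly knowledge bisimilar: the same observable update is offered
assert offered_labels(weak, "N1") == offered_labels(weak, "M1")
# not strongly bisimilar: the one-step offers already differ
assert offered_labels(strong, "N1") != offered_labels(strong, "M1")
```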

4. Conclusions and Related Work

In multi-agent systems, knowledge is usually treated by using epistemic logics [6]; in particular, the multi-agent epistemic logic [7,8]. These epistemic logics are modal logics describing different types of knowledge, differing not only syntactically, but also in expressiveness and complexity. Essentially, they are based on two concepts: Kripke structures (to model their semantics) and logic formulas (to represent the knowledge of the agents).
The initial version of TiMo presented in [1] led to several extensions: with access permissions in perTiMo [9], with real-time in rTiMo [10], and combining TiMo with bigraphs [11] to obtain the BigTiMo calculus [12]. However, in all these approaches, knowledge is used only implicitly inside the processes. In this article we defined knowTiMo to describe multi-agent systems operating according to their accumulated knowledge. Essentially, the agents get an explicit representation of the knowledge about the other agents of a distributed network in order to decide their next interactions. The knowledge is defined as sets of trees whose nodes contain pairs of labels and values; this tree representation is similar to the data representation in Petri nets with structured data [13] and in the Xdπ process calculus [14]. The network dynamics involving exchanges of knowledge between agents is presented by the operational semantics of this process calculus; its labelled transition system captures concurrent execution by using a multiset of actions. We proved that time passing in such a multi-agent system does not introduce any nondeterminism in the evolution of a network, and that the progression of the network is smooth (there are no time gaps). Several results are devoted to the relationship between the evolution of the agents and their knowledge.
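As a hedged illustration of the tree representation of knowledge described above (the nested-dictionary encoding and the function name are ours, not the paper's):

```python
# Illustrative sketch: knowledge as trees whose nodes carry
# (label, value) pairs, encoded as nested dicts label -> (value, subtree).

def lookup(knowledge, path):
    """Follow a /f1/f2/... path; return the value at its end, or None."""
    node = knowledge
    value = None
    for field in path.strip("/").split("/"):
        if field not in node:
            return None
        value, node = node[field]
    return value

K = {"agency": ("office4", {"dest": ("", {}), "price": ("", {})})}
assert lookup(K, "/agency") == "office4"
assert lookup(K, "/agency/dest") == ""
assert lookup(K, "/agency/fare") is None
```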
According to [15], the notion of bisimulation was independently discovered in computer science [16,17], modal logic [18] and set theory [19,20]. Bisimulation is currently used in several domains: to test the behavioural equality of processes in concurrency [21]; to solve the state-explosion problem in model checking [22]; to index and compress semi-structured data in databases [23,24]; to solve Markov decision processes efficiently in stochastic planning [25]; to understand the expressiveness of certain languages in description logics [26]; and to study observational indistinguishability and computational complexity on data graphs in XPath (a language extending modal logic with equality tests for data) [27]. It is worth noting that the notion of bisimulation is related to the modal equivalence in various logics of knowledge and structures presented in [28]. In some of these logics it is proved that certain forms of bisimulation correspond to modal equivalence of knowledge, and this is used to compare the expressivity of the logics [29,30].
Inspired by the bisimulation notion defined in computer science, in this paper we defined and studied some specific behavioural equivalences involving the network knowledge and timing constraints on communication and migration; these behavioural equivalences are preserved during complete computational steps of two multi-agent systems. Strong timed bisimulation also takes timed transitions into account, being able to distinguish between different systems regardless of the evolution time; strong bounded timed bisimulation imposes limits on the evolution time, including the equivalences up to any bound below that deadline. A knowledge equivalence is able to distinguish between systems based on the interaction of the agents with their local knowledge. A related but weaker approach to knowledge bisimulation appeared in [14], where the authors used only barbs (not equivalences), looking only at the update steps.

Author Contributions

All authors have read and agreed to the published version of the manuscript. All authors contributed equally to this work.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Ciobanu, G.; Koutny, M. Modelling and Verification of Timed Interaction and Migration. In Proceedings of the Fundamental Approaches to Software Engineering, 11th International Conference, FASE 2008, Held as Part of the Joint European Conferences on Theory and Practice of Software, ETAPS 2008, Budapest, Hungary, 29 March–6 April 2008; Lecture Notes in Computer Science; Fiadeiro, J.L., Inverardi, P., Eds.; Springer: Berlin/Heidelberg, Germany, 2008; Volume 4961, pp. 215–229.
  2. Abiteboul, S.; Buneman, P.; Suciu, D. Data on the Web: From Relations to Semistructured Data and XML; Morgan Kaufmann: Burlington, MA, USA, 1999.
  3. Aman, B.; Ciobanu, G. Verification of distributed systems involving bounded-time migration. Int. J. Crit. Comput.-Based Syst. 2017, 7, 279–301.
  4. Ciobanu, G. Behaviour Equivalences in Timed Distributed pi-Calculus. In Software-Intensive Systems and New Computing Paradigms—Challenges and Visions; Lecture Notes in Computer Science; Wirsing, M., Banâtre, J., Hölzl, M.M., Rauschmayer, A., Eds.; Springer: Berlin/Heidelberg, Germany, 2008; Volume 5380, pp. 190–208.
  5. Posse, E.; Dingel, J. Theory and Implementation of a Real-Time Extension to the pi-Calculus. In Proceedings of the Formal Techniques for Distributed Systems, Joint 12th IFIP WG 6.1 International Conference, FMOODS 2010 and 30th IFIP WG 6.1 International Conference, FORTE 2010, Amsterdam, The Netherlands, 7–9 June 2010; Lecture Notes in Computer Science; Hatcliff, J., Zucca, E., Eds.; Springer: Berlin/Heidelberg, Germany, 2010; Volume 6117, pp. 125–139.
  6. Hintikka, J. Knowledge and Belief. An Introduction to the Logic of the Two Notions; Cornell University Press: Ithaca, NY, USA, 1962.
  7. Fagin, R.; Halpern, J.Y. Belief, Awareness, and Limited Reasoning. Artif. Intell. 1987, 34, 39–76.
  8. Modica, S.; Rustichini, A. Awareness and partitional information structures. Theory Decis. 1994, 37, 107–124.
  9. Ciobanu, G.; Koutny, M. Timed Migration and Interaction with Access Permissions. In Proceedings of the FM 2011: Formal Methods—17th International Symposium on Formal Methods, Limerick, Ireland, 20–24 June 2011; Lecture Notes in Computer Science; Butler, M.J., Schulte, W., Eds.; Springer: Berlin/Heidelberg, Germany, 2011; Volume 6664, pp. 293–307.
  10. Aman, B.; Ciobanu, G. Real-Time Migration Properties of rTiMo Verified in Uppaal. In Proceedings of the Software Engineering and Formal Methods—11th International Conference, SEFM 2013, Madrid, Spain, 25–27 September 2013; Lecture Notes in Computer Science; Hierons, R.M., Merayo, M.G., Bravetti, M., Eds.; Springer: Berlin/Heidelberg, Germany, 2013; Volume 8137, pp. 31–45.
  11. Milner, R. The Space and Motion of Communicating Agents; Cambridge University Press: Cambridge, UK, 2009.
  12. Xie, W.; Zhu, H.; Zhang, M.; Lu, G.; Fang, Y. Formalization and Verification of Mobile Systems Calculus Using the Rewriting Engine Maude. In Proceedings of the 2018 IEEE 42nd Annual Computer Software and Applications Conference, COMPSAC 2018, Tokyo, Japan, 23–27 July 2018; Reisman, S., Ahamed, S.I., Demartini, C., Conte, T.M., Liu, L., Claycomb, W.R., Nakamura, M., Tovar, E., Cimato, S., Lung, C., et al., Eds.; IEEE Computer Society: New York, NY, USA, 2018; Volume 1, pp. 213–218.
  13. Badouel, É.; Hélouët, L.; Morvan, C. Petri Nets with Structured Data. Fundam. Inform. 2016, 146, 35–82.
  14. Gardner, P.; Maffeis, S. Modelling dynamic web data. Theor. Comput. Sci. 2005, 342, 104–131.
  15. Sangiorgi, D. On the origins of bisimulation and coinduction. ACM Trans. Program. Lang. Syst. 2009, 31, 15:1–15:41.
  16. Ginzburg, A. Algebraic Theory of Automata, 1st ed.; Academic Press: Cambridge, MA, USA, 1968.
  17. Milner, R. An Algebraic Definition of Simulation between Programs. In Proceedings of the 2nd International Joint Conference on Artificial Intelligence, London, UK, 1–3 September 1971; Cooper, D.C., Ed.; William Kaufmann: Pleasant Hill, CA, USA, 1971; pp. 481–489.
  18. van Benthem, J. Modal Logic and Classical Logic; Bibliopolis: Asheville, NC, USA, 1985.
  19. Forti, M.; Honsell, F. Set theory with free construction principles. Ann. Scuola Norm. Sup. Pisa Cl. Sci. (4) 1983, 10, 493–522.
  20. Hinnion, R. Extensional quotients of structures and applications to the study of the axiom of extensionality. Bull. Soc. Math. Belg. 1981, XXXIII, 173–206.
  21. Aman, B.; Ciobanu, G.; Koutny, M. Behavioural Equivalences over Migrating Processes with Timers. In Proceedings of the Formal Techniques for Distributed Systems—Joint 14th IFIP WG 6.1 International Conference, FMOODS 2012 and 32nd IFIP WG 6.1 International Conference, FORTE 2012, Stockholm, Sweden, 13–16 June 2012; Lecture Notes in Computer Science; Giese, H., Rosu, G., Eds.; Springer: Berlin/Heidelberg, Germany, 2012; Volume 7273, pp. 52–66.
  22. Clarke, E.M.; Grumberg, O.; Peled, D.A. Model Checking; MIT Press: Cambridge, MA, USA, 2001.
  23. Milo, T.; Suciu, D. Index Structures for Path Expressions. In Proceedings of the Database Theory—ICDT ’99, 7th International Conference, Jerusalem, Israel, 10–12 January 1999; Lecture Notes in Computer Science; Beeri, C., Buneman, P., Eds.; Springer: Berlin/Heidelberg, Germany, 1999; Volume 1540, pp. 277–295.
  24. Fan, W.; Li, J.; Wang, X.; Wu, Y. Query preserving graph compression. In Proceedings of the ACM SIGMOD International Conference on Management of Data, SIGMOD 2012, Scottsdale, AZ, USA, 20–24 May 2012; Candan, K.S., Chen, Y., Snodgrass, R.T., Gravano, L., Fuxman, A., Eds.; ACM: New York, NY, USA, 2012; pp. 157–168.
  25. Givan, R.; Dean, T.L.; Greig, M. Equivalence notions and model minimization in Markov decision processes. Artif. Intell. 2003, 147, 163–223.
  26. Kurtonina, N.; de Rijke, M. Expressiveness of Concept Expressions in First-Order Description Logics. Artif. Intell. 1999, 107, 303–333.
  27. Abriola, S.; Barceló, P.; Figueira, D.; Figueira, S. Bisimulations on Data Graphs. J. Artif. Intell. Res. 2018, 61, 171–213.
  28. Fagin, R.; Halpern, J.Y.; Moses, Y.; Vardi, M.Y. Reasoning about Knowledge; MIT Press: Cambridge, MA, USA, 1995.
  29. van Ditmarsch, H.; French, T.; Velázquez-Quesada, F.R.; Wáng, Y.N. Knowledge, awareness, and bisimulation. In Proceedings of the 14th Conference on Theoretical Aspects of Rationality and Knowledge (TARK 2013), Chennai, India, 7–9 January 2013; Schipper, B.C., Ed.; 2013.
  30. Velázquez-Quesada, F.R. Bisimulation characterization and expressivity hierarchy of languages for epistemic awareness models. J. Log. Comput. 2018, 28, 1805–1832.
Table 1. Syntax of our Multi-Agent Systems.
Processes P, Q ::= go^t l then P (move)
  |  a^{Δt} ! ⟨v⟩ then P else Q (output)
  |  a^{Δt} ? (u) then P else Q (input)
  |  if test then P else Q (branch)
  |  0 (termination)
  |  id(v) (recursion)
  |  create(f⟨v⟩;) then P (create)
  |  update(p/f, v) then P (update)
Knowledge K ::= ∅ (empty)
  |  f⟨ε⟩; K  |  f⟨v⟩; K (tree)
  |  K K (set)
Paths p, p′ ::= /f  |  p p′  |  p[test(p′)]  |  p[test(p′/f)]
Tests test(p) ::= true  |  ¬test(p)  |  K(p) > k  |  K(p) = v
  test ::= test(p) ∧ test(p′)  |  ¬test
Agents A, B ::= P K
Set of Agents Ã ::= 0  |  Ã A
Networks N ::= l[[Ã]]  |  N N
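The process syntax above can be mirrored as an abstract syntax tree; the sketch below is illustrative, covers only a few constructors, and uses field names of our own choosing.

```python
from dataclasses import dataclass

# Illustrative, partial encoding of the process syntax as an AST.

@dataclass
class Stop:                 # 0 (termination)
    pass

@dataclass
class Create:               # create(f<v>;) then P
    field: str
    value: str
    then: object

@dataclass
class Move:                 # go^t l then P
    timer: int
    loc: str
    then: object

# a client that migrates and then records a destination in its knowledge:
P = Move(4, "office4", Create("dest", "office5", Stop()))
assert P.then.field == "dest"
```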
Table 2. Operational Semantics for our Multi-Agent Systems.
(Stop) l [ [ 0 ] ]
(Com) l [ [ a Δ t 1 ! v then P 1 else Q 1 K 1 a Δ t 2 ? ( u ) then P 2 else Q 2 K 2 A ˜ ] ] a ! ? @ l l [ [ P 1 K 1 { v / u } P 2 K 2 A ˜ ] ]
(Put0) l [ [ a Δ 0 ! v then P else Q K A ˜ ] ] a ! Δ 0 @ l l [ [ Q K A ˜ ] ]
(Get0) l [ [ a Δ 0 ? ( u ) then P else Q K A ˜ ] ] a ? Δ 0 @ l l [ [ Q K A ˜ ] ]
(Move0) l [ [ go 0 l then P K A ˜ ] ] l [ [ B ˜ ] ] l l l [ [ A ˜ ] ] l [ [ P K B ˜ ] ]
(IfT) test @ K = true l [ [ if test then P else Q K A ˜ ] ] true @ l l [ [ P K A ˜ ] ]
(IfF) test @ K = false l [ [ if test then P else Q K A ˜ ] ] false @ l l [ [ Q K A ˜ ] ]
(Create) l [ [ create ( f v ; ) then P K A ˜ ] ] create f @ l l [ [ P K f v ; A ˜ ] ]
(Update) p / f = / f / f K = f v ; f v ; K 1 K 2 K 3 K = f v ; f v ; K 1 K 2 K 3 l [ [ update ( p / f , v ) then P K A ˜ ] ] upd p @ l l [ [ P K A ˜ ] ]
(Extend) p = / f / f p / f K = f v ; f v ; K 1 K 2 K 3 K = f v ; f v ; f v ; K 1 K 2 K 3 l [ [ update ( p / f , v ) then P K A ˜ ] ] upd p @ l l [ [ P K A ˜ ] ]
(Call) l [ [ id ( v ) K A ˜ ] ] call @ l l [ [ { v / u } P id K A ˜ ] ] , where i d ( u ) = def P id
(Par) N 1 Λ 1 N 1 N 2 Λ 2 N 2 N 1 N 2 Λ 1 | Λ 2 N 1 N 2    (Equiv) N N N Λ N N N N Λ N
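The (Update) and (Extend) rules act only on the knowledge component of an agent; a simplified sketch on a nested-dictionary encoding is given below (it ignores path tests such as p[test(p′)], and the encoding and names are ours, not the paper's).

```python
# Illustrative sketch of the (Update)/(Extend) rules on a nested-dict
# encoding of knowledge: label -> (value, subtree).

def update(knowledge, path, new_value):
    fields = path.strip("/").split("/")
    node = knowledge
    for field in fields[:-1]:
        node = node[field][1]            # descend into the subtree
    last = fields[-1]
    if last in node:                     # (Update): overwrite the value
        node[last] = (new_value, node[last][1])
    else:                                # (Extend): add a fresh leaf
        node[last] = (new_value, {})

K = {"agency": ("office4", {"dest": ("", {})})}
update(K, "/agency/dest", "office5")     # (Update) case
update(K, "/agency/price", "100")        # (Extend) case
assert K["agency"][1]["dest"][0] == "office5"
assert K["agency"][1]["price"][0] == "100"
```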
Table 3. Operational Semantics of knowTiMo: Time Passing.
(DStop) l[[0]] ⇝^t l[[0]]
(DPut) if 0 < t′ ≤ t, then l[[a^{Δt} ! ⟨v⟩ then P else Q K]] ⇝^{t′} l[[a^{Δ(t−t′)} ! ⟨v⟩ then P else Q K]]
(DGet) if 0 < t′ ≤ t, then l[[a^{Δt} ? (u) then P else Q K]] ⇝^{t′} l[[a^{Δ(t−t′)} ? (u) then P else Q K]]
(DMove) if 0 < t′ ≤ t, then l[[go^t l′ then P K]] ⇝^{t′} l[[go^{t−t′} l′ then P K]]
(DPar) if N_1 ⇝^t N_1′, N_2 ⇝^t N_2′ and N_1 N_2 cannot perform instantaneous actions, then N_1 N_2 ⇝^t N_1′ N_2′
(DEquiv) if N ≡ N′, N′ ⇝^t N″ and N″ ≡ N‴, then N ⇝^t N‴

Aman, B.; Ciobanu, G. Knowledge Dynamics and Behavioural Equivalences in Multi-Agent Systems. Mathematics 2021, 9, 2869. https://doi.org/10.3390/math9222869
