Article

Analysis of a Class of Stochastic Animal Behavior Models under Specific Choice Preferences

1
Department of Mathematics and Statistics, Faculty of Science and Technology, Thammasat University Rangsit Center, Pathum Thani 12120, Thailand
2
Department of Mathematics and Sciences, Prince Sultan University, Riyadh 11586, Saudi Arabia
3
Faculty of Electrical Engineering, University of Banja Luka, Patre 5, 78000 Banja Luka, Bosnia and Herzegovina
4
School of Science, Nanjing University of Science and Technology, Nanjing 210094, China
*
Authors to whom correspondence should be addressed.
Mathematics 2022, 10(12), 1975; https://doi.org/10.3390/math10121975
Submission received: 18 April 2022 / Revised: 29 May 2022 / Accepted: 6 June 2022 / Published: 8 June 2022

Abstract

The behavior of animals can be studied in two ways: experimentally, in labs or in the field, or theoretically, via modeling. Extensive research on animal behavior in probabilistic learning circumstances has produced findings that are consistent with so-called occurrence studies. However, such cases can be classified into four different events, depending on the location of the food and the side chosen by the animals, a distinction that most existing models do not make. This article addresses that gap by offering a generic stochastic model under these conditions that can be utilized to analyze the broad range of models that have been reported in the literature. We explored the existence, uniqueness, and stability results of the proposed model using well-known fixed-point methods. Additionally, we present some examples to highlight the significance of our results.
MSC:
91G30; 92D50; 46N60; 47H10

1. Introduction and Preliminaries

Animal behavior is a lively and intriguing field of study within science. Many of the fundamental concepts of animal behavior are inspired by ecology, physiology, and evolution, despite their roots in comparative psychology and ethology (see [1]). Numerous fields within animal behavior study can be applied to real-world challenges, including monitoring well-being indicators for animals in wildlife parks, research labs, and farms (see [2]), behavioral difficulties in pet animals (see [3,4]), and the cognitive implications of environmental control (see [5]).
Light prediction research projects and cooperative games are two types of experimental settings that are often utilized in decision-making research and have served as the background for several studies. The findings of studies that were conducted within these two contexts are significantly different. Investigations on the former have shown that decision-making attitudes are remarkably resilient, which results in the development of a profusion of incredibly exact behavioral models (see [6]). Regrettably, research on the latter seems to have produced more uncertainty than clarity (see [7,8]).
Mathematical psychology is a branch of psychology that studies how to describe observations, reasoning, and consciousness using mathematics. When a substantial volume of quantitative data has been amassed, mathematical models for scientific processes can assist in the evolution of science. This aggregation may steer the construction of models and assess the sufficiency of their intermediate phases. In turn, these models are typically advantageous for organizing and analyzing experimental data and proposing novel routes for investigation. Few fields of psychology have the same amount and diversity of data required for model construction as for learning. Numerous efforts to develop quantitative models for learning processes have provided evidence for this assertion (for details, see [9,10]).
Recent research within mathematical psychology has shown that even a simple learning experiment behaves stochastically; this is, in fact, not a novel phenomenon (for more information, see [11,12,13,14,15]).
To illustrate the learning behavior of animals across more than one event, Epstein [16] proposed the following model:
$$H(x) = \frac{e^x}{1 + e^x}\, H\big(k_1(x)\big) + \frac{1}{1 + e^x}\, H\big(k_2(x)\big)$$
for all $x \in D$, where $k_1, k_2 : D \to \mathbb{R}$ are given mappings and $H : D \to \mathbb{R}$ is an unknown function. The bilateral Laplace transformation was then used to determine the solution of the aforementioned problem.
In [11,12], the authors examined the motion of a tropical fish in specific circumstances while observing both habit formation and reinforcement extinction behavior. They asserted that four separate outputs are associated with such actions: right non-reward, right reward, left reward, and left non-reward.
In [17], Turab and Sintunavarat utilized the above idea to explain the behavior of a paradise fish in a two-choice situation using the following stochastic equation:
$$H(x) = x\, H(\ell_1 x + 1 - \ell_1) + (1 - x)\, H(\ell_2 x)$$
for all $x \in D$, where $0 < \ell_1, \ell_2 < 1$ and $H : D \to \mathbb{R}$ is an unknown function.
Later, the authors proposed the following functional equation to study the traumatic avoidance learning behavior of mongrel dogs (for details, see [18]):
$$H(x) = x\, H\big(\ell_1 x + (1 - \ell_1)\lambda_1\big) + (1 - x)\, H\big(\ell_2 x + (1 - \ell_2)\lambda_2\big)$$
for all $x \in D$, where $0 < \ell_1, \ell_2 < 1$, $\lambda_1, \lambda_2 \in [0, 1]$, and $H : D \to \mathbb{R}$ is an unknown function. On the other hand, Berinde and Khan [19] and Turab and Sintunavarat [20] used different approaches and extended the work presented in [21] by proposing the following general functional equation:
$$H(x) = x\, H\big(\varrho_1(x)\big) + (1 - x)\, H\big(\varrho_2(x)\big)$$
for all $x \in D$, where $H : D \to \mathbb{R}$ is an unknown function and $\varrho_1, \varrho_2 : D \to \mathbb{R}$ are given mappings.
Numerous further investigations on human (see [22,23,24,25,26,27]) and animal (see [28,29,30,31]) behavior within probability learning settings have generated diverse findings.
Most research on the study of animal responses in two-option scenarios has only looked at the animals’ progression toward a single alternative (for examples, see [16,17,18,19,20]). On the other hand, in [11,12], the authors categorized such reactions into four groups by concentrating on the food position and the selected side. To fill that void, we proposed the following stochastic model:
$$H(x) = Y_1(x)\, H\big(G_1^{\gamma}(x)\big) + Y_2(x)\, H\big(G_2^{\gamma}(x)\big) + Y_3(x)\, H\big(G_3^{\gamma}(x)\big) + Y_4(x)\, H\big(G_4^{\gamma}(x)\big),$$
for all $x \in D$, where $H : D \to \mathbb{R}$ is an unknown function and $G_1^{\gamma}, G_2^{\gamma}, G_3^{\gamma}, G_4^{\gamma} : D \to D$ are the four given responses. Additionally:
$$Y_1(x) = \zeta V(x), \quad Y_2(x) = (1 - \zeta) V(x), \quad Y_3(x) = \zeta\big(1 - V(x)\big), \quad Y_4(x) = (1 - \zeta)\big(1 - V(x)\big),$$
where $V : D \to D$ represents the corresponding probability of the event and $0 \le \zeta \le 1$ is the fixed proportion of the trials.
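The four weights in (6) form a probability partition: $\zeta$ splits the trials between the two schedules and $V(x)$ splits them between the two sides, so $Y_1 + Y_2 + Y_3 + Y_4 = 1$ for every $x$. A quick numerical check of this partition property, using the illustrative choice $V(x) = x$ (the one used in the examples later on):

```python
# The weights of model (6): zeta and V(x) each split every trial two
# ways, so the four weights always sum to one.

def weights(x, zeta, V=lambda t: t):
    v = V(x)
    return (zeta * v, (1 - zeta) * v, zeta * (1 - v), (1 - zeta) * (1 - v))

for x in (0.0, 0.3, 1.0):
    for zeta in (0.0, 0.5, 0.9):
        assert abs(sum(weights(x, zeta)) - 1.0) < 1e-12
```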
The primary contributions of this manuscript are as follows:
  • We propose a general functional equation with more than two options, focusing on food location and the chosen side;
  • Our proposed model generalizes many models in the existing literature and opens a new avenue of research within mathematical psychology and learning theory;
  • We use the Banach fixed-point theorem to establish the existence of a unique solution to the proposed model (for further information on fixed-point theory, see [32,33,34,35,36]);
  • We also discuss Hyers–Ulam and Hyers–Ulam–Rassias stability for the proposed system;
  • We present some examples to show the importance of our results within this field of study.
In order to progress, the following result is necessary.
Theorem 1
([37]). Let $(D, d)$ be a complete metric space and $K : D \to D$ be a mapping that satisfies:
$$d(K\mu, K\upsilon) \le \Lambda\, d(\mu, \upsilon), \quad \forall\, \mu, \upsilon \in D,$$
for some $\Lambda < 1$. Then, $K$ has a unique fixed point. Moreover, for each $\mu_0 \in D$, the iteration process $\{\mu_n\}_{n \in \mathbb{N}}$ in $D$ that is defined by $\mu_n = K\mu_{n-1}$ converges to the unique fixed point of $K$.
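Theorem 1 is constructive: the Picard iterates themselves converge to the fixed point at a linear rate governed by $\Lambda$. A minimal numerical sketch, using a hypothetical scalar contraction rather than any operator from this paper:

```python
# Numerical sketch of Theorem 1 (Banach): for a contraction K with
# constant Lambda < 1, the Picard iterates mu_n = K(mu_{n-1}) converge
# to the unique fixed point from any starting value.

def picard(K, mu0, tol=1e-12, max_iter=10_000):
    """Iterate mu = K(mu) until successive iterates agree to within tol."""
    mu = mu0
    for _ in range(max_iter):
        nxt = K(mu)
        if abs(nxt - mu) < tol:
            return nxt
        mu = nxt
    return mu

# K(x) = x/2 + 1 has Lambda = 1/2 and unique fixed point x = 2.
fp = picard(lambda x: x / 2 + 1, mu0=0.0)
print(fp)  # ≈ 2.0
```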

2. Main Results

Throughout this paper, we let D = [ 0 , 1 ] . Before proving the main results, we highlight the following assumptions.
$\tilde{O}_1$:
Let $T_{RC}$ be the class of all continuous real-valued functions $H : D \to \mathbb{R}$ satisfying $H(0) = 0$ and $\|H\| = \sup_{\tilde{\tau}} |H_{\tilde{\tau}}| / |\tilde{\tau}| < \infty$, where $H_{\tilde{\tau}} = H(\mu) - H(\upsilon)$ and $\tilde{\tau} = \mu - \upsilon$ with $\mu \ne \upsilon$. There is also a subset $C_{BS}$ of $P_S := \{ H \in T_{RC} : H(1) = 1 \}$, such that $(C_{BS}, \|\cdot\|)$ is a Banach space (for details, see [17]);
$\tilde{O}_2$:
Let $G_1^{\gamma}, G_2^{\gamma}, G_3^{\gamma}, G_4^{\gamma} : D \to D$ be contraction mappings with contractive coefficients $\kappa_1, \ldots, \kappa_4$, respectively, with $G_3^{\gamma}(0) = 0 = G_4^{\gamma}(0)$;
$\tilde{O}_3$:
Let $V : D \to D$ be a non-expansive mapping with $|V(x)| \le \kappa_5$ for some $\kappa_5 > 0$ and $V(0) = 0$;
$\tilde{O}_4$:
The mappings $G_1^{\gamma}, G_2^{\gamma}, G_3^{\gamma}, G_4^{\gamma} : D \to D$ satisfy the following conditions:
$$\big| G_1^{\gamma}(\mu) - G_3^{\gamma}(\upsilon) \big| \le \kappa_6 |\mu - \upsilon| \quad \text{and} \quad \big| G_2^{\gamma}(\mu) - G_4^{\gamma}(\upsilon) \big| \le \kappa_7 |\mu - \upsilon|$$
for all $\mu, \upsilon \in D$ with $\mu \ne \upsilon$, where $\kappa_6, \kappa_7 \in [0, 1)$;
$\tilde{O}_5$:
Let $\alpha : C_{BS} \to [0, \infty)$ be a fixed function. Then, for every $H \in C_{BS}$ that satisfies $d(MH, H) \le \alpha(H)$, there is a unique $H^* \in C_{BS}$, such that $MH^* = H^*$ and $d(H, H^*) \le \theta\, \alpha(H)$ for some $\theta > 0$;
$\tilde{O}_6$:
Let $\chi > 0$. Then, for every $H \in C_{BS}$ that satisfies $d(MH, H) \le \chi$, there is a unique $H^* \in C_{BS}$, such that $MH^* = H^*$ and $d(H, H^*) \le \rho \chi$ for some $\rho > 0$.
We begin with the result stated below.
Theorem 2.
Consider the model (5)–(6) with the assumptions $\tilde{O}_1$–$\tilde{O}_4$. Suppose that $\Lambda_1 < 1$, where:
$$\Lambda_1 := \zeta \big[ (1 + \kappa_5)\kappa_1 + \kappa_3 + \kappa_6 \big] + (1 - \zeta) \big[ (1 + \kappa_5)\kappa_2 + \kappa_4 + \kappa_7 \big].$$
Then, the mapping $M : C_{BS} \to C_{BS}$ defined by:
$$(MH)(x) = Y_1(x)\, H\big(G_1^{\gamma}(x)\big) + Y_2(x)\, H\big(G_2^{\gamma}(x)\big) + Y_3(x)\, H\big(G_3^{\gamma}(x)\big) + Y_4(x)\, H\big(G_4^{\gamma}(x)\big)$$
for all $x \in D$ has a unique fixed point, which is the solution of (9). Furthermore, for any $H_0 \in C_{BS}$, the Picard sequence $\{H_n\}_{n \in \mathbb{N}}$ in $C_{BS}$, defined by:
$$H_n(x) = Y_1(x)\, H_{n-1}\big(G_1^{\gamma}(x)\big) + Y_2(x)\, H_{n-1}\big(G_2^{\gamma}(x)\big) + Y_3(x)\, H_{n-1}\big(G_3^{\gamma}(x)\big) + Y_4(x)\, H_{n-1}\big(G_4^{\gamma}(x)\big),$$
converges to the unique solution of (9).
Proof. 
We defined a metric $d : C_{BS} \times C_{BS} \to \mathbb{R}$ on $C_{BS}$, which was induced by $\|\cdot\|$. It could be seen that $(C_{BS}, d)$ was a complete metric space. Then, we considered the mapping $M$ on $C_{BS}$ that was given in (9).
For each $H \in C_{BS}$, it could be seen that $(MH)(0) = \zeta H\big(G_3^{\gamma}(0)\big) + (1 - \zeta) H\big(G_4^{\gamma}(0)\big) = 0$. Additionally, $MH$ was continuous and $\|MH\| < \infty$ for all $H \in C_{BS}$. Thus, $M$ was a self-operator on $C_{BS}$. Furthermore, it was clear that any fixed point of $M$ was a solution of (9). As $M$ was a linear mapping, for $H_1, H_2 \in C_{BS}$ we could write $MH_1 - MH_2 = M(H_1 - H_2)$. Thus, for each $\gamma_1, \gamma_2 \in D$ with $\gamma_1 \ne \gamma_2$, we had $\Theta_{\gamma_1, \gamma_2} = \big| M\tilde{H}(\gamma_1) - M\tilde{H}(\gamma_2) \big| / |\tilde{\gamma}|$, where $\tilde{H} = H_1 - H_2$ and $\tilde{\gamma} = \gamma_1 - \gamma_2$. By using this argument in (9), we could write:
$$\begin{aligned} \Theta_{\gamma_1, \gamma_2} = {} & \Big| Y_1(\gamma_1) \tilde{H}\big(G_1^{\gamma}(\gamma_1)\big) + Y_2(\gamma_1) \tilde{H}\big(G_2^{\gamma}(\gamma_1)\big) + Y_3(\gamma_1) \tilde{H}\big(G_3^{\gamma}(\gamma_1)\big) + Y_4(\gamma_1) \tilde{H}\big(G_4^{\gamma}(\gamma_1)\big) \\ & \; - Y_1(\gamma_2) \tilde{H}\big(G_1^{\gamma}(\gamma_2)\big) - Y_2(\gamma_2) \tilde{H}\big(G_2^{\gamma}(\gamma_2)\big) - Y_3(\gamma_2) \tilde{H}\big(G_3^{\gamma}(\gamma_2)\big) - Y_4(\gamma_2) \tilde{H}\big(G_4^{\gamma}(\gamma_2)\big) \Big| \, |\tilde{\gamma}|^{-1}. \end{aligned}$$
From the above analysis, adding and subtracting $Y_i(\gamma_1) \tilde{H}\big(G_i^{\gamma}(\gamma_2)\big)$ for $i = 1, \ldots, 4$ and applying the triangle inequality, we could write:
$$\begin{aligned} \Theta_{\gamma_1, \gamma_2} \le {} & \Big[ \big| Y_1(\gamma_1) \big| \big| \tilde{H}\big(G_1^{\gamma}(\gamma_1)\big) - \tilde{H}\big(G_1^{\gamma}(\gamma_2)\big) \big| + \big| Y_2(\gamma_1) \big| \big| \tilde{H}\big(G_2^{\gamma}(\gamma_1)\big) - \tilde{H}\big(G_2^{\gamma}(\gamma_2)\big) \big| \\ & + \big| Y_3(\gamma_1) \big| \big| \tilde{H}\big(G_3^{\gamma}(\gamma_1)\big) - \tilde{H}\big(G_3^{\gamma}(\gamma_2)\big) \big| + \big| Y_4(\gamma_1) \big| \big| \tilde{H}\big(G_4^{\gamma}(\gamma_1)\big) - \tilde{H}\big(G_4^{\gamma}(\gamma_2)\big) \big| \\ & + \big| Y_1(\gamma_1) - Y_1(\gamma_2) \big| \big| \tilde{H}\big(G_1^{\gamma}(\gamma_2)\big) \big| + \big| Y_2(\gamma_1) - Y_2(\gamma_2) \big| \big| \tilde{H}\big(G_2^{\gamma}(\gamma_2)\big) \big| \\ & + \big| Y_3(\gamma_1) - Y_3(\gamma_2) \big| \big| \tilde{H}\big(G_3^{\gamma}(\gamma_2)\big) \big| + \big| Y_4(\gamma_1) - Y_4(\gamma_2) \big| \big| \tilde{H}\big(G_4^{\gamma}(\gamma_2)\big) \big| \Big] \, |\tilde{\gamma}|^{-1}. \end{aligned}$$
By using the assumptions $\tilde{O}_1$–$\tilde{O}_4$, we obtained:
$$\begin{aligned} \Theta_{\gamma_1, \gamma_2} \le {} & \kappa_1 |Y_1(\gamma_1)| \|\tilde{H}\| + \kappa_2 |Y_2(\gamma_1)| \|\tilde{H}\| + \kappa_3 |Y_3(\gamma_1)| \|\tilde{H}\| + \kappa_4 |Y_4(\gamma_1)| \|\tilde{H}\| \\ & + \zeta \big| \tilde{H}\big(G_1^{\gamma}(\gamma_2)\big) - \tilde{H}\big(G_1^{\gamma}(\gamma_1)\big) \big| + (1 - \zeta) \big| \tilde{H}\big(G_2^{\gamma}(\gamma_2)\big) - \tilde{H}\big(G_2^{\gamma}(\gamma_1)\big) \big| \\ & + \zeta \big| \tilde{H}\big(G_1^{\gamma}(\gamma_1)\big) - \tilde{H}\big(G_3^{\gamma}(\gamma_2)\big) \big| + (1 - \zeta) \big| \tilde{H}\big(G_2^{\gamma}(\gamma_1)\big) - \tilde{H}\big(G_4^{\gamma}(\gamma_2)\big) \big| \\ \le {} & \Lambda_1 \|\tilde{H}\|, \end{aligned}$$
where $\Lambda_1$ is as defined in (8). This produced:
$$d(MH_1, MH_2) = \|M\tilde{H}\| \le \Lambda_1 \|\tilde{H}\| = \Lambda_1\, d(H_1, H_2).$$
As $0 < \Lambda_1 < 1$, by Theorem 1, we obtained the unique solution of (9), to which the Picard sequence $\{H_n\}_{n \in \mathbb{N}}$ in $C_{BS}$ (with arbitrary $H_0 \in C_{BS}$) converged. □
For the special case of Theorem 2, we considered the following assumption:
$\tilde{O}_2'$:
Let $G_1^{\gamma}, G_2^{\gamma}, G_3^{\gamma}, G_4^{\gamma} : D \to D$ be contraction mappings with contractive coefficients $\kappa_1 \le \kappa_2 \le \kappa_3 \le \kappa_4$, respectively, with $G_3^{\gamma}(0) = 0 = G_4^{\gamma}(0)$.
Corollary 1.
Consider the model (5) and (6) with the assumptions $\tilde{O}_1$, $\tilde{O}_2'$, $\tilde{O}_3$, and $\tilde{O}_4$. When $\Lambda_1 < 1$, where:
$$\Lambda_1 := (2 + \kappa_5)\kappa_4 + (\kappa_6 + \kappa_7),$$
then the mapping $M : C_{BS} \to C_{BS}$ has a unique fixed point, which is the solution of (12), where:
$$(MH)(x) = Y_1(x)\, H\big(G_1^{\gamma}(x)\big) + Y_2(x)\, H\big(G_2^{\gamma}(x)\big) + Y_3(x)\, H\big(G_3^{\gamma}(x)\big) + Y_4(x)\, H\big(G_4^{\gamma}(x)\big)$$
for all $x \in D$. Furthermore, for any $H_0 \in C_{BS}$, the Picard sequence $\{H_n\}_{n \in \mathbb{N}}$ in $C_{BS}$, defined by:
$$H_n(x) = Y_1(x)\, H_{n-1}\big(G_1^{\gamma}(x)\big) + Y_2(x)\, H_{n-1}\big(G_2^{\gamma}(x)\big) + Y_3(x)\, H_{n-1}\big(G_3^{\gamma}(x)\big) + Y_4(x)\, H_{n-1}\big(G_4^{\gamma}(x)\big),$$
converges to the unique solution of (12).
The conditions $\tilde{O}_1$–$\tilde{O}_4$ were sufficient, but not necessary, to guarantee the existence of a unique solution to the proposed model (9). Next, we imposed the following key assumption.
$\tilde{O}_4'$:
There exist points $\xi_1, \xi_2 \in D$ such that $G_1^{\gamma}(\xi_1) = G_3^{\gamma}(\xi_1)$ and $G_2^{\gamma}(\xi_2) = G_4^{\gamma}(\xi_2)$.
Theorem 3.
Consider the model (5)–(6) with the assumptions $\tilde{O}_1$–$\tilde{O}_3$ and $\tilde{O}_4'$. Suppose that $\Lambda_2 < 1$, where:
$$\Lambda_2 := \zeta \big[ (1 + \kappa_5)\kappa_1 + 2\kappa_3 \big] + (1 - \zeta) \big[ (1 + \kappa_5)\kappa_2 + 2\kappa_4 \big].$$
Then, the mapping $M : C_{BS} \to C_{BS}$ has a unique fixed point, which is the solution of (15), where:
$$(MH)(x) = Y_1(x)\, H\big(G_1^{\gamma}(x)\big) + Y_2(x)\, H\big(G_2^{\gamma}(x)\big) + Y_3(x)\, H\big(G_3^{\gamma}(x)\big) + Y_4(x)\, H\big(G_4^{\gamma}(x)\big)$$
for all $x \in D$. Furthermore, for any $H_0 \in C_{BS}$, the Picard sequence $\{H_n\}_{n \in \mathbb{N}}$ in $C_{BS}$, defined by:
$$H_n(x) = Y_1(x)\, H_{n-1}\big(G_1^{\gamma}(x)\big) + Y_2(x)\, H_{n-1}\big(G_2^{\gamma}(x)\big) + Y_3(x)\, H_{n-1}\big(G_3^{\gamma}(x)\big) + Y_4(x)\, H_{n-1}\big(G_4^{\gamma}(x)\big),$$
converges to the unique solution of (15).
Proof. 
This theorem followed the same line of reasoning as Theorem 2; only the steps that differed from the preceding theorem are highlighted below. For each $\gamma_1, \gamma_2 \in D$ with $\gamma_1 \ne \gamma_2$, we obtained $\Theta_{\gamma_1, \gamma_2} = \big| M\tilde{H}(\gamma_1) - M\tilde{H}(\gamma_2) \big| / |\tilde{\gamma}|$, where $\tilde{H} = H_1 - H_2$ and $\tilde{\gamma} = \gamma_1 - \gamma_2$. Thus, using the assumptions $\tilde{O}_1$–$\tilde{O}_3$, we could write:
$$\begin{aligned} \Theta_{\gamma_1, \gamma_2} \le {} & \kappa_1 |Y_1(\gamma_1)| \|\tilde{H}\| + \kappa_2 |Y_2(\gamma_1)| \|\tilde{H}\| + \kappa_3 |Y_3(\gamma_1)| \|\tilde{H}\| + \kappa_4 |Y_4(\gamma_1)| \|\tilde{H}\| \\ & + \zeta \big| \tilde{H}\big(G_1^{\gamma}(\gamma_2)\big) - \tilde{H}\big(G_1^{\gamma}(\xi_1)\big) \big| + (1 - \zeta) \big| \tilde{H}\big(G_2^{\gamma}(\gamma_2)\big) - \tilde{H}\big(G_2^{\gamma}(\xi_2)\big) \big| \\ & + \zeta \big| \tilde{H}\big(G_3^{\gamma}(\xi_1)\big) - \tilde{H}\big(G_3^{\gamma}(\gamma_2)\big) \big| + (1 - \zeta) \big| \tilde{H}\big(G_4^{\gamma}(\xi_2)\big) - \tilde{H}\big(G_4^{\gamma}(\gamma_2)\big) \big|. \end{aligned}$$
Then, utilizing the condition $\tilde{O}_4'$, we obtained the following results.
Case 1:
When $\gamma_2 = \xi_1 = \xi_2$, then using (17), we obtained:
$$\Theta_{\gamma_1, \gamma_2} \le \kappa_1 |Y_1(\gamma_1)| \|\tilde{H}\| + \kappa_2 |Y_2(\gamma_1)| \|\tilde{H}\| + \kappa_3 |Y_3(\gamma_1)| \|\tilde{H}\| + \kappa_4 |Y_4(\gamma_1)| \|\tilde{H}\| \le \Lambda_2 \|\tilde{H}\|.$$
Case 2:
When $\gamma_2 \ne \xi_1$ and $\gamma_2 = \xi_2$, then using (17), we obtained:
$$\begin{aligned} \Theta_{\gamma_1, \gamma_2} \le {} & \kappa_1 |Y_1(\gamma_1)| \|\tilde{H}\| + \kappa_2 |Y_2(\gamma_1)| \|\tilde{H}\| + \kappa_3 |Y_3(\gamma_1)| \|\tilde{H}\| + \kappa_4 |Y_4(\gamma_1)| \|\tilde{H}\| \\ & + \kappa_1 \zeta |\gamma_2 - \xi_1| \|\tilde{H}\| + \kappa_3 \zeta |\xi_1 - \gamma_2| \|\tilde{H}\| \le \Lambda_2 \|\tilde{H}\|. \end{aligned}$$
Case 3:
When $\gamma_2 = \xi_1$ and $\gamma_2 \ne \xi_2$, then using (17), we obtained:
$$\begin{aligned} \Theta_{\gamma_1, \gamma_2} \le {} & \kappa_1 |Y_1(\gamma_1)| \|\tilde{H}\| + \kappa_2 |Y_2(\gamma_1)| \|\tilde{H}\| + \kappa_3 |Y_3(\gamma_1)| \|\tilde{H}\| + \kappa_4 |Y_4(\gamma_1)| \|\tilde{H}\| \\ & + \kappa_2 (1 - \zeta) |\gamma_2 - \xi_2| \|\tilde{H}\| + \kappa_4 (1 - \zeta) |\xi_2 - \gamma_2| \|\tilde{H}\| \le \Lambda_2 \|\tilde{H}\|. \end{aligned}$$
Case 4:
When $\gamma_2 \ne \xi_1$ and $\gamma_2 \ne \xi_2$, then using (17), we obtained:
$$\begin{aligned} \Theta_{\gamma_1, \gamma_2} \le {} & \kappa_1 |Y_1(\gamma_1)| \|\tilde{H}\| + \kappa_2 |Y_2(\gamma_1)| \|\tilde{H}\| + \kappa_3 |Y_3(\gamma_1)| \|\tilde{H}\| + \kappa_4 |Y_4(\gamma_1)| \|\tilde{H}\| \\ & + \kappa_1 \zeta |\gamma_2 - \xi_1| \|\tilde{H}\| + \kappa_3 \zeta |\xi_1 - \gamma_2| \|\tilde{H}\| + \kappa_2 (1 - \zeta) |\gamma_2 - \xi_2| \|\tilde{H}\| + \kappa_4 (1 - \zeta) |\xi_2 - \gamma_2| \|\tilde{H}\| \le \Lambda_2 \|\tilde{H}\|, \end{aligned}$$
where $\Lambda_2$ is as defined in (14). Hence:
$$d(MH_1, MH_2) = \|M\tilde{H}\| \le \Lambda_2 \|\tilde{H}\| = \Lambda_2\, d(H_1, H_2).$$
As $0 < \Lambda_2 < 1$, by Theorem 1, we obtained the unique solution of (15), to which the Picard sequence $\{H_n\}_{n \in \mathbb{N}}$ in $C_{BS}$ (with arbitrary $H_0 \in C_{BS}$), as defined in (16), converged. □
The following result followed on from the preceding theorem.
Corollary 2.
Consider the model (5)–(6) with the assumptions $\tilde{O}_1$, $\tilde{O}_2'$, $\tilde{O}_3$, and $\tilde{O}_4'$. When $\Lambda_2 := (3 + \kappa_5)\kappa_4 < 1$, then the mapping $M : C_{BS} \to C_{BS}$ has a unique fixed point, which is the solution of (18), where:
$$(MH)(x) = Y_1(x)\, H\big(G_1^{\gamma}(x)\big) + Y_2(x)\, H\big(G_2^{\gamma}(x)\big) + Y_3(x)\, H\big(G_3^{\gamma}(x)\big) + Y_4(x)\, H\big(G_4^{\gamma}(x)\big)$$
for all $x \in D$. Furthermore, for any $H_0 \in C_{BS}$, the Picard sequence $\{H_n\}_{n \in \mathbb{N}}$ in $C_{BS}$, defined by:
$$H_n(x) = Y_1(x)\, H_{n-1}\big(G_1^{\gamma}(x)\big) + Y_2(x)\, H_{n-1}\big(G_2^{\gamma}(x)\big) + Y_3(x)\, H_{n-1}\big(G_3^{\gamma}(x)\big) + Y_4(x)\, H_{n-1}\big(G_4^{\gamma}(x)\big),$$
converges to the unique solution of (18).
Remark 1.
The suggested model (5), when associated with (6), is an extension of several models that have been seen in the literature. For example:
  • When we considered $\zeta = 0$, $V(x) = x$, and $G_2^{\gamma}(1) = 1$, our proposed stochastic Equation (5) with (6) reduced to the functional equation discussed in [17];
  • When we considered $\zeta = 1$, $V(x) = x$, and $G_1^{\gamma}(1) \ne 1$, our proposed model (5)–(6) reduced to the functional equation discussed in [18];
  • When we considered $\zeta = 0$ and $V(x) = x$ and the mappings $G_1^{\gamma}, G_2^{\gamma} : D \to D$ satisfied the condition $\tilde{O}_2$ with $G_2^{\gamma}(1) = 1$, our proposed Equation (5), with regard to (6), generalized the models presented in [19,21].
Remark 2.
Due to the linear convergence rate of the Picard iteration, rapid convergence of the sequence of iterates could not be expected. A suitable accelerative approach could be applied to overcome this problem (for details, see [38,39,40]).

3. Stability Analysis

The consistency of solutions is essential within computational mathematics, as results can be affected by minor changes in the given dataset. Therefore, it was essential to observe the stability (see [41,42,43,44,45,46,47,48] for details) of the proposed model (5) when associated with (6).
Theorem 4.
Under the premise of Theorem 3, the equation $MH = H$, where $M : C_{BS} \to C_{BS}$ is given by:
$$(MH)(x) = Y_1(x)\, H\big(G_1^{\gamma}(x)\big) + Y_2(x)\, H\big(G_2^{\gamma}(x)\big) + Y_3(x)\, H\big(G_3^{\gamma}(x)\big) + Y_4(x)\, H\big(G_4^{\gamma}(x)\big)$$
for all $H \in C_{BS}$ and $x \in D$, has Hyers–Ulam–Rassias stability (as defined in $\tilde{O}_5$).
Proof. 
We let $H \in C_{BS}$ be such that $d(MH, H) \le \alpha(H)$. Using Theorem 3, we deduced that there was a unique $H^* \in C_{BS}$, such that $MH^* = H^*$. Thus, we obtained:
$$d(H, H^*) \le d(H, MH) + d(MH, H^*) \le \alpha(H) + d(MH, MH^*) \le \alpha(H) + \Lambda_2\, d(H, H^*),$$
where $\Lambda_2$ is as defined in (14). So:
$$d(H, H^*) \le \theta\, \alpha(H),$$
where $\theta := (1 - \Lambda_2)^{-1}$. □
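The estimate above has a simple quantitative reading: any approximate solution whose defect $d(MH, H)$ is at most $\varepsilon$ lies within $(1 - \Lambda_2)^{-1}\varepsilon$ of the exact solution. A scalar illustration with an assumed toy contraction, not one of the paper's operators:

```python
# Scalar illustration of the Hyers-Ulam bound: if the defect |K(x) - x|
# of an approximate solution x equals eps, then x lies within
# eps / (1 - Lambda) of the true fixed point.

K = lambda x: 0.5 * x + 1.0   # assumed contraction with Lambda = 1/2
Lam = 0.5
x_star = 2.0                  # exact fixed point: x = x/2 + 1

for x in (1.5, 1.9, 2.4, 3.0):
    eps = abs(K(x) - x)       # defect d(Kx, x) of the approximate solution
    assert abs(x - x_star) <= eps / (1 - Lam) + 1e-12
```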
We could obtain the following outcome about the Hyers–Ulam stability from the above result.
Corollary 3.
Under the hypothesis of Theorem 3, the equation $MH = H$, where $M : C_{BS} \to C_{BS}$ is given by:
$$(MH)(x) = Y_1(x)\, H\big(G_1^{\gamma}(x)\big) + Y_2(x)\, H\big(G_2^{\gamma}(x)\big) + Y_3(x)\, H\big(G_3^{\gamma}(x)\big) + Y_4(x)\, H\big(G_4^{\gamma}(x)\big)$$
for all $H \in C_{BS}$ and $x \in D$, has Hyers–Ulam stability (as defined in $\tilde{O}_6$).

4. Some Illustrative Examples

We now offer the following examples to enhance our key findings.
Example 1.
Consider the following model:
$$H(x) = \zeta x\, H(\theta_1 x + 1 - \theta_1) + (1 - \zeta) x\, H(\theta_2 x + 1 - \theta_2) + \zeta(1 - x)\, H(\theta_3 x) + (1 - \zeta)(1 - x)\, H(\theta_4 x), \quad x \in D,$$
where $0 < \theta_1 < \theta_2 < \theta_3 < \theta_4 < 1$ and $H : D \to \mathbb{R}$. Then, when we considered $G_1^{\gamma}, G_2^{\gamma}, G_3^{\gamma}, G_4^{\gamma} : D \to D$ given by $G_1^{\gamma}(x) = \theta_1 x + 1 - \theta_1$, $G_2^{\gamma}(x) = \theta_2 x + 1 - \theta_2$, $G_3^{\gamma}(x) = \theta_3 x$, $G_4^{\gamma}(x) = \theta_4 x$, and $V(x) = x$, (22) reduced to the proposed model (5).
The above functional equation (22) was used to observe a particular behavior of a rat in a T-maze model (see Figure 1). In that experiment, the rat sometimes chose the same side repeatedly, which increased the probability of that event (for details, see [49]). The operators $G_1^{\gamma}, G_2^{\gamma}, G_3^{\gamma}$, and $G_4^{\gamma}$ represent the four possible outcomes, which depended on the food placement and the chosen side (for details, see Table 1).
It is easy to see that the transition operators $G_1^{\gamma}, G_2^{\gamma}, G_3^{\gamma}$, and $G_4^{\gamma}$ were contraction mappings with contractive coefficients $\kappa_1 = \theta_1$, $\kappa_2 = \theta_2$, $\kappa_3 = \theta_3$, and $\kappa_4 = \theta_4$, respectively. We also obtained points $\xi_1 = (1 - \theta_1)/(\theta_3 - \theta_1)$ and $\xi_2 = (1 - \theta_2)/(\theta_4 - \theta_2)$, such that $G_1^{\gamma}(\xi_1) = G_3^{\gamma}(\xi_1)$ and $G_2^{\gamma}(\xi_2) = G_4^{\gamma}(\xi_2)$, where $\theta_1 - \theta_3 \ne 0$ and $\theta_2 - \theta_4 \ne 0$. Moreover, $V : D \to D$ was a non-expansive mapping with $\kappa_5 = 1$. Additionally, for all $\zeta \in [0, 1]$ and $\theta_1, \theta_2, \theta_3, \theta_4 \in [0, 1)$ with $\zeta(\theta_1 + \theta_3) + (1 - \zeta)(\theta_2 + \theta_4) < 1/2$, we obtained $\Lambda_2 := 2\big[\zeta(\theta_1 + \theta_3) + (1 - \zeta)(\theta_2 + \theta_4)\big] < 1$.
It can be seen that all conditions of Theorem 3 were then satisfied. Consequently, the proposed model (22) had a solution. Furthermore, when we picked the initial approximation $H_0 = I \in C_{BS}$ (where $I$ is the identity function), the successive iterations led to the unique solution of (22):
$$\begin{aligned} H_1(x) &= (x^2 - x)\big[\zeta(\theta_1 - \theta_2 - \theta_3 + \theta_4) + (\theta_2 - \theta_4)\big] + x, \\ H_2(x) &= \zeta x\, H_1(\theta_1 x + 1 - \theta_1) + (1 - \zeta) x\, H_1(\theta_2 x + 1 - \theta_2) + \zeta(1 - x)\, H_1(\theta_3 x) + (1 - \zeta)(1 - x)\, H_1(\theta_4 x), \\ &\;\;\vdots \\ H_n(x) &= \zeta x\, H_{n-1}(\theta_1 x + 1 - \theta_1) + (1 - \zeta) x\, H_{n-1}(\theta_2 x + 1 - \theta_2) + \zeta(1 - x)\, H_{n-1}(\theta_3 x) + (1 - \zeta)(1 - x)\, H_{n-1}(\theta_4 x), \end{aligned}$$
for all $n \in \mathbb{N}$.
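The first iterate in Example 1 can be checked mechanically: applying the right-hand side of (22) once to $H_0(x) = x$ yields a quadratic in $x$, whose expanded closed form (computed by hand below) is confirmed by direct evaluation at sample parameter values:

```python
# One Picard step for Example 1 starting from H0(x) = x. Expanding the
# right-hand side of (22) by hand gives
#   H1(x) = (x^2 - x) * [zeta*(t1 - t2 - t3 + t4) + (t2 - t4)] + x.

def step(H, x, zeta, t1, t2, t3, t4):
    """Apply the right-hand side of (22) once to H at the point x."""
    return (zeta * x * H(t1 * x + 1 - t1)
            + (1 - zeta) * x * H(t2 * x + 1 - t2)
            + zeta * (1 - x) * H(t3 * x)
            + (1 - zeta) * (1 - x) * H(t4 * x))

zeta, t1, t2, t3, t4 = 0.4, 0.1, 0.2, 0.3, 0.4   # sample parameter values
A = zeta * (t1 - t2 - t3 + t4) + (t2 - t4)
for x in (0.0, 0.25, 0.7, 1.0):
    assert abs(step(lambda t: t, x, zeta, t1, t2, t3, t4)
               - ((x * x - x) * A + x)) < 1e-12
```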
Example 2.
Consider the following model:
$$H(x) = \zeta x\, H\big((x + 1)/13\big) + (1 - \zeta) x\, H\big((3x + 4)/44\big) + \zeta(1 - x)\, H\big(2x/13\big) + (1 - \zeta)(1 - x)\, H\big(2x/11\big), \quad x \in D,$$
where $H : D \to \mathbb{R}$. Then, when we considered $G_1^{\gamma}, G_2^{\gamma}, G_3^{\gamma}, G_4^{\gamma} : D \to D$ given by $G_1^{\gamma}(x) = (x + 1)/13$, $G_2^{\gamma}(x) = (3x + 4)/44$, $G_3^{\gamma}(x) = 2x/13$, $G_4^{\gamma}(x) = 2x/11$, and $V(x) = x$, (23) reduced to the proposed model (5).
Here, $G_1^{\gamma}, G_2^{\gamma}, G_3^{\gamma}, G_4^{\gamma}$ were contraction mappings with contractive coefficients $\kappa_1 = 1/13$, $\kappa_2 = 3/14$, $\kappa_3 = 3/44$, and $\kappa_4 = 2/11$, respectively. We also obtained points $\xi_1, \xi_2 \in [0, 1]$, such that $G_1^{\gamma}(\xi_1) = G_3^{\gamma}(\xi_1)$ (see Figure 2) and $G_2^{\gamma}(\xi_2) = G_4^{\gamma}(\xi_2)$ (see Figure 3).
It is clear that $V : D \to D$ was a non-expansive mapping with $\kappa_5 = 1$. Additionally, for all $\zeta \in [0, 1]$, we obtained $\Lambda_2 := (1586 - 1005\zeta)(2002)^{-1} < 1$. It can be seen that all conditions of Theorem 3 were satisfied. Consequently, the proposed model (23) had a solution. Furthermore, when we picked the initial approximation $H_0 = I \in C_{BS}$ (where $I$ is the identity function), the successive iterations led to the unique solution of (23):
$$\begin{aligned} H_1(x) &= \frac{1}{572}\big[ 21\zeta x^2 - 65x^2 + 156x - 24\zeta x \big], \\ H_2(x) &= \frac{1}{187149248}\big[ 73227\zeta^2 x^3 - 421850\zeta x^3 + 604175x^3 - 356664\zeta^2 x^2 + 3290144\zeta x^2 - 6766760x^2 \\ &\qquad + 313344\zeta^2 x - 4176432\zeta x + 13744432x \big], \\ &\;\;\vdots \\ H_n(x) &= \zeta x\, H_{n-1}\big((x + 1)/13\big) + (1 - \zeta) x\, H_{n-1}\big((3x + 4)/44\big) + \zeta(1 - x)\, H_{n-1}\big(2x/13\big) + (1 - \zeta)(1 - x)\, H_{n-1}\big(2x/11\big), \end{aligned}$$
for all $n \in \mathbb{N}$.
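Because the four operators in Example 2 are affine, the crossing points demanded by $\tilde{O}_4'$ can be solved for directly, and the first iterate can be checked by one application of the right-hand side of (23) to $H_0(x) = x$:

```python
# Two mechanical checks for Example 2: (i) the crossing points required
# by O~4' exist inside D = [0, 1]; (ii) one Picard step from H0(x) = x
# reproduces H1(x) = (21*z*x^2 - 65*x^2 + 156*x - 24*z*x)/572.

G1 = lambda x: (x + 1) / 13
G2 = lambda x: (3 * x + 4) / 44
G3 = lambda x: 2 * x / 13
G4 = lambda x: 2 * x / 11

xi1, xi2 = 1.0, 0.8          # solve (x+1)/13 = 2x/13 and (3x+4)/44 = 2x/11
assert abs(G1(xi1) - G3(xi1)) < 1e-12 and 0.0 <= xi1 <= 1.0
assert abs(G2(xi2) - G4(xi2)) < 1e-12 and 0.0 <= xi2 <= 1.0

def step(H, x, z):
    """One application of the right-hand side of (23)."""
    return (z * x * H(G1(x)) + (1 - z) * x * H(G2(x))
            + z * (1 - x) * H(G3(x)) + (1 - z) * (1 - x) * H(G4(x)))

for z in (0.0, 0.3, 1.0):
    for x in (0.0, 0.5, 1.0):
        H1 = (21 * z * x**2 - 65 * x**2 + 156 * x - 24 * z * x) / 572
        assert abs(step(lambda t: t, x, z) - H1) < 1e-12
```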
Example 3.
Consider the following model:
$$H(x) = \zeta x\, H\big((x + 3)/23\big) + (1 - \zeta) x\, H\big((3x + 1)/45\big) + \zeta(1 - x)\, H\big(x/5\big) + (1 - \zeta)(1 - x)\, H\big(6x/28\big), \quad x \in D,$$
where $H : D \to \mathbb{R}$. Then, when we considered $G_1^{\gamma}, G_2^{\gamma}, G_3^{\gamma}, G_4^{\gamma} : D \to D$ given by $G_1^{\gamma}(x) = (x + 3)/23$, $G_2^{\gamma}(x) = (3x + 1)/45$, $G_3^{\gamma}(x) = x/5$, $G_4^{\gamma}(x) = 6x/28$, and $V(x) = x$, (24) reduced to the proposed model (5).
Here, $G_1^{\gamma}, G_2^{\gamma}, G_3^{\gamma}, G_4^{\gamma} : D \to D$ were contraction mappings with contractive coefficients $\kappa_1 = 1/23$, $\kappa_2 = 1/15$, $\kappa_3 = 1/5$, and $\kappa_4 = 3/14$, respectively. We also obtained points $\xi_1, \xi_2 \in [0, 1]$, such that $G_1^{\gamma}(\xi_1) = G_3^{\gamma}(\xi_1)$ (see Figure 4) and $G_2^{\gamma}(\xi_2) = G_4^{\gamma}(\xi_2)$ (see Figure 5).
It is clear that $V : D \to D$ was a non-expansive mapping with $\kappa_5 = 1$. Additionally, for all $\zeta \in [0, 1]$, we obtained $\Lambda_2 := (3 + \kappa_5)\kappa_4 = 6/7 < 1$. It can be seen that all conditions of Corollary 2 were satisfied. Consequently, the proposed model (24) had a solution. Furthermore, when we picked the initial approximation $H_0 = I \in C_{BS}$ (where $I$ is the identity function), the successive iterations led to the unique solution of (24):
$$\begin{aligned} H_1(x) &= \frac{1}{14490}\big[ -2139x^2 + 3427x - 129\zeta x^2 + 1361\zeta x \big], \\ H_2(x) &= \zeta x\, H_1\big((x + 3)/23\big) + (1 - \zeta) x\, H_1\big((3x + 1)/45\big) + \zeta(1 - x)\, H_1\big(x/5\big) + (1 - \zeta)(1 - x)\, H_1\big(6x/28\big), \\ &\;\;\vdots \\ H_n(x) &= \zeta x\, H_{n-1}\big((x + 3)/23\big) + (1 - \zeta) x\, H_{n-1}\big((3x + 1)/45\big) + \zeta(1 - x)\, H_{n-1}\big(x/5\big) + (1 - \zeta)(1 - x)\, H_{n-1}\big(6x/28\big), \end{aligned}$$
for all $n \in \mathbb{N}$.
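Analogous checks for Example 3: the crossings of $(G_1^{\gamma}, G_3^{\gamma})$ and $(G_2^{\gamma}, G_4^{\gamma})$ lie inside $[0, 1]$, and the Corollary 2 constant works out exactly to $6/7$:

```python
# Mechanical checks for Example 3: crossing points in [0, 1] and the
# exact value of Lambda_2 = (3 + kappa_5) * kappa_4 from Corollary 2.

from fractions import Fraction as F

G1 = lambda x: (x + 3) / 23
G2 = lambda x: (3 * x + 1) / 45
G3 = lambda x: x / 5
G4 = lambda x: 6 * x / 28

xi1 = 5 / 6                   # (x+3)/23 = x/5    =>  5x + 15 = 23x
xi2 = 14 / 93                 # (3x+1)/45 = 3x/14 =>  42x + 14 = 135x
assert abs(G1(xi1) - G3(xi1)) < 1e-12 and 0 <= xi1 <= 1
assert abs(G2(xi2) - G4(xi2)) < 1e-12 and 0 <= xi2 <= 1

# Lambda_2 = (3 + kappa_5) * kappa_4 with kappa_5 = 1 and kappa_4 = 3/14.
kappa4, kappa5 = F(3, 14), F(1)
assert (3 + kappa5) * kappa4 == F(6, 7) < 1
```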

5. Conclusions

Several studies on animal behavior within stochastic learning situations have produced results that are compatible with so-called occurrence investigations. Nevertheless, based on the location of the food source and the side chosen by the animal, such scenarios can be divided into four categories (right reward, right non-reward, left reward, and left non-reward). Based on such circumstances, in this study, we proposed a new stochastic model that can be used to examine a diverse range of animal behavior models under certain conditions. The solutions and stability of the proposed model were investigated using the Banach fixed point theorem. Some examples were also provided to supplement the results. For interested readers, we offer the accompanying open problems.
Problem 1:
Is there another method for establishing the conclusions of Theorems 2 and 3?
Problem 2:
Is it possible to prove Theorem 2 without using the condition $\tilde{O}_4$?
Problem 3:
What would happen if an animal did not react to any particular trial?
Problem 4:
Can we extend the proposed approach to discuss the behavior of animals in situations with more than four choices?

Author Contributions

Conceptualization, A.T., W.A. and Z.D.M.; methodology, A.T., W.A., N.M. and N.F.; validation, A.T., Z.D.M., N.F. and W.A.; formal analysis, A.T. and N.M.; writing—original draft preparation, A.T., N.M., W.A. and Z.D.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

N. Mlaiki and N. Fatima would like to thank Prince Sultan University for paying the publication fees (APC) for this work through TAS LAB.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Brenner, S. The genetics of Caenorhabditis elegans. Genetics 1974, 77, 71–94. [Google Scholar] [CrossRef] [PubMed]
  2. Broom, D.M.; Fraser, A.F. Domestic Animal Behaviour and Welfare, 5th ed.; Gutenberg Press: Tarxien, Malta, 2015. [Google Scholar]
  3. Marchant-Forde, J.N. The science of animal behavior and welfare: Challenges, opportunities, and global perspective. Front. Vet. Sci. 2015, 2, 16. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  4. Fatjó, J.; Bowen, J. Behavior and medical problems in pet animals. Adv. Small Anim. Care 2020, 1, 25–33. [Google Scholar] [CrossRef]
  5. Berger-Tal, O.; Saltz, D. Conservation Behavior, Applying Behavioral Ecology to Wildlife Conservation and Management; Cambridge University Press: Cambridge, UK, 2016. [Google Scholar]
  6. Luce, R.I.; Suppes, P. Preference, utility and subjective probability. In Handbook of Mathematical Psychology; Luce, R.D., Bush, R.I., Galanter, E., Eds.; Wiley: New York, NY, USA, 1965; Volume 111. [Google Scholar]
  7. Gamson, W.A. Experimental studies of coalition formation. In Advances in Experimental Social Psychology; Berkowitz, L., Ed.; Academic Press: New York, NY, USA, 1964; Volume 1. [Google Scholar]
  8. Gamson, W.A. An Experimental test of a theory of coalition formation. Am. Sociol. Rev. 1961, 26, 565–573. [Google Scholar] [CrossRef]
  9. Estes, W.K. Toward a statistical theory of learning. Psychol. Rev. 1950, 57, 94–107. [Google Scholar] [CrossRef] [Green Version]
  10. Estes, W.K.; Straughan, J.H. Analysis of a verbal conditioning situation in terms of statistical learning theory. Exp. Psychol. 1954, 47, 225–234. [Google Scholar] [CrossRef] [PubMed]
  11. Bush, A.A.; Wilson, T.R. Two-choice behavior of paradise fish. J. Exp. Psych. 1956, 51, 315–322. [Google Scholar] [CrossRef] [PubMed]
  12. Bush, R.; Mosteller, F. Stochastic Models for Learning; Wiley: New York, NY, USA, 1955. [Google Scholar]
  13. Gerlai, R.; Hogan, J.A. Learning to find the opponent: An ethological analysis of the behavior of paradise fish (Macropodus opercularis) in intra- and interspecific encounters. J. Comp. Psychol. 1992, 106, 306–315. [Google Scholar] [CrossRef]
  14. Warren, J.M. Reversal learning by paradise fish (Macropodus opercularis). J. Comp. Physiol. Psychol. 1960, 53, 376–378. [Google Scholar] [CrossRef]
  15. Turab, A.; Ali, W.; Park, C. A unified fixed point approach to study the existence and uniqueness of solutions to the generalized stochastic functional equation emerging in the psychological theory of learning. AIMS Math. 2022, 7, 5291–5304. [Google Scholar] [CrossRef]
  16. Epstein, B. On a difference equation arising in a learning-theory model. Israel J. Math. 1966, 4, 145–152. [Google Scholar] [CrossRef]
  17. Turab, A.; Sintunavarat, W. Corrigendum: On analytic model for two-choice behavior of the paradise fish based on the fixed point method, J. Fixed Point Theory Appl. 2019, 21:56. J. Fixed Point Theory Appl. 2020, 22, 82. [Google Scholar] [CrossRef]
  18. Turab, A.; Sintunavarat, W. On the solution of the traumatic avoidance learning model approached by the Banach fixed point theorem. J. Fixed Point Theory Appl. 2020, 22, 50. [Google Scholar] [CrossRef]
  19. Berinde, V.; Khan, A.R. On a functional equation arising in mathematical biology and theory of learning. Creat. Math. Inform. 2015, 24, 9–16. [Google Scholar] [CrossRef]
  20. Turab, A.; Sintunavarat, W. On the solutions of the two preys and one predator type model approached by the fixed point theory. Sadhana 2020, 45, 211. [Google Scholar] [CrossRef]
  21. Istrăţescu, V.I. On a functional equation. J. Math. Anal. Appl. 1976, 56, 133–136.
  22. Schein, E.H. The effect of reward on adult imitative behavior. J. Abnorm. Soc. Psychol. 1954, 49, 389–395.
  23. Miller, N.E.; Dollard, J. Social Learning and Imitation; Yale University Press: New Haven, CT, USA, 1941.
  24. Schwartz, N. An Experimental Study of Imitation: The Effects of Reward and Age. Ph.D. Thesis, Radcliffe College, Cambridge, MA, USA, 1953.
  25. Grant, D.A.; Hake, H.W.; Hornseth, J.P. Acquisition and extinction of a verbal conditioned response with differing percentages of reinforcement. J. Exp. Psychol. 1951, 42, 1–5.
  26. Humphreys, L.G. Acquisition and extinction of verbal expectations in a situation analogous to conditioning. J. Exp. Psychol. 1939, 25, 294–301.
  27. Jarvik, M.E. Probability learning and a negative recency effect in the serial anticipation of alternative symbols. J. Exp. Psychol. 1951, 41, 291–297.
  28. Turab, A.; Park, W.G.; Ali, W. Existence, uniqueness, and stability analysis of the probabilistic functional equation emerging in mathematical biology and the theory of learning. Symmetry 2021, 13, 1313.
  29. Turab, A.; Brzdęk, J.; Ali, W. On solutions and stability of stochastic functional equations emerging in psychological theory of learning. Axioms 2022, 11, 143.
  30. Turab, A.; Sintunavarat, W. On the solution of Bush and Wilson's stochastic model for two-choice behavior of the paradise fish approached by the fixed-point method. In Proceedings of the Seventh International Conference on Mathematics and Computing, Shibpur, India, 2–5 March 2021; Giri, D., Choo, K.K.R., Ponnusamy, S., Meng, W., Akleylek, S., Maity, S.P., Eds.; Advances in Intelligent Systems and Computing; Springer: Cham, Switzerland, 2022; Volume 1412.
  31. Debnath, P. A mathematical model using fixed point theorem for two-choice behavior of rhesus monkeys in a noncontingent environment. In Metric Fixed Point Theory; Forum for Interdisciplinary Mathematics; Debnath, P., Konwar, N., Radenović, S., Eds.; Springer: Singapore, 2021.
  32. Aydi, H.; Karapinar, E.; Rakocevic, V. Nonunique fixed point theorems on b-metric spaces via simulation functions. Jordan J. Math. Stat. 2019, 12, 265–288.
  33. Karapinar, E. Recent advances on the results for nonunique fixed in various spaces. Axioms 2019, 8, 72.
  34. Alsulami, H.H.; Karapinar, E.; Rakocevic, V. Ciric type nonunique fixed point theorems on b-metric spaces. Filomat 2017, 31, 3147–3156.
  35. Lakzian, H.; Gopal, D.; Sintunavarat, W. New fixed point results for mappings of contractive type with an application to nonlinear fractional differential equations. J. Fixed Point Theory Appl. 2016, 18, 251–266.
  36. Baradol, P.; Gopal, D.; Radenović, S. Computational fixed points in graphical rectangular metric spaces with application. J. Comput. Appl. Math. 2020, 375, 112805.
  37. Banach, S. Sur les opérations dans les ensembles abstraits et leur application aux équations intégrales. Fund. Math. 1922, 3, 133–181.
  38. Ahmad, J.; Ullah, K.; Arshad, M.; de la Sen, M.; Ma, Z. Convergence results on Picard–Krasnoselskii hybrid iterative process in CAT(0) spaces. Open Math. 2021, 19, 1713–1720.
  39. Srivastava, J. Introduction of new Picard–S hybrid iteration with application and some results for nonexpansive mappings. Arab. J. Math. Sci. 2022, 28, 61–76.
  40. Almusawa, H.; Hammad, H.A.; Sharma, N. Approximation of the fixed point for unified three-step iterative algorithm with convergence analysis in Busemann spaces. Axioms 2021, 10, 26.
  41. Rassias, T.M. On the stability of the linear mapping in Banach spaces. Proc. Am. Math. Soc. 1978, 72, 297–300.
  42. Hyers, D.H. On the stability of the linear functional equation. Proc. Natl. Acad. Sci. USA 1941, 27, 222–224.
  43. Ulam, S.M. A Collection of Mathematical Problems; Interscience Publ.: New York, NY, USA, 1960.
  44. Aoki, T. On the stability of the linear transformation in Banach spaces. J. Math. Soc. Jpn. 1950, 2, 64–66.
  45. Hyers, D.H.; Isac, G.; Rassias, T.M. Stability of Functional Equations in Several Variables; Birkhäuser: Basel, Switzerland, 1998.
  46. Bae, J.H.; Park, W.G. A fixed point approach to the stability of a Cauchy–Jensen functional equation. Abstr. Appl. Anal. 2012, 2012, 205160.
  47. Gachpazan, M.; Baghani, O. Hyers–Ulam stability of nonlinear integral equation. Fixed Point Theory Appl. 2010, 2010, 927640.
  48. Morales, J.S.; Rojas, E.M. Hyers–Ulam and Hyers–Ulam–Rassias stability of nonlinear integral equations with delay. Int. J. Nonlinear Anal. Appl. 2011, 2, 1–6.
  49. Turab, A.; Ali, W.; Nieto, J.J. On a unique solution of a T-maze model arising in the psychology and theory of learning. J. Funct. Spaces 2022, 2022, 6081250.
  50. Maze Basics: T Maze. Available online: https://conductscience.com/maze/maze-basics-t-maze-test/ (accessed on 25 May 2022).
Figure 1. Behavior of a rat in a T-maze model [50].
Figure 2. The graph of G_1^γ(x) = (x + 1)/13 (red) and G_3^γ(x) = 2x/13 (blue).
Figure 3. The graph of G_2^γ(x) = (3x + 4)/44 (red) and G_4^γ(x) = 2x/11 (blue).
Figure 4. The graph of G_1^γ(x) = (x + 1)/23 (red) and G_3^γ(x) = x/5 (blue).
Figure 5. The graph of G_2^γ(x) = (3x + 1)/45 (green) and G_4^γ(x) = 6x/28 (orange).
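Each map shown in Figures 2–5 is linear with slope strictly less than 1 in absolute value, so each is a contraction on [0, 1] and, by the Banach fixed-point theorem (ref. 37), Picard iteration converges to its unique fixed point. As an illustrative sketch (not taken from the paper; the function names below are ours), the fixed point of G_1^γ(x) = (x + 1)/13 from Figure 2 can be approximated as follows:

```python
# Fixed points of the linear maps from Figures 2-5 via Picard iteration.
# Each G(x) = a*x + b with |a| < 1 is a contraction on [0, 1], so the
# iteration x_{n+1} = G(x_n) converges to the unique fixed point
# x* = b / (1 - a).

def picard_fixed_point(g, x0=0.0, tol=1e-12, max_iter=10_000):
    """Iterate x <- g(x) until successive iterates differ by less than tol."""
    x = x0
    for _ in range(max_iter):
        x_next = g(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    raise RuntimeError("iteration did not converge")

# G_1^gamma from Figure 2: G(x) = (x + 1)/13, contraction constant 1/13.
g1 = lambda x: (x + 1) / 13
x_star = picard_fixed_point(g1)
print(x_star)  # converges to 1/12, since x = (x + 1)/13 gives x = 1/12
```

The same routine applies to the other maps in the figures; only the slope and intercept change, and the contraction constant governs the convergence rate.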
Table 1. The four events with the corresponding probability of occurrence.

| Placement of Food | Side Chosen by Rat | Probability of Occurrence |
| A: Left Side | Left Turn (Food Side) | ζx |
| A: Left Side | Right Turn (Non-Food Side) | (1 − ζ)x |
| B: Right Side | Right Turn (Food Side) | ζ(1 − x) |
| B: Right Side | Left Turn (Non-Food Side) | (1 − ζ)(1 − x) |
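As a sanity check on Table 1, the four events partition all outcomes of a trial, so their probabilities sum to ζx + (1 − ζ)x + ζ(1 − x) + (1 − ζ)(1 − x) = x + (1 − x) = 1 for any ζ, x ∈ [0, 1]. A minimal sketch (the function name and parameter values are ours, chosen only for illustration):

```python
# The four event probabilities from Table 1, as functions of the parameter
# zeta and the left-turn probability x (both assumed to lie in [0, 1]).
def event_probabilities(zeta, x):
    return {
        "A: left turn (food side)":      zeta * x,
        "A: right turn (non-food side)": (1 - zeta) * x,
        "B: right turn (food side)":     zeta * (1 - x),
        "B: left turn (non-food side)":  (1 - zeta) * (1 - x),
    }

probs = event_probabilities(zeta=0.7, x=0.4)
print(sum(probs.values()))  # the four events exhaust all outcomes: 1.0
```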
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Turab, A.; Mlaiki, N.; Fatima, N.; Mitrović, Z.D.; Ali, W. Analysis of a Class of Stochastic Animal Behavior Models under Specific Choice Preferences. Mathematics 2022, 10, 1975. https://doi.org/10.3390/math10121975