Article

An Evolutionary View on Equilibrium Models of Transport Flows

by Evgenia Gasnikova 1, Alexander Gasnikov 1,2,3,*, Yaroslav Kholodov 4,* and Anastasiya Zukhba 1

1 Moscow Institute of Physics and Technology, 9 Institutskiy per., 141701 Dolgoprudny, Russia
2 Institute for Information Transmission Problems, RAS, Bolshoy Karetny per. 19, build. 1, 127051 Moscow, Russia
3 Higher School of Economics, 20 Myasnitskaya Ulitsa, 101000 Moscow, Russia
4 Innopolis University, 1, Universitetskaya Str., 420500 Innopolis, Russia
* Authors to whom correspondence should be addressed.
Mathematics 2023, 11(4), 858; https://doi.org/10.3390/math11040858
Submission received: 29 November 2022 / Revised: 27 January 2023 / Accepted: 30 January 2023 / Published: 8 February 2023

Abstract: In this short paper, we describe natural logit population game dynamics that explain equilibrium models of origin–destination matrix estimation and (stochastic) traffic assignment (Beckmann, Nesterov–de Palma). A composition of the proposed dynamics allows us to explain two-stage traffic assignment models.

1. Introduction

The first traffic assignment model was proposed about 70 years ago in the work of M. Beckmann [1]; see also [2]. Nowadays, Beckmann-type models are rather well studied [3,4,5,6,7]. The entropy-based origin–destination matrix models are also well developed [7,8,9]. Moreover, as was mentioned in [10], both of these types of models can be considered as macrosystem equilibria for logit (best-response) dynamics in the corresponding congestion games [11].
In this paper, we popularize the results of [10] for English-speaking readers (the paper [10] was written in Russian and has not been translated yet) and refine the results on the convergence rate. Moreover, we propose a superposition of the considered dynamics to describe equilibrium in the two-stage traffic assignment model [12,13].
One of the main results of the paper is Theorem 1, where it is proved that the natural logit-choice and best-response Markovian population dynamics in the traffic assignment model (a congested population game) converge to equilibrium. Using Cheeger's inequality, we show for the first time that the mixing time (the time required to reach equilibrium) of these dynamics, $T_{mix}$, is proportional to $\ln N$, where $N$ is the total number of agents. Note that in related works, analogues of this theorem were proved without an estimate of $T_{mix}$ [9,11,13]. We confirm Theorem 1 by numerical experiments.
Another important result is a saddle-point reformulation of the two-stage traffic assignment model. We explain how to apply the results of Theorem 1 to this model.

2. Traffic Assignment: Problem Statement

Following [14], we describe the problem statement.
Let the urban road network be represented by a directed graph $G = (V, E)$, where vertices $V$ correspond to intersections or centroids [4] and edges $E$ correspond to roads, respectively. Suppose we are given the travel demands: namely, let $d_w$ (veh/h) be the trip rate for an origin–destination pair $w$ from the set $OD \subseteq \{w = (i,j): i \in O, j \in D\}$. Here, $O \subseteq V$ is the set of all possible origins of trips, and $D \subseteq V$ is the set of destination nodes. For an OD pair $w = (i,j)$, denote by $P_w$ the set of all simple paths from $i$ to $j$. Respectively, $P = \bigcup_{w \in OD} P_w$ is the set of all possible routes for all OD pairs. Agents travelling from node $i$ to node $j$ are distributed among paths from $P_w$, i.e., for any $p \in P_w$ there is a flow $x_p \in \mathbb{R}_+$ (i.e., $x_p \ge 0$) along the path $p$, and $\sum_{p \in P_w} x_p = d_w$. Flows from vertices of the set $O$ to vertices of the set $D$ create the traffic in the entire network $G$, which can be represented by an element of
$$X = X(d) = \left\{x \in \mathbb{R}_+^{|P|}: \sum_{p \in P_w} x_p = d_w, \ w \in OD\right\}.$$
Note that the description of the set $X$ can involve an extremely large number of paths (routes): e.g., for an $n \times n$ Manhattan network, $\log |P| = \Omega(n)$ up to a logarithmic factor. Note that to describe a state of the network we do not need to know the entire vector $x$, but only the flows on the arcs: $f_e(x) = \sum_{p \in P} \delta_{ep} x_p$ for $e \in E$, where $\delta_{ep} = \mathbb{1}\{e \in p\}$. Let us introduce a matrix $\Theta$ such that $\Theta_{e,p} = \delta_{ep}$ for $e \in E$, $p \in P$, so in vector notation we have $f = \Theta x$. To describe an equilibrium, we use both path- (route-) and link-based notations, $(x, t)$ or $(f, t)$.
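For illustration, the following minimal sketch (a hypothetical four-link, two-path toy network, not taken from the paper) builds the path–link incidence matrix $\Theta$ and computes the link flows $f = \Theta x$ from the path flows $x$.

```python
# Minimal sketch (hypothetical toy network): build the path-link incidence
# matrix Theta and compute link flows f = Theta @ x from path flows x.
import numpy as np

edges = ["s->a", "a->t", "s->b", "b->t"]           # E
paths = {("s", "t"): [["s->a", "a->t"],            # P_w for the single OD pair w = (s, t)
                      ["s->b", "b->t"]]}

path_list = [p for w in paths for p in paths[w]]
Theta = np.array([[1.0 if e in p else 0.0 for p in path_list] for e in edges])

d_w = 10.0                                         # travel demand for w = (s, t)
x = np.array([6.0, 4.0])                           # path flows, must sum to d_w
assert np.isclose(x.sum(), d_w)

f = Theta @ x                                      # link flows
print(dict(zip(edges, f)))                         # {'s->a': 6.0, 'a->t': 6.0, 's->b': 4.0, 'b->t': 4.0}
```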
Beckmann model. An important idea behind the Beckmann model is that the cost (e.g., travel time) of passing a link $e$ is the same for all agents and depends only on the flow $f_e$ along it. We denote this cost for a given flow $f_e$ by $t_e = \tau_e(f_e)$. Another essential point is a behavioral assumption (Wardrop's first principle): each agent knows the state of the whole network and chooses a path $p$ minimizing the total cost $T_p(t) = \sum_{e \in p} t_e$.
We consider $\tau_e(f_e)$ to be continuous, non-decreasing, and non-negative. In this case, $x^* = (x_p^*)_{p \in P}$, $t^* = (t_e^*)_{e \in E}$ is an equilibrium state, i.e., it satisfies the conditions
$$t_e^* = \tau_e(f_e^*), \text{ where } f^* = \Theta x^*, \qquad x_{p_w}^* > 0 \;\Longrightarrow\; T_{p_w}(t^*) = T_w(t^*) = \min_{p \in P_w} T_p(t^*),$$
if, and only if, $x^*$ is a minimum of the potential function:
$$\Psi_f(x) = \sum_{e \in E} \underbrace{\int_0^{f_e} \tau_e(z)\,dz}_{\sigma_e(f_e)} \to \min_{f = \Theta x,\ x \in X} \quad\Longleftrightarrow\quad \Psi(f) = \sum_{e \in E} \sigma_e(f_e) \to \min_{f = \Theta x:\ x \in X},$$
and $t_e^* = \tau_e(f_e^*)$ [2].
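The following sketch computes the Beckmann potential $\Psi(f) = \sum_{e \in E}\sigma_e(f_e)$ assuming BPR-type link costs $\tau_e(f_e) = \bar{t}_e\left(1 + \rho (f_e/\bar{f}_e)^{1/\mu}\right)$; all numerical data ($\bar{t}_e$, $\bar{f}_e$, $\rho$, $\mu$, the flows) are illustrative placeholders rather than values from the paper.

```python
# Sketch of the Beckmann potential Psi(f) = sum_e sigma_e(f_e), assuming BPR-type
# link costs tau_e(f_e) = t_bar_e * (1 + rho * (f_e / f_bar_e)**(1/mu)).
# All network data below are illustrative placeholders.
import numpy as np

t_bar = np.array([1.0, 2.0, 1.5])    # free-flow travel times (placeholder)
f_bar = np.array([10.0, 8.0, 12.0])  # link capacities (placeholder)
rho, mu = 0.15, 0.25                 # typical BPR parameters (rho = 0.15, 1/mu = 4)

def tau(f):
    """Link travel times tau_e(f_e)."""
    return t_bar * (1.0 + rho * (f / f_bar) ** (1.0 / mu))

def sigma(f):
    """sigma_e(f_e) = integral_0^{f_e} tau_e(z) dz, in closed form for BPR costs."""
    return t_bar * f + t_bar * rho * f_bar * (f / f_bar) ** (1.0 + 1.0 / mu) / (1.0 + 1.0 / mu)

def beckmann_potential(f):
    return sigma(f).sum()

f = np.array([4.0, 3.0, 5.0])        # some link flows
print(beckmann_potential(f), tau(f))
```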
According to [5,13], we can construct a dual problem for the potential function in the following way:
$$\min_{f = \Theta x:\ x \in X} \Psi(f) = \min_{x \in X,\, f} \left\{\Psi(f) + \sup_{t \in \mathbb{R}^{|E|}} \langle t, \Theta x - f\rangle\right\} = \sup_{t \in \mathbb{R}^{|E|}} \min_{x \in X,\, f} \left\{\Psi(f) + \langle t, \Theta x - f\rangle\right\}$$
$$= \sup_{t \in \mathbb{R}^{|E|}} \left\{-\sum_{e \in E} \max_{f_e} \left\{t_e f_e - \sigma_e(f_e)\right\} + \min_{x \in X} \sum_{p \in P} \sum_{e \in E} t_e \delta_{ep} x_p\right\} = \max_{t \in \mathrm{dom}\,\sigma^*} \left\{\sum_{w \in OD} d_w T_w(t) - \sum_{e \in E} \sigma_e^*(t_e)\right\},$$
where
$$\sigma_e^*(t_e) = \sup_{f_e \ge 0} \left\{t_e f_e - \sigma_e(f_e)\right\} = \bar{f}_e \left(\frac{t_e - \bar{t}_e}{\bar{t}_e \rho}\right)^{\mu} \frac{t_e - \bar{t}_e}{1 + \mu}$$
is the Legendre–Fenchel conjugate function of $\sigma_e(f_e)$, $e \in E$ (this explicit formula corresponds to BPR-type cost functions $\tau_e(f_e) = \bar{t}_e\left(1 + \rho (f_e/\bar{f}_e)^{1/\mu}\right)$). In the end, we obtain the dual problem, whose solution is $t^*$:
$$\max_{t \ge \bar{t}} \left\{\sum_{w \in OD} d_w T_w(t) - \sum_{e \in E} \sigma_e^*(t_e)\right\}. \qquad (1)$$
We can reconstruct the primal variable $f$ from the current dual variable $t$:
$$f \in \partial\left(\sum_{w \in OD} d_w T_w(t)\right).$$
This condition reflects the fact that every driver chooses the shortest route [5]. The other condition, $t_e = \tau_e(f_e)$, can be equivalently rewritten as $f_e = \frac{d}{dt_e}\sigma_e^*(t_e)$. Together with the condition $f \in \partial\left(\sum_{w \in OD} d_w T_w(t)\right)$, it forms the optimization problem (1).
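Under the same BPR assumption, the dual objective (1) and the primal reconstruction can be sketched as follows; the demand $d$, the candidate times $t$, and the shortest-path costs $T_w(t)$ below are placeholders (in practice, $T_w(t)$ would come from a shortest-path routine on the network).

```python
# Sketch of the dual objective (1) for BPR-type costs: sigma_e^*(t_e) and the
# primal reconstruction f_e = d sigma_e^*(t_e) / d t_e. Data are placeholders.
import numpy as np

t_bar = np.array([1.0, 2.0, 1.5])
f_bar = np.array([10.0, 8.0, 12.0])
rho, mu = 0.15, 0.25

def sigma_conj(t):
    """Fenchel conjugate sigma_e^*(t_e) for t_e >= t_bar_e."""
    return f_bar * ((t - t_bar) / (t_bar * rho)) ** mu * (t - t_bar) / (1.0 + mu)

def flow_from_times(t):
    """f_e = d sigma_e^*(t_e)/d t_e, i.e., the inverse of t_e = tau_e(f_e)."""
    return f_bar * ((t - t_bar) / (t_bar * rho)) ** mu

def dual_objective(t, d, T_w):
    """sum_w d_w T_w(t) - sum_e sigma_e^*(t_e); T_w(t) = shortest-path costs."""
    return np.dot(d, T_w) - sigma_conj(t).sum()

t = t_bar * 1.2                        # some feasible dual point, t >= t_bar
d = np.array([5.0])                    # demand for a single OD pair (placeholder)
T_w = np.array([t[:2].sum()])          # e.g., a path using the first two links
print(flow_from_times(t), dual_objective(t, d, T_w))
```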
As $\mu \to 0+$, Beckmann's model turns into the Nesterov–de Palma model [13,15].
Population game dynamics for the (stochastic) Beckmann model. Let us consider each driver to be an agent in a population game, where $P_w$, $w \in OD$, is the set of agent types. Every agent (driver) of type $P_w$ can choose one of the strategies $p \in P_w$ with cost function $T_p\left(t\left(f(x)\right)\right) := \tilde{T}_p(x)$. Assume that every driver/agent, independently of anything else (in particular, of any other drivers), considers the opportunity to reconsider his choice of route/strategy $p$ in the time interval $[t, t + \Delta t)$ with probability $\lambda \Delta t + o(\Delta t)$, where $\lambda > 0$ is the same for all drivers/agents. This means that with each driver we associate its own Poisson process with parameter $\lambda$. If at the moment of time $t$ (when the flow distribution vector is $x(t)$) a driver of type $P_w$ decides to reconsider his route, then he chooses the route $q \in P_w$ with probability
$$p_q\left(\tilde{T}\left(x(t)\right)\right) = \mathbb{P}\left(q = \arg\max_{p \in P_w;\ j = 1, \ldots, J} \left\{-\tilde{T}_p\left(x(t)\right) + \xi_{p,j}\right\}\right), \qquad (2)$$
where $\xi_{p,j}$ are i.i.d. and satisfy the Gumbel max convergence theorem [16] as $J \to \infty$ with parameter $\gamma$ (e.g., $\xi_{p,j}$ has (sub)exponential tails at $\infty$). This means that $\xi_p = \max_{j = 1, \ldots, J} \xi_{p,j}$ asymptotically (as $J \to \infty$) has the Gumbel distribution $\mathbb{P}(\xi_p < \xi) = \exp\left(-\exp\left(-\xi/\gamma - E\right)\right)$, where $E \approx 0.5772$ is the Euler constant. Note that $\mathbb{E}\,\xi_p = 0$ and $\mathrm{Var}\,\xi_p = \pi^2\gamma^2/6$. In (2), this means that every driver tries to choose the best route. However, the only available information is the noise-corrupted values $\tilde{T}_p$. So the driver tries to choose the best route based on the worst forecasts for each route.
One of the main results of Discrete Choice Theory is as follows [17]:
$$p_q\left(\tilde{T}\left(x(t)\right)\right) = \frac{\exp\left(-\tilde{T}_q\left(x(t)\right)/\gamma\right)}{\sum_{p \in P_w} \exp\left(-\tilde{T}_p\left(x(t)\right)/\gamma\right)}, \qquad (3)$$
where $p_q\left(\tilde{T}\left(x(t)\right)\right)$ was previously defined in (2).
Note that the dynamics described above degenerate into the best-response dynamics as $\gamma \to 0+$ [11].
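A minimal simulation sketch of the logit-choice dynamics (2)–(3) for a single OD pair with two parallel routes is given below; the cost functions, $N$, $\gamma$, and the number of simulated events are illustrative choices, not the setup used in the paper. The revision rate $\lambda$ only sets the time scale, so it does not enter the embedded jump chain.

```python
# Toy simulation of the logit-choice dynamics: at each revision event a uniformly
# chosen agent re-selects a route q with probability proportional to exp(-T_q(x)/gamma).
import numpy as np

rng = np.random.default_rng(0)
N, gamma = 1000, 0.05
costs = [lambda x: 1.0 + x[0] / N,        # T_1(x): travel time grows with the flow on route 1
         lambda x: 1.5 + 0.5 * x[1] / N]  # T_2(x)

x = np.array([float(N), 0.0])             # initial assignment: everyone on route 1
for _ in range(20 * N):                   # each event: one agent revises its route
    agent_route = rng.choice(2, p=x / N)  # the revising agent is uniform among the N agents
    T = np.array([c(x) for c in costs])
    p = np.exp(-T / gamma)
    p /= p.sum()
    new_route = rng.choice(2, p=p)
    x[agent_route] -= 1
    x[new_route] += 1

print(x / N)                              # empirical route shares near the stochastic equilibrium
```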
Theorem 1.
Let $\sum_{p \in P} x_p = N$. For all $x(0) \in X$ there exists a constant $c(x(0))$ such that for all $\sigma \in (0, 0.5)$ and $t \ge T_{mix} = c(x(0))\,\lambda^{-1} \ln N$:
$$\mathbb{P}\left(\left\|\frac{x(t)}{N} - x^*\right\|_2^2 \le \frac{2 + 4\ln\sigma^{-1}}{N}\right) \ge 1 - \sigma, \qquad (4)$$
where
$$x^* = \arg\min_{x \in X(d/N)} \left\{\tilde{\Psi}_f(x) + \gamma \sum_{p \in P} x_p \ln x_p\right\}, \qquad (5)$$
$$\tilde{\Psi}_f(x) = \sum_{e \in E} \int_0^{f_e} \tilde{\tau}_e(z)\,dz, \qquad \tilde{\tau}_e(z) = \tau_e(zN).$$
Proof. 
The first important observation is that the described Markov process is reversible, i.e., it satisfies Kolmogorov's detailed balance condition (see also [18]) with the stationary (invariant) measure
$$\pi(x) = \frac{N!}{x_1! \cdots x_{|P|}!} \exp\left(-\frac{\Psi_f(x)}{\gamma}\right),$$
where $x \in X(d)$ [11]. A result of type (4) for $x(\infty)$ holds true due to Hoeffding's inequality in a Hilbert space [19]. We can apply this inequality to the multinomial part $\frac{N!}{x_1! \cdots x_{|P|}!}$. The remaining part can only strengthen the concentration phenomenon, especially when $\gamma$ is small. Sanov's theorem [20] says that $x^*$ from (5) asymptotically (as $N \to \infty$) describes the proportions in the maximum-probability state, that is,
$$x^* \approx \frac{1}{N} \arg\max_{x \in X(d)} \left\{\frac{N!}{x_1! \cdots x_{|P|}!} \exp\left(-\frac{\Psi_f(x)}{\gamma}\right)\right\}.$$
To estimate the mixing time $\sim \lambda^{-1} \ln N$ of the considered Markov process, we associate this continuous-time process with a discrete-time process with step $(\lambda N)^{-1}$, which corresponds to the expected time between two consecutive events of the continuous-time dynamics. Additionally, we consider this discrete Markov chain as a random walk on a proper graph $G = (V_G, E_G)$ with a starting point corresponding to the vertex $s$ and a transition probability matrix $P = \|p_{ij}\|_{i,j=1}^{|V_G|}$. According to Cheeger's inequality, the mixing time $t_{mix}$ of such a random walk, which approximates the stationary measure $\pi$ with accuracy $\varepsilon = O(N^{-1/2})$ (in this case $x(t_{mix}) \simeq x(\infty)$), is
$$O\left(\frac{(\lambda N)^{-1}}{h(G)^2}\left(\ln \pi(s)^{-1} + \ln(\varepsilon^{-1})\right)\right),$$
where Cheeger's constant is determined as
$$h(G) = \min_{S \subset V_G:\ \pi(S) \le 1/2} \frac{P\left(S \to \bar{S}\right)}{\pi(S)} = \min_{S \subset V_G:\ \pi(S) \le 1/2} \frac{\sum_{(i,j) \in E_G,\ i \in S,\ j \in \bar{S}} \pi(i)\, p_{ij}}{\sum_{i \in S} \pi(i)},$$
where $\bar{S} = V_G \setminus S$ [21]. Since $G$ and $P$ correspond to a reversible Markov chain with a stationary measure $\pi$ that exponentially concentrates around $N x^*$, one can prove that the isoperimetric problem of finding the optimal set of vertices $S$ has the following solution, described here roughly, up to a numerical constant: $S$ is the set of states $x \in X(d)$ such that $\|x - N x^*\|_2 \le O(\sqrt{N})$. Since the ratio of the volume of a sphere of radius $O(\sqrt{N})$ to the volume of the ball of the same radius is $O(N^{-1/2})$, we obtain that $h(G) \simeq N^{-1/2}$. So, up to the $\ln \pi(s)^{-1}$ term (which we put into $c(x(0))$), the mixing time is indeed $\sim \lambda^{-1} \ln N$. □
Note that the approach described above assumes that we first take $t \to \infty$ and only then $N \to \infty$. If we first take $N \to \infty$, then due to Kurtz's theorem [22], $c(t) = \lim_{N \to \infty} x(t)/N$ satisfies (for all $w \in OD$, $p \in P_w$)
$$\frac{d c_p}{dt} = \bar{d}_w \frac{\exp\left(-\bar{T}_p\left(c(t)\right)\right)}{\sum_{q \in P_w} \exp\left(-\bar{T}_q\left(c(t)\right)\right)} - c_p(t),$$
where $\bar{d} = \lim_{N \to \infty} d/N$ and $\bar{T}_p\left(c(t)\right) = \tilde{T}_p\left(x(t)\right)$. Note that the Sanov-type function $\tilde{\Psi}_f(c) + \gamma \sum_{p \in P} c_p \ln c_p$ from (5) is a Boltzmann–Lyapunov-type function for this system of ordinary differential equations (SODE), that is, it decreases along the trajectories of the SODE. This result is a particular case of a general phenomenon: the Sanov-type function for the invariant measure obtained from Markovian dynamics is a Boltzmann–Lyapunov-type function for the deterministic Kurtz kinetic dynamics [18,23,24].
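A simple explicit Euler integration of this mean-field ODE on a toy two-route example (with illustrative costs and normalized demand) looks as follows; it is only a sketch of the deterministic limit dynamics, not the authors' code.

```python
# Sketch: explicit Euler integration of the mean-field (Kurtz) ODE for route
# shares c_p(t) on a toy two-route OD pair (illustrative costs, not from the paper).
import numpy as np

d_bar = 1.0                                   # normalized demand for the single OD pair
T_bar = lambda c: np.array([1.0 + c[0], 1.5 + 0.5 * c[1]])  # T_bar_p(c)

def rhs(c):
    w = np.exp(-T_bar(c))
    return d_bar * w / w.sum() - c            # right-hand side of the ODE

c = np.array([1.0, 0.0])                      # start with all demand on route 1
dt = 0.01
for _ in range(5000):                         # integrate up to t = 50
    c = c + dt * rhs(c)

print(c)                                      # approaches the stochastic (logit) equilibrium shares
```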

3. Origin–Destination Matrix Estimation

The origin–destination matrix estimation model can be considered a particular case of the traffic assignment model. The following interpretation goes back to [10,13]. Indeed, let us consider a fictive origin $o$ and a fictive destination $d$, so $\tilde{O} = \{o\}$, $\tilde{D} = \{d\}$. Let us draw fictive edges from $o$ to the real trip origins $O$. The cost of the trip along edge $(o, i)$ is $\lambda_i^O$, the average price that each agent pays to live in origin region $i \in O$. Analogously, let us draw edges from the vertices of the real destination set $D$ to $d$. The cost of the trip along edge $(j, d)$ is $-\lambda_j^D$, minus the average salary that each agent obtains in destination region $j \in D$. So the set of all possible routes (trips) from $o$ to $d$ can be described by pairs $(i, j) \in O \times D$. Each route consists of three edges: the edge $o \to i$ with cost $\lambda_i^O$, the edge $i \to j$ with cost $T_{ij}$ (available as an input of the model), and the edge $j \to d$ with cost $-\lambda_j^D$. So the equilibrium origin–destination matrix $d = \{d_{ij}\}_{(i,j) \in O \times D}$ (up to a scaling factor) can be found from the entropy-linear programming problem
$$\min_{d \ge 0:\ \sum_{(i,j) \in O \times D} d_{ij} = 1} \left\{\sum_{i \in O} \lambda_i^O \sum_{j \in D} d_{ij} - \sum_{j \in D} \lambda_j^D \sum_{i \in O} d_{ij} + \sum_{(i,j) \in O \times D} T_{ij} d_{ij} + \gamma \sum_{(i,j) \in O \times D} d_{ij} \ln d_{ij}\right\}. \qquad (6)$$
In real life, $\lambda_i^O$ and $\lambda_j^D$ are typically unknown. However, at the same time, the following agglomeration characteristics are available:
$$\sum_{j \in D} d_{ij} = L_i, \quad i \in O, \qquad (7)$$
$$\sum_{i \in O} d_{ij} = W_j, \quad j \in D. \qquad (8)$$
The key observation is that (6) can be considered as the Lagrange multipliers principle for the constrained entropy-linear programming problem
$$\min_{d \ge 0:\ d\ \text{satisfies}\ (7),(8)} \left\{\sum_{w \in O \times D} T_w d_w + \gamma \sum_{w \in O \times D} d_w \ln d_w\right\}, \qquad (9)$$
where $\lambda_i^O$ and $\lambda_j^D$ are the Lagrange multipliers for (7) and (8), respectively. The latter model is called Wilson's entropy origin–destination matrix model [8,9].
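One standard way to solve (9) numerically is the balancing (Sinkhorn-type, iterative proportional fitting) procedure, since the optimizer has the form $d_{ij} = a_i \exp(-T_{ij}/\gamma)\, b_j$ with $a$, $b$ fixed by (7)–(8). The sketch below uses placeholder data and is only an illustration, not the method analyzed in this paper.

```python
# Balancing (Sinkhorn / iterative proportional fitting) sketch for problem (9):
# the optimal d_ij = a_i * exp(-T_ij / gamma) * b_j, with a, b set by (7)-(8).
# The data below are illustrative placeholders.
import numpy as np

T = np.array([[10.0, 20.0, 30.0],
              [25.0, 15.0, 20.0]])            # travel costs T_ij
L = np.array([40.0, 60.0])                    # departures from origins, constraint (7)
W = np.array([30.0, 30.0, 40.0])              # arrivals at destinations, constraint (8)
gamma = 5.0

K = np.exp(-T / gamma)
a = np.ones(len(L))
b = np.ones(len(W))
for _ in range(200):                          # alternate rescaling of rows and columns
    a = L / (K @ b)
    b = W / (K.T @ a)

d = a[:, None] * K * b[None, :]
print(d.sum(axis=1), d.sum(axis=0))           # matches L and W up to numerical accuracy
```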
The result of Theorem 1 can be applied to this model due to the reduction mentioned above.

4. Two-Stages Traffic Assignment Model

From Section 2, we know that Beckmann's model requires an origin–destination matrix $\{d_w\}_{w \in OD}$ as an input; thus, Beckmann's model allows us to calculate $t(d)$. At the same time, from Section 3, we know that Wilson's entropy origin–destination model requires a cost matrix $\{T_w\}_{w \in OD}$ as an input, where $T_w := T_w(t) = \min_{p \in P_w} T_p(t)$; thus, Wilson's model allows us to calculate $d(T(t))$. The solution of the system $d = d\left(T\left(t(d)\right)\right)$ is called the two-stage traffic assignment model [12]. Following [7,13], we can reduce this problem to the following one (see (1) and (9)):
$$\min_{d \ge 0:\ d\ \text{satisfies}\ (7),(8)} \left\{\max_{t \ge \bar{t}} \left\{\sum_{w \in OD} d_w T_w(t) - \sum_{e \in E} \sigma_e^*(t_e)\right\} + \gamma \sum_{w \in OD} d_w \ln d_w\right\}. \qquad (10)$$
The problem (10) can be rewritten as a convex-concave (provided $\tau_e$ is non-decreasing, i.e., $\tau_e' \ge 0$) saddle-point problem (SPP)
$$\min_{d \ge 0:\ d\ \text{satisfies}\ (7),(8)}\ \max_{t \ge \bar{t}} \left\{\sum_{w \in OD} d_w T_w(t) - \sum_{e \in E} \sigma_e^*(t_e) + \gamma \sum_{w \in OD} d_w \ln d_w\right\}. \qquad (11)$$
This SPP can be efficiently solved numerically [7].
Note that if we consider the best-response dynamics from Section 2 with parameter $\lambda := \lambda_{Beck}$ and the logit dynamics with parameter $\lambda := \lambda_{Wil}$ for the origin–destination matrix estimation, and assume that $\lambda_{Beck} \gg \lambda_{Wil}$, then such a dynamics converges to the stationary (invariant) measure concentrated around the solution of the SPP (11). This result can be derived from a more general result related to hierarchical congested population games [7].
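One simple (though not necessarily efficient) way to approach the fixed point $d = d\left(T\left(t(d)\right)\right)$ is a damped alternation between a traffic-assignment step and an OD-estimation step. The sketch below uses a toy stand-in for the assignment step and the balancing routine from the previous sketch; it illustrates the two-stage coupling only and is not the saddle-point method of [7].

```python
# Damped fixed-point iteration on d = d(T(t(d))): alternate a (toy) assignment
# step (costs from demands) and an OD-estimation step (demands from costs).
# In practice one would plug in a Beckmann solver instead of travel_costs below.
import numpy as np

L = np.array([40.0, 60.0])
W = np.array([30.0, 30.0, 40.0])
gamma = 5.0

def travel_costs(d):
    """Toy stand-in for T_w(t(d)): costs grow with the assigned demand."""
    return 10.0 + 0.5 * d

def od_estimation(T):
    """Solve (9) by balancing, as in the previous sketch."""
    K = np.exp(-T / gamma)
    a, b = np.ones(len(L)), np.ones(len(W))
    for _ in range(200):
        a = L / (K @ b)
        b = W / (K.T @ a)
    return a[:, None] * K * b[None, :]

d = np.outer(L, W) / W.sum()                  # initial OD matrix consistent with (7)
for _ in range(50):                           # damped fixed-point iteration
    d = 0.5 * d + 0.5 * od_estimation(travel_costs(d))

print(d.round(2))
```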

5. Numerical Experiments

The main result of the paper is Theorem 1. Its main new statement is that the mixing time of the considered Markovian logit-choice and best-response dynamics, $T_{mix}$, is approximately $c_1 + c_2 \ln N$, where $N$ is the number of agents.
We consider the Braess paradox example [25]; see Figure 1 (the picture is taken from Wikipedia). Here, the Origin is START and the Destination is END. We have one OD pair and set $d = N$, the number of agents. The "paradox" arises when $N = 4000$. In this case, when there is no road from A to B, there are two routes, (START, A, END) and (START, B, END), with 2000 agents on each route, so the equilibrium time cost on each route is 65. When the road AB is present (this road has zero time cost), all agents use the route (START, A, B, END), and this equilibrium has time cost 80, which is paradoxically larger than it was without the road AB.
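The equilibrium costs quoted above can be checked directly with the standard link costs of the Wikipedia version of Braess's example (an assumption here, since the paper does not restate them): the two congestible links cost $f/100$ minutes, the other two links cost 45 minutes, and the extra road A→B costs 0.

```python
# Quick check of the equilibrium costs 65 and 80 quoted above, using the standard
# link costs from the Wikipedia version of Braess's example (assumed here):
# START->A and B->END cost f/100 minutes, START->B and A->END cost 45 minutes,
# and the extra road A->B costs 0 minutes.
N = 4000

# Without the road A->B: agents split evenly between the two symmetric routes.
f = N // 2
cost_without = f / 100 + 45
print(cost_without)      # 65.0

# With the zero-cost road A->B: every agent takes START->A->B->END.
cost_with = N / 100 + 0 + N / 100
print(cost_with)         # 80.0
```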
In a series of experiments (see Figure 2, Figure 3 and Figure 4), the dependence of the mixing time $T_{mix}$ on $\ln N$ was investigated. For details, see https://github.com/ZVlaDreamer/transport_flows_project (accessed on 1 November 2022).
The numerical experiments confirm Theorem 1. Note that [9] describes a real-life experiment organized with MIPT students in the Experimental Economics Lab: the students acted as agents and played a repeated Braess paradox game. The results of the experiments from [9] also agree well with the numerical experiments described above.

6. Conclusions

In this paper, we investigate logit-choice and best-response population Markovian dynamics that converge to equilibrium in the corresponding traffic assignment model. We show that the mixing time is proportional to the logarithm of the number of agents. Numerical experiments confirm that this dependence is probably unimprovable. We also consider the two-stage traffic assignment model and describe how to interpret equilibrium in this model in an evolutionary manner.

Author Contributions

Conceptualization, A.G. and Y.K.; Methodology, Y.K.; Software, A.Z.; Formal analysis, E.G. and A.G.; Investigation, E.G. and A.Z.; Data curation, A.Z.; Writing—original draft, E.G.; Supervision, A.G.; Project administration, A.G. All authors have read and agreed to the published version of the manuscript.

Funding

The work of E. Gasnikova was supported by the Ministry of Science and Higher Education of the Russian Federation (Goszadaniye) 075-00337-20-03, project No. 0714-2020-0005. The work of A. Gasnikov was supported by the strategic academic leadership program "Priority 2030" (Agreement 075-02-2021-1316, 30 September 2021).

Data Availability Statement

Not applicable.

Acknowledgments

We dedicate this paper to our colleague Vadim Alexandrovich Malyshev (13 April 1938–30 September 2022). We express our gratitude to Leonid Erlygin (MIPT) and Vladimir Zholobov (MIPT) who conducted numerical experiments.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

References

  1. Beckmann, M. A continuous model of transportation. Econom. J. Econom. Soc. 1952, 20, 643–660. [Google Scholar] [CrossRef]
  2. Beckmann, M.J.; McGuire, C.B.; Winsten, C.B. Studies in the Economics of Transportation; Technical Report; The National Academies of Sciences, Engineering, and Medicine: Washington, DC, USA, 1956. [Google Scholar]
  3. Patriksson, M. The Traffic Assignment Problem: Models and Methods; Courier Dover Publications: Mineola, NY, USA, 2015. [Google Scholar]
  4. Sheffi, Y. Urban Transportation Networks; Prentice-Hall: Englewood Cliffs, NJ, USA, 1985; Volume 6. [Google Scholar]
  5. Nesterov, Y.; De Palma, A. Stationary dynamic solutions in congested transportation networks: Summary and perspectives. Netw. Spat. Econ. 2003, 3, 371–395. [Google Scholar]
  6. Baimurzina, D.R.; Gasnikov, A.V.; Gasnikova, E.V.; Dvurechensky, P.E.; Ershov, E.I.; Kubentaeva, M.B.; Lagunovskaya, A.A. Universal method of searching for equilibria and stochastic equilibria in transportation networks. Comput. Math. Math. Phys. 2019, 59, 19–33. [Google Scholar]
  7. Gasnikov, A.; Gasnikova, E. Traffic Assignment Models. Numerical Aspects; MIPT: Dolgoprudny, Russia, 2020. [Google Scholar]
  8. Wilson, A. Entropy in Urban and Regional Modelling (Routledge Revivals); Routledge: Oxfordshire, UK, 2013. [Google Scholar]
  9. Gasnikov, A.; Klenov, S.; Nurminskiy, Y.; Kholodov, Y.; Shamray, N. Introduction to Mathematical Modeling of Traffic Flows: Textbook; MCCME: Moscow, Russia, 2013. (In Russian) [Google Scholar]
  10. Gasnikov, A.V.; Gasnikova, E.V.; Mendel’, M.A.; Chepurchenko, K.V. Evolutionary interpretations of entropy model for correspondence matrix calculation. Mat. Model. 2016, 28, 111–124. [Google Scholar]
  11. Sandholm, W.H. Population Games and Evolutionary Dynamics; MIT Press: Cambridge, MA, USA, 2010. [Google Scholar]
  12. de Dios Ortúzar, J.; Willumsen, L.G. Modelling Transport; John Wiley & Sons: Hoboken, NJ, USA, 2011. [Google Scholar]
  13. Gasnikov, A.V.; Dorn, Y.V.; Nesterov, Y.E.; Shpirko, S.V. On the three-stage version of stable dynamic model. Mat. Model. 2014, 26, 34–70. [Google Scholar]
  14. Kubentayeva, M.; Gasnikov, A. Finding equilibria in the traffic assignment problem with primal-dual gradient methods for Stable Dynamics model and Beckmann model. Mathematics 2021, 9, 1217. [Google Scholar] [CrossRef]
  15. Kotlyarova, E.V.; Krivosheev, K.Y.; Gasnikova, E.V.; Sharovatova, Y.I.; Shurupov, A.V. Proof of the connection between the Beckmann model with degenerate cost functions and the model of stable dynamics. Comput. Res. Model. 2022, 14, 335–342. [Google Scholar] [CrossRef]
  16. Leadbetter, M.; Lindgren, G.; Rootzén, H. Asymptotic Distributions of Extremes. In Extremes and Related Properties of Random Sequences and Processes; Springer: Berlin/Heidelberg, Germany, 1983. [Google Scholar]
  17. Anderson, S.P.; De Palma, A.; Thisse, J.F. Discrete Choice Theory of Product Differentiation; MIT Press: Cambridge, MA, USA, 1992. [Google Scholar]
  18. Malyshev, V.A.; Pirogov, S.A. Reversibility and irreversibility in stochastic chemical kinetics. Russ. Math. Surv. 2008, 63, 1. [Google Scholar] [CrossRef]
  19. Boucheron, S.; Lugosi, G.; Massart, P. Concentration Inequalities: A Nonasymptotic Theory of Independence; Oxford University Press: Oxford, UK, 2013. [Google Scholar]
  20. Cover, T.M.; Thomas, J.A. Elements of Information Theory, 2nd ed.; Wiley Series in Telecommunications and Signal Processing; Wiley: New York, NY, USA, 2006. [Google Scholar]
  21. Levin, D.A.; Peres, Y. Markov Chains and Mixing Times; American Mathematical Society: Providence, RI, USA, 2017; Volume 107. [Google Scholar]
  22. Ethier, S.N.; Kurtz, T.G. Markov Processes: Characterization and Convergence; John Wiley & Sons: Hoboken, NJ, USA, 2009. [Google Scholar]
  23. Batishcheva, Y.G.; Vedenyapin, V.V. The 2-nd law of thermodynamics for chemical kinetics. Mat. Model. 2005, 17, 106–110. [Google Scholar]
  24. Gasnikov, A.V.; Gasnikova, E.V. On entropy-type functionals arising in stochastic chemical kinetics related to the concentration of the invariant measure and playing the role of Lyapunov functions in the dynamics of quasiaverages. Math. Notes 2013, 94, 854–861. [Google Scholar] [CrossRef]
  25. Frank, M. The Braess paradox. Math. Program. 1981, 20, 283–302. [Google Scholar]
Figure 1. Braess's paradox graph.
Figure 2. Logit-choice dynamics, $\gamma = 0.1$.
Figure 3. Logit-choice dynamics, $\gamma = 0.01$.
Figure 4. Best-response dynamics (as a limit of logit-choice dynamics, $\gamma \to 0+$).