
Information Gain in Event Space Reflects Chance and Necessity Components of an Event

Georg F. Weber, University of Cincinnati, Cincinnati, OH 45221, USA
Information 2019, 10(11), 358; https://doi.org/10.3390/info10110358
Submission received: 23 October 2019 / Revised: 8 November 2019 / Accepted: 15 November 2019 / Published: 19 November 2019

Abstract
Information flow for occurrences in phase space can be assessed through the application of the Lyapunov characteristic exponent (multiplicative ergodic theorem), which is positive for non-linear systems that act as information sources and is negative for events that constitute information sinks. Attempts to unify the reversible descriptions of dynamics with the irreversible descriptions of thermodynamics have replaced phase space models with event space models. The introduction of operators for time and entropy in lieu of traditional trajectories has consequently limited—to eigenvectors and eigenvalues—the extent of knowable details about systems governed by such depictions. In this setting, a modified Lyapunov characteristic exponent for vector spaces can be used as a descriptor for the evolution of information, which is reflective of the associated extent of undetermined features. This novel application of the multiplicative ergodic theorem leads directly to the formulation of a dimension that is a measure for the information gain attributable to the occurrence. Thus, it provides a readout for the magnitudes of chance and necessity that contribute to an event. Related algorithms express a unification of information content, degree of randomness, and complexity (fractal dimension) in event space.

“The discovery of a complete unified theory [of the universe] (...) may not aid the survival of our species. It may not even affect our life-style. But ever since the dawn of civilization, people have not been content to see events as unconnected and inexplicable. They have craved an understanding of the underlying order in the world. (...) And our goal is nothing less than a complete description of the universe we live in.” [1]

1. The Dichotomy of Chance and Necessity

The world is one. Nevertheless, we perceive events as governed either by determinism or by chance. This has generated a dichotomy of descriptive frameworks that are disconnected from each other. While the concept of determinism has been rather easy to grasp as the sequence of cause–effect relationships, the definition of what constitutes chance has been subject to more discussion [2,3,4].
The flip of a coin, often invoked as the prototypical chance event, has a defined cause and could be predictable if the initial coordinates and the impulses acting on the coin were precisely known. The impossibility, in principle, of acquiring this knowledge was formulated in the uncertainty principle [5]. In the same vein, events that are governed by non-linear differential equations are considered deterministic, even though the slightest immeasurable deviations in their initial states can lead to dramatically different outcomes [6]. Consequently, the progression and advanced states of turbulent flow are unpredictable [7].
A distinct characterization of chance is the convergence of two totally independent cause-effect chains of events, termed “absolute coincidence”. In an example given by Jacques Monod [8], a doctor is urgently called to see a patient while a roofer is performing important repair work on the neighboring house. As the doctor is passing the house, the roofer drops his hammer, which hits the physician on the head and kills him. Monod overlooked that the proposed cause-effect nature of each individual sequence of events also predetermines the point at which the initially separate developments intersect. Nonetheless, an important question is raised by the model: under what circumstances can two truly deterministic events be independent of one another? If they have a unique point of intersection and defined histories, then the past of each succession of occurrences can be derived by tracing back from the point where they meet. By contrast, the assumption of mutual independence of the chains of events at their origination introduces a dichotomy into the world that is not resolved until the two sequences coincide.
In view of the difficulties of characterizing chance in conventional terms, algorithmic formulations have been attempted. According to Andrei Kolmogorov and Gregory Chaitin, a string of 0s and 1s is random if its shortest description is obtained by writing it out in its entirety (the shortest description of the string is the string itself). This definition implies degrees of randomness (the more compressible the string, the less random it is) rather than a dichotomy between determined and undetermined strings of binary digits. It is thus in agreement with the common perception that the throw of a die (with six possible outcomes) is more random than the flip of a coin (with only two possible outcomes). Yet, the model has its limitations, as Chaitin recognized that it is not possible to determine the randomness of a sufficiently long binary string [9,10].
A hybrid of chance and necessity is the law of large numbers. This term refers to the regular behavior of populations whose individual entities act in unpredictable manners. The entire science of probability theory (e.g., [11]) has been built on this foundation. The concept applies to temperature, which can be measured precisely, whereas the kinetic energy of the individual particles that generate this temperature cannot. Likewise, radioactive decay can be described by a Poisson distribution, whereas the decay of individual atoms can never be predicted (Supplement note 1). Averages over large numbers are used in ensemble formulations in dynamics and thermodynamics. The feasibility of describing nature in this way has led to the concept that energy contained in the macroscales can be captured with deterministic algorithms, while energy contained in the microscales is randomly distributed and can be expressed only as averages over large numbers [12]. While this concept has been useful for a range of applications, it has faced difficulty in nonlinear events, where energy is believed to emerge from the microscales and affect the macroscopic outcome [13]. Furthermore, the scale at which the delineation between “microscopic” and “macroscopic” levels is drawn is relative to the system under study (in cosmology, individual planets may very well be considered “microscopic”).
Our goal is the elimination of dichotomous descriptions. Every determined event has a degree of uncertainty attached to it, and every chance event has regular elements. Therefore, all occurrences in the world are associated with various extents of randomness [14,15]. We thus require descriptions that account for the intermingling of necessity and chance. The information content of an event, interpretable as the removal of uncertainty (Supplement note 2), is quantifiable in principle with the algorithms developed by Shannon [16]. Building on those insights (Section 2 below), uncertainty and information flow have been studied for trajectories in phase space and can be quantitatively analyzed through the application of the multiplicative ergodic theorem, also called the Lyapunov characteristic exponent [13]. In this hypothetical space, the geometry of attractors arises naturally from a law that allows the joining, but not the splitting, of trajectories. At points where trajectories join, the flow is not reversible without ambiguity. However, this description does not satisfactorily incorporate the laws of thermodynamics. To be more inclusive, we must move away from such trajectories to a vector space that is derived from a unified description of dynamic and thermodynamic occurrences (Section 3 below). The application of the multiplicative ergodic theorem to this event space is feasible with the use of vector norms. It yields a measure for the information evolution of an event (Section 4 below) and lays the foundation for a calculation of dimensionality that is directly inferred from the Lyapunov characteristic exponents associated with the process under study (Section 5 below). This dimensionality gives an estimate of the Kolmogorov–Sinai entropy and is indicative of the information change inherent in the execution of the process. We thus arrive at a measure for the extents of chance and necessity that contribute to an event.

2. Uncertainty and Information Flow in Phase Space

Dynamics has traditionally described motion as a flow along trajectories in a phase space that maps momenta and coordinates. This phase space has as many dimensions as the system has degrees of freedom. The average rate of information production λ̄ (bits of information per unit time; the term is essentially identical to the Lyapunov characteristic exponent) is a system state function. This quantity is invariant under coordinate transformations. In the one-dimensional case, λ̄ is a measure for the rate of divergence of nearby trajectories. Stable periodic orbits have a negative λ̄. At λ̄ = 0, transients persist indefinitely. The transition of a system from laminar to turbulent behavior is reflected in a change of λ̄ from negative to positive, corresponding to a change of the system from an information sink to an information source [13]. The information flow of turbulent systems precludes predictability beyond a certain point, when new information accumulates to fully displace the initial data.
Information (H) can be operationally defined as
H = -\sum_i P_i \log_2 P_i    (1)
where Pi denotes the probabilities of individual outcomes. In the case of a completely determined outcome with probability = 1, the information content is 0. The information generated or destroyed by a given iterated map is a topological property associated with the connectivity of that map, which is determined by the underlying physical process. In the example of one-dimensional maps, information is given by the Boltzmann integral
H = -\int_0^1 P(x) \log_2 P(x) \, dx    (2)
where P(x) represents the probability density. In a phase space that describes y as a function of x (y = F(x)), the average information change λ̄ over an interval is an integral weighted by the probability density P(x)
\bar{\lambda} \equiv \Delta H_{\mathrm{average}} = \int P(x) \log_2 \left| \frac{dy}{dx} \right| dx    (3)
If the probability density P(x) required for equation (3) is not known, λ̄ can be determined operationally by iteration
\bar{\lambda} = \lim_{n \to \infty} \frac{1}{n} \sum_{1}^{n} \log_2 \left| \frac{dy}{dx} \right|    (4)
When including the time per iteration, T(x), formula (3) becomes
\bar{\lambda}_t = \left| \frac{dH}{dt} \right|_{\mathrm{average}} = \int_0^1 \frac{P(x)}{T(x)} \log_2 \left| \frac{dy}{dx} \right| dx    (5)
that is
\lambda_t = \lim_{n \to \infty} \frac{1}{n} \sum_{1}^{n} \frac{\log_2 \left| dy/dx \right|}{T_n}    (6)
where Tn is the length of time taken by the nth iterate [13].
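To make formulas (1) and (4) concrete, the following sketch estimates λ̄ and H numerically. It is a minimal illustration, not part of the original analysis: the logistic map, the parameter r = 4, and the bin count are assumptions chosen only because the expected result (one bit per iteration) is known.

```python
# Illustrative sketch (not from the paper): average information production rate
# lambda-bar of equation (4) and Shannon information H of equation (1) for the
# logistic map y = F(x) = r x (1 - x). The map, the parameter r, and the bin
# count are assumptions used only for demonstration.
import numpy as np

def logistic_orbit(r=4.0, x0=0.1, n_iter=100_000, n_transient=1_000):
    """Return an orbit of the logistic map after discarding transients."""
    x = x0
    for _ in range(n_transient):
        x = r * x * (1.0 - x)
    orbit = np.empty(n_iter)
    for i in range(n_iter):
        orbit[i] = x
        x = r * x * (1.0 - x)
    return orbit

def lambda_bar(orbit, r=4.0):
    """Equation (4): average of log2 |dy/dx| along the orbit (bits per iteration)."""
    return np.mean(np.log2(np.abs(r * (1.0 - 2.0 * orbit))))

def shannon_information(orbit, n_bins=64):
    """Equation (1): H = -sum P_i log2 P_i for a binned estimate of the invariant density."""
    counts, _ = np.histogram(orbit, bins=n_bins, range=(0.0, 1.0))
    p = counts[counts > 0] / counts.sum()
    return -np.sum(p * np.log2(p))

orbit = logistic_orbit()                       # fully chaotic case, r = 4
print("lambda-bar ~", lambda_bar(orbit))       # ~ 1 bit per iteration
print("H of binned density ~", shannon_information(orbit), "bits")
```

A positive λ̄ marks the assumed map as an information source in the sense described above; a stable periodic orbit would yield a negative value.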

3. The Unification of Dynamics and Thermodynamics in Event Space

In the laws of classical physics, reversibility (symmetry) has played an important role. While physics has traditionally dealt with such reversible processes, biology has investigated the historical developments of evolution and its consequences over time. Those historical elements have been viewed as undetermined. For long periods of time, no conceptual connection existed between those scientific approaches.
To account for our experience of the world, there has been a need to expand the formulas of dynamics/quantum mechanics to include the irreversible events of thermodynamics. The concept of entropy was developed in the 1850s by Rudolf Clausius, who described it as the transformation content (dissipative use of energy) of a thermodynamic system or working body of chemical species during a change of state. Much of the mathematics for the field of thermodynamics was then formulated in the 1870s by Ludwig Boltzmann and J. Willard Gibbs. Thermodynamics has introduced the asymmetry of time irreversibility into physical law. With that, physics has acquired a historical element that does justice to our common experience. The second law of thermodynamics states that a closed system progresses toward a state of maximal entropy. To be consistent with thermodynamics’ second law, proper equations of motion must allow for the existence of an attractor (stability according to the second method of Lyapunov (Supplement note 3)). This is not the case for Newton’s laws or for Lagrangian and Hamiltonian mechanics, but it is achievable through the redefinition of entropy and an internal time in the form of operators
i \frac{\partial \tilde{\rho}}{\partial t} = \Lambda^{-1} L \Lambda \tilde{\rho}    (7)
ρ̃ = state of the system, L = Liouville operator, Λ⁻¹ = time operator (also written as T) [17]. This description of dynamics in terms of operators dramatically reshapes space-time. Among the derived concepts, some of the most consequential are the microscopic entropy operator M and the time operator T. Therein implied is an internal time, which is distinct from the time concept that in classical or quantum mechanics simply labels trajectories or wave functions. The canonical time, the label of dynamics, becomes an average over the internal operator time (Supplement note 4). The recognition that the direction of time is caused by an infinite entropy barrier against time reversal has unified symmetrical and asymmetrical aspects of nature [17,18].
However, a new challenge has arisen. The reliance on the eigenvectors and eigenvalues of the operators for time and thermodynamic entropy inevitably renders our knowledge of the physical world granular. An assessment of the information flow (more accurately, the information evolution) associated with an event now requires algorithms that are independent of trajectories in phase space. To achieve this, a reformulation of the Lyapunov characteristic exponent needs to be applied to the vector-based event space.

4. The Multiplicative Ergodic Theorem in Vector Space

To quantify information loss or information generation in the restructured space-time (Section 3), the application of a Lyapunov characteristic exponent (λ̄, described in Section 2) requires modification. In addition to its use with differentiable dynamical systems, the multiplicative ergodic theorem has applications to algebraic groups. The idea involves extending the theorem to local fields [19]. The theorem defines
\chi(x, u) = \lim_{n \to \infty} \frac{1}{n} \log \left\| T_x^n u \right\|    (8)
where χ: M → R ∪ {−∞} is a τ-invariant measurable function such that χ⁺ ∈ L¹(M, ρ); T: M → M_m is a measurable function to the real m × m matrices, with T_x^n = T(τ^{n−1}x) ⋯ T(τx) T(x); τ: M → M is a measurable map that preserves the probability measure ρ on M, with ρ(M) = 1. When applied to the operator-based equation of motion in formula (7), the notation becomes
\bar{\lambda} = \lim_{n \to \infty} \frac{1}{n} \log \left\| \Lambda^{-1} L \Lambda \right\|    (9)
and the vector norm serves as a measure for the progression of an event, assessing whether it acts as an information sink or as an information source.
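The following is a minimal numerical sketch of formula (8), assuming a generic stationary sequence of random 2 × 2 matrices as a stand-in for the operator cocycle; the matrices, the seed, and the initial vector u are illustrative assumptions, not quantities defined in the text. The exponent is read off from the growth rate of the vector norm, with renormalization at each step to avoid overflow.

```python
# Illustrative sketch (not from the paper): leading Lyapunov characteristic
# exponent from the norm growth of a vector pushed through a matrix product,
# as in chi(x, u) = lim (1/n) log ||T_x^n u||. Random 2x2 matrices stand in
# for the unspecified operator acting on event space.
import numpy as np

rng = np.random.default_rng(0)

def leading_exponent(matrices):
    """Estimate (1/n) log ||M_n ... M_1 u|| with stepwise renormalization."""
    u = np.array([1.0, 0.0])
    log_growth = 0.0
    for M in matrices:
        u = M @ u
        norm = np.linalg.norm(u)
        log_growth += np.log(norm)   # accumulate the log of each stretching factor
        u /= norm                    # keep only the direction for the next step
    return log_growth / len(matrices)

# assumed example: small random perturbations of the identity
mats = [np.eye(2) + 0.1 * rng.standard_normal((2, 2)) for _ in range(50_000)]
chi = leading_exponent(mats)
print("leading exponent (nats per step):", chi)
print("information source" if chi > 0 else "information sink")
```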

5. Dimensionality and Entropy

An occurrence in phase space, state space, or event space is characterized by its occupancy of the attributed space, the number of coordinates of which is determined by the complexity of that occurrence. A fractal dimensionality can be calculated that is tied to the information flow (or information evolution) of this process in that hypothetical space (Supplement note 5). Thus, fractal dimension can become reflective of certain characteristics that are attributable to the occurrence as it acts in abstract space, such as the evolution of information.
From the application of the multiplicative ergodic theorem (Section 2) to event space (Section 3), we have obtained a Lyapunov characteristic exponent that assesses the evolution of a process independently of trajectories (Section 4). On its basis, a dimensionality of the occupied event space can be calculated, the magnitude of which equates to a function describing the generation of information and thus to the removal of uncertainty (a characteristic element of chance). The Lyapunov dimension DKY represents an upper bound for the information dimension of the system (Supplement note 6). According to Pesin’s theorem (Supplement note 7) [20], the sum of all the positive Lyapunov exponents gives an estimate of the Kolmogorov–Sinai entropy.
D_{KY} = k + \frac{\sum_{i=1}^{k} \bar{\lambda}_i}{\left| \bar{\lambda}_{k+1} \right|}    (10)
where k is the maximum integer such that the sum of the k largest exponents is non-negative, and λ̄_i denotes the Lyapunov characteristic exponents. In conjunction with the derivations above, the insertion of the Lyapunov characteristic exponent calculated for event space into formula (10) connects a process, the execution of which impacts internal time and thermodynamic entropy, to the fractal dimension of the event space occupied by the occurrence and to its information entropy.
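A short sketch of formula (10), together with the Pesin estimate of the Kolmogorov–Sinai entropy, is given below. The exponent spectrum used as input is an assumed set of approximate literature values for the Lorenz system; it serves only to exercise the formulas and is not derived in this paper.

```python
# Illustrative sketch (not from the paper): the Lyapunov (Kaplan-Yorke) dimension
# of formula (10) and the Pesin estimate of the Kolmogorov-Sinai entropy from a
# given spectrum of Lyapunov exponents. The example spectrum is an assumed set of
# approximate literature values for the Lorenz system.
def kaplan_yorke_dimension(exponents):
    """D_KY = k + (sum of k largest exponents) / |lambda_{k+1}|, exponents sorted descending."""
    lam = sorted(exponents, reverse=True)
    cumulative = 0.0
    k = 0
    for i, l in enumerate(lam):
        if cumulative + l >= 0:      # k = largest count keeping the partial sum non-negative
            cumulative += l
            k = i + 1
        else:
            break
    if k == len(lam):                # the whole sum is non-negative
        return float(k)
    return k + cumulative / abs(lam[k])

def pesin_ks_entropy(exponents):
    """Pesin's formula: the KS entropy is estimated by the sum of the positive exponents."""
    return sum(l for l in exponents if l > 0)

lorenz_spectrum = [0.905, 0.0, -14.57]                        # assumed approximate values
print("D_KY  =", kaplan_yorke_dimension(lorenz_spectrum))     # ~ 2.06
print("h_KS <=", pesin_ks_entropy(lorenz_spectrum))           # ~ 0.905 nats per unit time
```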

6. Implications for Epistemology

The unity of experience causes us to perceive the universe as one entity. If this is true (we have no reason for doubt), all things and all occurrences must in some fashion be connected. By contrast, the traditional, reductionist description of nature has typically categorized observations and created opposites (such as matter versus empty space, particle versus electromagnetic wave, determined versus chance events, etc.) that are seemingly unconnected, thus generating sub-entities of the world that have been treated as mutually detached. Practically, such reductionist categorizations have achieved a quasi-linearization of descriptions [21], thus gaining tractability at the expense of excessively simplifying our perceptions of nature. This thinking has divided the physical world into basic entities (such as space, time, matter), the separate identities of which have been considered to be absolute. Only in the early twentieth century did research dent the segregation of the physical world into those absolute, often opposing categories (Supplement note 8).
Among the most excruciating of the opposites, which need to be addressed by anti-reductionist studies, is the separation of causative events from chance events. An entirely deterministic world view makes human decisions futile and leads to fatalism, whereas a stochastic view eliminates the need for decisions due to the random nature of future events, consequently leading to nihilism. Currently, the most prevalent way of dealing with this conflict between two opposing but equally undesirable philosophical models is the perception that some events are subject to cause–effect relationships (determined), while others are stochastic (random) in nature. If there exists a formal link between determinism and indeterminism, neither of the two components can retain its character (Supplement note 9). This dualism in the description of the world, which inevitably creates two domains that are mutually exclusive opposites, is a construct of the human mind that clashes with our unitary experience. Here, we offer algorithms to aid in the breakdown of the distinction between necessity and chance. For a physical process, its extent of randomness, equivalent to its information evolution, can be estimated by applying the Lyapunov characteristic exponent to event space in a manner that corresponds to the calculation of the dimensionality in fractal geometry and gives an estimate of the Kolmogorov–Sinai entropy.
Our natural perception that we exist in a space that comprises three axes (“dimensions”), height, depth and width, may have evolved more to aid our survival than to support our insights into the workings of nature. Therefore, physical descriptions have frequently taken recourse to hypothetical spaces. As such, we utilize state space for modeling the possible configurations of a system and phase space for modeling the possible progressions of an occurrence. Through their assignments of precise coordinates, both hypothetical spaces represent idealizations. Any actual event is bound to have a measurable and finite extent, both in time and in physical space. Its coordinates cover a range. Furthermore, the properties of time differ from the properties of space, most importantly because of the irreversibility of the former. While event space connects the execution of a process to the progression of time and to entropy generation (with time reversal being prevented by an infinite entropy barrier) [17], this space is non-continuous through the action of operators (Table 1). Answers are constrained to functions of eigenvectors and eigenvalues.
The achievements of twentieth century research include reunifications of multiple entities that had been parsed by reductionism. The price for these syntheses has been to accept the existence of principal limitations to our knowledge [22]. Among these limitations are the losses of absolute time and absolute space [18,23]. Until those descriptions, time had been thought to be universal. Physical explanations were consistent with the existence of timelines, wherein time was a parameter. With the advent of the theory of relativity, rates of time were described as running variably in dependence on relative motion. Space and time became merged into space-time, wherein the older depictions of timelines became replaced by orbits. In this framework, time morphed into a coordinate. Yet, the impossibility of time reversal was poorly accounted for. The later unification of dynamic and thermodynamic algorithms required a further reshaping of space-time. It now understands processes as having an internal time tied to occurrences within these processes [17,24]. Conventional time has become an average over the internal time. This internal time now embodies the characteristics of an operator and has lost continuity as well as symmetry (Supplement note 10).
It has been hypothesized that the unity of nature is reflected in the relationships among structures rather than in the structures themselves [25]. In fact, every occurrence in the world impacts space and time [18,23]. Through its execution, information about the past is lost, and—in non-linear systems—new information is generated [13]. The evolution of information resulting from the implementation of a process is a measure for the extent of undetermined (chance) features associated with it. With the mutual relativization of space, time and events, there has arisen a unification of information content, degree of randomness, and complexity (fractal dimension) in event space. These entities have become distinct aspects of the same phenomenon and can be mathematically expressed through the same algorithms.
‘No man ever steps into the same river twice, for it is not the same river and he is not the same man.’ Hērákleitos.

7. Supplement

Note 1 The implications of particle physics and quantum mechanics for our understanding of chance and causation have been discussed extensively [26,27,28]. Although the present investigation does not attempt to delve into these specialties, it is important to note that the event space analyses by Prigogine [17] were intended to integrate thermodynamics into both classical mechanics and quantum mechanics. The central importance of Prigogine’s work for the present study should also make it relevant for epistemological considerations pertaining to quantum mechanics.
Note 2 Randomness is often explicated in terms of unpredictability [29], that is, uncertainty about the future course and outcome of an occurrence. While this is a reasonable premise, here, we seek to replace semantic definitions of chance or necessity with algorithms. Such algorithmic descriptions measure—in event space—the information evolution and dimensionality that are associated with the occurrence. These entities are unambiguously defined, and they reflect on the extent of chance contributions to a process.
Note 3 While both terms have been derived from the lifelong research commitment to stability theory by Aleksandr Mikhailovich Lyapunov, “stability according to the second method of Lyapunov” is distinct from the “Lyapunov characteristic exponent”. Lyapunov’s second method estimates regions of asymptotic stability of autonomous nonlinear differential systems. It identifies attractors, which may be interpreted as states of maximal entropy. By contrast, the Lyapunov characteristic exponent [30] expresses the rate of exponential divergence from perturbed initial conditions. It is a readout for the complexity of a system.
Note 4 An operator time Λ⁻¹ (now written as T, Λ⁻¹ ≡ T) is closely related to the microscopic entropy operator M. M can be represented as the product of the operator time T and its Hermitian conjugate T†. This is always feasible because M is positive.
M = T^{\dagger} T
T has as eigenvalues the possible ages a system may have. A given initial distribution can generally be decomposed into members having different ages and evolving at variable rates. Thus, a novel meaning for time emerges that is associated with evolution. In general, a distribution has no well-defined age but may be expanded in a series of functions having a well-defined age. An excess with respect to the uniform equilibrium distribution may have a well-defined age. It is then an eigenfunction of T. Hence, ordinary time becomes an average over the operator time.
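The factorization M = T†T invoked above can be illustrated with a minimal finite-dimensional sketch; the matrix standing in for M is an arbitrary assumption, and the Cholesky decomposition is used here merely as one convenient way to exhibit such a factorization for a positive operator.

```python
# Illustrative sketch (not from the paper): any positive operator M admits a
# factorization M = T_dagger T, here exhibited via a Cholesky decomposition of an
# arbitrary assumed finite-dimensional stand-in for the microscopic entropy operator.
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
M = A.conj().T @ A + 4 * np.eye(4)        # Hermitian and positive definite by construction

L_chol = np.linalg.cholesky(M)            # M = L L^dagger
T = L_chol.conj().T                       # choose T = L^dagger, so that M = T^dagger T
print(np.allclose(M, T.conj().T @ T))     # True: the factorization reproduces M
print(np.all(np.linalg.eigvalsh(M) > 0))  # True: M is positive
```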
M is defined as a microscopic entropy operator that does not commute with the Liouville operator L. The commutator
i(LM - ML) = D < 0
defines the microscopic entropy production and leads to a form of complementarity. With the introduction of the operator M, classical dynamics can no longer be limited to the study of orbits. Rather than considering functions of coordinates and momenta, it evolves into the investigation of the time evolution of distribution functions (bundles of trajectories). This description in terms of distribution functions becomes basic, and no further reduction to individual trajectories or wave functions can be performed. Once T is known, M can be constructed as an operator that is a decreasing function of T. The result is a Lyapunov function (according to the second method of Lyapunov) that assumes its minimum value at microcanonical equilibrium. Starting from M, a non-unitary transformation Λ yields a universal Lyapunov function. The transformation Λ depends on T, which itself is related to L through the commutation rule [17,18].
Note 5 Mandelbrot [31] characterized dimensions that are not whole numbers. A fractal dimension is a ratio that provides a statistical index of the complexity attributable to a process, a state, or an event. The coverage of an assigned space determines the dimensionality
\varepsilon^{-D} = N, \qquad D = \frac{\log N}{\log (1/\varepsilon)}
similarly expressed in the “box-counting” dimension
D_0 = \lim_{\varepsilon \to 0} \frac{\log N(\varepsilon)}{\log (1/\varepsilon)}
where N stands for the number of units (the minimum number of cubes needed to cover the set), ε for the edge length (the scaling factor), and D for the fractal dimension.
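The box-counting prescription can be turned into a short numerical sketch. The middle-third Cantor set is used as an assumed test case (it is not discussed in the text) because its dimension, log 2 / log 3 ≈ 0.6309, is known exactly and provides a check on the estimate.

```python
# Illustrative sketch (not from the paper): box-counting dimension D_0 as defined
# above, estimated for the middle-third Cantor set. N(eps) is the number of boxes
# of edge length eps needed to cover the point set.
import numpy as np

def cantor_points(level=12):
    """End points of the intervals of the level-th Cantor-set construction step."""
    pts = np.array([0.0, 1.0])
    for _ in range(level):
        pts = np.concatenate([pts / 3.0, pts / 3.0 + 2.0 / 3.0])
    return pts

def box_counting_dimension(points, epsilons):
    """Fit log N(eps) against log (1/eps); the slope estimates D_0."""
    log_inv_eps, log_counts = [], []
    for eps in epsilons:
        occupied = np.unique(np.floor(points / eps))   # indices of occupied boxes
        log_counts.append(np.log(len(occupied)))
        log_inv_eps.append(np.log(1.0 / eps))
    slope, _ = np.polyfit(log_inv_eps, log_counts, 1)
    return slope

pts = cantor_points()
eps = [3.0 ** -k for k in range(1, 9)]
print("estimated D_0 ~", box_counting_dimension(pts, eps))   # roughly 0.63
```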
The assessment of complexity (assignment of a fractal dimension) by an occurrence in phase space, state space or event space enables a direct connection to the associated information flow through the information dimension,
D_1 = \lim_{\varepsilon \to 0} \frac{-\log p_\varepsilon}{\log (1/\varepsilon)}
D1 considers how the average information needed to identify an occupied box scales with the box size; p_ε is the probability of finding a point of the set within a box of edge length ε. Closely related is the Rényi dimension of order α, which is defined as
D_\alpha = \lim_{\varepsilon \to 0} \frac{\frac{1}{1-\alpha} \log \left( \sum_i p_i^{\alpha} \right)}{\log (1/\varepsilon)}
where the numerator is the Rényi entropy of order α. The Rényi entropy is a generalization of the Shannon entropy. In the limit that α approaches 1, the Rényi definition of entropy converges to the Shannon definition. An attractor for which the Rényi dimensions are not all equal exhibits multifractal structure. Thus, the Rényi dimension connects uncertainty to the dimensionality of the space in which it is measured.
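A small sketch of the Rényi entropy that enters the definition of D_α is given below; the probability vector is an arbitrary assumed example, and the computation only illustrates the convergence to the Shannon entropy as α approaches 1.

```python
# Illustrative sketch (not from the paper): Renyi entropy of order alpha for a
# probability distribution, showing the convergence to the Shannon entropy as
# alpha -> 1. The distribution below is an arbitrary assumed example.
import numpy as np

def renyi_entropy(p, alpha):
    """H_alpha = (1/(1-alpha)) * log(sum p_i^alpha); reduces to Shannon entropy as alpha -> 1."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    if np.isclose(alpha, 1.0):
        return -np.sum(p * np.log(p))          # Shannon limit
    return np.log(np.sum(p ** alpha)) / (1.0 - alpha)

p = np.array([0.5, 0.25, 0.125, 0.125])        # assumed example distribution
for a in (0.0, 0.5, 0.999, 1.0, 2.0):
    print(f"alpha = {a:5.3f}  H_alpha = {renyi_entropy(p, a):.4f} nats")
# H_alpha is non-increasing in alpha; the value at alpha = 0.999 is numerically
# close to the Shannon value at alpha = 1.
```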
Note 6 It has been pointed out that drawing an inference from the presence of positive global Lyapunov exponents to the existence of on-average exponentially diverging trajectories is invalid. This has implications for defining chaos because exponential growth parametrized by global Lyapunov exponents turns out not to be an appropriate measure [32]. The calculation of an information dimension from the positive Lyapunov exponents of a system relieves the reliance on global Lyapunov exponents.
Note 7 The entropy of a Borel probability measure that is invariant under a diffeomorphism f is a measure of the overall exponential complexity of the orbit structure of f as seen by this measure. This concept originated from an adaptation of a notion in information theory to the theory of dynamical systems. The content of the Pesin entropy formula is that, in the case of a smooth measure, the entropy is given precisely by the total asymptotic exponential rate of expansion under the diffeomorphism, that is, the sum of the positive Lyapunov exponents of the measure. This formula has been refined to give analogous information about invariant measures that are not absolutely continuous when the expansion rates are weighted suitably. This involves connections between entropy and fractal dimension (http://www.scholarpedia.org/article/Pesin_entropy_formula).
Note 8 The roots of the unification of distinct entities go back to James Clerk Maxwell (1865), whose formulation of the classical theory of electromagnetic radiation brought together electricity, magnetism, and light as different manifestations of the same phenomenon. However, broader syntheses in the early twentieth century had a more fundamental impact, largely through the theory of relativity.
Note 9 No process is ever fully determined or fully undetermined. Elements of chance and necessity are variably associated with all occurrences in the inanimate and animate world. There are parallels between the above description and ϵ-machines. Many scientific domains face the problem of detecting randomness and patterns. A realistic description of nature must incorporate determined and undetermined components under one umbrella. ϵ-machines include terms for the reconstructed state space of the process under study and the induced state-to-state transitions. This equates to measuring intrinsic computation in processes [15].
Note 10 While it is beyond the scope of the present analysis, it is noteworthy that the definition of time as an operator, obtained from the unification of dynamics and thermodynamics [17,18], should alter the geometry of the Einstein field equation
G_{\mu\nu} + \Lambda g_{\mu\nu} = \frac{8 \pi G}{c^4} T_{\mu\nu}
that is central to the general theory of relativity [23]. Now, the time component of the field becomes discontinuous. Its reversal is disallowed.

Funding

This research received no external funding.

Conflicts of Interest

The author declares no conflict of interest.

References

1. Hawking, S.W. A Brief History of Time. From the Big Bang to Black Holes; Bantam Books: New York, NY, USA, 1980.
2. Gigerenzer, G.; Swijtink, Z.; Porter, T.; Daston, L.; Beatty, J.; Krüger, L. The Empire of Chance: How Probability Changed Science and Everyday Life; Cambridge University Press: Cambridge, UK, 1989.
3. Bennett, D.J. Randomness; Harvard University Press: Cambridge, MA, USA, 1988.
4. Popper, K.R. The Open Universe. An Argument for Indeterminism; Rowman and Littlefield: Totowa, NJ, USA, 1982.
5. Heisenberg, W. Über den anschaulichen Inhalt der quantentheoretischen Kinematik und Mechanik. Zeitschrift für Physik 1927, 43, 172–198.
6. Lorenz, E.N. Deterministic nonperiodic flow. J. Atmos. Sci. 1963, 20, 130–141.
7. Favre, A.; Guitton, H.; Guitton, J.; Lichnerowicz, A.; Wolff, E. Chaos and Determinism. In Turbulence as a Paradigm for Complex Systems Converging toward Final States; The Johns Hopkins University Press: Baltimore, MD, USA, 1988.
8. Monod, J. Zufall und Notwendigkeit. Philosophische Fragen der modernen Biologie; Deutscher Taschenbuch Verlag: Munich, Germany, 1985.
9. Chaitin, G. A theory of program size formally identical to information theory. J. ACM 1975, 22, 329–340.
10. Chaitin, G. The Unknowable; Springer: Singapore, 1999.
11. Klenke, A. Probability Theory: A Comprehensive Course; Springer: Berlin/Heidelberg, Germany, 2013.
12. McKelvey, B. Thwarting Faddism at the Edge of Chaos; European Institute for Advanced Studies in Management Workshop on Complexity and Organization: Brussels, Belgium, 1988.
13. Shaw, R. Strange attractors, chaotic behavior, and information flow. Z. Naturf. 1981, 36, 80–112.
14. Beltrami, E. What is Random? In Chance and Order in Mathematics and Life; Copernicus: New York, NY, USA, 1999.
15. Crutchfield, J.P. Between order and chaos. Nat. Phys. 2012, 8, 17–24.
16. Shannon, C.E. A mathematical theory of communication. Bell Syst. Tech. J. 1948, 27, 379–423 and 623–656.
17. Prigogine, I. From Being to Becoming. In Time and Complexity in the Physical Sciences; W.H. Freeman and Company: New York, NY, USA, 1980.
18. Prigogine, I. Dissipative structures, dynamics, and entropy. Int. J. Quantum Chem. 1975, 9, 443–456.
19. Ruelle, D. Ergodic theory of differentiable dynamical systems. Publ. Math. IHÉS 1979, 50, 27–58.
20. Pesin, Y.B. Dimension Theory in Dynamical Systems. Contemporary Views and Applications. Chicago Lectures in Mathematics; University of Chicago Press: Chicago, IL, USA, 1997.
21. Milanowski, P.; Carter, T.J.; Weber, G.F. Enzyme catalysis and the outcome of biochemical reactions. J. Proteom. Bioinform. 2013, 6, 132–141.
22. Weber, G.F. Dynamic knowledge—A century of evolution. Soc. Mind 2013, 3, 268–277.
23. Einstein, A. Die Grundlage der allgemeinen Relativitätstheorie. Annalen Phys. 1916, 49, 769–822.
24. Gialampoukidis, I.; Antoniou, I. Entropy, Age and Time Operator. Entropy 2015, 17, 407–424.
25. Eigen, M.; Winkler, R. Das Spiel; Piper: München/Zürich, Germany, 1987.
26. Bub, J. Quantum probabilities as degrees of belief. Stud. Hist. Philos. Mod. Phys. 2007, 38, 232–254.
27. Wallace, D. Quantum probability from subjective likelihood: Improving on Deutsch’s proof of the probability rule. Stud. Hist. Philos. Mod. Phys. 2007, 38, 311–332.
28. Oreshkov, O.; Costa, F.; Brukner, Č. Quantum correlations with no causal order. Nat. Commun. 2012, 3, 1092.
29. Berkovitz, J.; Frigg, R.; Kronz, F. The Ergodic Hierarchy, Randomness and Chaos. Stud. Hist. Philos. Mod. Phys. 2006, 37, 661–691.
30. Oseledets, V.I. A multiplicative ergodic theorem. Characteristic Ljapunov exponents of dynamical systems. Trans. Mosc. Math. Soc. 1968, 19, 197–231.
31. Mandelbrot, B. The Fractal Geometry of Nature; W.H. Freeman and Company: New York, NY, USA, 1982.
32. Bishop, R. Chaos. In The Stanford Encyclopedia of Philosophy; Zalta, E.N., Ed.; Center for the Study of Language and Information: Stanford, CA, USA, 2017; Available online: https://plato.stanford.edu/archives/spr2017/entries/chaos/ (accessed on 18 October 2019).
Table 1. Comparison of phase space and event space.

| | Phase Space | Event Space |
|---|---|---|
| measure of motion | orbits | operators |
| irreversibility | trajectories can merge, not split | inclusion of dynamics and thermodynamics |
| information | information flow | information evolution |
| information loss or gain | laminar flow: information sink; turbulent flow: information source | ergodic process: advanced state is nearly independent of initial state; emergence: self-organization |
| rate of change | Lyapunov characteristic exponent | multiplicative ergodic theorem for vector spaces |
| dimension | attractor geometry defines fractal dimension | Lyapunov dimension is a measure for (information) entropy |
| limit to knowledge | uncertainty principle limits accuracy of trajectory measurement | eigenvectors/eigenvalues principally constrain knowledge about the system |

